Network Theory (Part 21)

Recently we’ve been talking about ‘reaction networks’, like this:

The letters are names of chemicals, and the arrows are chemical reactions. If we know how fast each reaction goes, we can write down a ‘rate equation’ describing how the amount of each chemical changes with time.

In Part 17 we met the deficiency zero theorem. This is a powerful tool for finding equilibrium solutions of the rate equation: in other words, solutions where the amounts of the various chemicals don’t change at all. To apply this theorem, two conditions must hold. Both are quite easy to check:

• Your reaction network needs to be ‘weakly reversible’: if you have a reaction that takes one bunch of chemicals to another, there’s a series of reactions that takes that other bunch back to the one you started with.

• A number called the ‘deficiency’ that you can compute from your reaction network needs to be zero.

The first condition makes a lot of sense, intuitively: you won’t get an equilibrium with all the different chemicals present if some chemicals can turn into others but not the other way around. But the second condition, and indeed the concept of ‘deficiency’, seems mysterious.

Luckily, when you work through the proof of the deficiency zero theorem, the mystery evaporates. It turns out that there are two equivalent ways to define the deficiency of a reaction network. One makes it easy to compute, and that’s the definition people usually give. But the other explains why it’s important.

In fact the whole proof of the deficiency zero theorem is full of great insights, so I want to show it to you. This will be the most intense part of the network theory series so far, but also the climax of its first phase. After a little wrapping up, we will then turn to another kind of network: electrical circuits!

Today I’ll just unfold the concept of ‘deficiency’ so we see what it means. Next time I’ll show you a slick way to write the rate equation, which is crucial to proving the deficiency zero theorem. Then we’ll start the proof.

Reaction networks

Let’s recall what a reaction network is, and set up our notation. In chemistry we consider a finite set of ‘species’ like C, O2, H2O and so on… and then consider reactions like

CH4 + 2O2 $\longrightarrow$ CO2 + 2H2O

On each side of this reaction we have a finite linear combination of species, with natural numbers as coefficients. Chemists call such a thing a complex.

So, given any finite collection of species, say $S,$ let’s write $\mathbb{N}^S$ to mean the set of finite linear combinations of elements of $S,$ with natural numbers as coefficients. The complexes appearing in our reactions will form a subset of this, say $K.$

We’ll also consider a finite collection of reactions—or in the language of Petri nets, ‘transitions’. Let’s call this $T$. Each transition goes from some complex to some other complex: if we want a reaction to be reversible we’ll explicitly include another reaction going the other way. So, given a transition $\tau \in T$ it will always go from some complex called its source $s(\tau)$ to some complex called its target $t(\tau).$

All this data, put together, is a reaction network:

Definition. A reaction network $(S,\; s,t : T \to K)$ consists of:

• a finite set $S$ of species,

• a finite set $T$ of transitions,

• a finite set $K \subset \mathbb{N}^S$ of complexes,

• source and target maps $s,t : T \to K.$

We can draw a reaction network as a graph with complexes as vertices and transitions as edges:

The set of species here is $S = \{A,B,C,D,E\},$ and the set of complexes is $K = \{A,B,D,A+C,B+E\}.$

But to write down the ‘rate equation’ describing our chemical reactions, we need a bit more information: constants $r(\tau)$ saying the rate at which each transition $\tau$ occurs. So, we define:

Definition. A stochastic reaction network is a reaction network together with a map $r: T \to (0,\infty)$ assigning a rate constant to each transition.

Let me remind you how the rate equation works. At any time we have some amount $x_i \in [0,\infty)$ of each species $i.$ These numbers are the components of a vector $x \in \mathbb{R}^S,$ which of course depends on time. The rate equation says how this vector changes:

$\displaystyle{ \frac{d x_i}{d t} = \sum_{\tau \in T} r(\tau) \left(t_i(\tau) - s_i(\tau)\right) x^{s(\tau)} }$

Here I’m writing $s_i(\tau)$ for the $i$th component of the vector $s(\tau),$ and similarly for $t_i(\tau).$ I should remind you what $x^{s(\tau)}$ means, since here we are raising a vector to a vector power, which is a bit unusual. So, suppose we have any vector

$x = (x_1, \dots, x_k)$

and we raise it to the power of

$s = (s_1, \dots, s_k)$

Then by definition we get

$\displaystyle{ x^s = x_1^{s_1} \cdots x_k^{s_k} }$

Given this, I hope the rate equation makes intuitive sense! There’s one term for each transition $\tau.$ The factor of $t_i(\tau) - s_i(\tau)$ shows up because our transition destroys $s_i(\tau)$ things of the $i$th species and creates $t_i(\tau)$ of them. The big product

$\displaystyle{ x^{s(\tau)} = x_1^{s_1(\tau)} \cdots x_k^{s_k(\tau)} }$

shows up because our transition occurs at a rate proportional to the product of the numbers of things it takes as inputs. The constant of proportionality is the reaction rate $r(\tau).$
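To make this concrete, here is a minimal Python sketch of the right-hand side of the rate equation under mass-action kinetics. The data layout and the function name are mine, purely for illustration:

```python
# A transition tau is stored as (source, target, rate), where source and
# target are tuples of natural numbers: the coefficients s(tau), t(tau).

def rate_rhs(x, transitions):
    """Return dx/dt, one component per species, for concentrations x."""
    n = len(x)
    dxdt = [0.0] * n
    for source, target, rate in transitions:
        # x^{s(tau)} = x_1^{s_1(tau)} * ... * x_k^{s_k(tau)}
        flux = rate
        for xi, si in zip(x, source):
            flux *= xi ** si
        # the transition destroys s_i(tau) and creates t_i(tau) of species i
        for i in range(n):
            dxdt[i] += flux * (target[i] - source[i])
    return dxdt

# Example: the single transition A + B -> 2B with rate constant 1
print(rate_rhs([2.0, 3.0], [((1, 1), (0, 2), 1.0)]))  # -> [-6.0, 6.0]
```

With $x = (2,3)$ the flux is $1 \cdot 2^1 \cdot 3^1 = 6$, so A is consumed at rate 6 while B is produced at rate 6.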

The deficiency zero theorem says lots of things, but in the next few episodes we’ll prove a weak version, like this:

Deficiency Zero Theorem (Baby Version). Suppose we have a weakly reversible reaction network with deficiency zero. Then for any choice of rate constants there exists an equilibrium solution $x \in (0,\infty)^S$ of the rate equation. In other words:

$\displaystyle{ \sum_{\tau \in T} r(\tau) \left(t_i(\tau) - s_i(\tau)\right) x^{s(\tau)} = 0}$

An important feature of this result is that all the components of the vector $x$ are positive. In other words, there’s actually some chemical of each species present!

But what do the hypotheses in this theorem mean?

A reaction network is weakly reversible if for any transition $\tau \in T$ going from a complex $\kappa$ to a complex $\kappa',$ there is a sequence of transitions going from $\kappa'$ back to $\kappa.$ But what about ‘deficiency zero’? As I mentioned, this requires more work to understand.
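Checking weak reversibility is a small graph-search exercise. Here is a Python sketch (the names are mine): for each transition we look for a directed path of transitions leading from its target back to its source.

```python
from collections import defaultdict, deque

def is_weakly_reversible(edges):
    """edges: list of (source, target) pairs between complexes.
    Weakly reversible: for every transition k -> k', some directed path
    of transitions leads from k' back to k."""
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)

    def reachable(start, goal):
        # breadth-first search following the arrows
        seen, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            if v == goal:
                return True
            for w in succ[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    return all(reachable(b, a) for a, b in edges)

print(is_weakly_reversible([("A", "B"), ("B", "A")]))             # True
print(is_weakly_reversible([("A", "B"), ("B", "C"), ("C", "A")])) # True: a directed cycle
print(is_weakly_reversible([("A", "B"), ("B", "C")]))             # False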

So, let’s dive in!

Deficiency

In modern math, we like to take all the stuff we’re thinking about and compress it into a diagram with a few objects and maps going between these objects. So, to understand the deficiency zero theorem, I wanted to isolate the crucial maps. For starters, there’s an obvious map

$Y : K \to \mathbb{N}^S$

sending each complex to the linear combination of species that it is.

Indeed, we can change viewpoints a bit and think of $K$ as an abstract set equipped with this map saying how each complex is made of species. From now on this is what we’ll do. Then all the information in a stochastic reaction network sits in this diagram:

This is fundamental to everything we’ll do for the next few episodes, so take a minute to lock it into your brain.

We’ll do lots of different things with this diagram. For example, we often want to use ideas from linear algebra, and then we want our maps to be linear. For example, $Y$ extends uniquely to a linear map

$Y : \mathbb{R}^K \to \mathbb{R}^S$

sending real linear combinations of complexes to real linear combinations of species. Reusing the name $Y$ here won’t cause confusion. We can also extend $r,$ $s,$ and $t$ to linear maps in a unique way, getting a little diagram like this:

Linear algebra lets us talk about differences of complexes. Each transition $\tau$ gives a vector

$\partial(\tau) = t(\tau) - s(\tau) \in \mathbb{R}^K$

saying the change in complexes that it causes. And we can extend $\partial$ uniquely to a linear map

$\partial : \mathbb{R}^T \to \mathbb{R}^K$

defined on linear combinations of transitions. Mathematicians would call $\partial$ a boundary operator.

So, we have a little sequence of linear maps

$\mathbb{R}^T \stackrel{\partial}{\to} \mathbb{R}^K \stackrel{Y}{\to} \mathbb{R}^S$

This turns a transition into a change in complexes, and then a change in species.
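Concretely, once we fix bases both maps are just matrices. Here is a Python sketch for a tiny made-up network $A \to B$, $B \to A + C$ (the variable names and the example are mine):

```python
# Tiny illustration: the network A -> B, B -> A + C
species = ["A", "B", "C"]
complexes = [{"A": 1}, {"B": 1}, {"A": 1, "C": 1}]  # K as multisets of species
transitions = [(0, 1), (1, 2)]                      # (source, target) indices into K

# Y : R^K -> R^S, column k = the complex k written in species coordinates
Y = [[c.get(s, 0) for c in complexes] for s in species]

# boundary : R^T -> R^K, column tau = e_{t(tau)} - e_{s(tau)}
boundary = [[0] * len(transitions) for _ in complexes]
for j, (src, tgt) in enumerate(transitions):
    boundary[src][j] -= 1
    boundary[tgt][j] += 1

print(Y)         # [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
print(boundary)  # [[-1, 0], [1, -1], [0, 1]]
```

Each column of $Y$ is a complex written in species coordinates, and each column of $\partial$ is $e_{t(\tau)} - e_{s(\tau)}$.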

If you know fancy math you’ll be wondering if this sequence is a ‘chain complex’, which is a fancy way of saying that $Y \partial = 0.$ The answer is no. This equation means that every linear combination of reactions leaves the amount of all species unchanged. Or equivalently: every reaction leaves the amount of all species unchanged. This only happens in very silly examples.

Nonetheless, it’s possible for a linear combination of reactions to leave the amount of all species unchanged.

For example, this will happen if we have a linear combination of reactions that leaves the amount of all complexes unchanged. But this sufficient condition is not necessary. And this leads us to the concept of ‘deficiency zero’:

Definition. A reaction network has deficiency zero if any linear combination of reactions that leaves the amount of every species unchanged also leaves the amount of every complex unchanged.

In short, a reaction network has deficiency zero iff

$Y (\partial \rho) = 0 \implies \partial \rho = 0$

for every $\rho \in \mathbb{R}^T.$ In other words—using some basic concepts from linear algebra—a reaction network has deficiency zero iff $Y$ is one-to-one when restricted to the image of $\partial.$ Remember, the image of $\partial$ is

$\mathrm{im} \partial = \{ \partial \rho \; : \; \rho \in \mathbb{R}^T \}$

Roughly speaking, this consists of all changes in complexes that can occur due to reactions.

In still other words, a reaction network has deficiency zero if 0 is the only vector in both the image of $\partial$ and the kernel of $Y:$

$\mathrm{im} \partial \cap \mathrm{ker} Y = \{ 0 \}$

Remember, the kernel of $Y$ is

$\mathrm{ker} Y = \{ \phi \in \mathbb{R}^K : Y \phi = 0 \}$

Roughly speaking, this consists of all changes in complexes that don’t cause changes in species. So, ‘deficiency zero’ roughly says that if a reaction causes a change in complexes, it causes a change in species.

(All this ‘roughly speaking’ stuff is because in reality I should be talking about linear combinations of transitions, complexes and species. But it’s a bit distracting to keep saying that when I’m trying to get across the basic ideas!)

Now we’re ready to understand deficiencies other than zero, at least a little. They’re defined like this:

Definition. The deficiency of a reaction network is the dimension of $\mathrm{im} \partial \cap \mathrm{ker} Y.$

How to compute the deficiency

You can compute the deficiency of a reaction network just by looking at it. However, it takes a little training. First, remember that a reaction network can be drawn as a graph with complexes as vertices and transitions as edges, like this:

There are three important numbers we can extract from this graph:

• We can count the number of vertices in this graph; let’s call that $|K|,$ since it’s just the number of complexes.

• We can count the number of pieces or ‘components’ of this graph; let’s call that $\# \mathrm{components}$ for now.

• We can also count the dimension of the image of $Y \partial.$ This space, $\mathrm{im} Y \partial,$ is called the stoichiometric subspace: vectors in here are changes in species that can be accomplished by transitions in our reaction network, or linear combinations of transitions.

These three numbers, all rather easy to compute, let us calculate the deficiency:

Theorem. The deficiency of a reaction network equals

$|K| - \# \mathrm{components} - \mathrm{dim} \left( \mathrm{im} Y \partial \right)$

Proof. By definition we have

$\mathrm{deficiency} = \dim \left( \mathrm{im} \partial \cap \mathrm{ker} Y \right)$

but another way to say this is

$\mathrm{deficiency} = \mathrm{dim} (\mathrm{ker} Y|_{\mathrm{im} \partial})$

where we are restricting $Y$ to the subspace $\mathrm{im} \partial,$ and taking the dimension of the kernel of that.

The rank-nullity theorem says that whenever you have a linear map $f: V \to W$ between finite-dimensional vector spaces, then

$\mathrm{dim} \left(\mathrm{ker} f \right) \; = \; \mathrm{dim}\left(\mathrm{dom} f\right) \; - \; \mathrm{dim} \left(\mathrm{im} f\right)$

where $\mathrm{dom} f$ is the domain of $f,$ namely the vector space $V.$ It follows that

$\mathrm{dim}(\mathrm{ker} Y|_{\mathrm{im} \partial}) = \mathrm{dim}(\mathrm{dom} Y|_{\mathrm{im} \partial}) - \mathrm{dim}(\mathrm{im} Y|_{\mathrm{im} \partial})$

The domain of $Y|_{\mathrm{im} \partial}$ is just $\mathrm{im} \partial,$ while its image equals $\mathrm{im} Y \partial,$ so

$\mathrm{deficiency} = \mathrm{dim}(\mathrm{im} \partial) - \mathrm{dim}(\mathrm{im} Y \partial)$

The theorem then follows from this:

Lemma. $\mathrm{dim} (\mathrm{im} \partial) = |K| - \# \mathrm{components}$

Proof. In fact this holds whenever we have a finite set of complexes and a finite set of transitions going between them. We get a diagram

We can extend the source and target functions to linear maps as usual:

and then we can define $\partial = t - s.$ We claim that

$\mathrm{dim} (\mathrm{im} \partial) = |K| - \# \mathrm{components}$

where $\# \mathrm{components}$ is the number of connected components of the graph with $K$ as vertices and $T$ as edges.

This is easiest to see using an inductive argument where we start by throwing out all the edges of our graph and then put them back in one at a time. When our graph has no edges, $\partial = 0$ and the number of components is $|K|,$ so our claim holds: both sides of the equation above are zero. Then, each time we put in an edge, there are two choices: either it goes between two different components of the graph we have built so far, or it doesn’t. If it goes between two different components, it increases $\mathrm{dim} (\mathrm{im} \partial)$ by 1 and decreases the number of components by 1, so our equation continues to hold. If it doesn’t, neither of these quantities changes, so our equation again continues to hold.   █
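The proof boils the deficiency down to $\mathrm{dim}(\mathrm{im} \partial) - \mathrm{dim}(\mathrm{im} Y \partial)$, which is easy to check numerically. Here is a standard-library Python sketch (data layout and names are mine) applied to the five-complex network computed in the examples below:

```python
from fractions import Fraction

def rank(mat):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in mat]
    r, cols = 0, (len(m[0]) if m else 0)
    for c in range(cols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# The five-complex network from the examples below:
# A <-> B,  A+C -> D,  B+E -> D,  A+C -> B+E
species = ["A", "B", "C", "D", "E"]
complexes = [{"A": 1}, {"B": 1}, {"D": 1}, {"A": 1, "C": 1}, {"B": 1, "E": 1}]
transitions = [(0, 1), (1, 0), (3, 2), (4, 2), (3, 4)]  # (source, target) in K

# Y : R^K -> R^S and the boundary operator : R^T -> R^K, as matrices
Y = [[c.get(s, 0) for c in complexes] for s in species]
boundary = [[0] * len(transitions) for _ in complexes]
for j, (src, tgt) in enumerate(transitions):
    boundary[src][j] -= 1
    boundary[tgt][j] += 1

# Matrix product Y * boundary: each column is the species change a transition causes
Yd = [[sum(Y[i][k] * boundary[k][j] for k in range(len(complexes)))
       for j in range(len(transitions))] for i in range(len(species))]

deficiency = rank(boundary) - rank(Yd)
print(rank(boundary), rank(Yd), deficiency)  # -> 3 3 0
```

The graph formula gives the same answer: this network has 5 complexes and 2 components, so $5 - 2 - 3 = 0$, matching $\mathrm{dim}(\mathrm{im} \partial) = |K| - \# \mathrm{components} = 3$ from the lemma.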

Examples

This reaction network is not weakly reversible, since we can get from $B + E$ and $A + C$ to $D$ but not back. It becomes weakly reversible if we throw in another transition:

Taking a reaction network and throwing in the reverse of an existing transition never changes the number of complexes, the number of components, or the dimension of the stoichiometric subspace. So, the deficiency of the reaction network remains unchanged. We computed it back in Part 17. For either reaction network above:

• the number of complexes is 5:

$|K| = |\{A,B,D, A+C, B+E\}| = 5$

• the number of components is 2:

$\# \mathrm{components} = 2$

• the dimension of the stoichiometric subspace is 3. For this we go through each transition, see what change in species it causes, and take the vector space spanned by these changes. Then we find a basis for this space and count it:

$\mathrm{im} Y \partial =$

$\langle B - A, A - B, D - A - C, D - (B+E) , (B+E)-(A+C) \rangle$

$= \langle B -A, D - A - C, D - A - E \rangle$

so

$\mathrm{dim} \left(\mathrm{im} Y \partial \right) = 3$

As a result, the deficiency is zero:

$\begin{array}{ccl} \mathrm{deficiency} &=& |K| - \# \mathrm{components} - \mathrm{dim} \left( \mathrm{im} Y \partial \right) \\ &=& 5 - 2 - 3 \\ &=& 0 \end{array}$

Now let’s throw in another complex and some more transitions:

Now:

• the number of complexes increases by 1:

$|K| = |\{A,B,D,E, A+C, B+E\}| = 6$

• the number of components is unchanged:

$\# \mathrm{components} = 2$

• the dimension of the stoichiometric subspace increases by 1. We never need to include reverses of transitions when computing this:

$\mathrm{im} Y \partial =$

$\langle B-A, E-B, D - (A+C),$
$D - (B+E) , (B+E)-(A+C) \rangle$

$= \langle B -A, E-B, D - A - C, D - B - E \rangle$

so

$\mathrm{dim} \left(\mathrm{im} Y \partial \right) = 4$

As a result, the deficiency is still zero:

$\begin{array}{ccl} \mathrm{deficiency} &=& |K| - \# \mathrm{components} - \mathrm{dim} \left( \mathrm{im} Y \partial \right) \\ &=& 6 - 2 - 4 \\ &=& 0 \end{array}$

Do all reaction networks have deficiency zero? That would be nice. Let’s try one more example:

• the number of complexes is the same as in our last example:

$|K| = |\{A,B,D,E, A+C, B+E\}| = 6$

• the number of components is also the same:

$\# \mathrm{components} = 2$

• the dimension of the stoichiometric subspace is also the same:

$\mathrm{im} Y \partial =$

$\langle B-A, D - (A+C), D - (B+E) ,$
$(B+E)-(A+C), (B+E) - B \rangle$

$= \langle B -A, D - A - C, D - B , E\rangle$

so

$\mathrm{dim} \left(\mathrm{im} Y \partial \right) = 4$

So the deficiency is still zero:

$\begin{array}{ccl} \mathrm{deficiency} &=& |K| - \# \mathrm{components} - \mathrm{dim} \left( \mathrm{im} Y \partial \right) \\ &=& 6 - 2 - 4 \\ &=& 0 \end{array}$

It’s sure easy to find examples with deficiency zero!

Puzzle 1. Can you find an example where the deficiency is not zero?

Puzzle 2. If you can’t find an example, prove the deficiency is always zero. If you can, find 1) the smallest example and 2) the smallest example that actually arises in chemistry.

Note that not all reaction networks can actually arise in chemistry. For example, the transition $A \to A + A$ would violate conservation of mass. Nonetheless a reaction network like this might be useful in a very simple model of amoeba reproduction, one that doesn’t take limited resources into account.

Different kinds of graphs

I’ll conclude with some technical remarks that only a mathematician could love; feel free to skip them if you’re not in the mood. As you’ve already seen, it’s important that a reaction network can be drawn as a graph:

But there are many kinds of graph. What kind of graph is this, exactly?

As I mentioned in Part 17, it’s a directed multigraph, meaning that the edges have arrows on them, more than one edge can go from one vertex to another, and we can have any number of edges going from a vertex to itself. Not all those features are present in this example, but they’re certainly allowed by our definition of reaction network!

After all, we’ve seen that a stochastic reaction network amounts to a little diagram like this:

If we throw out the rate constants, we’re left with a reaction network. So, a reaction network is just a diagram like this:

If we throw out the information about how complexes are made of species, we’re left with an even smaller diagram:

And this precisely matches the slick definition of a directed multigraph: it’s a set $E$ of edges, a set $V$ of vertices, and functions $s,t : E \to V$ saying where each edge starts and where it ends: its source and target.
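In code, the slick definition is literally just this data; here is a minimal Python sketch (the class and field names are mine):

```python
from dataclasses import dataclass

@dataclass
class DirectedMultigraph:
    """A set E of edges, a set V of vertices, and maps s, t : E -> V."""
    vertices: set
    edges: set
    source: dict  # s : E -> V
    target: dict  # t : E -> V

# Parallel edges and self-loops are allowed, since s and t are arbitrary maps:
g = DirectedMultigraph(
    vertices={"v", "w"},
    edges={"e1", "e2", "e3"},
    source={"e1": "v", "e2": "v", "e3": "w"},
    target={"e1": "w", "e2": "w", "e3": "w"},  # e1, e2 parallel; e3 a self-loop
)
print(g.source["e2"], "->", g.target["e2"])  # v -> w
```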

Since this series is about networks, we should expect to run into many kinds of graphs. While their diversity is a bit annoying at first, we must learn to love it, at least if we’re mathematicians and want everything to be completely precise.

There are at least $2^3 = 8$ kinds of graphs, depending on our answers to three questions:

• Do the edges have arrows on them?

• Can more than one edge go between a specific pair of vertices?

and

• Can an edge go from a vertex to itself?

We get directed multigraphs if we answer YES, YES and YES to these questions. Since they allow all three features, directed multigraphs are very important. For example, a category is a directed multigraph equipped with some extra structure. Also, what mathematicians call a quiver is just another name for a directed multigraph.

We’ve met two other kinds of graph so far:

• In Part 15 and Part 16 we described circuits made of resistors—or, in other words, Dirichlet operators—using ‘simple graphs’. We get simple graphs when we answer NO, NO and NO to the three questions above. The slick definition of a simple graph is that it’s a set $V$ of vertices together with a subset $E$ of the collection of 2-element subsets of $V$.

• In Part 20 we described Markov processes on finite sets—or, in other words, infinitesimal stochastic operators—using ‘directed graphs’. We get directed graphs when we answer YES, NO and YES to the three questions. The slick definition of a directed graph is that it’s a set $V$ of vertices together with a subset $E$ of the ordered pairs of $V$: $E \subseteq V \times V$.

There is a lot to say about this business, but for now I’ll just note that you can use directed multigraphs with edges labelled by positive numbers to describe Markov processes, just as we used directed graphs. You don’t get anything more general, though! After all, if we have multiple edges from one vertex to another, we can replace them by a single edge as long as we add up their rate constants. And an edge from a vertex to itself has no effect at all.
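For example, collapsing a rate-labelled directed multigraph to a directed graph takes only a few lines; here is a Python sketch with made-up names:

```python
from collections import defaultdict

def merge_parallel_edges(edges):
    """Turn a rate-labelled directed multigraph into a directed graph:
    parallel edges pool their rate constants, and self-loops are dropped
    (they have no effect on the Markov process)."""
    rates = defaultdict(float)
    for a, b, r in edges:
        if a != b:
            rates[(a, b)] += r
    return dict(rates)

# Two parallel edges from x to y merge; the self-loop at y disappears:
print(merge_parallel_edges([("x", "y", 1.0), ("x", "y", 2.0), ("y", "y", 5.0)]))
# -> {('x', 'y'): 3.0}
```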

In short, both for Markov processes and reaction networks, we can take ‘graph’ to mean either ‘directed graph’ or ‘directed multigraph’, as long as we make some minor adjustments. In what follows, we’ll use directed multigraphs for both Markov processes and reaction networks. It will work a bit better if we ever get around to explaining how category theory fits into this story.

35 Responses to Network Theory (Part 21)

1. Qiaochu Yuan says:

Can I convince you to switch your notation for finite linear combinations of elements of a set $S$ to $\mathbb{N}[S]$? Your notation $\mathbb{N}^S$ ought to refer to functions $S \to \mathbb{N}$; in particular, it ought to refer to a contravariant operation on sets, whereas $\mathbb{N}[S]$ refers to a covariant operation, and it also specializes correctly to e.g. a monoid algebra.

• John Baez says:

I’m afraid you can’t convince me to change my notation, at least not in this series of articles.

Privately I always use $\mathbb{N}[S]$, just as you suggest, for the set of finite linear combinations of elements of a set $S$ with natural number coefficients. As you can see from this post, it’s also crucial to switch to real coefficients, and then the right notation is $\mathbb{R}[S]$.

However I’m aiming these posts (and the ensuing book) at people who aren’t übermathematicians: that’s why I’m doing things like reminding people what the kernel of a linear map is.

I have a feeling that engineers and chemists and physicists and biologists will be more comfortable with expressions like $\mathbb{N}^S$ and $\mathbb{R}^S$—especially the latter, since it looks like $\mathbb{R}^n$, and means essentially the same thing when $S$ has $n$ elements.

But for this cutting-corners strategy to be honest, it’s crucial that the proof of the deficiency zero theorem only needs the case where $S$ is finite! As you note, there are two functors from Set to Vect, a covariant one

$S \mapsto \mathbb{R}[S]$

and a contravariant one

$S \mapsto \mathbb{R}^S$

They give isomorphic vector spaces only for finite sets. My plan is to only write $\mathbb{R}^S$ but mean $\mathbb{R}[S]$ when these two are isomorphic.

If I ever got into studying morphisms between stochastic reaction networks, these issues could become more important.

Of course, the great thing about blog posts, as compared to a book, is that we can have side conversations where the übermathematicians point out subtleties that don’t show up in the main text!

the term:

$\begin{array}{ccl} \mathrm{im} Y d &=& \langle B-A, A-B, D - A - C, D - (B+E), (B+E)-(A+C) \rangle \\ &=& \langle B - A, D - A - C, D - A - E \rangle \end{array}$

is not fully visible, however I don’t know whether this is due to firefox.

this seems somehow hidden in your proof, but I may have missed where you mentioned explicitly that components have to be distinguishable or shouldn’t be contained in each other, since in principle one could also choose e.g. the “two” components A -> B and A -> B (or B -> A) and get a deficiency of -1. If the components have to be distinguishable with respect to their ingredients, then one could take e.g. the triangle A -> B -> C -> A (imagine this to be a closed triangle) as one component and A -> B as the other, and also get a deficiency of -1, if I understood correctly. So one would eventually want to exclude these cases explicitly, or is this self-understood?

A->B and B ->C have a deficiency of -1 if I understood correctly.

there seems to be a misprint in:
linear algebra—a reaction network has deficiency zero iff Y is one-to-one when restricted to the image of d. Remember, the image of

• John Baez says:

the term […] is not fully visible,

Thanks! Actually three different terms like that were not completely visible—I’ve fixed them now. The problem is the skinny-column format of this blog, combined with the fact that it renders LaTeX as gifs, combined with the fact that if the gif is too long to fit, the end just becomes invisible. The equations usually look better on my website, and you can read Part 21 over there now.

I may have somehow missed that you mentioned explicitly that components have to be distinguishable or shouldnt be contained in each other, since in principle one could also choose e.g. the “two” components A -> B and A -> B (or B -> A) …

Those aren’t components. You don’t get to choose the components; they’re determined.

A reaction network is a graph with complexes as vertices and transitions as edges. A ‘component’ is roughly a maximal connected subgraph of this—or even more roughly, a ‘chunk’ of your graph that’s as big as possible without falling into separate pieces.

So, if you have only two complexes $A$ and $B$ and only two transitions $A \to B$ and $B \to A$, we get the following reaction network:

$A {\rightarrow \atop \leftarrow} B$

and this has exactly one component: the whole graph. On the other hand, this reaction network:

has exactly 2 components.

I said ‘roughly’ because there are different kinds of graphs, and at least two concepts of ‘component’. The concept of component we need seems really intuitive to me, but let me be more precise.

Our graphs are directed multigraphs, defined as in the blog entry. There are two famous concepts of ‘component’ for these graphs: strongly connected components and connected components. We want to use connected components.

Two vertices $x, y$ lie in the same strongly connected component iff you can find a directed path of edges from $x$ to $y,$ and also one from $y$ to $x.$ The crucial word here is ‘directed’: a directed path of edges from $x$ to $y$ is one that follows the direction of the arrows, like this:

$x \to a \to b \to c \to y$

This path from $x$ to $y$ is not directed:

$x \to a \leftarrow b \to c \to y$

Two vertices $x$ and $y$ lie in the same connected component iff you can find a path of edges, possibly not directed, going from $x$ to $y.$ (And in this case, there will automatically also be a path of this sort going from $y$ back to $x.$)

So, $x$ and $y$ lie in the same connected component here, but not the same strongly connected component:

$x \to a \leftarrow b \to c \to y$

Here’s a graph with one connected component and 3 strongly connected components, which are marked in blue:

For the theory we’re looking at now, we only care about connected components, not strongly connected components!

However, for weakly reversible reaction networks, the connected components are the same as the strongly connected components. (This is a good exercise, to make sure one understands all the definitions.)
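Here’s a little Python sketch (naive search, names mine) that computes both kinds of component for the path $x \to a \leftarrow b \to c \to y$ above:

```python
from collections import defaultdict, deque

def components(edges, directed):
    """Group vertices into connected components (directed=False)
    or strongly connected components (directed=True, naive search)."""
    succ, pred = defaultdict(set), defaultdict(set)
    verts = set()
    for a, b in edges:
        succ[a].add(b)
        pred[b].add(a)
        verts |= {a, b}

    def reach(start, nbrs):
        seen, q = {start}, deque([start])
        while q:
            v = q.popleft()
            for w in nbrs[v]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        return seen

    comps, done = [], set()
    for v in sorted(verts):
        if v in done:
            continue
        if directed:
            # strongly connected: reachable along arrows both ways
            comp = reach(v, succ) & reach(v, pred)
        else:
            # connected: ignore arrow directions
            both = defaultdict(set)
            for a in succ:
                both[a] |= succ[a]
            for b in pred:
                both[b] |= pred[b]
            comp = reach(v, both)
        comps.append(sorted(comp))
        done |= comp
    return comps

edges = [("x", "a"), ("b", "a"), ("b", "c"), ("c", "y")]
print(components(edges, directed=False))  # one connected component
print(components(edges, directed=True))   # every vertex is its own SCC here
```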

3. Blake Stacey says:

Any luck since last November in making the relationship between deficiency and Euler characteristic make sense?

• John Baez says:

Thanks for bringing that up again! I know a lot more about this now. There’s a sense in which it does make sense. But I still don’t see it being useful.

First, in the theory of simplicial sets, we identify the standard n-simplex with an ordered (n+1)-element set, since an n-simplex has n+1 corners. But this raises the puzzle: what about the empty set? This corresponds to a (-1)-simplex. But what’s the use of that?

It turns out that if you crank through the details, it makes a lot of sense to identify the (-1)-simplices in a simplicial set with its connected components.

So, there’s a mutant version of the Euler characteristic of a graph which goes like this:

– #components + #vertices – #edges + #faces – …

It’s still additive, like the usual Euler characteristic, but now it vanishes for a simplicial complex with just one point. For a graph, this mutant Euler characteristic:

– #components + #vertices – #edges

looks a bit like the deficiency of a reaction network, which is:

– #components + #vertices – dim(stoichiometric subspace)

Remember that the stoichiometric subspace is a vector space spanned by one vector for each edge in our graph. The edge is a chemical reaction, and the vector is the change in species caused by that reaction. So instead of merely counting the edges, we are counting the dimension of a certain vector space spanned by one vector per edge! The answer could be smaller, since these vectors can be linearly dependent.

So, the deficiency of a reaction network can be seen as a special case of the reduced Euler characteristic of a ‘graph internal to Vect’. I don’t think I want to bore you with the definition of the quoted phrase, or how all the details work. I’d be glad to bore anyone who asks, though.

But the problem is: now that I’m understanding the proof of the zero deficiency theorem, I still don’t see how this way of thinking about deficiency is helpful! What seems a lot more helpful is this:

Definition. The deficiency of a reaction network is the dimension of $\mathrm{im} d \cap \mathrm{ker} Y$.

You’ll see why when we get to the actual proof!

But maybe I haven’t dug deep enough yet. Besides a deficiency zero theorem, Feinberg proved a deficiency one theorem!

• Martin Feinberg, Chemical reaction network structure and the stability of complex isothermal reactors: I. The deficiency zero and deficiency one theorems, Chemical Engineering Science 42 (1987), 2229-2268.

I’m just studying this now, and perhaps when I get everything straightened out in my mind I’ll know better whether it’s helpful to think of deficiency as a sneaky sort of Euler characteristic.

• John Baez says:

By the way, this weird business of treating connected components as (-1)-dimensional entities also shows up in electrostatics. The electric field $E$ is a 1-form, which is sort of like a vector field. The electrostatic potential $\phi$ is a 0-form, which is really just a fancy name for a function. If our manifold describing space doesn’t have any holes in it we can write

$E = d \phi$

However, if we add to $\phi$ a function $f$ that’s constant on each connected component, $E$ doesn’t change, since

$d f = 0$

So in fact we can invent (-1)-forms, and say the space of (-1)-forms has a basis given by the connected components of our manifold. Then, given a function $f$ that’s constant on each connected component, we can declare

$f = d \omega$

for some (-1)-form $\omega.$ The fact that

$d f = 0$

is then a consequence of

$d d \omega = 0$

This sounds strange, I know, but it’s a special case of ‘gauge transformations of gauge transformations’, which is something that shows up more significantly in the higher $p$-form generalizations of electromagnetism, like the Kalb–Ramond field in string theory.

4. Walter Blackstock says:

The methane/oxygen reaction as written is not balanced: 2 moles of oxygen not 3 per mole of methane.

• John Baez says:

It’s a good thing we’ve got an actual chemist in the joint! I was more concerned about getting subscripts and arrows in an equation that’s not in LaTeX (because all the letters should be roman) than in getting the reaction to balance.

“Those aren’t components. You don’t get to choose the components; they’re determined.”

So now I understand from what you say that A -> B and B -> C would in your world be the same as A -> B -> C (i.e. deficiency 0), is that correct? I ask twice because in the world of RDF triples (see e.g. this) it seems that A -> B and B -> C are not automatically identified with A -> B -> C if A, B, C are so-called URIs within that community.

• John Baez says:

Sure. But now I think your puzzlement lies somewhere else. A reaction network consists of a set $S$ of species, a set $K \subseteq \mathbb{N}^S$ of complexes, and then a directed multigraph with $K$ as its set of vertices.

So, you don’t get to use the same complex as a vertex twice!

This is not a reaction network:

$A \to B \qquad \qquad B \to C$

This is:

$A \to B \to C$

We are not taking a directed graph and labelling its vertices with complexes. We are considering a directed graph whose vertices are complexes.

6. […] last time, we started by thinking of as a subset of and thus of the vector space Back then, we wrote the […]

7. Arrow says:

I thought a bit about the puzzle but I found the complex and species definition somewhat too vague.

For example what about these reactions:
A ↔ B
A+C ↔ B+C

where C is an enzyme/catalyst, meaning the two reactions are not equivalent as they have very different rates. Does it count? On one hand if A goes to B using the first reaction and then B+C goes to A+C using the second we will end up with the same amount of A, B and C, but one more complex A+C and one less B+C than we started with.

That is, if by complex we mean just one side of one reaction. If by the complex A + C we mean the amount of A and C in the reaction mixture, then of course the amount of the complex A + C won’t change.

In that second case we could then modify the interpretation of the equations and argue that there are two pools of A and B molecules – those freely moving in solution, represented by the first reaction, and those bound to enzymes C, represented by the second.

Then the amount of complex A + C would change even though the amount of all species remained the same since one free species reacted and one bound.

However one could also argue that such a bound A + C is not a complex but a species which should be represented by a different letter, say D, and then we will have A ↔ B and D ↔ E and the example won’t count.

So what I would like to know is what exactly is meant by complex here, since it cannot be the same thing as in chemistry: in chemistry not all entities which can be found on one side of a reaction equation form complexes. For example, AFAIK CH4 + 2O2 isn’t a complex in chemical terms. Chemical complexes have to be weakly bound; see for example http://www.chemicool.com/definition/complex.html

Second picture caption also has a typo, A+B should be A+C.

• Arrow says:

The arrows were eaten; there should be a double-sided arrow between A and B, and between A+C and B+C.

• John Baez says:

I tried to fix it, but I had to guess whether you wanted A → B or A ↔ B.

To get arrows in HTML you write

&rarr; for →
&larr; for ←
&harr; for ↔

so that’s what I write when I want arrows here, unless I get fancy and use LaTeX. Maybe you tried just pasting in a Unicode character? Maybe that doesn’t work? If all this is too high-tech, there’s always good old -->.

• John Baez says:

Arrow wrote:

I thought a bit about the puzzle but I found the complex and species definition somewhat too vague.

It’s not at all vague; it’s just very general—which in math, unlike normal life, is regarded as a completely different thing. I said:

So, given any finite collection of species, say $S$, let’s write $\mathbb{N}^S$ to mean the set of finite linear combinations of elements of $S$, with natural numbers as coefficients. The complexes appearing in our reactions will form a subset of this, say $K$.

and then more formally:

Definition. A reaction network $(S,\; s,t : T \to K)$ consists of:

• a finite set $S$ of species,

• a finite set $T$ of transitions,

• a finite set $K \subset \mathbb{N}^S$ of complexes,

• source and target maps $s,t : T \to K.$

Boldface means the term is being defined and you should assume no other meaning for it than what’s given here. If the word happens to remind you of some identical-looking word you know, ignore that fact!

In particular, while the deficiency zero theorem was proved by chemists, we’ve seen lots of other applications of the same math earlier in the series. And the chemists who proved it were mathematical chemists, so the word ‘complex’ is not required to mean what it usually does in chemistry. It really means nothing here except ‘a linear combination with natural number coefficients of elements of some set $S$‘.

So, we can take any finite set whatsoever to be our set of species, for example:

$S = \{ \mathrm{hungrycat}, \mathrm{fullcat}, \mathrm{mouse} \}$

and then our set of complexes could be any finite subset of $\mathbb{N}^S$, for example:

$K = \{ \mathrm{hungrycat} + \mathrm{mouse}, \mathrm{fullcat}, \mathrm{hungrycat} \}$

Then we take some set of ‘transitions’, and declare that each has some complex as its ‘source’ and some complex as its ‘target’. We usually write these transitions as arrows going from some complex (its source) to another (its target). In the example I’m doing now, we could do this:

$\mathrm{hungrycat} + \mathrm{mouse} \to \mathrm{fullcat} \to \mathrm{hungrycat}$

So, a hungry cat can eat a mouse and get full, or a full cat can get hungry.
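To see how little data this definition involves, here is the cat example encoded directly in Python. This is just a sketch; the encoding of complexes as dicts from species names to natural-number coefficients is my own choice, not anything from the post.

```python
from typing import Dict, List, Tuple

# A complex is a finite linear combination of species with natural-number
# coefficients: here, a dict from species names to coefficients.
Complex = Dict[str, int]

species = {'hungrycat', 'fullcat', 'mouse'}

# Each transition is an arrow from its source complex to its target complex,
# i.e. one entry each of the source and target maps s, t : T -> K.
transitions: List[Tuple[Complex, Complex]] = [
    ({'hungrycat': 1, 'mouse': 1}, {'fullcat': 1}),    # hungry cat eats mouse
    ({'fullcat': 1}, {'hungrycat': 1}),                # full cat gets hungry
]

# The set K of complexes is just the distinct sources and targets:
K = {tuple(sorted(c.items())) for src, tgt in transitions for c in (src, tgt)}
print(len(K))   # 3
```

The three complexes are hungrycat + mouse, fullcat, and hungrycat, matching the set $K$ above.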

For example what about these reactions:

A ↔ B

A+C ↔ B+C

where C is an enzyme/catalyst, meaning the two reactions are not equivalent as they have very different rates. Does it count?

Does it count as what? A reaction network with deficiency other than zero? Maybe; let me see!

The number of complexes is 4:

$K = \{A, B, A+C, B+C \}$

The number of components is 2, since your reaction network consists of two separate pieces.

The dimension of the stoichiometric subspace is 1, since it’s

$\langle B - A, (B+ C) - (A+C) \rangle = \langle B - A \rangle$

which is 1-dimensional. In other words, even though we have two transitions, they both effect the same change in species: an $A$ turns into a $B$ (or vice versa).

So, the deficiency is

$4 - 2 - 1 = 1$

So yeah, it has deficiency 1, not zero!

And this looks like a ‘chemically realistic example’, too—given what I know. So good, you clobbered it.

This is an interesting example because my intuition tells me the rate equation will have stable equilibrium solutions obeying many of the conclusions of the Deficiency Zero Theorem even though the reaction network is not deficiency zero. Feinberg proved a Deficiency One Theorem and someday soon I hope to understand it well enough to see what it says about examples like this!
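The computation above follows a recipe that is easy to automate: count the distinct complexes, count the connected components of the underlying undirected graph, and compute the dimension of the stoichiometric subspace. Here is a sketch in Python that checks Arrow’s example; the function names and the dict encoding of complexes are my own, not anything from the post.

```python
from fractions import Fraction

def rank(rows):
    """Exact rank of a matrix, given as a list of rows, via Gaussian elimination."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def deficiency(species, transitions):
    """Deficiency = (# complexes) - (# components) - dim(stoichiometric subspace).

    Each complex is a dict mapping species to natural-number coefficients;
    transitions is a list of (source, target) pairs."""
    def key(c):
        return tuple(sorted(c.items()))
    complexes = {}                          # distinct complexes -> index
    for src, tgt in transitions:
        for c in (src, tgt):
            complexes.setdefault(key(c), len(complexes))
    # connected components of the underlying undirected graph, via union-find
    parent = list(range(len(complexes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for src, tgt in transitions:
        parent[find(complexes[key(src)])] = find(complexes[key(tgt)])
    components = len({find(i) for i in range(len(complexes))})
    # stoichiometric subspace: span of the vectors (target - source)
    vectors = [[tgt.get(sp, 0) - src.get(sp, 0) for sp in species]
               for src, tgt in transitions]
    return len(complexes) - components - rank(vectors)

A, B = {'A': 1}, {'B': 1}
AC, BC = {'A': 1, 'C': 1}, {'B': 1, 'C': 1}
# Arrow's network: A <-> B together with A + C <-> B + C
print(deficiency(['A', 'B', 'C'], [(A, B), (B, A), (AC, BC), (BC, AC)]))  # 1
```

The same function returns 0 for A ↔ B alone: 2 complexes, 1 component, and a 1-dimensional stoichiometric subspace.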

8. John Baez says:

I’ve edited this article a bit. Instead of using $d$ to stand for the all-important map

$t - s : \mathbb{R}^T \to \mathbb{R}^K$

I’m now using $\partial$. I hope I succeeded in making this change everywhere.

The reason is that $\partial$ is what mathematicians call a ‘boundary operator’ that goes from 1-dimensional things (transitions) to 0-dimensional things (complexes), and we prefer to use $d$ for a ‘coboundary operator’ that goes the other way.

Later, when I get to electrical circuits, this will become really important. So, I realized I’d better straighten out my notation now.

• Dan says:

FYI: I see two places where the change didn’t go through as planned. The first is shortly after you introduce the boundary map, you say

“Mathematicians would call $\partial$ a boundary operator.”

using normal LaTeX rather than wordpress LaTeX. The second is where you’re reminding us what the image of the boundary map is. The sentence containing the displayed equation still uses a “d”, namely, “The image of d is…”.

And I just have to say, if those are the only two places you missed, I for one am really impressed.

• John Baez says:

Thanks a million! I’m impressed that you caught those!

John, does your definition of the natural numbers include zero?
Because if then A+0*C -> B + 0*C would be a valid complex, if I understood correctly (?) and then the second question would be whether A+ 0*C = A in your definition of complexes. Since if this would be the case then it seems there would be a rather disturbing discontinuity in the (definition of) deficiency and reaction networks.

• John Baez says:

John, does your definition of the natural numbers include zero?

Sure! It’s insane to not include zero. When you include zero, $\mathbb{N}$ is the free monoid on one generator, the free commutative monoid on one generator, and also the free rig on no generators, and the free commutative rig on no generators. It’s also the decategorification of the category of finite sets. None of these nice properties hold if you leave out zero.

It’s because $\mathbb{N}$ is a commutative rig that we can do linear algebra over it, and work with ‘finite linear combinations of species with natural coefficients’ using the usual rules of linear algebra, as long as we avoid subtraction and division.

Because if then A+0*C -> B + 0*C would be a valid complex, if I understood correctly …

No, a complex is a finite linear combination of species with natural number coefficients. In other words, the set of complexes is $\mathbb{N}^S$, where $S$ is the set of species.

So A + 0*C = A is a complex, and B + 0*C = B is a complex. But A+0*C -> B + 0*C is not a complex: that arrow there is a transition between complexes.

then the second question would be whether A+ 0*C = A in your definition of complexes.

Yes, that’s how linear combinations work: zero times something is zero, and adding zero to something doesn’t change it.

Since if this would be the case then it seems there would be a rather disturbing discontinuity in the (definition of) deficiency and reaction networks.

What’s the ‘discontinuity’? Nothing is varying continuously here; everything is discrete. But I don’t even see what you might be talking about.

I wrote:

Because if then A+0*C -> B + 0*C would be a valid complex, if I understood correctly

Sorry, I meant the “would be a valid complex” to refer to the A+0*C expression only and not to the whole expression. I thought this was evident enough from the context, but now I understand it can be misleading; I should have written it more clearly.

Arrow’s suggested network A -> B (let’s assume he meant a right arrow) and A+C -> B+C is a valid network with deficiency 1, if I understood correctly. However the network A->B, A->B (which would have deficiency -1, as I said above, if the calculations were right) seems to be not allowed, as I understood from what you said:

So, you don’t get to use the same complex as a vertex twice!

So taking everything together, from what you said the network A->B, A+0*C -> B + 0*C would not be allowed (as a 2-component network, but only as the one-component network A->B), but the network A->B, A+1*C -> B + 1*C is allowed. On the other hand, if one would allow the 2-component network A->B, A+0*C -> B + 0*C, then the deficiency would jump from -1 to +1 for the network A->B, A+1*C -> B + 1*C. In particular the networks in the family A->B, A+l*C -> B+l*C all have deficiency 1 for $l \neq 0$, and so only for $l = 0$ would the deficiency jump to -1 (if the network were allowed as a 2-component network). This is what I meant with “discontinuity”: within the family A->B, A+l*C -> B+l*C, $l \in \mathbb{N}$, you either have a jump in the number of components or in the deficiency when l becomes 0.

I am sorry that I forgot again what the use-latex command on this blog was. Was it $latex….$ ? I could imagine that it could be useful if you could place the use-latex command very visible on the side of the blog so that people could look it up.

• John Baez says:

the network A->B, A->B (which would have deficiency -1, as I said above, if the calculations were right) seems to be not allowed…

Right, it’s not allowed.

By the way, I’m not making up rules as I go: I’m just using the definition of reaction network which I gave at the start of this post. If there’s anything about that definition that seems mysterious, maybe you should ask.

Also by the way, the definition of ‘deficiency’ makes it impossible that the deficiency could be negative: it’s the dimension of the vector space

$\mathrm{im}\, \partial \cap \mathrm{ker}\, Y$

This is what I meant with “discontinuity”: within the family A->B, A+l*C -> B+l*C, $l \in \mathbb{N}$, you either have a jump in the number of components or in the deficiency when l becomes 0.

Okay. When $l$ is nonzero you have a reaction network with two components, like this:

$A \to B \qquad A + C \to B + C$

When $l$ is zero you have a reaction network with one component:

$A {\to \atop \to} B$

I think the first reaction network has deficiency 1, while the second has deficiency 0.

I am sorry that I forgot again what the use-latex command on this blog was. Was it $latex….$ ?

Almost, except it’s crucial that there be a space between the ‘latex’ and the math symbols afterwards.

I could imagine that it could be useful if you could place the use-latex command very visible on the side of the blog so that people could look it up.

That’s a good idea. I don’t see anything indicating that I have the power to do this, but I’ll try to see if it’s possible.

If there’s anything about that definition that seems mysterious, maybe you should ask.

but that’s what I did! So if I understood you now correctly, you assume that for every pair of complexes there are at most two transitions: one denoted by the symbol “rightarrow” and the other (which, as you wrote, is to be interpreted as the reverse transition) by the symbol “leftarrow”.
I could also have interpreted the arrows as “some kind of transition” (which could possibly be different, even if the complexes were the same), which led to my questions; and as already said, this information is probably hidden in your proof.

I don’t see anything indicating that I have the power to do this, but I’ll try to see if it’s possible.

I don’t know whether you have the usual administration possibilities which come with a usual wordpress package. So maybe check out whether you have on your administration page under “appearances” (bar on the right side) the topic “widgets”; if yes then go there and check whether you can activate a so-called text widget (by dragging it to the right side). If yes then double click on this and you should be able to insert text, which should appear in the side bar.

• John Baez says:

John wrote:

If there’s anything about that definition that seems mysterious, maybe you should ask.

but that’s what I did!

Definition. A reaction network consists of:

• a finite set $S$ of species,

• a finite set $T$ of transitions,

• a finite set $K \subset \mathbb{N}^S$ of complexes,

• source and target maps $s,t : T \to K.$

Maybe I’m just too much of a mathematician, but it seems to me that the beauty of definitions is that they answer an infinite number of questions in a very short time. After one understands this definition, it’s pretty easy to answer all the questions you’ve been asking. Before understanding it, there’s an infinite list of questions one could ask about what counts as a reaction network, without ever finding out for sure what all the rules are. All the rules are in the definition above.

Of course there could be ways in which this definition needs clarification: for example, it only makes sense to people who know that $\mathbb{N}^S$ means the set of functions from $S$ to $\mathbb{N}$, and that $\mathbb{N}$ means the natural numbers, and that I define the natural numbers to be the set $\{0,1,2,3,\dots\}$. So if anything like this is unclear, I should definitely explain it.

So if I understood you now correctly, you assume that for every pair of complexes there are at most two transitions: one denoted by the symbol “rightarrow” and the other (which, as you wrote, is to be interpreted as the reverse transition) by the symbol “leftarrow”.

Let’s try to answer this question starting from the definition. Can you, or anyone here, see what the definition says about this issue?

I don’t know whether you have the usual administration possibilities which come with a usual wordpress package. So maybe check out whether you have on your administration page under “appearances” (bar on the right side) the topic “widgets”; if yes then go there and check whether you can activate a so-called text widget (by dragging it to the right side). If yes then double click on this and you should be able to insert text, which should appear in the side bar.

I have all the usual powers people get with a free WordPress blog. A lot of fun ‘plugins’ require paying extra. At some point I will start paying and improve this blog in a bunch of ways. But for now, let me see if I can get a text widget. Thanks.

• John Baez says:

Okay, Nadja—I added a text widget explaining how to use LaTeX in comments here. I’ll keep it on top for a while so people are more likely to see it, then move it down a bit lower so the new posts and comments are back on top. Thanks for the suggestion!—somehow I wouldn’t have looked under Appearance/Widgets/Text to solve this problem without your help.

Since if this would be the case then it seems there would be a rather disturbing discontinuity in the (definition of) deficiency and reaction networks.

That is, I hope there were no miscomputations of the deficiencies.

• John Baez says:

I also hope so. Which computation are you worrying about?

Yes, because I had the feeling that something was missing and I wanted to find out what you had in mind.

All the rules are in the definition above.

Let’s try to answer this question starting from the definition. Can you, or anyone here, see what the definition says about this issue?

Frankly speaking I still think that you want to add in the definition that for every ordered pair there should be at most two transitions, which are the reverse of each other.

I think the definition doesn’t rule out that there could be e.g. three “different arrows” in the same direction between two complexes,
or in other words, by the definition you could have e.g. three different transitions between the same two complexes (the pair of maps $(s,t)$ need not be injective).

On the other hand, from what you had said here in the comments I understood that you want at most two types of arrows between complexes, and thus you intrinsically seem to want to have this in the definition.

But it is 35°C here and maybe I just don’t have my brain in gear.

• John Baez says:

Frankly speaking I still think that you want to add in the definition that for every ordered pair there should be at most two transitions, which are the reverse of each other.

No, I don’t want that—but this is an interesting issue, which I’ve been thinking about quite a lot. I addressed it at the end of this post, but let me say more.

There are at least $2^3 = 8$ kinds of graphs, depending on our answers to three questions:

• Do the edges have arrows on them?

• Can more than one edge go from a vertex $v$ to a vertex $w$?

and

• Can an edge go from a vertex $v$ to itself?

So: which kind of graph should we use for reaction networks?

We need the edges to have arrows on them to be able to write down the rate equation, so we’re down to four options.

An edge going from a vertex to itself means a chemical reaction that turns some complex into itself. Such reactions don’t affect the rate equation, so we can exclude them if we want. But, we can also include them if we want!

If we have several edges going from one vertex to another, we have several chemical reactions that turn one complex into another. We can combine these into one reaction without changing the rate equation, as long as we add their rate constants. So, we can exclude the possibility of multiple edges going from one vertex to another if we want. But, we can also include it if we want!
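This merging step can be sketched in a few lines of Python. The triple encoding of a reaction as (source, target, rate constant) is my own hypothetical representation, not anything from the post.

```python
from collections import defaultdict

def merge_parallel(transitions):
    """Combine parallel reactions: several reactions with the same source and
    target complex act, in the rate equation, like one reaction whose rate
    constant is the sum of theirs."""
    merged = defaultdict(float)
    for source, target, rate in transitions:
        merged[(source, target)] += rate
    return [(s, t, r) for (s, t), r in merged.items()]

reactions = [('A', 'B', 0.5), ('A', 'B', 1.5), ('B', 'C', 2.0)]
print(merge_parallel(reactions))
# the two A -> B reactions combine into one with rate constant 2.0
```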

A naive notion of efficiency urges us to leave out anything that’s not required, but in mathematics we eventually learn that leaving things out is often more work than leaving them in. Leaving out possibilities often requires extra clauses in our definitions that explicitly exclude these things. This makes it more tedious to reason with these definitions. It also makes the resulting category of mathematical gadgets have worse properties, and be definable only in a more restricted context.

So, I have chosen to study reaction networks using the type of graph that we get when we answer YES, YES and YES to the above three questions. Some people call this kind of graph a directed multigraph, where ‘multi’ means that we can have more than one edge from one vertex to another. This may sound complicated, but it’s incredibly simple: it’s just a pair of functions

$s, t: E \to V$
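In code this really is almost embarrassingly short. Here is a minimal sketch (my own encoding, not from the post) where each of the two functions is just a dict:

```python
# A directed multigraph is a pair of functions s, t : E -> V.
# Encoding each function as a dict makes parallel edges and self-loops
# automatic: nothing stops two edges from sharing the same (source, target).
V = {'A', 'B'}
E = {'e1', 'e2', 'e3'}
s = {'e1': 'A', 'e2': 'A', 'e3': 'B'}   # source map
t = {'e1': 'B', 'e2': 'B', 'e3': 'B'}   # target map

# e1 and e2 are parallel edges from A to B; e3 is a self-loop at B.
assert (s['e1'], t['e1']) == (s['e2'], t['e2'])
assert s['e3'] == t['e3']
```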

It’s worth noting that of all types of graph, this is the only one we can define internal to any category: that is, with objects of that category replacing the sets $E$ and $V$, and morphisms replacing the functions $s$ and $t$. All the other 7 kinds of graph can only be defined in restricted kinds of categories.

It’s a fun exercise to see exactly how this works. For example, suppose we’re working internal to a category $C$ and we want to define graphs with the extra property that there’s at most one edge from any vertex to any other. Then we need to require that

$(s,t) : E \to V \times V$

be a monomorphism. This makes things more complicated, but it also demands that $C$ have binary products, so we can define $V \times V.$ Or at the very least, we need the product $V \times V$ to exist.

It’s also worth noting that any category gives a directed multigraph with objects as vertices and morphisms as edges. To study categories, we definitely need to allow more than one morphism going from one object to another. And we definitely need to allow morphisms going from an object to itself!

So, this subject will mesh with category theory most easily if we use directed multigraphs. And it’s better for chemistry too. Directed multigraphs, being more general, allow for more operations. For example, we can add new edges to a directed multigraph without having to check if we’re violating a rule saying that at most one edge can go from one vertex to another.

One more thing: when our reaction network has a transition going between two complexes, adding more transitions (going either way!) between these complexes will not change the deficiency. Why? Because it won’t change the number of complexes, or the number of components, or the dimension of the stoichiometric subspace.

Similarly, adding a transition going from a complex to itself will not change the deficiency.

Thanks for giving me an excuse to talk about this stuff! There are lots of ‘design decisions’ going into these posts, which I normally don’t talk about, because I think it would be distracting. But this particular one is more interesting than most.

Also by the way, the definition of ‘deficiency’ makes it impossible that the deficiency could be negative: it’s the dimension of the vector space

$\mathrm{im}\, \partial \cap \mathrm{ker}\, Y$

Right. For the case I was talking about, one would thus actually need to count multiplicities, and in turn wouldn’t get negative deficiencies, and hence wouldn’t run into what had disturbed me.

Thanks for giving me an excuse to talk about this stuff! There are lots of ‘design decisions’ going into these posts, which I normally don’t talk about, because I think it would be distracting.

Thanks for explaining these details. I understand now that you really want this definition. Maybe I have read too many fuzzy papers and thus got too suspicious about definitions.

It could be helpful if your graphical examples (the arrow pictures) contained double arrows in the same direction and self-directed arrows. That would sort of confirm your definitions.

Given the reflections and the not-too-big resolution of my laptop monitor, my glasses, etc., it seems I had actually read the symbols:

$A {\to \atop \to} B$

$A {\to \atop \leftarrow} B$

and had falsely interpreted it as some kind of sloppiness.

It’s worth noting that of all types of graph, this is the only one we can define internal to any category: that is, with objects of that category replacing the sets E and V, and morphisms replacing the functions s and t. All the other 7 kinds of graph can only be defined in restricted kinds of categories.

Frankly speaking, I am still not as enthusiastic about category theory as you, so the above argumentation is for me less of an argument. The discussion is however interesting for me because, as I said above, I try to infiltrate the RDF community to talk about the graph you get when you identify parts of triples (i.e. (subject, predicate, object)) with “identical” URIs (labels) (in your language the triples are (complex, transition, complex), where the directions should be taken into account). Canonically one would there start with identifying the subjects and objects with identical URIs and then, as you pointed out above, eventually also think about identifying transitions.

The problem is that for RDF triples there are often subjects which have a different URI but the same meaning.
There is a project called SILK here in Berlin where (as I understood) these “synonyms” are being tracked (partially by genetic algorithms) and interlinked by a “same as” transition. These efforts are important for reducing unnecessary (and storage-intensive) duplicates and, I think, eventually also for the serialization of the RDF triples.

So far the RDF community speaks about the triples themselves as “the RDF Graph” (i.e. a highly disconnected graph), as far as I understood, and not about the graph you get when you look at the corresponding sets. But the discussion here has led me to think that maybe they would agree to the term “RDF multigraph.”

• John Baez says:

Frankly speaking, I am still not as enthusiastic about category theory as you, so the above argumentation is for me less of an argument.

I believe that understanding category theory helps me ‘follow the tao of mathematics’ and lets me do things better than I would otherwise. But I’ve spent a lot of time writing stuff that advocates the virtues of category theory and I’m pretty much done with that now; at this point it makes more sense for me to come up with good results rather than talk about how I go after them.

I believe that understanding category theory helps me ‘follow the tao of mathematics’

Just curious: where have you seen the “tao of mathematics” written down?

But speaking of historical connotations: John, you probably saw this already, but concerning your chemical reaction networks you may eventually dig up something interesting in: here.

Since I don’t have a library account I can’t easily follow the journal links, so I couldn’t really understand Sylvester’s definition of a graph; in particular I couldn’t figure out whether Sylvester used the word “graph” as being defined on a set or on a multiset. He seems to have derived the word graph from the word “chemicograph.”

(A multiset is a set in which elements can appear with finite multiplicities. According to W.D. Blizard this notion seems to have been discussed first in Dedekind’s treatise on numbers (last chapter, nr. 172), though the term “multiset” is not due to him. If I understand correctly, Dedekind called a set “ein System”, but the German word for set is “Menge”, so the expressions have changed since then.)

• John Baez says:

Just curious: where have you seen the “tao of mathematics” written down?

As Chuang-Tze said, “The Tao that can be spoken is not the eternal Tao”.

I knew Sylvester is often credited with having invented graph theory, but I haven’t read his original work on that subject. I imagine the concept of ‘set’ may have been a bit murky back then, but I don’t know the early history of that subject either!

The first link you gave doesn’t work here in Singapore. That’s true of many Google Books links.

14. Arjun Jain says:

For Puzzle 1:
$A \to A+A \to A+A+A$ has deficiency 1. If we go on adding multiples of A we can keep on increasing the deficiency, as only the number of complexes increases.

Is the following correct ? :
Also, if there is only one component, the maximum dimension of the stoichiometric space is $|K|-1$, where the number of complexes involved is $|K|$ (as if there are $n$ vertices, if we draw an edge between each without joining the first and last vertices, we need $n-1$ edges). So, the deficiency of a 1- component reaction has to be $\geq 0.$

Now, if we add more components, since the same complexes cannot appear in different components, the number of complexes in each component can be added directly to give the total number of complexes.

Also, due to possible linear dependence, the dim. of the stoichiometric space of the reaction is $\leq$ the sum of the dimensions of the stoichiometric spaces of each of the components.

So we get

$\displaystyle{ \mathrm{deficiency}(\mathrm{reaction}) \geq \sum_{\mathrm{components} \ k} \mathrm{deficiency}(k) \geq 0 }$

Of course, the deficiency is always less than the no. of complexes – the no. of components.
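Arjun’s chain example can be checked numerically. This sketch (my own, using numpy’s matrix_rank) computes the deficiency of the chain A → 2A → ⋯ → kA:

```python
import numpy as np

def chain_deficiency(k):
    """Deficiency of the chain A -> 2A -> ... -> kA.

    There are k complexes in one connected component, and every reaction
    vector is (n+1)A - nA = A, so the stoichiometric subspace is the span
    of a single vector."""
    n_complexes = k
    n_components = 1
    vectors = np.array([[1]] * (k - 1))     # one column per species (just A)
    stoich_dim = np.linalg.matrix_rank(vectors)
    return int(n_complexes - n_components - stoich_dim)

print([chain_deficiency(k) for k in range(2, 6)])   # [0, 1, 2, 3]
```

As Arjun says, each extra multiple of A adds one complex while the component count and the stoichiometric dimension stay fixed, so the deficiency grows by one each time.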
