The rough idea is to treat the existence of a stationary state for the *master equation* that’s a product of Poisson distributions as analogous to the ‘complex balanced’ condition for the *rate* equation.
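This analogy can be made precise. By a theorem of Anderson, Craciun and Kurtz, if the rate equation of a reaction network has a complex balanced equilibrium $c = (c_1, \dots, c_k)$, then the master equation has a stationary state that is a product of Poisson distributions with means $c_i$:

```latex
\Psi(n_1, \dots, n_k) \;=\; \prod_{i=1}^{k} \frac{c_i^{\,n_i} \, e^{-c_i}}{n_i!}
```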

She considers this reaction network, the **Edelstein network**, which is famous for exhibiting bistability:

It’s always hard to understand the *structure* of chemical reaction networks in biology, and also the *parameters* (rate constants).

Luckily there are some theorems that help us understand structure: certain networks can’t exhibit certain behavior regardless of their rate constants.

She’s going to talk about estimating rate constants for networks that exhibit bistability.

Abstract. The process of building useful mathematical models of cellular processes is usually hampered by high levels of uncertainty, both structural and parametric. One of the main challenges of systems biology is developing methods and tools that help to overcome this problem, and this includes results connecting structure and dynamic behaviour. Chemical reaction network theory exploits the particular structure of biochemical networks to derive results linking structural features to long-term dynamic properties (most of them are related to the presence or absence of multiple steady states, and apply regardless of parameter values). In this way, CRNT results can be used directly for model discrimination, since they allow discarding mechanistic hypotheses whose long-term dynamics contradict experimental observations.

Our aim is to develop methods that exploit inherent structural properties of biochemical reaction networks to help identify the parameters of kinetic models. The approach is based on the so-called equilibrium manifold, an algebraic variety derived within the CRNT framework, and makes use of the particular way in which the manifold equations depend on the kinetic parameters.

In case of experimental evidence of bistability, the feasible parameter space can be drastically reduced without the need for quantitative experimental data, by ruling out those regions of the parameter space where the equilibrium manifold does not fulfil a condition for multiplicity of steady states. Moreover, quantitative information about the parameters of the bistable network can be inferred from quantitative experimental dose–response data.
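As a toy illustration of the idea (not the equilibrium-manifold machinery of the paper), consider a hypothetical one-species network with rate equation $dx/dt = k_1 + k_2 x^2/(1+x^2) - k_3 x$. If experiment shows bistability, we can discard every value of $k_3$ for which three positive steady states cannot coexist, simply by counting positive real roots:

```python
import numpy as np

def positive_steady_states(k1, k2, k3):
    """Steady states of dx/dt = k1 + k2*x^2/(1+x^2) - k3*x.
    Setting the rate to zero and clearing the denominator gives the
    cubic -k3*x^3 + (k1+k2)*x^2 - k3*x + k1 = 0."""
    roots = np.roots([-k3, k1 + k2, -k3, k1])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0)

# Scan k3, keeping only values where three steady states coexist;
# evidence of bistability rules out all the other values.
bistable = [k3 for k3 in np.linspace(0.1, 2.0, 200)
            if len(positive_steady_states(0.05, 1.0, k3)) == 3]
```

Here the rate law and the constants $k_1 = 0.05$, $k_2 = 1$ are made up for illustration; the point is only that a qualitative observation (three steady states) carves a region out of parameter space.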

I will discuss how these results can be extended to design specific experiments that, with reduced experimental effort, provide valuable qualitative and quantitative information about the kinetic parameters. This improves parametric identifiability not only for bistable switches but also for networks with a single steady state, and thus facilitates the parameter estimation task in combination with standard methods.

Her talk is based on this paper:

• Irene Otero-Muras, Julio R. Banga and Antonio A. Alonso, Characterizing multistationarity regimes in biochemical reaction networks, *PLOS One*, 3 July 2012.

For fast cycles you get certain quantities conserved by the fast reactions, but which change slowly since they’re not conserved by *all* reactions.

‘Elementary modes’ and ‘approximate conserved quantities’ are a way to take the degrees of freedom of a subsystem and split them into the rapidly changing ones and the slowly changing ones. Ideally only the slowly changing ones are coupled to the environment.
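A minimal way to find such quantities (a sketch, with a made-up fast subnetwork): linear combinations of concentrations conserved by the fast reactions span the left null space of the fast reactions' stoichiometric matrix, which sympy can compute exactly:

```python
import sympy

# Rows are species (A, B, C); columns are the fast reactions.
# Hypothetical fast subnetwork: A -> B and B -> C.
N_fast = sympy.Matrix([[-1,  0],
                       [ 1, -1],
                       [ 0,  1]])

# Left null space: row vectors v with v * N_fast = 0.  Each basis
# vector gives a combination of concentrations that the fast
# reactions leave unchanged, but slow reactions may still alter.
conserved = (N_fast.T).nullspace()
# Here the only such quantity is the total A + B + C.
```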

How can we determine the slow/fast decomposition? See:

• O. Radulescu, A. N. Gorban, A. Zinovyev and A. Lilienbaum, Robust simplifications of multiscale biochemical networks, *BMC Systems Biology* **2** (2008).

**Theorem.** The multiscale approximation of an arbitrary Markov process with rate constants that are integer powers of $\varepsilon$ is a Markov process of this sort without loops and without more than one edge leaving any node.

The reaction networks in biology are too big for efficient parameter estimation, so we want to use **model reduction** to simplify them, e.g. lumping together several ‘fast’ reactions into a single reaction. But there aren’t just two time scales, fast and slow—there are *many* time scales, differing by several orders of magnitude.

In 2008 Gorban and Radulescu looked at ‘monomolecular’ reaction networks, which are really just Markov processes, with rate constants equal to $\varepsilon^\gamma$, where each reaction is labelled by an integer $\gamma$.

In other words, the time scales are integer powers of a single time scale $\varepsilon$.

• A. N. Gorban and O. Radulescu, Dynamic and static limitation in reaction networks, revisited, *Advances in Chemical Engineering* **34** (2008), 103–173.
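Here is a numerical sketch of that separation of time scales for a toy chain $A \to B \to C$ (my example, not theirs), with rate constants $\varepsilon^1$ and $\varepsilon^2$. The decay eigenvalues of the rate matrix then sit at different integer powers of $\varepsilon$:

```python
import numpy as np

eps = 0.01
k1, k2 = eps**1, eps**2   # rate constants as integer powers of eps

# Rate matrix (columns sum to zero) for the chain A -> B -> C
Q = np.array([[-k1,  0.0, 0.0],
              [ k1,  -k2, 0.0],
              [ 0.0,  k2, 0.0]])

# Eigenvalue magnitudes: 0 (the stationary state), eps^2 and eps^1
evals = sorted(abs(np.linalg.eigvals(Q)))
timescales = [1/ev for ev in evals if ev > 1e-12]
```

The two relaxation time scales come out as $\varepsilon^{-2}$ and $\varepsilon^{-1}$, separated by a factor of $1/\varepsilon = 100$, which is what lets us treat one reaction as ‘fast’ and the other as ‘slow’.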

However, I know of no relation between this circle of ideas and the funny analogy between probabilities and amplitudes that my talk was about.

*That* analogy really amounts to

probability theory : quantum theory :: $L^1$ : $L^2$

and I like to joke that the next revolution in physics will involve $L^3$ spaces.

(I don’t believe that: it’s just a joke, though you should look at Smolin’s paper.)
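Spelled out, the analogy is between the two normalization conditions:

```latex
\underbrace{\textstyle\sum_i |p_i| = 1}_{\text{probabilities: } L^1}
\qquad \text{versus} \qquad
\underbrace{\textstyle\sum_i |\psi_i|^2 = 1}_{\text{amplitudes: } L^2}
```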

(Sorry, I don’t have HTML-angles on this hilariously stupid “netbook” I’m using.)

The symplectic structure is even more interesting and more useful when applied to the modern (post-classical) versions. Hamilton-Jacobi theory is amusing, but it’s less useful than out-and-out QM, and not significantly simpler.

2) Stat mech is basically the analytic continuation of QM, continued in the direction of imaginary time. This point and its ramifications are discussed in e.g. Feynman and Hibbs **Quantum Mechanics and Path Integrals** (1965). The classical limit of QM is obtained by the method of stationary phase, whereas the classical limit of stat mech is obtained by the method of steepest descent … so the two subjects are very nearly but not quite identical.
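The standard way to state this, as in Feynman and Hibbs: the quantum time-evolution operator turns into the Boltzmann density operator when time is continued to an imaginary value,

```latex
e^{-iHt/\hbar}
\;\xrightarrow{\;t \,=\, -i\hbar\beta\;}\;
e^{-\beta H},
\qquad
Z = \operatorname{Tr} e^{-\beta H}, \quad \beta = 1/kT .
```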

Again: If we’re going to make connections, it is even more interesting and more useful to connect the modern (post-classical) versions.

FWIW note that Planck invented QM as an outgrowth from stat mech … not directly from classical mechanics. So the connections are there, and have been since Day One.

3) Many (but not all) of the familiar formulas of thermodynamics can usefully be translated to the language of differential forms. In many cases all that is required is a re-interpretation of the symbols, leaving the form of the formula unchanged; for instance we interpret dE = T dS − P dV as a *vector* equation.
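One immediate payoff of reading dE = T dS − P dV as an equation between differential forms: applying the exterior derivative to both sides and using d(dE) = 0 yields a Maxwell relation for free,

```latex
0 = d(dE) = dT \wedge dS - dP \wedge dV
\quad\Longrightarrow\quad
\left(\frac{\partial T}{\partial V}\right)_{\!S}
= -\left(\frac{\partial P}{\partial S}\right)_{\!V}.
```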

I say “not all” formulas because more than a few of the formulas you see in typical thermodynamics books are nonsense. This includes (almost) all expressions involving “dQ” or “dW”. Such things simply do not exist (except in trivial cases). Daniel Schroeder in **An Introduction to Thermal Physics** (1999) rightly calls them a crime against the laws of mathematics. With a modicum of self-discipline it is straightforward to do thermodynamics without committing such crimes.

Differential forms make thermodynamics simpler and more visually intuitive … and simultaneously more sophisticated, more powerful, and more correct.

Although there are fat books on the subject of differential topology, only the tiniest fraction of that is necessary for present purposes. An introductory discussion (including pictures) can be found at https://www.av8n.com/physics/thermo-forms.htm and the application to thermodynamics is worked out in some detail at https://www.av8n.com/physics/thermo/.
