Kinetic Networks: From Topology to Design

16 April, 2015

Here’s an interesting conference for those of you who like networks and biology:

Kinetic networks: from topology to design, Santa Fe Institute, 17–19 September, 2015. Organized by Yoav Kallus, Pablo Damasceno, and Sidney Redner.

Proteins, self-assembled materials, virus capsids, and self-replicating biomolecules go through a variety of states on the way to or in the process of serving their function. The network of possible states and possible transitions between states plays a central role in determining whether they do so reliably. The goal of this workshop is to bring together researchers who study the kinetic networks of a variety of self-assembling, self-replicating, and programmable systems to exchange ideas about, methods for, and insights into the construction of kinetic networks from first principles or simulation data, the analysis of behavior resulting from kinetic network structure, and the algorithmic or heuristic design of kinetic networks with desirable properties.

Information and Entropy in Biological Systems (Part 3)

6 April, 2015

I think you can watch live streaming video of our workshop on Information and Entropy in Biological Systems, which runs Wednesday April 8th to Friday April 10th. Later, videos will be made available in a permanent location.

To watch the workshop live, go here. Go down to where it says

Investigative Workshop: Information and Entropy in Biological Systems

Then click where it says live link. There’s nothing there now, but I’m hoping there will be when the show starts!

Below you can see the schedule of talks and a list of participants. The hours are in Eastern Daylight Time: add 4 hours to get Greenwich Mean Time. The talks start at 10 am EDT, which is 2 pm GMT.

Schedule

There will be 1½ hours of talks in the morning and 1½ hours in the afternoon for each of the 3 days, Wednesday April 8th to Friday April 10th. The rest of the time will be for discussions on different topics. We’ll break up into groups, based on what people want to discuss.

Each invited speaker will give a 30-minute talk summarizing the key ideas in some area, not their latest research so much as what everyone should know to start interesting conversations. After that, 15 minutes for questions and/or coffee.

Here’s the schedule. For the talks with links, you can already see slides or other material!

Wednesday April 8

• 9:45-10:00 — the usual introductory fussing around.
• 10:00-10:30 — John Baez, Information and entropy in biological systems.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Chris Lee, Empirical information, potential information and disinformation.
• 11:30-11:45 — questions.

• 11:45-1:30 — lunch, conversations.

• 1:30-2:00 — John Harte, Maximum entropy as a foundation for theory building in ecology.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Annette Ostling, The neutral theory of biodiversity and other competitors to the principle of maximum entropy.
• 2:45-3:00 — questions, coffee.
• 3:00-5:30 — break up into groups for discussions.

• 5:30 — reception.

Thursday April 9

• 10:00-10:30 — David Wolpert, The Landauer limit and thermodynamics of biological organisms.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Susanne Still, Efficient computation and data modeling.
• 11:30-11:45 — questions.

• 11:45-1:30 — group photo, lunch, conversations.

• 1:30-2:00 — Matina Donaldson-Matasci, The fitness value of information in an uncertain environment.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Roderick Dewar, Maximum entropy and maximum entropy production in biological systems: survival of the likeliest?
• 2:45-3:00 — questions, coffee.
• 3:00-6:00 — break up into groups for discussions.

Friday April 10

• 10:00-10:30 — Marc Harper, Information transport and evolutionary dynamics.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Tobias Fritz, Characterizations of Shannon and Rényi entropy.
• 11:30-11:45 — questions.

• 11:45-1:30 — lunch, conversations.

• 1:30-2:00 — Christina Cobbold, Biodiversity measures and the role of species similarity.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Tom Leinster, Maximizing biological diversity.
• 2:45-3:00 — questions, coffee.
• 3:00-6:00 — break up into groups for discussions.

Participants

Here are the confirmed participants. This list may change a little bit:

• John Baez – mathematical physicist.

• Romain Brasselet – postdoc in cognitive neuroscience knowledgeable about information-theoretic methods and methods of estimating entropy from samples of probability distributions.

• Katharina Brinck – grad student at the Centre for Complexity Science at Imperial College; did her master’s in John Harte’s lab, where she extended his Maximum Entropy Theory of Ecology (METE) to trophic food webs, studying how entropy maximization on the macro scale, together with maximum entropy production (MEP) on the scale of individuals, drives the structural development of model ecosystems.

• Christina Cobbold – mathematical biologist, has studied the role of species similarity in measuring biodiversity.

• Troy Day – mathematical biologist, works with population dynamics, host-parasite dynamics, etc.; influential and could help move population dynamics to a more information-theoretic foundation.

• Roderick Dewar – physicist who studies the principle of maximum entropy production.

• Barrett Deris – MIT postdoc studying the factors that influence evolvability of drug resistance in bacteria.

• Charlotte de Vries – a biology master’s student who studied particle physics to the master’s level at Oxford and the Perimeter Institute. Interested in information theory.

• Matina Donaldson-Matasci – a biologist who studies information, uncertainty and collective behavior.

• Chris Ellison – a postdoc who worked with James Crutchfield on “information-theoretic measures of structure and memory in stationary, stochastic systems – primarily, finite state hidden Markov models”. He coauthored “Intersection Information based on Common Randomness”, http://arxiv.org/abs/1310.1538. The idea: “The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of “the same information” two or more random variables specify about a target random variable. As of yet, none is wholly satisfactory.” Works on mutual information between organisms and environment (along with David Krakauer and Jessica Flack), and also entropy rates.

• Cameron Freer – MIT postdoc in Brain and Cognitive Sciences working on maximum entropy production principles, algorithmic entropy etc.

• Tobias Fritz – a physicist who has worked on “resource theories” and on characterizations of Shannon and Rényi entropy.

• Dashiell Fryer – works with Marc Harper on information geometry and evolutionary game theory.

• Michael Gilchrist – an evolutionary biologist studying how errors and costs of protein translation affect the codon usage observed within a genome. Works at NIMBioS.

• Manoj Gopalkrishnan – an expert on chemical reaction networks who understands entropy-like Lyapunov functions for these systems.

• Marc Harper – works on evolutionary game theory using ideas from information theory, information geometry, etc.

• John Harte – an ecologist who uses the maximum entropy method to predict the structure of ecosystems.

• Ellen Hines – studies habitat modeling and mapping for marine endangered species and ecosystems, sea level change scenarios, and documentation of human use and values. Her lab has used MaxEnt methods.

• Elizabeth Hobson – behavior ecology postdoc developing methods to quantify social complexity in animals. Works at NIMBioS.

• John Jungck – works on graph theory and biology.

• Chris Lee – in bioinformatics and genomics; applies information theory to experiment design and evolutionary biology.

• Maria Leites – works on dynamics, bifurcations and applications of coupled systems of non-linear ordinary differential equations with applications to ecology, epidemiology, and transcriptional regulatory networks. Interested in information theory.

• Tom Leinster – a mathematician who applies category theory to study various concepts of ‘magnitude’, including biodiversity and entropy.

• Timothy Lezon – a systems biologist in the Drug Discovery Institute at Pitt, who has used entropy to characterize phenotypic heterogeneity in populations of cultured cells.

• Maria Ortiz Mancera – statistician working at CONABIO, the National Commission for Knowledge and Use of Biodiversity, in Mexico.

• Yajun Mei – statistician who uses Kullback-Leibler divergence and studies how to efficiently compute entropy for two-state hidden Markov models.

• Robert Molzon – mathematical economist who has studied deterministic approximation of stochastic evolutionary dynamics.

• David Murrugarra – works on discrete models in mathematical biology; interested in learning about information theory.

• Annette Ostling – studies community ecology, focusing on the influence of interspecific competition on community structure, and what insights patterns of community structure might provide about the mechanisms by which competing species coexist.

• Connie Phong – grad student at Chicago’s Institute for Genomics and Systems Biology, working on how “certain biochemical network motifs are more attuned than others at maintaining strong input to output relationships under fluctuating conditions.”

• Petr Plecháč – works on information-theoretic tools for estimating and minimizing errors in coarse-graining stochastic systems. Wrote “Information-theoretic tools for parametrized coarse-graining of non-equilibrium extended systems”.

• Blake Pollard – physics grad student working with John Baez on various generalizations of Shannon and Rényi entropy, and how these entropies change with time in Markov processes and open Markov processes.

• Timothée Poisot – works on species interaction networks; developed a “new suite of tools for probabilistic interaction networks”.

• Richard Reeve – works on biodiversity studies and the spread of antibiotic resistance. Ran a program on entropy-based biodiversity measures at a mathematics institute in Barcelona.

• Rob Shaw – works on entropy and information in biotic and pre-biotic systems.

• Matteo Smerlak – postdoc working on nonequilibrium thermodynamics and its applications to biology, especially population biology and cell replication.

• Susanne Still – a computer scientist who studies the role of thermodynamics and information theory in prediction.

• Alexander Wissner-Gross – Institute Fellow at the Harvard University Institute for Applied Computational Science and Research Affiliate at the MIT Media Laboratory, interested in lots of things.

• David Wolpert – works at the Santa Fe Institute on i) information theory and game theory, ii) the second law of thermodynamics and dynamics of complexity, iii) multi-information source optimization, iv) the mathematical underpinnings of reality, v) evolution of organizations.

• Matthew Zefferman – works on evolutionary game theory, institutional economics and models of gene-culture co-evolution. No work on information, but a postdoc at NIMBioS.

Categorical Foundations of Network Theory

4 April, 2015

Jacob Biamonte got a grant from the Foundational Questions Institute to run a small meeting on network theory:

It’s being held 25-28 May 2015 in Turin, Italy, at the ISI Foundation. We’ll make slides and/or videos available, but the main goal is to bring a few people together, exchange ideas, and push the subject forward.

The idea

Network theory is a diverse subject which developed independently in several disciplines. It uses graphs with additional structure to model everything from complex systems to theories of fundamental physics.

This event aims to further our understanding of the mathematical theory underlying the relations between seemingly different networked systems. It’s part of the Azimuth network theory project.

Timetable

With the exception of the first day (Monday May 25th) we will kick things off with a morning talk, with plenty of time for questions and interaction. We will then break for lunch at 1:00 p.m. and return for an afternoon work session. People are encouraged to give informal talks and to present their ideas in the afternoon sessions.

Monday May 25th, 10:30 a.m.

Jacob Biamonte: opening remarks.

For Jacob’s work on quantum networks visit www.thequantumnetwork.org.

John Baez: network theory.

For my stuff see the Azimuth Project network theory page.

Tuesday May 26th, 10:30 a.m.

David Spivak: operadic network design.

Operads are a formalism for sticking small networks together to form bigger ones. David has a 3-part series of articles sketching his ideas on networks.

Wednesday May 27th, 10:30 a.m.

Eugene Lerman: continuous time open systems and monoidal double categories.

Eugene is especially interested in classical mechanics and networked dynamical systems, and he wrote an introductory article about them here on the Azimuth blog.

Thursday May 28th, 10:30 a.m.

Tobias Fritz: ordered commutative monoids and theories of resource convertibility.

Tobias has a new paper on this subject, and a 3-part expository series here on the Azimuth blog!

Location and contact

ISI Foundation
Via Alassio 11/c
10126 Torino — Italy

Phone: +39 011 6603090
Email: isi@isi.it
Theory group details: www.TheQuantumNetwork.org

Higher-Dimensional Rewriting in Warsaw (Part 1)

18 February, 2015

This summer there will be a conference on higher-dimensional algebra and rewrite rules in Warsaw. They want people to submit papers! I’ll give a talk about presentations of symmetric monoidal categories that arise in electrical engineering and control theory. This is part of the network theory program, which we talk about so often here on Azimuth.

There should also be interesting talks about combinatorial algebra, homotopical aspects of rewriting theory, and more:

Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland. Co-located with the RDP, RTA and TLCA conferences. Organized by Yves Guiraud, Philippe Malbos and Samuel Mimram.

Description

Over recent years, rewriting methods have been generalized from strings and terms to richer algebraic structures such as operads, monoidal categories, and more generally higher dimensional categories. These extensions of rewriting fit in the general scope of higher-dimensional rewriting theory, which has emerged as a unifying algebraic framework. This approach allows one to perform homotopical and homological analysis of rewriting systems (Squier theory). It also provides new computational methods in combinatorial algebra (Artin-Tits monoids, Coxeter and Garside structures), in homotopical and homological algebra (construction of cofibrant replacements, Koszulness property). The workshop is open to all topics concerning higher-dimensional generalizations and applications of rewriting theory, including

• higher-dimensional rewriting: polygraphs / computads, higher-dimensional generalizations of string/term/graph rewriting systems, etc.

• homotopical invariants of rewriting systems: homotopical and homological finiteness properties, Squier theory, algebraic Morse theory, coherence results in algebra and higher-dimensional category theory, etc.

• linear rewriting: presentations and resolutions of algebras and operads, Gröbner bases and generalizations, homotopy and homology of algebras and operads, Koszul duality theory, etc.

• applications of higher-dimensional and linear rewriting and their interactions with other fields: calculi for quantum computations, algebraic lambda-calculi, proof nets, topological models for concurrency, homotopy type theory, combinatorial group theory, etc.

• implementations: the workshop will also be interested in implementation issues in higher-dimensional rewriting and will allow demonstrations of prototypes of existing and new tools in higher-dimensional rewriting.

Submitting

Important dates:

• Submission: April 15, 2015

• Notification: May 6, 2015

• Final version: May 20, 2015

• Conference: 28-29 June, 2015

Submissions should consist of an extended abstract, approximately 5 pages long, in standard article format, in PDF. The page for uploading those is here. The accepted extended abstracts will be made available electronically before the workshop.

Organizers

Program committee:

• Vladimir Dotsenko (Trinity College, Dublin)

• Yves Guiraud (INRIA / Université Paris 7)

• Jean-Pierre Jouannaud (École Polytechnique)

• Philippe Malbos (Université Claude Bernard Lyon 1)

• Paul-André Melliès (Université Paris 7)

• Samuel Mimram (École Polytechnique)

• Tim Porter (University of Wales, Bangor)

• Femke van Raamsdonk (VU University, Amsterdam)

Sensing and Acting Under Information Constraints

30 October, 2014

I’m having a great time at a workshop on Biological and Bio-Inspired Information Theory in Banff, Canada. You can see videos of the talks online. There have been lots of good talks so far, but this one really blew my mind:

• Naftali Tishby, Sensing and acting under information constraints—a principled approach to biology and intelligence, 28 October 2014.

Tishby’s talk wasn’t easy for me to follow—he assumed you already knew rate-distortion theory and the Bellman equation, and I didn’t—but it was great!

It was about the ‘action-perception loop’:

This is the feedback loop in which living organisms—like us—take actions depending on our goals and what we perceive, and perceive things depending on the actions we take and the state of the world.

How do we do this so well? Among other things, we need to balance the cost of storing information about the past against the payoff of achieving our desired goals in the future.

Tishby presented a detailed yet highly general mathematical model of this! And he ended by testing the model on experiments with cats listening to music and rats swimming to land.

It’s beautiful stuff. I want to learn it. I hope to blog about it as I understand more. But for now, let me just dive in and say some basic stuff. I’ll start with the two buzzwords I dropped on you. I hate it when people use terminology without ever explaining it.

Rate-distortion theory

Rate-distortion theory is a branch of information theory which seeks to find the minimum rate at which bits must be communicated over a noisy channel so that the signal can be approximately reconstructed at the other end without exceeding a given distortion. Shannon’s first big result in this theory, the ‘rate-distortion theorem’, gives a formula for this minimum rate. Needless to say, it still requires a lot of extra work to determine and achieve this minimum rate in practice.
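For intuition, points on the rate-distortion curve can be computed numerically with the Blahut–Arimoto algorithm. Here's a minimal sketch of my own (not from the references below): it computes one point of the curve for a fair binary source under Hamming distortion, where the exact answer is $R(D) = 1 - H(D)$ with $H$ the binary entropy.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Approximate one point (D, R) on the rate-distortion curve.

    p_x:  source distribution over symbols x
    dist: distortion matrix d[x, xhat]
    beta: trade-off parameter (related to the slope of the R(D) curve)
    """
    n_x, n_xhat = dist.shape
    q = np.full(n_xhat, 1.0 / n_xhat)          # reproduction distribution q(xhat)
    for _ in range(n_iter):
        # optimal test channel Q(xhat | x) for the current q
        Q = q * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        # update the reproduction marginal q(xhat)
        q = p_x @ Q
    D = np.sum(p_x[:, None] * Q * dist)             # expected distortion
    R = np.sum(p_x[:, None] * Q * np.log2(Q / q))   # rate in bits: I(X; Xhat)
    return D, R

# Fair binary source, Hamming distortion (invented test case)
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
D, R = blahut_arimoto(p_x, dist, beta=3.0)
```

Each value of `beta` picks out one point on the curve; sweeping it traces out the whole rate-distortion function.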

For the basic definitions and a statement of the theorem, try this:

• Natasha Devroye, Rate-distortion theory, course notes, University of Illinois at Chicago, Fall 2009.

One of the people organizing this conference is a big expert on rate-distortion theory, and he wrote a book about it.

• Toby Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression, Prentice–Hall, 1971.

Unfortunately it’s out of print and selling for \$259 used on Amazon! An easier option might be this:

• Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, Chapter 10: Rate Distortion Theory, Wiley, New York, 2006.

The Bellman equation

The Bellman equation reduces the task of finding an optimal course of action to choosing what to do at each step. So, you’re trying to maximize the ‘total reward’—the sum of rewards at each time step—and Bellman’s equation says what to do at each time step.

If you’ve studied physics, this should remind you of how starting from the principle of least action, we can get a differential equation describing the motion of a particle: the Euler–Lagrange equation.

And in fact they’re deeply related. The relation is obscured by two little things. First, Bellman’s equation is usually formulated in a context where time passes in discrete steps, while the Euler–Lagrange equation is usually formulated in continuous time. Second, Bellman’s equation is really the discrete-time version not of the Euler–Lagrange equation but of a more or less equivalent thing: the ‘Hamilton–Jacobi equation’.

Ah, another buzzword to demystify! I was scared of the Hamilton–Jacobi equation for years, until I taught a course on classical mechanics that covered it. Now I think it’s the greatest thing in the world!

Briefly: the Hamilton–Jacobi equation concerns the least possible action to get from a fixed starting point to a point $q$ in space at time $t.$ If we call this least possible action $W(t,q),$ the Hamilton–Jacobi equation says

$\displaystyle{ \frac{\partial W(t,q)}{\partial q_i} = p_i }$

$\displaystyle{ \frac{\partial W(t,q)}{\partial t} = -E }$

where $p$ is the particle’s momentum at the endpoint of its path, and $E$ is its energy there.
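Here’s a quick sanity check, using a standard free-particle example (mine, not from Tishby’s talk). A free particle of mass $m$ going from the origin at time $0$ to position $q$ at time $t$ minimizes the action by moving at constant velocity $q/t,$ so the least possible action is

$\displaystyle{ W(t,q) = \tfrac{1}{2} m \left(\frac{q}{t}\right)^2 t = \frac{m q^2}{2t} }$

Differentiating:

$\displaystyle{ \frac{\partial W}{\partial q} = \frac{m q}{t} = p, \qquad \frac{\partial W}{\partial t} = -\frac{m q^2}{2t^2} = -\frac{p^2}{2m} = -E }$

just as the Hamilton–Jacobi equation says.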

If we replace derivatives by differences, and talk about maximizing total reward instead of minimizing action, we get Bellman’s equation:

Bellman equation, Wikipedia.

Markov decision processes

Bellman’s equation can be useful whenever you’re trying to figure out an optimal course of action. An important example is a ‘Markov decision process’. To prepare you for Tishby’s talk, I should say what this is.

In a Markov process, something randomly hops around from state to state with fixed probabilities. In the simplest case there’s a finite set $S$ of states, and time proceeds in discrete steps. At each time step, the probability to hop from state $s$ to state $s'$ is some fixed number $P(s,s').$

This sort of thing is called a Markov chain, or if you feel the need to be more insistent, a discrete-time Markov chain.
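In code, a discrete-time Markov chain is just a stochastic matrix applied repeatedly to a probability distribution. Here’s a toy example with invented numbers; running it long enough, the distribution settles down to the stationary distribution $\pi$ satisfying $\pi = \pi P$:

```python
import numpy as np

# P[s, s'] = probability of hopping from state s to state s' (rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])   # start surely in state 0
for _ in range(100):
    dist = dist @ P           # one time step

# For this chain, solving pi = pi P by hand gives pi = (5/6, 1/6)
```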

A Markov decision process is a generalization where an outside agent gets to change these probabilities! The agent gets to choose actions from some set $A.$ If at a given time he chooses the action $\alpha \in A,$ the probability of the system hopping from state $s$ to state $s'$ is $P_\alpha(s,s').$ Needless to say, these probabilities have to sum to one for any fixed $s.$

That would already be interesting, but the real fun is that there’s also a reward $R_\alpha(s,s').$ This is a real number saying how much joy or misery the agent experiences if he does action $\alpha$ and the system hops from $s$ to $s'.$

The problem is to choose a policy—a function from states to actions—that maximizes the total expected reward over some period of time. This is precisely the kind of thing Bellman’s equation is good for!

If you’re an economist you might also want to ‘discount’ future rewards, saying that a reward $n$ time steps in the future gets multiplied by $\gamma^n,$ where $0 < \gamma \le 1$ is some discount factor. This extra tweak is easily handled, and you can see it all here:

Markov decision process, Wikipedia.
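To make Bellman’s equation concrete, here’s a minimal value-iteration sketch for a made-up two-state, two-action Markov decision process (all the transition probabilities and rewards are invented for illustration):

```python
import numpy as np

# Toy MDP: 2 states, 2 actions.
# P[a, s, s'] = probability of hopping from s to s' under action a
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
# R[a, s, s'] = reward for that hop under that action
R = np.array([
    [[1.0, 0.0], [0.0, 2.0]],   # action 0
    [[0.5, 0.5], [1.0, 0.0]],   # action 1
])
gamma = 0.9                      # discount factor

# Bellman's equation: V(s) = max_a sum_{s'} P_a(s,s') [R_a(s,s') + gamma V(s')]
V = np.zeros(2)
for _ in range(1000):
    Q = np.einsum('ast,ast->as', P, R + gamma * V)   # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)        # optimal action to take in each state
```

Iterating the Bellman update is a contraction (for $\gamma < 1$), so $V$ converges to the optimal expected discounted reward from each state, and reading off the maximizing action gives the optimal policy.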

Partially observable Markov decision processes

There’s a further generalization where the agent can’t see all the details of the system! Instead, when he takes an action $\alpha \in A$ and the system hops from state $s$ to state $s',$ he sees something: a point in some set $O$ of observations. He makes the observation $o \in O$ with probability $\Omega_\alpha(o,s').$

(I don’t know why this probability depends on $s'$ but not $s.$ Maybe it ultimately doesn’t matter much.)

Again, the goal is to choose a policy that maximizes the expected total reward. But a policy is a bit different now. The action at any time can only depend on all the observations made thus far.
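Although the agent can’t see the state, it can maintain a ‘belief’: a probability distribution over states, updated by Bayes’ rule each time it acts and observes. In the notation above, after action $\alpha$ and observation $o,$ the new belief is proportional to $\Omega_\alpha(o,s') \sum_s P_\alpha(s,s')\, b(s).$ A sketch with made-up numbers:

```python
import numpy as np

def belief_update(b, P_a, Omega_a, o):
    """Bayesian belief update for a POMDP.

    b:       current belief over states, b[s]
    P_a:     transition matrix for the chosen action, P_a[s, s']
    Omega_a: observation probabilities for that action, Omega_a[o, s']
    o:       index of the observation actually made
    """
    b_pred = b @ P_a                  # predict: sum_s b(s) P_a(s, s')
    b_new = Omega_a[o] * b_pred       # weight by likelihood of the observation
    return b_new / b_new.sum()        # normalize

# Invented 2-state example
b = np.array([0.5, 0.5])
P_a = np.array([[0.7, 0.3],
                [0.4, 0.6]])
Omega_a = np.array([[0.9, 0.2],      # observation 0 likelihoods given s'
                    [0.1, 0.8]])     # observation 1 likelihoods given s'
b = belief_update(b, P_a, Omega_a, o=0)
```

This belief is a ‘sufficient statistic’ for the observation history, which is why POMDP policies are often described as functions of the belief rather than of the raw observations.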

Partially observable Markov decision processes are also called POMDPs. If you want to learn about them, try these:

Partially observable Markov decision process, Wikipedia.

• Tony Cassandra, Partially observable Markov decision processes.

The latter includes an introduction without any formulas to POMDPs and how to choose optimal policies. I’m not sure who would study this subject and not want to see formulas, but it’s certainly a good exercise to explain things using just words—and it makes certain things easier to understand (though not others, in a way that depends on who is trying to learn the stuff).

The action-perception loop

I already explained the action-perception loop, with the help of this picture from the University of Bielefeld’s Department of Cognitive Neuroscience:

Naftali Tishby has a nice picture of it that’s more abstract:

We’re assuming time comes in discrete steps, just to keep things simple.

At each time $t$

• the world has some state $W_t,$ and
• the agent has some state $M_t.$

Why the letter $M$? This stands for memory: it can be the state of the agent’s memory, but I prefer to think of it as the state of the agent.

At each time

• the agent takes an action $A_t,$ which affects the world’s next state, and

• the world provides a sensation $S_t$ to the agent, which affects the agent’s next state.

This is simplified but very nice. Note that there’s a symmetry interchanging the world and the agent!

We could make it fancier by having lots of agents who all interact, but there are a lot of questions already. The big question Tishby focuses on is optimizing how much the agent should remember about the past if they

• get a reward depending on the action taken and the resulting state of the world

but

• pay a price for the information stored from sensations.

Tishby formulates this optimization question as something like a partially observed Markov decision process, uses rate-distortion theory to analyze how much information needs to be stored to achieve a given reward, and uses Bellman’s equation to solve the optimization problem!

So, everything I sketched fits together somehow!

I hope what I’m saying now is roughly right: it will take me more time to get the details straight. If you’re having trouble absorbing all the information I just threw at you, don’t feel bad: so am I. But the math feels really natural and good to me. It involves a lot of my favorite ideas (like generalizations of the principle of least action, and relative entropy), and it seems ripe to be combined with network theory ideas.

For details, I highly recommend this paper:

• Naftali Tishby and Daniel Polani, Information theory of decisions and actions, in Perception-Action Cycle: Models, Architectures and Hardware, eds. Cutsuridis, Hussain and Taylor, Springer, Berlin, 2011.

I’m going to print this out, put it by my bed, and read it every night until I’ve absorbed it.

Entropy and Information in Biological Systems (Part 2)

4 July, 2014

John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend:

Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015.

Click the link, read the stuff and scroll down to “CLICK HERE” to apply. The deadline is 12 November 2014.

Financial support for travel, meals, and lodging is available for workshop attendees who need it. We will choose among the applicants and invite 10-15 of them.

The idea

Information theory and entropy methods are becoming powerful tools in biology, from the level of individual cells, to whole ecosystems, to experimental design, model-building, and the measurement of biodiversity. The aim of this investigative workshop is to synthesize different ways of applying these concepts to help systematize and unify work in biological systems. Early attempts at “grand syntheses” often misfired, but applications of information theory and entropy to specific highly focused topics in biology have been increasingly successful. In ecology, entropy maximization methods have proven successful in predicting the distribution and abundance of species. Entropy is also widely used as a measure of biodiversity. Work on the role of information in game theory has shed new light on evolution. As a population evolves, it can be seen as gaining information about its environment. The principle of maximum entropy production has emerged as a fascinating yet controversial approach to predicting the behavior of biological systems, from individual organisms to whole ecosystems. This investigative workshop will bring together top researchers from these diverse fields to share insights and methods and address some long-standing conceptual problems.

So, here are the goals of our workshop:

• To study the validity of the principle of Maximum Entropy Production (MEP), which states that biological systems – and indeed all open, non-equilibrium systems – act to produce entropy at the maximum rate.

• To familiarize all the participants with applications to ecology of the MaxEnt method: choosing the probabilistic hypothesis with the highest entropy subject to the constraints of our data. We will compare MaxEnt with competing approaches and examine whether MaxEnt provides a sufficient justification for the principle of MEP.

• To clarify relations between known characterizations of entropy, the use of entropy as a measure of biodiversity, and the use of MaxEnt methods in ecology.

• To develop the concept of evolutionary games as “learning” processes in which information is gained over time.

• To study the interplay between information theory and the thermodynamics of individual cells and organelles.

For more details, go here.

If you’ve got colleagues who might be interested in this, please let them know. You can download a PDF suitable for printing and putting on a bulletin board by clicking on this:

Quantum Frontiers in Network Science

6 May, 2014

guest post by Jacob Biamonte

There’s going to be a workshop on quantum network theory in Berkeley this June. The event is being organized by some of my collaborators and will be a satellite of the biggest annual network science conference, NetSci.

A theme of the Network Theory series here on Azimuth has been to merge ideas appearing in quantum theory with other disciplines. Remember the first post by John which outlined the goal of a general theory of networks? Well, everyone’s been chipping away at this stuff for a few years now and I think you’ll agree that this workshop seems like an excellent way to push these topics even further, particularly as they apply to complex networks.

The event is being organized by Mauro Faccin, Filippo Radicchi and Zoltán Zimborás. You might recall when Tomi Johnson first explained to us some ideas connecting quantum physics with the concepts of complex networks (see Quantum Network Theory Part 1 and Part 2). Tomi’s going to be speaking at this event. I understand there is even still a little bit of space left to contribute talks and/or to attend. I suspect that those interested can sort this out by emailing the organizers or just follow the instructions to submit an abstract.

They have named their event Quantum Frontiers in Network Science or QNET for short. Here’s their call.

Quantum Frontiers in Network Science

This year the biggest annual network science conference, NetSci, will take place in Berkeley, California on 2-6 June. We are organizing a one-day Satellite Workshop on Quantum Frontiers in Network Science (QNET).

A grand challenge in contemporary complex network science is to reconcile the staple “statistical mechanics based approach” with a theory based on quantum physics. When considering networks where quantum coherence effects play a non-trivial role, the predictive power of complex network science has been shown to break down. A new theory is now being developed which is based on quantum theory, from first principles. Network theory is a diverse subject which developed independently in several disciplines and relies on graphs with additional structure to model complex systems. Network science has of course played a significant role in quantum theory, for example in topics such as tensor network states, chiral quantum walks on complex networks, categorical tensor networks, and categorical models of quantum circuits, to name only a few. However, the ideas of complex network science are only now starting to be united with modern quantum theory. In this respect, one aim of the workshop is to put in contact two big and generally not very well connected scientific communities: statistical and quantum physicists.

The topic of network science underwent a revolution when it was realized that systems as different as social and transport networks could be related through common network properties. But what are the relevant properties to consider when facing quantum systems? The question is particularly timely: there has been a recent push towards studying ever larger quantum mechanical systems, and their analysis is only beginning to shift towards embracing the concepts of complex networks.

For example, theoretical and experimental attention has turned to explaining, using quantum mechanics, transport in photosynthetic complexes comprising tens to hundreds of molecules and thousands of atoms. Likewise, in condensed matter physics, in the language of “chiral quantum walks”, the topological structure of the interconnections comprising complex materials strongly affects their transport properties.
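To make “quantum walks on networks” concrete, here is a minimal sketch of a continuous-time quantum walk, in which the adjacency matrix of the network plays the role of the Hamiltonian. (This code is not from the call; the toy 4-cycle graph and the function name are my own choices for illustration.)

```python
import numpy as np

# Adjacency matrix of a 4-cycle: a toy "network". Any symmetric matrix
# can serve as the Hamiltonian of a continuous-time quantum walk.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def ctqw_probabilities(A, t, start=0):
    """Occupation probabilities |psi(t)|^2 for a walk starting at `start`."""
    w, V = np.linalg.eigh(A)                            # A is symmetric
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-iAt)
    psi0 = np.zeros(A.shape[0], dtype=complex)
    psi0[start] = 1.0                                   # walker localized at `start`
    psi = U @ psi0
    return np.abs(psi) ** 2

p = ctqw_probabilities(A, t=1.0)
print(p)  # probabilities over the 4 nodes; they sum to 1 by unitarity
```

Unlike a classical random walk, the evolution here is unitary rather than stochastic, so the walker’s probability distribution oscillates instead of relaxing to a stationary state; this difference in transport behavior is exactly the kind of effect the workshop topics above are concerned with.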

An ultimate goal is a mathematical theory and formal description which pinpoints the similarities and differences between the uses of networks throughout the quantum sciences. This would give rise to a theory of networks augmenting the current statistical mechanics approach to complex network structure, evolution, and processes with a new theory based on quantum mechanics.

Topics of special interest to the satellite include

• Quantum transport and chiral quantum walks on complex networks
• Detecting community structure in quantum systems
• Tensor algebra and multiplex networks
• Quantum information measures (such as entropy) applied to complex networks
• Quantum critical phenomena in complex networks
• Quantum models of network growth
• Quantum techniques for reaction networks
• Quantum algorithms for problems in complex network science
• Foundations of quantum theory in relation to complex networks and processes thereon
• Quantum inspired mathematics as a foundation for network science

Info

QNET will be held at the NetSci Conference venue at the Clark Kerr Campus of the University of California, on June 2nd in the morning (8am-1pm).

• Main conference page: NetSci2014
• Call for abstracts and the program

It sounds interesting! You’ll notice that the list of topics is reminiscent of some of the things we’ve been talking about right here on Azimuth! A general theme of the Network Theory Series has been developing frameworks to describe networked systems in a common language and then mapping tools and results across disciplines. It seems like a great place to talk about these ideas. Oh, and here’s the current list of speakers:

Leonardo Banchi (UCL, London)
Ginestra Bianconi (London)
Silvano Garnerone (IQC, Waterloo)
Laetitia Gauvin (ISI Foundation)
Marco Javarone (Sassari)
Tomi Johnson (Oxford)

and again, the organizers are

Mauro Faccin (ISI Foundation)
Filippo Radicchi (Indiana University)
Zoltán Zimborás (UCL)

From the call, we can see that a central discussion topic at QNET will be the contrast between stochastic and quantum mechanics. Here on Azimuth we like this stuff. You might remember that stochastic mechanics was formulated in the network theory series to mathematically resemble quantum theory (see e.g. Part 12). This formalism was then used to prove several results, including a stochastic version of Noether’s theorem by John and Brendan in Parts 11 and 13; more recently Ville has written Noether’s Theorem: Quantum vs Stochastic. Other results came from relating quantum field theory to Petri nets from population biology and to chemical reaction networks in chemistry (see the Network Theory homepage). It seems to me that people attending QNET will be interested in these sorts of things, as well as other related topics.
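The analogy at the heart of this comparison fits in one line. In the notation of Part 12 of the network theory series, both theories evolve a state $\psi$ by a linear equation, differing only in a factor of $-i$ and in the class of allowed Hamiltonians:

```latex
\underbrace{\frac{d}{dt}\psi(t) = -i H \psi(t)}_{\text{quantum: } H \text{ self-adjoint}}
\qquad\qquad
\underbrace{\frac{d}{dt}\psi(t) = H \psi(t)}_{\text{stochastic: } H \text{ infinitesimal stochastic}}
```

Here “infinitesimal stochastic” means the columns of $H$ sum to zero and its off-diagonal entries are nonnegative: exactly what it takes for $\exp(tH)$ to map probability distributions to probability distributions, just as self-adjointness of $H$ makes $\exp(-itH)$ unitary.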

One of the features of complex network science is that it is often numerically driven and geared directly towards real-world applications. I suspect some interesting results will stem from the discussions at this workshop.

By the way, here’s a view of downtown San Francisco at dusk, seen from the Berkeley Hills, from the NetSci homepage: