Yesterday, Marco Grandis showed me a fascinating paper by his thesis advisor:

• Gabriele Darbo, Aspetti algebrico-categoriali della teoria dei dispositivi, *Symposia Mathematica* **IV** (1970), Istituto Nazionale di Alta Matematica, 303–336.

It’s closely connected to Brendan Fong’s thesis, but also different—and, of course, much older. According to Grandis, Darbo was the first person to work on category theory in Italy. He’s better known for other things, like ‘Darbo’s fixed point theorem’, but this piece of work is elegant, and, it seems to me, strangely ahead of its time.

The paper’s title translates as ‘Algebraic-categorical aspects of the theory of devices’, and its main concept is that of a ‘universe of devices’: *a collection of devices of some kind that can be hooked up using wires to form more devices of this kind*. Nowadays we might study this concept using operads—but operads didn’t exist in 1970, and Darbo did quite fine without them.

The key is the category FinCorel, which has finite sets as objects and ‘corelations’ as morphisms. I explained corelations here:

• Corelations in network theory, 2 February 2016.

Briefly, a corelation from a finite set X to a finite set Y is a partition of the disjoint union of X and Y. We can get such a partition from a bunch of wires connecting points of X and Y. The idea is that two points lie in the same part of the partition iff they’re connected, directly or indirectly, by a path of wires. So, if we have some wires like this:

they determine a corelation like this:

There’s an obvious way to compose corelations, giving a category FinCorel.

Gabriele Darbo doesn’t call them ‘corelations’: he calls them ‘trasduttori’. A literal translation might be ‘transducers’. But he’s definitely talking about corelations, and like Fong he thinks they are basic for studying ways to connect systems.
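Computationally, finding the corelation determined by a bunch of wires is just a connected-components problem, so union–find does the job. Here’s a sketch in Python (the sets and wires are invented for illustration):

```python
# Compute the corelation (a partition of the disjoint union X + Y)
# determined by a bunch of wires, using union-find: two terminals lie
# in the same part iff they're connected by a path of wires.

def corelation(X, Y, wires):
    # tag elements so the disjoint union keeps X and Y apart
    points = [("X", x) for x in X] + [("Y", y) for y in Y]
    parent = {p: p for p in points}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    for p, q in wires:                      # each wire merges two parts
        parent[find(p)] = find(q)

    parts = {}                              # group points by their root
    for p in points:
        parts.setdefault(find(p), set()).add(p)
    return {frozenset(s) for s in parts.values()}

# wires connecting terminals of X = {1, 2, 3} and Y = {4, 5}
wires = [(("X", 1), ("X", 2)), (("X", 2), ("Y", 4))]
print(len(corelation({1, 2, 3}, {4, 5}, wires)))   # 3 parts
```

Here the two wires connect 1, 2 and 4 into one part, leaving 3 and 5 as singletons.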

Darbo wants a ‘universe of devices’ to assign to each finite set X a set D(X) of devices having X as their set of ‘terminals’. Given a device in D(X) and a corelation f : X → Y, thought of as a bunch of wires, he wants to be able to attach these wires to the terminals in X and get a new device with Y as its set of terminals. Thus he wants a map D(f) : D(X) → D(Y). If you draw some pictures, you’ll see this should give a functor

D : FinCorel → Set

Moreover, if we have a device with a set X of terminals and a device with a set Y of terminals, we should be able to set them side by side and get a device whose set of terminals is X + Y, meaning the disjoint union of X and Y. So, Darbo wants to have maps

D(X) × D(Y) → D(X + Y)

If you draw some more pictures you can convince yourself that D should be a lax symmetric monoidal functor… if you’re one of the lucky few who knows what that *means*. If you’re not, you can look it up in many places, such as Section 1.2 here:

• Brendan Fong, *The Algebra of Open and Interconnected Systems*, Ph.D. thesis, University of Oxford, 2016. (Blog article here.)

Darbo does not mention lax symmetric monoidal functors, perhaps because such concepts were first introduced by Mac Lane only in 1968. But as far as I can tell, Darbo’s definition is *almost* equivalent to this:

**Definition.** A **universe of devices** is a lax symmetric monoidal functor

D : FinCorel → Set

One difference is that Darbo wants there to be exactly one device with no terminals. Thus, he assumes D(∅) is a one-element set, say {1}, while the definition above would only demand the existence of a map from the one-element set to D(∅) obeying a couple of axioms. That gives a particular ‘favorite’ device with no terminals. I believe we get Darbo’s definition from the above one if we further assume this map is the identity. This makes sense if we take the attitude that ‘a device is determined by its observable behavior’, but not otherwise. This attitude is called ‘black-boxing’.

Darbo does various things in his paper, but the most exciting to me is his example connected to linear electrical circuits. He defines, for any pair of objects V and I in an abelian category, a particular universe of devices. He calls this the **universe of linear devices** having V as the **object of potentials** and I as the **object of currents**.

If you don’t like abelian categories, think of this category as the category of finite-dimensional real vector spaces, and let V = I = ℝ. Electric potential and electric current are described by real numbers, so this makes sense.

The basic idea will be familiar to Fong fans. In an electrical circuit made of purely conductive wires, when two wires merge into one we *add the currents* to get the current on the wire going out. When one wire splits into two we *duplicate the potential* to get the potentials on the wires going out. Working this out further, any corelation f between finite sets X and Y determines two linear relations, one

f_* : I^X ⇸ I^Y

relating the currents on the wires coming in to the currents on the wires going out, and one

f^* : V^Y ⇸ V^X

relating the potentials on the wires going out to the potentials on the wires coming in. Here I^X is the direct sum of copies of I indexed by X, and so on; the funky arrow ⇸ indicates that we have a linear relation rather than a linear map. Note that f_* goes forward while f^* goes backward; this is mainly just conventional, since you can turn linear relations around, but we’ll see it’s sort of nice.
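Here is a sketch in Python of membership tests for the current relation and the potential relation determined by a corelation, in the case where potentials and currents are real numbers. The encoding of a corelation as a partition of the tagged disjoint union, and the sample numbers, are my own choices:

```python
# A corelation f between X and Y is encoded as a partition of the
# tagged disjoint union. For purely conductive wires (a sketch):
#  - currents (i, j) are related iff, on each part, the currents
#    coming in from X balance the currents going out to Y;
#  - potentials are related iff the potential is constant on each part.

def in_current_relation(partition, i, j):
    # i: dict x -> current coming in, j: dict y -> current going out
    return all(
        abs(sum(i[p] for t, p in part if t == "X")
            - sum(j[p] for t, p in part if t == "Y")) < 1e-9
        for part in partition)

def in_potential_relation(partition, phi_Y, phi_X):
    phi = {("X", x): v for x, v in phi_X.items()}
    phi.update({("Y", y): v for y, v in phi_Y.items()})
    return all(len({phi[p] for p in part}) == 1 for part in partition)

# a corelation gluing x1, x2 and y1 into one part, with x3, y2 alone
f = [{("X", 1), ("X", 2), ("Y", 1)}, {("X", 3)}, {("Y", 2)}]

# currents 2 and 3 flowing in on x1, x2 merge into 5 flowing out on y1;
# x3 is connected to nothing, so its current must be 0
print(in_current_relation(f, {1: 2.0, 2: 3.0, 3: 0.0}, {1: 5.0, 2: 0.0}))  # True
# the potential is constant (7.0) on the glued part
print(in_potential_relation(f, {1: 7.0, 2: 1.0}, {1: 7.0, 2: 7.0, 3: 3.0}))  # True
```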

If we let Rel(A, B) be the set of linear relations between two objects A and B, we can use the above technology to get a universe of devices where

D(X) = Rel(V^X, I^X)

In other words, a device of this kind is simply *a linear relation between the potentials and currents at its terminals!*

How does D get to be a functor from FinCorel to Set? That’s pretty easy. We’ve defined it on objects (that is, finite sets) by the above formula. So, suppose we have a morphism (that is, a corelation) f : X → Y. How do we define D(f)?

To answer this question, we need a function

D(f) : Rel(V^X, I^X) → Rel(V^Y, I^Y)

Luckily, we’ve got linear relations

f_* : I^X ⇸ I^Y

and

f^* : V^Y ⇸ V^X

So, given any linear relation R : V^X ⇸ I^X, we just define

D(f)(R) = f_* ∘ R ∘ f^*

Voilà!
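Darbo’s relations are linear, but the composite just described—pull back along the potentials, apply the device’s relation, push forward along the currents—already makes sense for ordinary relations between finite sets, which are easy to play with in code. A sketch in Python, with toy relations invented for illustration:

```python
# Relations as sets of pairs; compose(S, R) is "first R, then S", so
# D_f mirrors the formula D(f)(R) = f_* . R . f^*.

def compose(S, R):
    # (a, c) is in S . R iff (a, b) is in R and (b, c) is in S for some b
    return {(a, c) for a, b1 in R for b2, c in S if b1 == b2}

def D_f(f_star, f_costar, R):
    return compose(f_star, compose(R, f_costar))

# toy example: R relates potentials to currents at the old terminals
R = {("lo", 0), ("hi", 1)}
f_costar = {("a", "lo"), ("b", "hi")}   # potentials: new -> old terminals
f_star = {(0, "x"), (1, "y")}           # currents: old -> new terminals
print(sorted(D_f(f_star, f_costar, R)))   # [('a', 'x'), ('b', 'y')]
```

The composite relates potentials at the new terminals to currents at the new terminals, just as the typing of D(f) demands.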

People who have read Fong’s thesis, or my paper with Blake Pollard on reaction networks:

• John Baez and Blake Pollard, A compositional framework for reaction networks.

will find many of Darbo’s ideas eerily similar. In particular, the formula

D(f)(R) = f_* ∘ R ∘ f^*

appears in Lemma 16 of my paper with Blake, where we are defining a category of open dynamical systems. We prove that D is a lax symmetric monoidal functor, which is just what Darbo proved—though in a different context, since our D is not linear like his, and for a different purpose, since he’s trying to show D is a ‘universe of devices’, while we’re trying to construct the category of open dynamical systems as a ‘decorated cospan category’.

In short: if this work of Darbo had become more widely known, the development of network theory could have been sped up by three decades! But there was less interest in a general theory of networks at the time, lax monoidal functors were brand new, operads unknown… and, sadly, few mathematicians read Italian.

Darbo has *other* papers, and so do his students. We should read them and learn from them! Here are a few open-access ones:

• Franco Parodi, Costruzione di un universo di dispositivi non lineari su una coppia di gruppi abeliani, *Rendiconti del Seminario Matematico della Università di Padova* **58** (1977), 45–54.

• Franco Parodi, Categoria degli universi di dispositivi e categoria delle T-algebre, *Rendiconti del Seminario Matematico della Università di Padova* **62** (1980), 1–15.

• Stefano Testa, Su un universo di dispositivi monotoni, *Rendiconti del Seminario Matematico della Università di Padova* **65** (1981), 53–57.

At some point I will scan in G. Darbo’s paper and make it available here.


and the Dynkin diagram:

When I recently gave a talk on this topic, I realized I didn’t understand it as well as I’d like. Since then I’ve been making progress with the help of this book:

• Alexander Kirillov Jr., *Quiver Representations and Quiver Varieties*, AMS, Providence, Rhode Island, 2016.

I now think I glimpse a way forward to a very concrete and vivid understanding of the relation between the icosahedron and E_{8}. It’s really just a matter of taking the ideas in this book and working them out concretely in this case. But it takes some thought, at least for me. I’d like to enlist your help.

The rotational symmetry group of the icosahedron is a subgroup of SO(3) with 60 elements, so its double cover up in SU(2) has 120. This double cover is called the **binary icosahedral group**, but I’ll call it Γ for short.

This group is the star of the show, the link between the icosahedron and E_{8}. To visualize this group, it’s good to think of SU(2) as the unit quaternions. This lets us think of the elements of Γ as 120 points in the unit sphere in 4 dimensions. They are in fact the vertices of a 4-dimensional regular polytope, which looks like this:

It’s called the **600-cell**.
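If you want to play with this group concretely, you can generate its 120 elements from the standard coordinates for the 600-cell’s vertices and check closure under quaternion multiplication. A sketch in Python:

```python
# The 120 elements of the binary icosahedral group as unit quaternions
# (w, x, y, z), i.e. the vertices of the 600-cell: the 8 quaternions
# +-1, +-i, +-j, +-k; the 16 points (+-1/2, +-1/2, +-1/2, +-1/2); and
# the 96 even permutations of (+-phi, +-1, +-1/phi, 0)/2, with phi the
# golden ratio.
from itertools import permutations, product

phi = (1 + 5 ** 0.5) / 2

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

group = set()
for k in range(4):                            # +-1, +-i, +-j, +-k
    for s in (1.0, -1.0):
        v = [0.0] * 4
        v[k] = s
        group.add(tuple(v))
for signs in product((0.5, -0.5), repeat=4):  # (+-1/2, +-1/2, +-1/2, +-1/2)
    group.add(signs)
base = (phi / 2, 0.5, (phi - 1) / 2, 0.0)     # note: 1/phi = phi - 1
for p in permutations(range(4)):
    if parity(p) == 0:                        # even permutations only
        for signs in product((1.0, -1.0), repeat=4):
            group.add(tuple(signs[i] * base[p[i]] for i in range(4)))

def qmul(a, b):   # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def key(q):   # round for float-safe membership; +0.0 turns -0.0 into 0.0
    return tuple(round(c, 6) + 0.0 for c in q)

keys = {key(q) for q in group}
print(len(group))                                                  # 120
print(all(key(qmul(a, b)) in keys for a in group for b in group))  # True
```

The second check confirms that these 120 unit quaternions really are closed under multiplication, so they form a group.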

Since Γ is a subgroup of SU(2), it acts on ℂ², and we can form the quotient space

S = ℂ²/Γ

This is a smooth manifold except at the origin—that is, the point coming from 0 ∈ ℂ². There’s a singularity at the origin, and this is where E_{8} is hiding! The reason is that there’s a smooth manifold S̃ and a map

π : S̃ → S

that’s one-to-one and onto except at the origin. It maps 8 spheres to the origin! There’s one of these spheres for each dot here:

Two of these spheres intersect in a point if their dots are connected by an edge; otherwise they’re disjoint.

The challenge is to find a nice concrete description of the map π and these 8 spheres.

But first it’s good to get a mental image of S. Each point in this space is a Γ orbit in ℂ², meaning a set like this:

{gx : g ∈ Γ}

for some x ∈ ℂ². For x = 0 this set is a single point, and that’s what I’ve been calling the ‘origin’. In all other cases it’s 120 points, the vertices of a 600-cell in ℂ². This 600-cell is centered at the point 0, but it can be big or small, depending on the magnitude of x.

So, as we take a journey starting at the origin in S, we see a point explode into a 600-cell, which grows and perhaps also rotates as we go. The origin, the singularity in S, is a bit like the Big Bang.

Unfortunately not every 600-cell centered at the origin is of the form I’ve shown:

{gx : g ∈ Γ}

It’s easiest to see this by thinking of points in 4d space as quaternions rather than elements of ℂ². Then the points g ∈ Γ are unit quaternions forming the vertices of a 600-cell, and multiplying them on the right by x dilates this 600-cell and also rotates it… but we don’t get *arbitrary* rotations this way. To get an arbitrarily rotated 600-cell we’d have to use both a left and a right multiplication, and consider

{qgx : g ∈ Γ}

for a pair of quaternions q and x.

Luckily, there’s a simpler picture of the space S. It’s the space of all regular icosahedra centered at the origin in 3d space!

To see this, we start by switching to the quaternion description, which says

S = ℍ/Γ

Specifying a point x ∈ ℍ amounts to specifying its magnitude ‖x‖ together with x/‖x‖, which is a unit quaternion, or equivalently an element of SU(2). So, specifying a point in ℍ/Γ amounts to specifying the magnitude ‖x‖ together with a point in SU(2)/Γ. But SU(2) modulo the binary icosahedral group Γ is the same as SO(3) modulo the icosahedral group (the rotational symmetry group of an icosahedron). Furthermore, SO(3) modulo the icosahedral group is just the space of unit-sized icosahedra centered at the origin of ℝ³.

So, specifying a point

x ∈ ℍ/Γ

amounts to specifying a nonnegative number ‖x‖ together with a unit-sized icosahedron centered at the origin of ℝ³. But this is the same as specifying an icosahedron of arbitrary size centered at the origin of ℝ³. There’s just one subtlety: we allow the size of this icosahedron to be zero, but then the way it’s rotated no longer matters.

So, S is the space of icosahedra centered at the origin, with the ‘icosahedron of zero size’ being a singularity in this space. When we pass to the smooth manifold S̃, we replace this singularity with 8 spheres, intersecting in a pattern described by the E_{8} Dynkin diagram.

Points on these spheres are *limiting cases* of icosahedra centered at the origin. We can approach these points by letting an icosahedron centered at the origin shrink to zero size in a clever way, perhaps spinning about wildly as it does.

I don’t understand this last paragraph nearly as well as I’d like! I’m quite sure it’s true, and I know a lot of relevant information, but I don’t *see* it. There should be a vivid picture of how this works, not just an abstract argument. Next time I’ll start trying to assemble the material that I think needs to go into building this vivid picture.


I’m in Luxembourg, and I’ll be blogging a bit about this workshop:

• Dynamics, Thermodynamics and Information Processing in Chemical Networks, 13-16 June 2017, Complex Systems and Statistical Mechanics Group, University of Luxembourg. Organized by Massimiliano Esposito and Matteo Polettini.

I’ll do it in the comments!

I explained the idea of this workshop here:

• Information processing in chemical networks.

and now you can see the program here.


Next week, Blake Pollard and I will talk about our work on reaction networks. We’ll do this at Dynamics, Thermodynamics and Information Processing in Chemical Networks, a workshop at the University of Luxembourg organized by Massimiliano Esposito and Matteo Polettini. We’ll do it on Tuesday, 13 June 2017, from 11:00 to 13:00, in room BSC 3.03 of the Bâtiment des Sciences. If you’re around, please stop by and say hi!

Here are the slides for my talk:

• The mathematics of open reaction networks.

**Abstract.** To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, electrical circuit diagrams, signal-flow graphs, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework: the mathematics of monoidal categories. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Here we explain how various applications of reaction networks and Petri nets fit into this framework.

If you see typos or other problems please let me know now!

I hope to blog a bit about the workshop… it promises to be very interesting.


Here you can see the slides of a talk I’m giving:

• The dodecahedron, the icosahedron and E_{8}, Annual General Meeting of the Hong Kong Mathematical Society, Hong Kong University of Science and Technology.

It’ll take place at 10:50 am on Saturday, May 20th, in Lecture Theatre G. You can see the program for the whole meeting here.

The slides are in the form of webpages, and you can see references and some other information tucked away at the bottom of each page.

In preparing this talk I learned more about the geometric McKay correspondence, which is a correspondence between the simply-laced Dynkin diagrams (also known as ADE Dynkin diagrams) and the finite subgroups of SU(2).

There are different ways to get your hands on this correspondence, but the *geometric* way is to resolve the singularity in ℂ²/Γ, where Γ is such a finite subgroup. The variety ℂ²/Γ has a singularity at the origin—or more precisely, at the point coming from the origin in ℂ². To make singularities go away, we ‘resolve’ them. And when you take the ‘minimal resolution’ of this variety (a concept I explain here), you get a smooth variety S with a map

π : S → ℂ²/Γ

which is one-to-one except at the origin. The points that map to the origin lie on a bunch of Riemann spheres. There’s one of these spheres for each dot in some Dynkin diagram—and two of these spheres intersect iff their two dots are connected by an edge!

In particular, if Γ is the double cover of the rotational symmetry group of the dodecahedron, the Dynkin diagram we get this way is E_{8}:

The basic reason E_{8} is connected to the icosahedron is that the icosahedral group is generated by rotations of orders 2, 3 and 5, while the E_{8} Dynkin diagram has ‘legs’ of length 2, 3, and 5 if you count right:

In general, whenever you have a triple of natural numbers p, q, r obeying

1/p + 1/q + 1/r > 1

you get a finite subgroup of SO(3) that contains rotations of orders p, q and r, and a simply-laced Dynkin diagram with legs of length p, q and r. The three most exciting cases are:

• (p, q, r) = (2, 3, 3): the tetrahedron, and E_{6}.

• (p, q, r) = (2, 3, 4): the octahedron, and E_{7}.

• (p, q, r) = (2, 3, 5): the icosahedron, and E_{8}.
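It’s easy to enumerate the triples with 1/p + 1/q + 1/r > 1 by brute force. A sketch in Python (taking p, q, r ≥ 2, since we want honest rotations):

```python
# Enumerate triples p <= q <= r of integers >= 2 with
# 1/p + 1/q + 1/r > 1. Besides the infinite family (2, 2, r),
# only the three 'exceptional' triples survive.
from fractions import Fraction

def triples(max_r=30):
    out = []
    for p in range(2, max_r + 1):
        for q in range(p, max_r + 1):
            for r in range(q, max_r + 1):
                if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1:
                    out.append((p, q, r))
    return out

exceptional = [t for t in triples() if t[1] > 2]
print(exceptional)   # [(2, 3, 3), (2, 3, 4), (2, 3, 5)]
```

The family (2, 2, r) corresponds to the dihedral groups; the three exceptional triples are exactly the ones listed above.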

But the puzzle is this: why does resolving the singular variety ℂ²/Γ give a smooth variety with a bunch of copies of the Riemann sphere sitting over the singular point at the origin, with these copies intersecting in a pattern given by a Dynkin diagram?

It turns out the best explanation is in here:

• Klaus Lamotke, *Regular Solids and Isolated Singularities*, Vieweg & Sohn, Braunschweig, 1986.

In a nutshell, you need to start by blowing up ℂ² at the origin, getting a space X containing a copy of ℂℙ¹ on which Γ acts. The space X/Γ has further singularities coming from the rotations of orders p, q and r in Γ. When you resolve these, you get more copies of ℂℙ¹, which intersect in the pattern given by a Dynkin diagram with legs of length p, q and r.

I would like to understand this better, and more vividly. I want a really clear understanding of the minimal resolution of ℂ²/Γ. For this I should keep rereading Lamotke’s book, and doing more calculations.

I do, however, have a nice vivid picture of the singular space ℂ²/Γ. For that, read my talk! I’m hoping this will lead, someday, to an equally appealing picture of its minimal resolution.


and there are about two dozen allotropes of sulfur, with a phase diagram like this:

So I should have guessed that sulfur and phosphorus combine to make many different compounds. But I never thought about this until yesterday!

I’m a great fan of diamonds, not for their monetary value but for the math of their crystal structure:

In a diamond the carbon atoms do not form a lattice in the strict mathematical sense (which is more restrictive than the sense of this word in crystallography). The reason is that there aren’t translational symmetries carrying any atom to any other. Instead, there are two lattices of atoms, shown as red and blue in this picture by Greg Egan. Each atom has 4 nearest neighbors arranged at the vertices of a regular tetrahedron; the tetrahedra centered at the blue atoms are ‘right-side up’, while those centered at the red atoms are ‘upside down’.
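You can check this four-neighbour picture numerically by building a small patch of the two interpenetrating face-centred cubic lattices. A sketch in Python (the conventional unit cell is normalized to 1):

```python
# Build a patch of the diamond structure: an fcc lattice plus a second
# copy offset by (1/4, 1/4, 1/4). Then check that an atom deep inside
# the patch has exactly 4 nearest neighbours, forming a regular
# tetrahedron at distance sqrt(3)/4.
from itertools import product

fcc_basis = [(0, 0, 0), (0, 0.5, 0.5), (0.5, 0, 0.5), (0.5, 0.5, 0)]
offset = (0.25, 0.25, 0.25)

atoms = []
for cx, cy, cz in product(range(-1, 2), repeat=3):   # 3x3x3 unit cells
    for bx, by, bz in fcc_basis:
        blue = (cx + bx, cy + by, cz + bz)
        atoms.append(blue)                                        # first lattice
        atoms.append(tuple(b + o for b, o in zip(blue, offset)))  # second

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

centre = offset   # an atom of the second lattice near the middle
neighbours = [a for a in atoms
              if 1e-9 < dist(a, centre) < 3 ** 0.5 / 4 + 1e-9]
print(len(neighbours))   # 4 nearest neighbours
```

All four neighbours belong to the other lattice, and their pairwise distances are equal, so they really do sit at the vertices of a regular tetrahedron.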

Having thought about this a lot, I was happy to read about adamantane. It’s a compound with 10 carbons and 16 hydrogens. There are 4 carbons at the vertices of a regular tetrahedron, and 6 along the edges—but the edges bend out in such a way that the carbons form a tiny piece of a diamond crystal:

or more abstractly, focusing on the carbons and their bonds:
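In graph terms (one way to encode it, with my own labels): the 4 corner carbons are the vertices of a tetrahedron, and each of the 6 remaining carbons sits on one of its edges—that is, the carbon skeleton is the complete graph K4 with every edge subdivided once. A quick check in Python:

```python
# Adamantane's carbon skeleton as a graph: 4 'vertex' carbons labelled
# 0..3, plus one 'edge' carbon for each of the 6 edges of K4.
from itertools import combinations

vertices = list(range(4))                      # the 4 corner carbons
edges_of_K4 = list(combinations(vertices, 2))  # 6 edges -> 6 more carbons

bonds = []
for e in edges_of_K4:     # each edge carbon bonds to the two corners of its edge
    u, v = e
    bonds.append((u, e))
    bonds.append((v, e))

carbons = vertices + edges_of_K4
degree = {c: sum(c in b for b in bonds) for c in carbons}
print(len(carbons), len(bonds))   # 10 carbons, 12 carbon-carbon bonds
print(sorted(degree.values(), reverse=True))
```

So the corner carbons have 3 carbon neighbours each and the edge carbons have 2, matching the CH and CH2 groups in C10H16.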

Yesterday I learned that phosphorus decasulfide, P_{4}S_{10}, follows the same pattern:

The angles deviate slightly from the tetrahedral angle

arccos(−1/3) ≈ 109.47°

that we’d have in a fragment of a mathematically ideal diamond crystal, but that’s to be expected.

It turns out there are lots of other phosphorus sulfides! Here are some of them:

**Puzzle 1.** Why does each of these compounds have exactly 4 phosphorus atoms?

I don’t know the answer! I can’t believe it’s impossible to form phosphorus–sulfur compounds with some other number of phosphorus atoms, but the Wikipedia article containing this chart says

All known molecular phosphorus sulfides contain a tetrahedral array of four phosphorus atoms. P_{4}S_{2} is also known but is unstable above −30 °C.

All these phosphorus sulfides contain at most 10 sulfur atoms. If we remove one sulfur from phosphorus decasulfide we can get this:

This is the ‘alpha form’ of P_{4}S_{9}. There’s also a beta form, shown in the chart above.

Some of the phosphorus sulfides have pleasing symmetries, like the alpha form of P_{4}S_{4}:

or the epsilon form of P_{4}S_{6}:

Others look awkward. The alpha form of P_{4}S_{5} is an ungainly beast:

They all seem to have a few things in common:

• There are 4 phosphorus atoms.

• Each phosphorus atom is connected to 3 or 4 atoms, at most one of which is phosphorus.

• Each sulfur atom is connected to 1 or 2 atoms, which must all be phosphorus.

The pictures seem pretty consistent about showing a ‘double bond’ when a sulfur atom is connected to just 1 phosphorus. However, they don’t show a double bond when a phosphorus atom is connected to just 3 sulfurs.
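These three rules are easy to turn into a checker for a proposed molecule, given as a list of bonds. A sketch in Python, with my own atom labels, applied to P_{4}S_{10}:

```python
# Check a phosphorus sulfide, given as a bond list, against the three
# rules observed above. Atom names like "P1" or "S01" are my own labels.

def follows_rules(bonds):
    atoms = {a for b in bonds for a in b}
    nbrs = {a: {x for b in bonds if a in b for x in b if x != a}
            for a in atoms}
    ps = [a for a in atoms if a.startswith("P")]
    if len(ps) != 4:                     # rule 1: exactly 4 phosphorus atoms
        return False
    for p in ps:                         # rule 2: P has 3 or 4 neighbours,
        p_nbrs = [x for x in nbrs[p] if x.startswith("P")]
        if not (3 <= len(nbrs[p]) <= 4 and len(p_nbrs) <= 1):
            return False
    for s in atoms - set(ps):            # rule 3: S has 1 or 2 neighbours,
        if not (1 <= len(nbrs[s]) <= 2   # all of them phosphorus
                and all(x.startswith("P") for x in nbrs[s])):
            return False
    return True

# P4S10: 4 P at the corners of a tetrahedron, 6 bridging S (one per
# P-P pair), plus one terminal (double-bonded) S on each P.
P = ["P1", "P2", "P3", "P4"]
bridge = [(P[i], f"S{i}{j}") for i in range(4) for j in range(i + 1, 4)]
bridge += [(f"S{i}{j}", P[j]) for i in range(4) for j in range(i + 1, 4)]
terminal = [(p, "S" + p) for p in P]
print(follows_rules(bridge + terminal))   # True
```

By contrast, P_{4}S_{3}, with its triangle of mutually bonded phosphorus atoms, fails the rule that each phosphorus has at most one phosphorus neighbour.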

**Puzzle 2.** Can you draw molecules obeying the 3 rules listed above that aren’t on the chart?

Of all the phosphorus sulfides, P_{4}S_{10} is not only the biggest and most symmetrical, it’s also the most widely used. Humans make thousands of tons of the stuff! It’s used for producing organic sulfur compounds.

People also make P_{4}S_{3}: it’s used in strike-anywhere matches. This molecule is not on the chart I showed you, and it also violates one of the rules I made up:

Somewhat confusingly, P_{4}S_{10} is not only called phosphorus decasulfide: it’s also called phosphorus pentasulfide. Similarly, P_{4}S_{3} is called phosphorus sesquisulfide. Since the prefix ‘sesqui-’ means ‘one and a half’, there seems to be some kind of division by 2 going on here.


I have a new favorite molecule: adamantane. As you probably know, someone is said to be ‘adamant’ if they are unshakeable, immovable, inflexible, unwavering, uncompromising, resolute, resolved, determined, firm, rigid, or steadfast. But ‘adamant’ is also a legendary mineral, and the etymology is the same as that for ‘diamond’.

The molecule adamantane, shown above, features 10 carbon atoms arranged just like a small portion of a diamond crystal! It’s a bit easier to see this if you ignore the 16 hydrogen atoms and focus on the carbon atoms and bonds between those:

It’s a somewhat strange shape.

**Puzzle 1.** Give a clear, elegant description of this shape.

**Puzzle 2.** What is its symmetry group? This is really two questions: I’m asking about the symmetry group of this shape as an abstract graph, but also the symmetry group of this graph as embedded in 3d Euclidean space, counting both rotations and reflections.

**Puzzle 3.** How many ‘kinds’ of carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of vertices, how many orbits are there? (Again this is really two questions, depending on which symmetry group we use.)

**Puzzle 4.** How many kinds of bonds between carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of edges, how many orbits are there? (Again, this is really two questions.)

You can see the relation between adamantane and a diamond if you look carefully at a diamond crystal, as shown in this image by H. K. D. H. Bhadeshia:

or this one by Greg Egan:

Even with these pictures at hand, I find it a bit tough to see the adamantane pattern lurking in the diamond! Look again:

Adamantane has an interesting history. The possibility of its existence was first suggested by a chemist named Decker at a conference in 1924. Decker called this molecule ‘decaterpene’, and registered surprise that nobody had made it yet. After some failed attempts, it was first synthesized by the Croatian-Swiss chemist Vladimir Prelog in 1941. He later won the Nobel prize for his work on stereochemistry.

However, long before it was synthesized, adamantane was isolated from petroleum by the Czech chemists Landa, Machacek and Mzourek! They did it in 1932. They only managed to make a few milligrams of the stuff, but we now know that petroleum naturally contains between 0.0001% and 0.03% adamantane!

Adamantane can be crystallized:

but ironically, the crystals are rather soft. It’s all that hydrogen. It’s also amusing that adamantane has an *odor*: supposedly it smells like camphor!

Adamantane is just the simplest of the molecules called diamondoids.

These are a few:

1 is **adamantane**.

2 is called **diamantane**.

3 is called **triamantane**.

4 is called **isotetramantane**, and it comes in two mirror-image forms.

Here are some better pictures of diamantane:

People have done lots of chemical reactions with diamondoids. Here are some things they’ve done with the next one, **pentamantane**:

Many different diamondoids occur naturally in petroleum. Though the carbon in diamonds is not biological in origin, the carbon in diamondoids found in petroleum is. This was shown by studying ratios of carbon isotopes.

Eric Drexler has proposed using diamondoids for nanotechnology, but he’s talking about larger molecules than those shown here.

For more fun along these lines, try:

• Diamonds and triamonds, *Azimuth*, 11 April 2016.


You can see slides here:

• Biology as information dynamics.

**Abstract.** If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.
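In the simplest constant-fitness case, the connection between the replicator equation and relative information can be checked numerically. A sketch in Python (the fitness values and step size are invented for illustration):

```python
# Toy replicator dynamics with constant fitnesses, checking that the
# relative information (Kullback-Leibler divergence) of the 'best'
# distribution q relative to the population p decreases along the flow.
import math

def kl(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def replicator_step(p, fitness, dt=0.01):
    # dp_i/dt = p_i (f_i - mean fitness), integrated by a small Euler step
    mean = sum(pi * fi for pi, fi in zip(p, fitness))
    p = [pi + dt * pi * (fi - mean) for pi, fi in zip(p, fitness)]
    total = sum(p)
    return [pi / total for pi in p]

fitness = [1.0, 2.0, 3.0]
q = [0.0, 0.0, 1.0]            # all weight on the fittest replicator
p = [1 / 3, 1 / 3, 1 / 3]      # start from the uniform distribution

divergences = []
for _ in range(200):
    divergences.append(kl(q, p))
    p = replicator_step(p, fitness)
divergences.append(kl(q, p))

# the relative information only goes down as the population 'learns'
print(all(b <= a for a, b in zip(divergences, divergences[1:])))   # True
```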

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.


• David S. Wilson and Alan Kirman, editors, *Complexity and Evolution: Toward a New Synthesis for Economics*, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

**Agent_Zero** is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

**Abstract.** Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.

It sounds like a fun and productive project as long as one keeps one’s wits about one. It’s hard to draw conclusions about *human* behavior from such simplified agents. One can argue about this, and of course economists will. But regardless of this, one *can* draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.
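To make this concrete, here is a toy sketch of an Agent_Zero-style agent, loosely following the published description: an emotional term, a deliberative term, and a social term that mixes in the other agents’ own dispositions. All the weights, thresholds and numbers here are invented for illustration:

```python
# Toy Agent_Zero-style agent: each agent has an emotional level V, a
# deliberative probability estimate P, and a social term adding in the
# other agents' own (solo) dispositions via interpersonal weights.

def solo_disposition(agent):
    # disposition from the agent's own emotion and deliberation
    return agent["V"] + agent["P"]

def total_disposition(agent, others, weights):
    # add the weighted solo dispositions of the other agents
    social = sum(w * solo_disposition(o) for o, w in zip(others, weights))
    return solo_disposition(agent) + social

def acts(agent, others, weights, threshold=0.5):
    return total_disposition(agent, others, weights) > threshold

# Three agents; agent c has no emotion or direct evidence of its own.
a = {"V": 0.4, "P": 0.3}
b = {"V": 0.3, "P": 0.4}
c = {"V": 0.0, "P": 0.0}

# c is strongly coupled to a and b, so their dispositions carry over:
print(acts(c, [a, b], [0.5, 0.5]))   # True: c acts on social influence alone
```

Even this crude sketch reproduces the kind of effect Epstein emphasizes: an agent with no direct grievance of its own can still be tipped into acting purely by dispositional contagion from its neighbors.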

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward the benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, *Evonomics*, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.


Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.
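
For concreteness, the replicator equation says dpᵢ/dt = pᵢ(fᵢ − ⟨f⟩), where pᵢ is the fraction of the population of type i, fᵢ is its fitness, and ⟨f⟩ is the mean fitness. Here is a minimal sketch integrating it with an Euler scheme (the fitness values are illustrative):

```python
def replicator_step(p, f, dt=0.01):
    # dp_i/dt = p_i * (f_i - <f>), where <f> is the mean fitness
    mean_f = sum(pi * fi for pi, fi in zip(p, f))
    p = [pi + dt * pi * (fi - mean_f) for pi, fi in zip(p, f)]
    total = sum(p)                 # renormalize against numerical drift
    return [pi / total for pi in p]

p = [1/3, 1/3, 1/3]                # three replicator types, equal shares
f = [1.0, 2.0, 3.0]                # constant fitnesses (made-up values)
for _ in range(2000):              # integrate out to t = 20
    p = replicator_step(p, f)
print(p)                           # the fittest type has taken over
```

The probabilities stay normalized, and the type with the highest fitness comes to dominate—exponentially fast, at a rate set by the fitness differences.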

To see the connection, we need to use ‘relative information’: the information of one probability distribution *relative to another*, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.
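
Concretely, the information of a distribution p relative to q is D(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ). A minimal implementation:

```python
from math import log

def relative_information(p, q):
    # Kullback-Leibler divergence D(p||q), measured in nats
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Gibbs' inequality: D(p||q) >= 0, with equality iff p = q
print(relative_information([0.9, 0.1], [0.5, 0.5]))  # positive
print(relative_information([0.5, 0.5], [0.5, 0.5]))  # zero
```

It measures how much information you gain on learning the ‘true’ distribution is p when you had expected q—which is why it keeps showing up wherever a system updates toward some target distribution.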

It turns out that free energy—energy in forms that can actually be *used*, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.
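
The relationship can be checked numerically. Setting Boltzmann’s constant to 1, let q be the equilibrium Boltzmann distribution qᵢ ∝ exp(−Eᵢ/T). Then for any distribution p over the same states, F(p) − F(q) = T·D(p‖q): the free energy above equilibrium *is* temperature times relative information. A sketch with made-up energy levels:

```python
from math import log, exp

T = 2.0                       # temperature (Boltzmann's constant set to 1)
E = [0.0, 1.0, 3.0]           # energy levels (illustrative values)

# Equilibrium (Boltzmann) distribution q_i = exp(-E_i/T) / Z
Z = sum(exp(-e / T) for e in E)
q = [exp(-e / T) / Z for e in E]

def free_energy(p):
    # F(p) = expected energy minus T times Shannon entropy
    avg_E = sum(pi * e for pi, e in zip(p, E))
    S = -sum(pi * log(pi) for pi in p if pi > 0)
    return avg_E - T * S

def D(p, q):
    # relative information of p with respect to q, in nats
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]           # an arbitrary nonequilibrium distribution
lhs = free_energy(p) - free_energy(q)
rhs = T * D(p, q)
print(lhs, rhs)               # these agree
```

The identity is exact, not approximate: expanding D(p‖q) with qᵢ = exp(−Eᵢ/T)/Z gives (⟨E⟩ − T S(p))/T + log Z, and F(q) = −T log Z.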

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.
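
One instance of the ‘learning’ claim can be checked numerically. Along replicator trajectories with constant fitnesses, the information of the fittest-type distribution q relative to the population distribution p(t) never increases: D(q‖p(t)) is a Lyapunov function, so the population steadily ‘learns’ q. A self-contained sketch with made-up fitness values:

```python
from math import log

f = [1.0, 2.0, 3.0]            # constant fitnesses (illustrative)
q = [0.0, 0.0, 1.0]            # distribution concentrated on the fittest type

def D(a, b):
    # relative information D(a||b), in nats
    return sum(ai * log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

def replicator_step(p, dt=0.01):
    # Euler step of dp_i/dt = p_i * (f_i - <f>)
    mean_f = sum(pi * fi for pi, fi in zip(p, f))
    p = [pi * (1 + dt * (fi - mean_f)) for pi, fi in zip(p, f)]
    total = sum(p)
    return [pi / total for pi in p]

p = [0.6, 0.3, 0.1]
divs = []
for _ in range(500):
    divs.append(D(q, p))       # how far the population is from q
    p = replicator_step(p)
print(divs[0], divs[-1])       # the divergence shrinks monotonically
```

Here D(q‖p) reduces to −log p₃, and since the fittest type’s share can only grow under the replicator equation, the divergence can only shrink—evolution as gradual information gain.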

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

• Stanford complexity group, YouTube.

You can already see lots of cool talks at this location!
