I’ve written about this work before:

• Complex Adaptive Systems: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6.

But we’ve done a lot more, and my blog articles are having trouble keeping up! So I’d like to sketch out the big picture as it stands today.

If I had to summarize, I’d say we’ve developed a formalism for *step-by-step compositional design and tasking, using commitment networks*. But this takes a while to explain.

Here’s a very simple example of a commitment network:

It has four nodes, which represent agents: a port, a helicopter, a UAV (an unmanned aerial vehicle, or drone) and a target. The edges between these nodes describe relationships between these agents. Some of these relationships are ‘commitments’. For example, the edges labelled ‘SIR’ say that one agent should ‘search, intervene and rescue’ the other.

Our framework for dealing with commitment networks has some special features. It uses operads, but this isn’t really saying much. An ‘operad’ is a bunch of ways of sticking things together. An ‘algebra’ of the operad gives a particular collection of these things, and says what we get when we stick them together. These concepts are extremely general, so there’s a huge diversity of operads, each with a huge diversity of algebras. To say one is using operads to solve a problem is a bit like saying one is using math. What matters more is the specific *kind* of operad one is using, and how one is using it.

For our work, we needed to develop a new class of operads called **network operads**, which are described here:

• John Baez, John Foley, Joseph Moeller and Blake Pollard, Network models.

In this paper we mainly discuss communication networks. Subsequently we’ve been working on a special class of network operads that describe how to build commitment networks.

Here are some of the key ideas:

• Using network operads we can build bigger networks from smaller ones by *overlaying* them. David Spivak’s operad of wiring diagrams only lets us ‘wire together’ smaller networks to form bigger ones:

Here networks X_{1}, X_{2} and X_{3} are being wired together to form Y.

Network operads also let us wire together networks, but in addition they let us take one network:

and overlay another:

to create a larger network:

This is a new methodology for designing systems. We’re all used to building systems by wiring together subsystems: anyone who has a home stereo system has done this. But *overlaying* systems lets us do more. For example, we can take two plans of action involving the same collection of agents, and overlay them to get a new plan. We’ve all done this, too: you tell a bunch of people to do things… and then tell the same people, or an overlapping set of people, to do some other things. But lots of problems can arise if you aren’t careful. A mathematically principled approach can avoid some of these problems.
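To make the contrast concrete, here is a minimal sketch in which a network is just a set of labeled edges between named agents, and overlaying is set union. The representation and the edge labels are my own illustration, not the formalism of the papers.

```python
# Hypothetical sketch: a network as a set of (source, label, target) edges,
# with 'overlay' modelled crudely as set union.

def overlay(net1, net2):
    """Overlay two networks on the same agents by taking the union of edges."""
    return net1 | net2

# Two plans of action involving the same collection of agents:
plan_a = {("helicopter", "SIR", "target"), ("port", "comm", "helicopter")}
plan_b = {("uav", "SIR", "target"), ("port", "comm", "uav")}

combined = overlay(plan_a, plan_b)
assert len(combined) == 4

# On this representation, overlaying a plan with itself changes nothing:
assert overlay(plan_a, plan_a) == plan_a
```

Note that unlike wiring together, which joins disjoint subsystems along their boundaries, overlaying superimposes two structures on the *same* agents.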

• The nodes of our networks represent agents of various types. The edges represent various relationships between agents. For example, they can represent communication channels. But more interestingly, they can represent *commitments*. For example, we can have an edge from A to B saying that agent A has to go rescue agent B. We call this kind of network a **commitment network**.

• By overlaying commitment networks, we can not only build systems out of smaller pieces but also build complicated plans by overlaying smaller pieces of plans. Since ‘tasking’ means telling a system what to do, we call this **compositional tasking**.

• If one isn’t careful, overlaying commitment networks can produce conflicts. Suppose we have a network with an edge saying that agent A has to rescue agent B. On top of this we overlay a network with an edge saying that A has to rescue agent C. If A can’t do both of these tasks at once, what should A do? There are various choices. We need to build a specific choice into the framework, so we can freely overlay commitment networks and get a well-defined result that doesn’t overburden the agents involved. We call this **automatic deconflicting**.
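A toy illustration of the problem and one possible policy. The rule below, where each agent keeps only the first rescue commitment it was given, is my own invention for the sake of the example, not the framework's actual choice.

```python
# Toy deconflicting rule (illustrative only): when overlaying commitment
# networks, an agent keeps only its first rescue commitment, so overlay
# always yields a well-defined, non-overburdening result.

def overlay_deconflicted(net1, net2):
    result = list(net1)
    committed = {src for (src, label, _) in net1 if label == "rescue"}
    for (src, label, tgt) in net2:
        if label == "rescue" and src in committed:
            continue  # agent already has a rescue task: drop the new one
        if (src, label, tgt) not in result:
            result.append((src, label, tgt))
            if label == "rescue":
                committed.add(src)
    return result

n1 = [("A", "rescue", "B")]
n2 = [("A", "rescue", "C")]
merged = overlay_deconflicted(n1, n2)
assert merged == [("A", "rescue", "B")]   # A is not overburdened
```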

• Our approach to automatic deconflicting uses an idea developed by the famous category theorist Bill Lawvere: graphic monoids. I’ll explain these later, along with some of their side-benefits.

• Network operads should let us do **step-by-step compositional tasking.** In other words, they should let us partially automate the process of tasking networks of agents, both

1) **compositionally**: tasking smaller networks and then sticking them together, e.g. by overlaying them, to get larger networks,

and

2) in a **step-by-step** way, starting at a low level of detail and then increasing the amount of detail.

To do this we need not just operads but their algebras.

• Remember, a network operad is a bunch of ways to stick together networks of some kind, e.g. by overlaying them. An *algebra* of this operad specifies a particular collection of networks of this kind, and says what we actually get when we stick them together.

So, a network operad can have one algebra in which things are described in a bare-bones, simplified way, and another algebra in which things are described in more detail. Indeed it will typically have *many* algebras, corresponding to *many* levels of detail, but for simplicity let’s just think about two.

When we have a ‘less detailed’ algebra A and a ‘more detailed’ algebra A′, they will typically be related by a map

f: A′ → A

which ‘forgets the extra details’. This sort of map is called a **homomorphism** of algebras. We give examples in our paper Network models.

But what we usually want to do, when designing a system, is not *forget* extra detail, but rather *add* extra detail to a rough specification. There is not always a systematic way to do this. If there *is*, then we may have a homomorphism

g: A → A′

going back the other way. This lets us automate the process of filling in the details. But we can’t usually count on being able to do this. So, often we may have to start with an element of A and search for an element of A′ that is mapped to it by f. And typically we want this element to be optimal, or at least ‘good enough’, according to some chosen criteria. Expressing this idea formally helps us figure out how to automate the search. John Foley, in particular, has been working on this.
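Here is a hedged sketch of that search problem: a ‘detailed’ plan assigns each commitment edge a resource level, the forgetful map drops the levels, and we brute-force over refinements of a coarse plan to find a cheapest one lying over it. The names, the levels and the cost function are all illustrative, not from the paper.

```python
from itertools import product

# A detailed plan is a set of (edge, level) pairs; f forgets the levels.

def f(detailed):
    """Forget detail: keep just the underlying set of edges."""
    return frozenset(edge for (edge, level) in detailed)

def refine(coarse, levels=(1, 2, 3), cost=lambda d: sum(l for _, l in d)):
    """Search for a minimum-cost detailed plan lying over `coarse`."""
    edges = sorted(coarse)
    best = None
    for assignment in product(levels, repeat=len(edges)):
        detailed = frozenset(zip(edges, assignment))
        assert f(detailed) == frozenset(coarse)   # it lies over the coarse plan
        if best is None or cost(detailed) < cost(best):
            best = detailed
    return best

coarse_plan = {("A", "rescue", "B"), ("A", "comm", "port")}
detailed_plan = refine(coarse_plan)
assert f(detailed_plan) == frozenset(coarse_plan)
assert all(level == 1 for _, level in detailed_plan)  # minimal cost picks level 1
```

Brute force is of course only a stand-in for the principled search procedures the text alludes to.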

That’s an overview of our ideas.

Next, for the mathematically inclined, I want to give a few more details on one of the new elements not mentioned in our Network models paper: ‘graphic monoids’.

In our paper Network models we explain how the ‘overlay’ operation makes the collection of networks involving a given set of agents into a monoid. A **monoid** is a set M with a product that is associative and has an identity element 1:

(xy)z = x(yz),   1x = x1 = x

In our application, this product is overlaying two networks.

A **graphic monoid** is one in which the **graphic identity**

xyx = xy

holds for all x, y ∈ M.

To understand this identity, let us think of the elements of the monoid as “commitments”. The product xy means “first committing to do x, then committing to do y”. The graphic identity says that if we first commit to do x, then y, and then x again, it’s the same as first committing to do x and then y. Committing to do x *again* doesn’t change anything!

In particular, taking y = 1, in any graphic monoid we have

xx = x

so making the same commitment twice is the same as making it once. Mathematically we say every element of a graphic monoid is **idempotent**:

x^2 = x

A commutative monoid obeying this law automatically obeys the graphic identity, since then

xyx = xxy = x^2 y = xy

But for a noncommutative monoid, the graphic identity is stronger than x^2 = x. It says that *after committing to x, no matter what intervening commitments one might have made, committing to x again has no further effect*. In other words: the intervening commitments did not undo the original commitment, so making the original commitment a second time has no effect! This captures the idea of how promises should behave.
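For networks modelled crudely as edge sets, with the product given by union, the monoid is commutative and idempotent, so the graphic identity can be checked directly by brute force:

```python
from itertools import product

# Overlaying networks (here: edge sets, product = union) gives a commutative
# idempotent monoid, so the graphic identity xyx = xy holds.

def mult(x, y):
    return x | y

nets = [frozenset(), frozenset({"e1"}), frozenset({"e2"}), frozenset({"e1", "e2"})]

for x, y in product(nets, repeat=2):
    assert mult(mult(x, y), x) == mult(x, y)   # graphic identity: xyx = xy
for x in nets:
    assert mult(x, x) == x                     # idempotence: x^2 = x
```

The interesting commitment-network monoids are noncommutative, so there the graphic identity is a genuine extra condition rather than a consequence of idempotence.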

As I said, for any network model, the set of all networks involving a fixed set of agents is a monoid. In a **commitment network model**, this monoid is required to be a graphic monoid. Joseph Moeller is writing a paper that shows how to construct a large class of commitment network models. We will follow this with a paper illustrating how to use these in compositional tasking.

For now, let me just mention a side-benefit. In any graphic monoid we can define a relation x ≤ y by saying

x ≤ y if and only if xa = y for some a ∈ M

This makes the graphic monoid into a partially ordered set, meaning that these properties hold:

reflexivity: x ≤ x

transitivity: if x ≤ y and y ≤ z then x ≤ z

antisymmetry: if x ≤ y and y ≤ x then x = y

In the context of commitment networks, x ≤ y means that starting from x we can reach y by making some further commitment a: that is, xa = y for some a ∈ M. So, as we ‘task’ a collection of agents by giving them more and more commitments, we move up in this partial order.
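In a toy model where networks are edge sets and the product is union, this relation is just containment of edge sets, and the partial-order laws can be verified exhaustively:

```python
# Partial order on a small graphic monoid (edge sets under union):
# x <= y iff x*a == y for some a in the monoid.

nets = [frozenset(), frozenset({"e1"}), frozenset({"e2"}), frozenset({"e1", "e2"})]

def leq(x, y):
    return any(x | a == y for a in nets)

for x in nets:
    assert leq(x, x)                                   # reflexivity
    for y in nets:
        for z in nets:
            if leq(x, y) and leq(y, z):
                assert leq(x, z)                       # transitivity
        if leq(x, y) and leq(y, x):
            assert x == y                              # antisymmetry

# Taking on a further commitment moves us up the order:
assert leq(frozenset({"e1"}), frozenset({"e1", "e2"}))
```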

I’m looking forward to this workshop organized by Spencer Breiner at the National Institute of Standards and Technology:

• Applied Category Theory: Bridging Theory & Practice, March 15–16, 2018, NIST, Gaithersburg, Maryland, USA.

It’s by invitation only, but it’s part of a movement I’m excited about, so I can’t resist mentioning its existence. Here’s the idea:

What: The Information Technology Laboratory at NIST is pleased to announce a workshop on Applied Category Theory to be held at NIST’s Gaithersburg, Maryland campus on March 15 & 16, 2018. The meeting will focus on practical avenues for introducing methods from category theory into real-world applications, with an emphasis on examples rather than theorems.

Who: The workshop aims to bring together two distinct groups. First, category theorists interested in pursuing applications outside of the usual mathematical fields. Second, domain experts and research managers from industry, government, science and engineering who have in mind potential domain applications for categorical methods.

Intended Outcomes: A proposed landscape of potential CT applications and the infrastructure needed to realize them, together with a 5–10 year roadmap for developing the field of applied category theory. This should include perspectives from industry, academia and government as well as major milestones, potential funding sources, avenues for technology transfer and necessary improvements in tool support and methodology. Exploratory collaborations between category theorists and domain experts. We will ask that each group come prepared to meet the other side. Mathematicians should be prepared with concrete examples that demonstrate practical applications of CT in an intuitive way. Domain experts should bring to the table specific problems to which they can devote time and/or funding as well as some reasons about why they think CT might be relevant to this application.

Invited Speakers:

John Baez (University of California at Riverside) and John Foley (Metron Scientific Solutions).

Bob Coecke (University of Oxford).

Dusko Pavlovic (University of Hawaii).

Some other likely participants include Chris Boner (Metron), Arquimedes Canedo (Siemens at Princeton), Stephane Dugowson (Supméca), William Edmonson (North Carolina A&T), Brendan Fong (MIT), Mark Fuge (University of Maryland), Jack Gray (Penumbra), Steve Huntsman (BAE Systems), Patrick Johnson (Dassault Systèmes), Al Jones (NIST), Cliff Joslyn (Pacific Northwest National Laboratory), Richard Malek (NSF), Tom Mifflin (Metron), Ira Monarch (Carnegie Mellon), John Paschkewitz (DARPA), Evan Patterson (Stanford), Blake Pollard (NIST), Emilie Purvine (Pacific Northwest National Laboratory), Mark Raugas (Pacific Northwest National Laboratory), Bill Regli (University of Maryland), Michael Robinson (American U.), Alberto Speranzon (Honeywell Aerospace), David Spivak (MIT), Eswaran Subrahmanian (Carnegie Mellon), Jamie Vicary (Birmingham and Oxford), and Ryan Wisnesky (Categorical Informatics).

I hope we make a lot of progress—and I plan to let you know how it goes!

Now students in the Applied Category Theory 2018 school are reading about categories applied to linguistics. Read the blog article here for more:

• Jade Master and Cory Griffith, Linguistics using category theory, The *n*-Category Café, 6 February 2018.

This was written by my grad student Jade Master along with Cory Griffith, an undergrad at Stanford. Jade is currently working with me on the semantics of open Petri nets.

What’s the basic idea of this linguistics and category theory stuff? I don’t know much about this, but I can say a bit.

Since category theory is great for understanding the semantics of programming languages, it makes sense to try it for human languages, even though they’re much harder. The first serious attempt I know was by Jim Lambek, who introduced pregroup grammars in 1958:

• Joachim Lambek, The mathematics of sentence structure, *Amer. Math. Monthly* **65** (1958), 154–170.

In this article he hid the connection to category theory. But when you start diagramming sentences or phrases using his grammar, you get planar string diagrams. So it’s not surprising—if you’re in the know—that he’s secretly using monoidal categories where every object has a right dual and, separately, a left dual.
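To see the mechanics, here is a tiny reduction checker in the style of a pregroup grammar. The toy lexicon and the encoding of adjoints (right adjoint as +1, left adjoint as −1, iterated as ±2 and so on) are my simplified illustration, not Lambek’s own notation.

```python
# A simple type is (base, z): z = 0 for the plain type, z+1 for its right
# adjoint, z-1 for its left adjoint. Adjacent (a, z)(a, z+1) contracts away,
# covering both rules a · a^r -> 1 and a^l · a -> 1.

def reduce_types(types):
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            (a, z1), (b, z2) = types[i], types[i + 1]
            if a == b and z2 == z1 + 1:
                del types[i:i + 2]
                changed = True
                break
    return types

LEXICON = {                                  # toy lexicon
    "John":  [("n", 0)],
    "Mary":  [("n", 0)],
    "likes": [("n", 1), ("s", 0), ("n", -1)],   # n^r · s · n^l
}

def is_sentence(words):
    types = [t for w in words for t in LEXICON[w]]
    return reduce_types(types) == [("s", 0)]

assert is_sentence(["John", "likes", "Mary"])
assert not is_sentence(["likes", "John"])
```

Drawing the contractions as cups under the string of types recovers exactly the planar string diagrams mentioned above.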

This fact is just barely mentioned in the Wikipedia article:

but it’s explained in more detail here:

• A. Preller and J. Lambek, Free compact 2-categories, *Mathematical Structures in Computer Science* **17** (2007), 309–340.

This stuff is hugely fun, so I’m wondering why I never looked into it before! When I talked to Lambek, who is sadly no longer with us, it was mainly about his theories relating particle physics to quaternions.

Recently Mehrnoosh Sadrzadeh and Bob Coecke have taken up Lambek’s ideas, relating them to the category of finite-dimensional vector spaces. Choosing a monoidal functor from a pregroup grammar to this category allows one to study linguistics using linear algebra! This simplifies things, perhaps a bit too much—but it makes it easy to do massive computations, which is very popular in this age of “big data” and machine learning.
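A drastically simplified sketch of the idea, collapsing the sentence space to scalars so that a transitive verb becomes a matrix and the meaning of “subject verb object” is a single contraction. All the numbers are made up for illustration.

```python
# Meaning of "subj verb obj" as the contraction subj^T . V . obj,
# with noun meanings as vectors and the verb as a (made-up) matrix.

def sentence_meaning(subj, verb, obj):
    return sum(subj[i] * verb[i][j] * obj[j]
               for i in range(len(subj)) for j in range(len(obj)))

john = [1.0, 0.0]
mary = [0.0, 1.0]
likes = [[0.2, 0.9],     # invented "likes" matrix
         [0.7, 0.1]]

# Subject and object play different roles, since the matrix need not
# be symmetric:
assert abs(sentence_meaning(john, likes, mary) - 0.9) < 1e-12
assert abs(sentence_meaning(mary, likes, john) - 0.7) < 1e-12
```

Real implementations learn the vectors and matrices from corpus statistics; this sketch only shows the shape of the composition.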

It also sets up a weird analogy between linguistics and quantum mechanics, which I’m a bit suspicious of. While the category of finite-dimensional vector spaces with its usual tensor product is monoidal, and has duals, it’s symmetric, so the difference between writing a word to the left of another and writing it to the right of another gets washed out! I think instead of using vector spaces one should use modules of some noncommutative Hopf algebra, or something like that. Hmm… I should talk to those folks.

To discuss this, please visit The *n*-Category Café, since there’s a nice conversation going on there and I don’t want to split it. There has also been a conversation on Google+, and I’ll quote some of it here, so you don’t have to run all over the internet.

Noam Zeilberger wrote:

You might have been simplifying things for the post, but a small comment anyways: what Lambek introduced in his original paper are these days usually called “Lambek grammars”, and not exactly the same thing as what Lambek later introduced as “pregroup grammars”. Lambek grammars actually correspond to monoidal biclosed categories in disguise (i.e., based on left/right division rather than left/right duals), and may also be considered without a unit (as in his original paper). (I only have a passing familiarity with this stuff, though, and am not very clear on the difference in linguistic expressivity between grammars based on division vs grammars based on duals.)

Noam Zeilberger wrote:

If you haven’t seen it before, you might also like Lambek’s followup paper “On the calculus of syntactic types”, which generalized his original calculus by dropping associativity (so that sentences are viewed as trees rather than strings). Here are the first few paragraphs from the introduction:

…and here is a bit near the end of the 1961 paper, where he made explicit how derivations in the (original) associative calculus can be interpreted as morphisms of a monoidal biclosed category:

John Baez wrote:

Noam Zeilberger wrote: “what Lambek introduced in his original paper are these days usually called “Lambek grammars”, and not exactly the same thing as what Lambek later introduced as “pregroup grammars”.”

Can you say what the difference is? I wasn’t simplifying things on purpose; I just don’t know this stuff. I think monoidal biclosed categories are great, and if someone wants to demand that the left or right duals be inverses, or that the category be a poset, I can live with that too…. though if I ever learned more linguistics, I might ask why those additional assumptions are reasonable. (Right now I have no idea how reasonable the whole approach is to begin with!)

Thanks for the links! I will read them in my enormous amounts of spare time. :-)

Noam Zeilberger wrote:

As I said it’s not clear to me what the linguistic motivations are, but the way I understand the difference between the original “Lambek” grammars and (later introduced by Lambek) pregroup grammars is that it is precisely analogous to the difference between a monoidal category with left/right residuals and a monoidal category with left/right duals. Lambek’s 1958 paper was building off the idea of “categorial grammar” introduced earlier by Ajdukiewicz and Bar-Hillel, where the basic way of combining types was left division A\B and right division B/A (with no product).

Noam Zeilberger wrote:

At least one seeming advantage of the original approach (without duals) is that it permits interpretations of the “semantics” of sentences/derivations in cartesian closed categories. So it’s in harmony with the approach of “Montague semantics” (mentioned by Richard Williamson over at the n-Cafe) where the meanings of natural language expressions are interpreted using lambda calculus. What I understand is that this is one of the reasons Lambek grammar started to become more popular in the 80s, following a paper by Van Benthem where he observed that such lambda terms denoting the meanings of expressions could be computed via “homomorphism” from syntactic derivations in Lambek grammar.

Jason Nichols wrote:

John Baez, as someone with a minimal understanding of set theory, lambda calculus, and information theory, what would you recommend as background reading to try to understand this stuff?

It’s really interesting, and looks relevant to work I do with NLP and even abstract syntax trees, but reading the papers and wiki pages, I feel like there’s a pretty big gap to cross between where I am and where I’d need to be to begin to understand this stuff.

John Baez wrote:

Jason Nichols: I suggest trying to read some of Lambek’s early papers, like this one:

• Joachim Lambek, The mathematics of sentence structure,

*Amer. Math. Monthly* **65** (1958), 154–170. (If you have access to the version at the American Mathematical Monthly, it’s better typeset than this free version.) I don’t think you need to understand category theory to follow them, at least not this first one. At least for starters, knowing category theory mainly makes it clear that the structures he’s trying to use are not arbitrary, but “mathematically natural”. I guess that as the subject develops further, people take more advantage of the category theory and it becomes more important to know it. But anyway, I recommend Lambek’s papers!

Borislav Iordanov wrote:

Lambek was an amazing teacher; I was lucky to have him in my undergrad. There is a small and very approachable book on his pregroup treatment that he wrote shortly before he passed away: “From Word to Sentence: a computational algebraic approach to grammar”. It’s plain algebra and very fun. Sadly it looks to be out of print on Amazon, but if you can find it, it’s well worth it.

Andreas Geisler wrote:

One immediate concern for me here is that this seems (don’t have the expertise to be sure) to repeat a very old mistake of linguistics, long abandoned:

Words do not have atomic meanings. They are not a part of some 1:1 lookup table.

The most likely scenario right now is that our brains store meaning as a continuously accumulating set of connections that ultimately are impacted by every instance of a form we’ve ever heard/seen.

So, you shall know a word by all the company you’ve ever seen it in.

Andreas Geisler wrote:

John Baez I am a linguist by training, you’re welcome to borrow my brain if you want. You just have to figure out the words to use to get my brain to index what you need, as I don’t know the category theory stuff at all.

It’s a question of interpretation. I am also a translator, so i might be of some small assistance there as well, but it’s not going to be easy either way I am afraid.

John Baez wrote:

Andreas Geisler wrote: “I might be of some small assistance there as well, but it’s not going to be easy either way I am afraid.”

No, it wouldn’t. Alas, I don’t really have time to tackle linguistics myself. Mehrnoosh Sadrzadeh is seriously working on category theory and linguistics. She’s one of the people leading a team of students at this Applied Category Theory 2018 school. She’s the one who assigned this paper by Lambek, which two students blogged about. So she would be the one to talk to.

So, you shall know a word by all the company you’ve ever seen it in.

Yes, that quote appears in the blog article by the students, which my post here was merely an advertisement for.

The school for Applied Category Theory 2018 is up and running! Students are blogging about papers! The first blog article is about a diagrammatic method for studying causality:

• Joseph Moeller and Dmitry Vagner, A categorical semantics for causal structure, The *n*-Category Café, 22 January 2018.

Make sure to read the whole blog conversation, since it helps a lot. People were confused about some things at first.

Joseph Moeller is a grad student at U.C. Riverside working with me and a company called Metron Scientific Solutions on “network models”—a framework for designing networks, which the Coast Guard is already interested in using for their search and rescue missions:

• John C. Baez, John Foley, Joseph Moeller and Blake S. Pollard, Network Models. (Blog article here.)

Dmitry Vagner is a grad student at Duke who is very enthusiastic about category theory. Dmitry has worked with David Spivak and Eugene Lerman on open dynamical systems and the operad of wiring diagrams.

It’s great to see these students digging into the wonderful world of category theory and its applications.

To discuss this stuff, please go to The *n*-Category Café.

But let me explain why I’m interested. I’m interested in applied category theory… but this is special.

Mike Stay came to work with me at U.C. Riverside after getting a master’s in computer science at the University of Auckland, where he worked with Cristian Calude on algorithmic randomness. For example:

• Cristian S. Calude and Michael A. Stay, From Heisenberg to Gödel via Chaitin, *International Journal of Theoretical Physics* **44** (2005), 1053–1065.

• Cristian S. Calude and Michael A. Stay, Most programs stop quickly or never halt, *Advances in Applied Mathematics*, **40** (2008), 295–308.

It seems like ages ago, but I dimly remember looking at his application, seeing the title of the first of these two papers, and thinking “he’s either a crackpot, or I’m gonna like him”.

You see, the lure of relating Gödel’s theorem to Heisenberg’s uncertainty principle is fatally attractive to many people who don’t really understand either. I looked at the paper, decided he *wasn’t* a crackpot… and yes, it turned out I liked him.

(A vaguely similar thing happened with my student Chris Rogers, who’d written some papers applying techniques from general relativity to the study of crystals. As soon as I assured myself this stuff was for real, I knew I’d like him!)

Since Mike knew a lot about computer science, his presence at U. C. Riverside emboldened me to give a seminar on classical versus quantum computation. I used this as an excuse to learn the old stuff about the lambda-calculus and cartesian closed categories. When I first started, I thought the basic story would be obvious: people must be making up categories where the morphisms describe *processes of computation*.

But I soon learned I was wrong: people were making up categories where objects were data types, but the morphisms were *equivalence classes* of things going between data types—and this equivalence relation completely washed out the difference, between, say, a program that actually computes 237 × 419 and a program that just prints out 99303, which happens to be the answer to that problem.

In other words, the actual *process of computation* was not visible in the category-theoretic framework. I decided that to make it visible, what we really need are *2-categories* in which 2-morphisms are ‘processes of computation’. Or in the jargon: objects are types, morphisms are terms, and 2-morphisms are rewrites.
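A toy version of the distinction: two terms denote the same number, but their reduction traces, which play the role of the 2-morphisms, differ. The term language and evaluator here are my own miniature illustration.

```python
# A term is either an int (a normal form) or ("mul", t1, t2).
# `step` performs one rewrite; `trace` records the whole computation.

def step(t):
    if isinstance(t, int):
        return None                        # normal form: no rewrite applies
    _, a, b = t
    if isinstance(a, int) and isinstance(b, int):
        return a * b                       # the actual multiplication step
    if not isinstance(a, int):
        return ("mul", step(a), b)         # rewrite inside the left argument
    return ("mul", a, step(b))             # rewrite inside the right argument

def trace(t):
    steps = [t]
    while step(t) is not None:
        t = step(t)
        steps.append(t)
    return steps

prog1 = ("mul", 237, 419)                  # a program that computes 237 x 419
prog2 = 99303                              # a program that just prints the answer

assert trace(prog1)[-1] == trace(prog2)[-1]    # same denotation (1-morphism)
assert len(trace(prog1)) != len(trace(prog2))  # different processes (2-morphisms)
```

In the usual 1-categorical semantics only the final values survive; the traces are exactly what the 2-morphism level keeps.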

It turned out people had already thought about this:

• Barnaby P. Hilken, Towards a proof theory of rewriting: the simply-typed 2λ-calculus, *Theor. Comp. Sci.* **170** (1996), 407–444.

• R. A. G. Seely, Weak adjointness in proof theory, in *Proc. Durham Conf. on Applications of Sheaves*, Springer Lecture Notes in Mathematics **753**, Springer, Berlin, 1979, pp. 697–701.

• R. A. G. Seely, Modeling computations: a 2-categorical framework, in *Proc. Symposium on Logic in Computer Science* 1987, Computer Society of the IEEE, pp. 65–71.

But I felt this viewpoint wasn’t nearly as popular as it should be. It should be very popular, at least among theoretical computer scientists, because it describes *what’s actually going on* in the lambda-calculus. If you read a big fat book on the lambda-calculus, like Barendregt’s *The Lambda Calculus: Its Syntax and Semantics*, you’ll see it spends a lot of time on reduction strategies: that is, ways of organizing the process of computation. All this is buried in the usual 1-categorical treatment. It’s living up at the 2-morphism level!

Mike basically agreed with me. We started by writing an introduction to the usual 1-categorical stuff, and how it shows up in many different fields:

• John Baez and Michael Stay, Physics, topology, logic and computation: a Rosetta Stone, in *New Structures for Physics*, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95–172.

For financial reasons he had to leave U. C. Riverside and take a job at Google. But he finished his Ph.D. at the University of Auckland, with Cristian Calude and me as co-advisors. And a large chunk of his thesis dealt with cartesian closed 2-categories and their generalizations suitable for quantum computation:

• Michael Stay, Compact closed bicategories, *Theory and Applications of Categories* **31** (2016), 755–798.

Great stuff! My students these days are building on this math and marching ahead.

I said Mike ‘basically’ agreed with me. He agreed that we need to go beyond the usual 1-categorical treatment to model the process of computation. But when it came to applying this idea to computer science, Mike wasn’t satisfied with thinking about the lambda-calculus. That’s an old model of computation: it’s okay for a single primitive computer, but not good for systems where different parts are sending messages to each other, like the internet, or even multiprocessing in a single computer. In other words, the lambda-calculus doesn’t really handle the pressing issues of concurrency and distributed computation.

So, Mike wanted to think about more modern formalisms for computation, like the pi-calculus, using 2-categories.
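For flavor, here is a toy one-step reduction in the style of the pi-calculus, showing the key feature the lambda-calculus lacks: channel mobility, where a channel name is itself the message and the receiver can then communicate on it. The encoding as nested tuples is my own sketch, not any standard implementation.

```python
# One reduction step: x<z>.Q | x(y).P  ->  Q | P[z/y]

def substitute(proc, y, z):
    """Replace the name y by z throughout a process given as nested tuples."""
    if proc == y:
        return z
    if isinstance(proc, tuple):
        return tuple(substitute(p, y, z) for p in proc)
    return proc

def reduce_par(p1, p2):
    """Reduce a send in parallel with a receive on the same channel."""
    (k1, ch1, n1, cont1), (k2, ch2, n2, cont2) = p1, p2
    if k1 == "send" and k2 == "recv" and ch1 == ch2:
        return cont1, substitute(cont2, n2, n1)
    return None

# The channel name "secret" is sent over "x"; the receiver then uses it:
sender = ("send", "x", "secret", ("nil",))
receiver = ("recv", "x", "y", ("send", "y", "data", ("nil",)))
q, p = reduce_par(sender, receiver)
assert p == ("send", "secret", "data", ("nil",))
```

Nothing like this one-step rule exists in the pure lambda-calculus, which is why concurrency calls for a different formalism.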

He left Google and wrote some papers with Greg Meredith on these ideas, for example:

• Michael Stay and Lucius Gregory Meredith, Higher category models of the pi-calculus.

• Michael Stay and Lucius Gregory Meredith, Representing operational semantics with enriched Lawvere theories.

The second one takes a key step: moving away from 2-categories to graph-enriched categories, which are simpler and perhaps better.

Then, after various twists and turns, he started a company called Pyrofex with Nash Foster. Or perhaps I should say Foster started a company with Mike, since Foster is the real bigshot of the two. Here’s what their webpage says:

Hello—

My name is Nash Foster, and together with my friend and colleague Mike Stay, I recently founded a company called Pyrofex. We founded our company for one simple reason: we love to build large-scale distributed systems that are always reliable and secure and we wanted to help users like yourself do it more easily.

When Mike and I founded the company, we felt strongly that several key advances in programming language theory would ease the development of everyday large-scale systems. However, we were not sure exactly how to expose the APIs to users or how to build the tool-sets. Grand visions compete with practical necessity, so we spent months talking to users just like you to discover what kinds of things you were most interested in. We spent many hours at white-boards, checking our work against the best CS theory we know. And, of course, we have enjoyed many long days of hacking in the hopes that we can produce something new and useful, for you.

I think this is the most exciting time in history to be a programmer (and I wrote my first program in the early 1980’s). The technologies available today are varied and compelling and exciting. I hope that you’ll be as excited as I am about some of the ideas we discovered while building our software.

And on January 8th, 2018, their company entered into an arrangement with Greg Meredith’s company Rchain. Below I’ll quote part of an announcement. I don’t know much about this stuff—at least, not yet. But I’m happy to see some good ideas getting applied in the real world, and especially happy to see Mike doing it.

## The RChain Cooperative & Pyrofex Corporation announce Strategic Partnership

The RChain Cooperative and Pyrofex Corporation today announced strategically important service contracts and an equity investment intended to deliver several mutually beneficial blockchain solutions. RChain will acquire 1.1 million shares of Pyrofex Common Stock as a strategic investment. The two companies will ink separate service contracts to reinforce their existing relationship and help to align their business interests.

Pyrofex will develop critical tools and platform components necessary for the long-term success of the RChain platform. These tools are designed to leverage RChain’s unique blockchain environment and make blockchain development simpler, faster, and more effective than ever before. Under these agreements, Pyrofex will develop the world’s first decentralized IDE for writing blockchain smart contracts on the RChain blockchain.

Pyrofex also commits to continuing the core development of RChain’s blockchain platform and to organizing RChain’s global developer events and conferences.

## Comments on the News

“We’re thrilled to have an opportunity to strengthen our relationship with the RChain Cooperative in 2018. Their commitment to open-source development mirrors our own corporate values. It’s a pleasure to have such a close relationship with a vibrant open-source community. I’ve rarely seen the kind of excitement the Coop’s members share and we look forward to delivering some great new technology this year.” — Nash E. Foster, Cofounder & CEO, Pyrofex Corp.

“Intuitive development tools are important for us and the blockchain ecosystem as a whole; we’re incredibly glad Pyrofex intends to launch their tools on RChain first. But, Ethereum has been a huge supporter of RChain and we’re pleased that Pyrofex intends to support Solidity developers as well. Having tools that will make it possible for developers to migrate smart contracts between blockchains is going to create tremendous possibilities.” — Lucius Greg Meredith, President, RChain Cooperative

## Background

Pyrofex is a software development company co-founded by Dr. Michael Stay, PhD and Nash Foster in 2016. Dr. Stay and Greg Meredith are long-time colleagues and collaborators whose mutual research efforts form the mathematical foundations of RChain’s technology. One example of the work that Greg and Mike have collaborated on is the work on the LADL (Logic as Distributed Law) algorithm. You can watch Dr. Stay present the latest research from the RChain Developers retreat.

Pyrofex and its development team should be familiar to those who follow the RChain Cooperative. They currently employ 14 full-time and several part-time developers dedicated to RChain platform development. Pyrofex CEO Nash Foster and Lead Project Manager Medha Parlikar have helped grow RChain’s development team to an impressive 20+ Core devs, with plans to double that by mid-2018. The team now includes multiple PhDs, ex-Googlers, and other world-class talent.

Every Wednesday, you will find Medha on our debrief updating the community with the latest developments in RChain. Here she is announcing the recent node.hello release along with a demo from core developer Chris Kirkwood-Watts.

The working relationship between the RChain Cooperative and Pyrofex has gone so well that the Board of Directors and the community at large have supported Pyrofex’s proposal to develop Cryptofex, the much needed developer tool kit for the decentralized world.

The RChain Cooperative is ecstatic to further develop its relationship with the Pyrofex team.

“As we fly towards Mercury and beyond, we all could use better tools.”

— The RChain Co-op Team.

Listen to the announcement from Greg Meredith, as well as a short Q&A with Pyrofex CEO Nash Foster, from a recent community debrief.

The following is an excerpt from the soon to be released Cryptofex Whitepaper.

## The Problem: Writing Software is Hard, Compiling is Harder

In 1983, Borland Software Corporation acquired a small compiler called Compas Pascal and released it in the United States as Turbo Pascal. It was the first product to integrate a compiler with the editor in which software was written, and for nearly a decade Borland’s products defined the market for integrated development environments (IDEs).

The year after Borland released Turbo Pascal, Ken Thompson observed the unique dangers associated with compiler technologies. In his famous Turing Award lecture, “Reflections on Trusting Trust,” Thompson described a mechanism by which a virus can be injected into a compiler such that every binary compiled with that compiler will replicate the virus.

“In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.” — Ken Thompson

Unfortunately, many developers today remain stuck in a world constructed in the early 1980s. IDEs remain essentially the same, able to solve only those problems that fit neatly onto a laptop’s single Intel CPU. But barely a month ago, on 22 November 2017, Intel released a critical firmware update to the Intel Management Engine, and in the intervening weeks the public at large has become aware of the “Meltdown” bug. The IME and other components are exactly the sort of low-level microcode applications that Thompson warned about. Intel has demonstrated perfectly that in the past 33 years, we have learned little and gone nowhere.

Ironically, we have had a partial solution to these problems for nearly a decade. In 2009, David A. Wheeler published his PhD dissertation, in which he proposed a mechanism by which multiple compilers can be used to verify the correctness of a compiler output. Such a mechanism turns out to be tailor-made for the decentralized blockchain environment. Combining Wheeler’s mechanism with a set of economic incentives for compile farms to submit correct outputs gives us a very real shot at correcting a problem that has plagued us for more than 30 years.
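Wheeler’s scheme, known as “diverse double-compiling,” can be sketched abstractly: rebuild the suspect compiler from its published source using an independent trusted compiler, self-compile once more, and compare the result with what the suspect compiler produces from the same source. The toy model below is only an illustration with hypothetical names (real DDC compares actual binaries bit for bit); it shows how a Thompson-style trojan fails the check:

```python
# Toy illustration of David Wheeler's "diverse double-compiling" (DDC).
# A "compiler binary" is modelled as a pair (source_it_implements, backdoored);
# honest compilation reproduces the source's semantics exactly, while a
# backdoored compiler silently re-injects its payload whenever it compiles
# the compiler's own source (Thompson's attack).  All names are hypothetical.

COMPILER_SRC = "clean self-hosting compiler source"

def compile_with(compiler, source):
    """Stand-in for running `compiler` on `source`."""
    _, backdoored = compiler
    return (source, backdoored and source == COMPILER_SRC)

def ddc_check(suspect, trusted, source=COMPILER_SRC):
    """True iff `suspect` behaves like an honest build of `source`."""
    stage1 = compile_with(trusted, source)   # trusted rebuild of the compiler
    stage2 = compile_with(stage1, source)    # self-compile with that rebuild
    return stage2 == compile_with(suspect, source)

honest = (COMPILER_SRC, False)
trojaned = (COMPILER_SRC, True)              # same claimed source, hidden payload
trusted = ("independent second compiler", False)

print(ddc_check(honest, trusted))    # True  — outputs agree
print(ddc_check(trojaned, trusted))  # False — Thompson-style trojan exposed
```

The key design point is that the comparison needs no trust in the suspect compiler itself, only in the independent second compiler used for the rebuild.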

## The Solution: A Distributed and Decentralized Toolchain

If we crack open the development environments at companies like Google and Amazon, many of us would be surprised to discover that very few programs are compiled on a single machine. Already, the most sophisticated organizations in the world have moved to a distributed development environment. This allows them to leverage the cloud, bringing high-performance distributed computing to bear on software development itself. At Google, many thousands of machines churn away compiling code, checking it for correctness, and storing objects to be re-used later. Through clever use of caching and “hermetic” builds, Google makes its builds faster and more computationally efficient than could possibly be done on individual developer workstations. Unfortunately, most of us cannot afford to dedicate thousands of machines to compilation.
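The caching idea behind such “hermetic” builds can be sketched simply: key every build step by a hash of all of its inputs (sources, flags, toolchain), so that any machine which has already seen identical inputs can reuse the cached output instead of recompiling. This is a minimal illustration with made-up names, not Google’s actual build system:

```python
import hashlib

# Minimal sketch of content-addressed build caching: a step is keyed by a
# hash of *all* of its inputs, so identical steps are compiled once and
# shared.  Names and formats here are illustrative only.

cache = {}          # shared across machines in a real build farm
compile_count = 0   # how many real compilations actually ran

def cache_key(sources: dict, flags: tuple, toolchain: str) -> str:
    h = hashlib.sha256()
    for name in sorted(sources):          # sort for a deterministic key
        h.update(name.encode())
        h.update(sources[name].encode())
    h.update(repr(flags).encode())
    h.update(toolchain.encode())
    return h.hexdigest()

def build(sources, flags=("-O2",), toolchain="cc-1.0"):
    global compile_count
    key = cache_key(sources, flags, toolchain)
    if key not in cache:
        compile_count += 1                # cache miss: really compile
        cache[key] = f"object({key[:8]})" # stand-in for a compiled object
    return cache[key]

a = build({"main.c": "int main(){}"})
b = build({"main.c": "int main(){}"})                   # identical: cache hit
c = build({"main.c": "int main(){}"}, flags=("-O0",))   # flags differ: rebuild
print(a == b, compile_count)   # True 2
```

Because the key covers the toolchain and flags as well as the sources, a cached object is only ever reused when the step is truly identical.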

The open-source community might be able to build large scale shared compilation environments on the Internet, but Ken Thompson explained to us why we could not trust a shared environment for these workloads. However, in the age of blockchain, it’s now possible to build development environments that harness the power of large-scale compute to compile and check programs against programmer intent. Secure, cheap, and fast — we can get all three.

CryptoFex is just such a Decentralized Integrated Development Environment (DIDE) allowing software engineers to author, test, compile, and statically check their code to ensure that it is secure, efficient, and scalable.

You can see the whole speech annotated here. Here is the first part. The last line states the vision:

Good morning. As our Constitution requires, I’m here to report on the condition of our state.

Simply put, California is prospering. While it faces its share of difficulties, we should never forget the bounty and the endless opportunities bestowed on this special place—or the distance we’ve all traveled together these last few years.

It is now hard to visualize—or even remember—the hardships, the bankruptcies and the home foreclosures so many experienced during the Great Recession. Unemployment was above 12 percent and 1.3 million Californians lost their jobs.

The deficit was $27 billion in 2011. The New York Times, they called us: “The Coast of Dystopia.” The Wall Street Journal saw: “The Great California Exodus.” The Economist of London pronounced us: “The Ungovernable State.” And the Business Insider simply said: “California is Doomed.”

Even today, you will find critics who claim that the California dream is dead. But I’m used to that. Back in my first term, a prestigious report told us that California had the worst business climate in America. In point of fact, personal income in 1975, my first year as governor, was $154 billion. Today it has grown to $2.4 trillion. In just the last eight years alone, California’s personal income has grown $845 billion and 2.8 million new jobs have been created. Very few places in the world can match that record.

That is one of the reasons why confidence in the work that you are doing has risen so high. That contrasts sharply with the abysmal approval ratings given to the United States Congress. Certainly our on-time budgets are well received, thanks in large part to the lowering of the two-thirds vote to a simple majority to pass the budget.

But public confidence has also been inspired by your passing—with both Republicans and Democratic votes:

• Pension reform—and don’t minimize that, that was a big pension reform. May not be the final one, but it was there and you did it, Republicans and Democrats;

• Workers’ Compensation reform, another vote with Republicans and Democrats there;

• The Water Bond;

• The Rainy Day Fund; and

• The Cap-and-Trade Program.

And by the way, you Republicans, as I look over here and I look over there, don’t worry, I’ve got your back!

All these programs are big and very important to our future. And their passage demonstrates that some American governments can actually get things done—even in the face of deepening partisan division.

The recent fires and mudslides show us how much we are affected by natural disasters and how we can rise to the occasion—at the local level, at the state level and with major help from the federal government. I want to especially thank all of the firefighters, first responders and volunteers. They answered the call to help their fellow neighbors, in some cases even when their own homes were burning. Here we see an example of people working together irrespective of party.

The president himself has given California substantial assistance and the congressional leadership is now sponsoring legislation to help California, as well as the other states that have suffered major disasters—Texas, Florida and the Commonwealth of Puerto Rico.

In this regard, we should never forget our dependency on the natural environment and the fundamental challenges it presents to the way we live. We can’t fight nature. We have to learn how to get along with her. And I want to say that again: We can’t fight nature. We have to learn how to get along with her.

And that’s not so easy. For thousands of years this land now called California supported no more than 300,000 people. That’s 300,000 people and they did that for thousands and thousands—some people say, as long as 20,000 years. Today, 40 million people live in the same place and their sheer impact on the soils, the forests and the entire ecosystem has no long-term precedent. That’s why we have to innovate constantly and create all manner of shelter, machines and creative technologies. That will continue, but only with ever greater public and private investment.

The devastating forest fires and the mudslides are a profound and growing challenge. Eight of the state’s most destructive fires have occurred in the last five years. Last year’s Thomas fire in Ventura and Santa Barbara counties was the largest in recorded history. The mudslides that followed were among the most lethal the state has ever encountered. In 2017, we had the highest average summer temperatures in recorded history. Over the last 40 years, California’s fire season has increased 78 days—and in some places it is nearly year-round.

So we have to be ready with the necessary firefighting capability and communication systems to warn residents of impending danger. We also have to manage our forests—and soils—much more intelligently.

Toward that end, I will convene a task force composed of scientists and knowledgeable forest practitioners to review thoroughly the way our forests are managed and suggest ways to reduce the threat of devastating fires. They will also consider how California can increase resiliency and carbon storage capacity. Trees in California should absorb CO₂, not generate huge amounts of black carbon and greenhouse gas as they do today when forest fires rage across the land.

Despite what is widely believed by some of the most powerful people in Washington, the science of climate change is not in doubt. The national academies of science of every major country in the world—including Russia and China—have all endorsed the mainstream view that human caused greenhouse gases are trapping heat in the oceans and in the atmosphere and that action must be taken to avert catastrophic changes in our weather systems. All nations agree except one and that is solely because of one man: our current president.

Here in California, we follow a different path. Enlightened by top scientists at the University of California, Stanford and Caltech, among others, our state has led the way. I’ll enumerate just how:

• Building and appliance efficiency standards;

• Renewable electricity—reaching 50 percent in just a few years;

• A powerful low-carbon fuel standard; incentives for zero-emission vehicles;

• Ambitious policies to reduce short-lived climate pollutants like methane and black carbon;

• A UN sponsored climate summit this September in San Francisco; and

• The nation’s only functioning cap-and-trade system.

I will shortly provide an expenditure plan for the revenues that the cap-and-trade auctions have generated. Your renewing this program on a bipartisan basis was a major achievement and will ensure that we will have substantial sums to invest in communities all across the state—both urban and agricultural.

The goal is to make our neighborhoods and farms healthier, our vehicles cleaner—zero emission the sooner the better—and all our technologies increasingly lowering their carbon output. To meet these ambitious goals, we will need five million zero-emission vehicles on the road by 2030. And we’re going to get there. Believe me. We only have 350,000 today, so we’ve all got a lot of work. And think of all the jobs and how much cleaner our air will be then.

When you passed cap-and-trade legislation, you also passed a far-reaching air pollution measure that for the first time focuses on pollutants that disproportionately affect specific neighborhoods. Instead of just measuring pollutants over vast swaths of land, regulators will zero in on those communities which are particularly disadvantaged by trains, trucks or factories.

Along with clean air, clean water is a fundamental good that must be protected and made available on a sustainable basis. When droughts occur, conservation measures become imperative. In recent years, you have passed historic legislation to manage California’s groundwater, which local governments are now implementing.

In addition, you passed—and more than two-thirds of voters approved—a water bond that invests in safe drinking water, conservation and storage. As a result, we will soon begin expending funds on some of the storage we’ve needed for decades.

As the climate changes and more water arrives as rain instead of snow, it is crucial that we are able to capture the overflow in a timely and responsible way. That, together with recycling and rainwater recapture will put us in the best position to use water wisely and in the most efficient way possible. We are also restoring the Sacramento and San Joaquin watersheds to protect water supplies and improve California’s iconic salmon runs.

Finally, we have the California Waterfix, a long studied and carefully designed project to modernize our broken water system. I am convinced that it will conserve water, protect the fish and the habitat in the Delta and ensure the delivery of badly needed water to the millions of people who depend on California’s aqueducts. Local water districts—in both the North and South—are providing the leadership and the financing because they know it is vital for their communities, and for the whole state. That is true, and that is the reason why I have persisted.

Our economy, the sixth largest in the world, depends on mobility, which only a modern and efficient transportation system provides. The vote on the gas tax was not easy but it was essential, given the vast network of roads and bridges on which California depends and the estimated $67 billion in deferred maintenance on our infrastructure. Tens of millions of cars and trucks travel over 330 billion miles a year. The sun’s only 93 million miles away.

The funds that SB 1 makes available are absolutely necessary if we are going to maintain our roads and transit systems in good repair. Twenty-five other states have raised gas taxes. Even the U.S. Chamber of Commerce has called for a federal gas tax because the highway trust fund is nearly broke.

Government does what individuals can’t do, like build roads and bridges and support local bus and light rail systems. This is our common endeavor by which we pool our resources through the public sector and improve all of our lives. Fighting a gas tax may appear to be good politics, but it isn’t. I will do everything in my power to defeat any repeal effort that gets on the ballot. You can count on that.

I’m looking for that one Republican. A brave, brave man.

Since I have talked about tunnels and transportation, I will bring up one more item of infrastructure: high-speed rail. I make no bones about it. I like trains and I like high-speed trains even better. So did the voters in 2008 when they approved the bond. Look, 11 other countries have high-speed trains. They are now taken for granted all over Europe, in Japan and in China. President Reagan himself said in Japan on November 11, 1983 the following, and I quote: “The State of California is planning to build a rapid speed train that is adapted from your highly successful bullet train.” Yes, we were, and now we are actually building it. Takes a long time.

Like any big project, there are obstacles. There were for the Bay Area Rapid Transit System, for the Golden Gate Bridge and the Panama Canal. I’ll pass over in silence the Bay bridge, that was almost 20 years. And by the way, it was over budget by $6 billion on a $1 billion project. So that happens. But not with the high-speed rail, we’ve got that covered.

But build it they did and build it we will—America’s first high-speed rail system. One link between San Jose and San Francisco—an electrified Caltrain—is financed and ready to go. Another billion, with matching funds, will be invested in Los Angeles to improve Union Station as a major transportation hub and fix the Anaheim corridor.

The next step is completing the Valley segment and getting an operating system connected to San Jose. Yes, it costs lots of money but it is still cheaper and more convenient than expanding airports, which nobody wants to, and building new freeways, which landowners often object to. All of that is to meet the growing demand. It will be fast, quiet and powered by renewable electricity and last for a hundred years. After all you guys are gone.

Already, more than 1,500 construction workers are on the job at 17 sites and hundreds of California businesses are providing services, generating thousands of job years of employment. As the global economy puts more Americans out of work and lowers wages, infrastructure projects like this will be a key source of well-paid California jobs.

Difficulties challenge us but they can’t discourage or stop us. Whether it’s roads or trains or dams or renewable energy installations or zero-emission cars, California is setting the pace for the entire nation. Yes, there are critics, there are lawsuits and there are countless obstacles. But California was built on dreams and perseverance and the bolder path is still our way forward.

On January 26th, the governor’s office made this announcement:

Taking action to further California’s climate leadership, Governor Edmund G. Brown Jr. today signed an executive order to boost the supply of zero-emission vehicles and charging and refueling stations in California. The Governor also detailed the new plan for investing $1.25 billion in cap-and-trade auction proceeds to reduce carbon pollution and improve public health and the environment.

“This executive order aims to curb carbon pollution from cars and trucks and boost the number of zero-emission vehicles driven in California,” said Governor Brown. “In addition, the cap-and-trade investments will, in varying degrees, reduce California’s carbon footprint and improve the quality of life for all.”

Zero-Emission Vehicle Executive Order

California is taking action to dramatically reduce carbon emissions from transportation—a sector that accounts for 50 percent of the state’s greenhouse gas emissions and 80 percent of smog-forming pollutants.

To continue to meet California’s climate goals and clean air standards, California must go even further to accelerate the market for zero-emission vehicles. Today’s executive order implements the Governor’s call for a new target of 5 million ZEVs in California by 2030, announced in his State of the State address yesterday, and will help significantly expand vehicle charging infrastructure.

The Administration is also proposing a new eight-year initiative to continue the state’s clean vehicle rebates and spur more infrastructure investments. This $2.5 billion initiative will help bring 250,000 vehicle charging stations and 200 hydrogen fueling stations to California by 2025.

Today’s action builds on past efforts to boost zero-emission vehicles, including: legislation signed last year and in 2014 and 2013; adopting the 2016 Zero-Emission Vehicle Plan and the Advanced Clean Cars program; hosting a Zero-Emission Vehicle Summit; launching a multi-state ZEV Action Plan; co-founding the International ZEV Alliance; and issuing Executive Order B-16-12 in 2012 to help bring 1.5 million zero-emission vehicles to California by 2025.

In addition to today’s executive order, the Governor also released the 2018 plan for California’s Climate Investments—a statewide initiative that puts billions of cap-and-trade dollars to work reducing greenhouse gas emissions, strengthening the economy and improving public health and the environment–particularly in disadvantaged communities.

California Climate Investments projects include affordable housing, renewable energy, public transportation, zero-emission vehicles, environmental restoration, more sustainable agriculture and recycling, among other projects. At least 35 percent of these investments are made in disadvantaged and low-income communities.

The $1.25 billion climate investment plan can be found here.

A short time ago, on the Croatian island of Zlarin, there gathered a band of bold individuals—rebels of academia and industry, whose everyday thoughts and actions challenge the separations of the modern world. They journeyed from all over to learn of the grand endeavor of another open mind, an expert functional programmer and creative hacktivist with significant mathematical knowledge: Jelle |yell-uh| Herold.

The Dutch computer scientist has devoted his life to helping our species and our planet: from consulting in business process optimization to winning a Greenpeace hackathon, from updating Netherlands telecommunications to creating a website to determine ways for individuals to help heal the earth, Jelle has gained a comprehensive perspective of the interconnected era. Through a diverse and innovative career, he has garnered crucial insights into software design and network computation—most profoundly, he has realized that it is imperative that these immense forces of global change develop thoughtful, comprehensive systematization.

Jelle understood that initiating such a grand ambition requires a massive amount of work, and the cooperation of many individuals, fluent in different fields of mathematics and computer science. Enter the Zlarin meeting: after a decade of consideration, Jelle has now brought together proponents of categories, open games, dependent types, Petri nets, string diagrams, and blockchains toward a singular end: a universal language of distributed systems—Statebox.

Statebox is a programming language formed and guided by fundamental concepts and principles of theoretical mathematics and computer science. The aim is to develop the canonical process language for distributed systems, and thereby elucidate the way these should actually be designed. The idea invokes the deep connections of these subjects in a novel and essential way, to make code simple, transparent, and concrete. Category theory is both the heart and pulse of this endeavor; more than a theory, it is a way of thinking universally. We hope the project helps to demonstrate the importance of this perspective, and encourages others to join.

The language is designed to be self-optimizing, open, adaptive, terminating, error-cognizant, composable, and most distinctively—visual. Petri nets are the natural representation of decentralized computation and concurrency. By utilizing them as program models, the entire language is diagrammatic, and this allows one to inspect the flow of the process executed by the program. While most languages only compile into illegible machine code, Statebox compiles directly into diagrams, so that the user immediately sees and understands the concrete realization of the abstract design. We believe that this immanent connection between the “geometric” and “algebraic” aspects of computation is of great importance.
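The Petri-net execution model described here is simple enough to sketch in a few lines (an illustration only, not Statebox code): tokens sit in places, and a transition fires only when each of its input places holds enough tokens, consuming them and producing tokens on its output places.

```python
# Minimal Petri-net execution sketch.  A marking maps place names to token
# counts; a transition is a pair (inputs, outputs) of such maps.  This
# illustrates the execution model described in the text; it is not Statebox.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    """Return the new marking after firing, or None if not enabled."""
    if not enabled(marking, transition):
        return None
    inputs, outputs = transition
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n                       # consume input tokens
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n        # produce output tokens
    return new

# A tiny net: 'send' moves a token from `ready` to `sent`, also consuming
# one `credit` token (a simple flow-control pattern).
send = ({"ready": 1, "credit": 1}, {"sent": 1})

m0 = {"ready": 2, "credit": 1, "sent": 0}
m1 = fire(m0, send)
print(m1)               # {'ready': 1, 'credit': 0, 'sent': 1}
print(fire(m1, send))   # None — no credit left, transition disabled
```

Concurrency falls out of the same rule: two transitions whose input places are disjoint can fire in either order, or simultaneously, with the same result.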

Compositionality is a rightfully popular contemporary term, indicating the preservation of type under composition of systems or processes. This is essential to the universality of the type, and it is intrinsic to categories, which underpin the Petri net. A pertinent example is that composition allows for a form of abstraction in which programs do not require complete specification. This is parametricity: a program becomes executable when the functions are substituted with valid terms. Every term has a type, and one cannot connect pieces of code that have incompatible inputs and outputs—the compiler would simply produce an error. The intent is to preserve a simple mathematical structure that imposes as little as possible, and still ensure rationality of code. We can then more easily and reliably write tools providing automatic proofs of termination and type-correctness. Many more aspects will be explained as we go along, and in more detail in future posts.
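The type-checked composition described above can be illustrated with a toy process type (hypothetical names, not Statebox syntax): each process declares an input and output type, and composing two pieces whose types disagree fails before anything runs, a small analogue of the compiler error mentioned in the text.

```python
# Toy typed composition: processes carry input/output types, and the
# composition operator checks compatibility up front.  Illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Process:
    src: str              # input type, as a simple type name
    tgt: str              # output type
    run: Callable

    def __rshift__(self, other: "Process") -> "Process":
        """Sequential composition, written f >> g."""
        if self.tgt != other.src:
            raise TypeError(f"cannot compose: {self.tgt} != {other.src}")
        return Process(self.src, other.tgt,
                       lambda x: other.run(self.run(x)))

parse = Process("str", "int", int)
double = Process("int", "int", lambda n: 2 * n)
show = Process("int", "str", str)

pipeline = parse >> double >> show   # types line up: str -> int -> int -> str
print(pipeline.run("21"))            # "42"

try:
    show >> double                   # str output into int input: rejected
except TypeError as e:
    print(e)                         # cannot compose: str != int
```

Note that the mismatch is caught at composition time, before `run` is ever called; this is the “little structure, rational code” trade-off the text describes.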

Statebox is more than a specific implementation. It is an evolving aspiration, expressing an ideal, a source of inspiration, signifying a movement. We fully recognize that we are at the dawn of a new era, and do not assume that the current presentation is the best way to fulfill this ideal—but it is vital that this kind of endeavor gains the hearts and minds of these communities. By learning to develop and design by pure theory, we make a crucial step toward universal systems and knowledge. Formalisms are biased, fragile, transient—thought is eternal.

Thank you for reading, and thank you to John Baez—|bi-ez|, some there were not aware—for allowing me to write this post. Azimuth and its readers represent what scientific progress can and should be; it is an honor to speak to you. My name is Christian Williams, and I have just begun my doctoral studies with Dr. Baez. He received the invitation from Jelle and could not attend, and was generous enough to let me substitute. Disclaimer: I am just a young student with big dreams, with insufficient knowledge to do justice to this huge topic. If you can forgive some innocent confidence and enthusiasm, I would like to paint a big picture, to explain why this project is important. I hope to delve deeper into the subject in future posts, and in general to celebrate and encourage the cognitive revolution of Applied Category Theory. (Thank you also to Anton and Fabrizio for providing some of this writing when I was not well; I really appreciate it.)

Statebox Summit, Zlarin 2017, was awesome. Wish you could’ve been there. Just a short swim in the Adriatic from the old city of Šibenik |shib-enic|, there lies the small, green island of Zlarin |zlah-rin|, with just a few hundred kind inhabitants. Jelle’s friend, and part of the Statebox team, Anton Livaja and his family graciously allowed us to stay in their houses. Our headquarters was a hotel, one of the few places open in the fall. We set up in the back dining room for talks and work, and for food and sunlight we came to the patio and were brought platters of wonderful, wholesome Croatian dishes. As we burned the midnight oil, we enjoyed local beer, and already made history—the first Bitcoin transaction of the island, with a progressive bartender, Vinko.

Zlarin is a lovely place, but we haven’t gotten to the best part—the people. All who attended are brilliant, creative, and spirited. Everyone’s eyes had a unique spark to light. I don’t think I’ve ever met such a fascinating group in my life. The crew: Jelle, Anton, Emi Gheorghe, Fabrizio Genovese, Daniel van Dijk, Neil Ghani, Viktor Winschel, Philipp Zahn, Pawel Sobocinski, Jules Hedges, Andrew Polonsky, Robin Piedeleu, Alex Norta, Anthony di Franco, Florian Glatz, Fredrik Nordvall Forsberg. These innovators have provocative and complementary ideas in category theory, computer science, open game theory, functional programming, and the blockchain industry; and they came to share an important goal. These are people who work earnestly to better humanity, motivated by progress, not profit. Talking with them gave me hope, that there are enough intelligent, open-minded, and caring people to fix this mess of modern society. In our short time together, we connected—now, almost all continue to contribute and grow the endeavor.

Why is society a mess? The present human condition is absurd. We are in a cognitive renaissance, yet our world is in peril. We need to realize a deeper harmony of theory and practice—we need ideas that dare to dream big, that draw on the vast wealth of contemporary thought to guide and unite subjects in one mission. The way of the world is only a reflection of how we choose to think, and for more than a century we have delved endlessly into thought itself. If we truly learn from our thought, knowledge and application become intimately interrelated, not increasingly separate. It is imperative that we abandon preconception, pretense and prejudice, and ask with naive sincerity: “How should things be, really, and how can we make it happen?”

This pertains more generally to the irresponsibly ad hoc nature of society—we find ourselves entrenched in inadequate systems. Food, energy, medicine, finance, communications, media, governance, technology—our deepening dependence on centralization is our greatest vulnerability. Programming practice is the perfect example of the gradual failure of systems when their design is left to wander in abstraction. As business requirements evolved, technological solutions were created haphazardly, the priority being immediate return over comprehensive methodology, which resulted in ‘duct-taped’ systems, such as the Windows OS. Our entire world now depends on unsystematic software, giving rise to so much costly disorganization, miscommunication, and worse, bureaucracy. Statebox aims to close the gap between the misguided formalisms which came out of this type of degeneration, and design a language which corresponds naturally to essential mathematical concepts—to create systems which are rational, principled, universal. To explain why Statebox represents to us such an important ideal, we must first consider its closest relative, the elephant in the technological room: blockchain.

Often the best ideas are remarkably simple—in 2008, an unknown person under the alias Satoshi Nakamoto published the whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System. In just a few pages, a protocol was proposed which underpins a new kind of computational network, called a blockchain, in which interactions are immediate, transparent, and permanent. This is a personal interpretation—the paper focuses on the application given in its title. In the original financial context, immediacy is one’s ability to directly transact with anyone, without intermediaries, such as banks; transparency is one’s right to complete knowledge of the economy in which one participates, meaning that each node owns a copy of the full history of the network; permanence is the irrevocability of one’s transactions. These core aspects are made possible by an elegant use of cryptography and game theory, which essentially removes the need for trusted third parties in the authorization, verification, and documentation of transactions. Per word, it’s almost peerless in modern influence; the short and sweet read is recommended.
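The permanence described here comes from hash-linking: each block commits to the hash of its predecessor, so rewriting any old transaction invalidates every later link. A toy sketch (an illustration of the principle, not Bitcoin’s actual block format):

```python
import hashlib
import json

# Toy hash-chain showing why blockchain history is tamper-evident: each
# block stores the hash of the previous block, so editing any old entry
# breaks every subsequent link.  Not Bitcoin's real data structures.

def block_hash(block: dict) -> str:
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

def valid(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, ["alice -> bob: 5"])
append(chain, ["bob -> carol: 2"])
append(chain, ["carol -> alice: 1"])
print(valid(chain))                        # True

chain[0]["txs"] = ["alice -> mallory: 5"]  # rewrite history...
print(valid(chain))                        # False — every later link breaks
```

Since every node holds a full copy of the chain, a forger would have to recompute and redistribute every later block faster than the honest network extends it, which is what the consensus mechanism makes prohibitively hard.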

The point of this simplistic explanation is that blockchain is about more than economics. The transaction could be any cooperation, the value could be any social good—when seen as a source of consensus, the blockchain protocol can be expanded to assimilate any data and code. After several years of competing cryptocurrencies, the importance of this deeper idea was gradually realized. There arose specialized tools to serve essential purposes in some broader system, and only recently have people dared to conceive of what this latter could be. In 2014, a wunderkind named Vitalik Buterin created Ethereum, a fully programmable blockchain. Solidity is a Turing-complete language of smart contracts, autonomous programs which enable interactions and enact rules on the network. With this framework, one can not only transact with others, but implement any kind of process; one can build currencies, websites, or organizations—decentralized applications, constructed with smart contracts, could be just about anything.

There is understandably great confidence and excitement for these ventures, and many are receiving massive public investment. Seriously, the numbers are staggering—but most of it is pure hype. There is talk of the first global computer, the internet of value, a single shared source of truth, and other speculative descriptions. But compared to the ambition, the actual theory is woefully underdeveloped. So far, implementations make almost no use of the powerful ideas of mathematics. There are still basic flaws in blockchain itself, the foundation of almost all decentralized technology. For example, the two viable candidates for transaction verification are called Proof of Work and Proof of Stake: the former requires unsustainable consumption of resources, namely hardware and electricity, and the latter is susceptible to centralization. Scalability is a major problem, which drives up the cost and slows the speed of transactions. A major Ethereum dApp, The DAO (Decentralized Autonomous Organization), was hacked.

These statements are absolutely not meant to disregard all the great work of this community; they are primarily rhetoric to distinguish the high ideals of Statebox. I lack the eloquence to make the point diplomatically, and nowhere near the knowledge to give a real account of this huge endeavor. We now return to the rhetoric.

What seems to be lost in the commotion is the simple recognition that we do not yet really know what we should make, nor how to do so. The whole idea is simply too big—the space of possibility is almost completely unknown, because this innovation can open every aspect of society to reform. But as usual, people try to ignore their ignorance, imagining it will disappear, and millions clamor about things we do not yet understand. Most involved are seeing decentralization as an exciting business venture, rather than our best hope to change the way of this broken world; they want to cash in on another technological wave. Of the relatively few idealists, most still retain the assumptions and limitations of the blockchain.

For all this talk, there is little discussion of how to even work toward the ideal abstract design. Most mathematics associated to blockchain is statistical analysis of consensus, while we’re sitting on a mountain of powerful categorical knowledge of systems. At the summit, Prof. Neil Ghani said “it’s like we’re on the Moon, talking about going to Mars, while everyone back on Earth still doesn’t even have fire.” We have more than enough conceptual technology to begin developing an ideal and comprehensive system, if the right minds come together. Theory guides practice, practice motivates theory—the potential is immense.

Fortunately, there are those who have this big picture in mind. Long before the blockchain craze, Jelle saw the fundamental importance of both distributed systems and the need for academic-industrial symbiosis. In the mid-2000s, he used Petri nets to create process tools for businesses. Employees could design and implement any kind of abstract workflow to communicate and produce more effectively. Jelle would provide consultation to optimize these processes and integrate them into existing infrastructure—as a workflow executed, it would generate tasks, emails, and forms and send them to designated individuals to be completed for the next iteration. Many institutions would have to shell out millions of dollars to IBM or Fujitsu for this kind of software, and his was more flexible and intuitive. This left a strong impression on Jelle, regarding the power of Petri nets and the impact of deliberate design.

Many experiences like this gradually instilled in Jelle a conviction to expand his knowledge and begin planning bold changes to the world of programming. He attended mathematics conferences, and would discuss with theorists from many relevant subjects. On the island, he told me that it was actually one of Baez’s talks about networks which finally inspired him to go for this huge idea. By sincerely and openly reaching out to the whole community, Jelle made many valuable connections. He invited these thinkers to share his vision—theorists from all over Europe, and some from overseas, gathered in Croatia to learn and begin to develop this project—and it was a great success.

By now you may be thinking, alright kid spill the beans already. Here they are, right into your brain—well, most will be in the next post, but we should at least have a quick overview of some of the main ideas not already discussed.

The notion of open system complements compositionality. The great difference between closure and openness, in society as well as theory, was a central theme in many of our conversations during the summit. Although we try to isolate and suspend life and cognition in abstraction, the real, concrete truth is what flows through these ethereal forms. Every system in Statebox is implicitly open, and this impels design to idealize the inner and outer connections of processes. Open systems are central to the Baez Network Theory research team. There are several ways to categorically formalize open systems; the best are still being developed, but the first main example can be found in *The Algebra of Open and Interconnected Systems* by Brendan Fong, an early member of the team.

Monoidal categories, as this blog knows well, represent systems with both series and parallel processes. One of the great challenges of this new era of interconnection is distributed computation—getting computers to work together as a supercomputer, and monoidal categories are essential to this. Here, objects are data types, and morphisms are computations, while composition is serial and tensor is parallel. As Dr. Baez has demonstrated with years of great original research, monoidal categories are essential to understanding the complexity of the world. If we can connect our knowledge of natural systems to social systems, we can learn to integrate valuable principles—a key example being complete resource cognizance.
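The "composition is serial, tensor is parallel" idea can be sketched in a few lines of ordinary code. This is only an illustrative toy (not Statebox code): functions stand in for morphisms, `compose` for serial composition, and `tensor` for the monoidal product acting on pairs.

```python
# A minimal sketch of the monoidal-category view of computation:
# objects are data types, morphisms are functions,
# composition is serial, and the tensor runs morphisms side by side.

def compose(f, g):
    """Serial composition: run f, then g."""
    return lambda x: g(f(x))

def tensor(f, g):
    """Parallel composition: apply f and g to the two halves of a pair."""
    return lambda pair: (f(pair[0]), g(pair[1]))

double = lambda x: x * 2
shout = lambda s: s.upper()

# (double ; double) in parallel with shout
process = tensor(compose(double, double), shout)
print(process((3, "hi")))  # -> (12, 'HI')
```

In a string diagram, `compose` is plugging boxes end to end and `tensor` is drawing them next to each other.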

Petri nets are presentations of free strict symmetric monoidal categories, and as such they are ideal models of “normal” computation, i.e. associative, unital, and commutative. *Open* Petri nets are the workhorses of Statebox. They are the morphisms of a category which is itself monoidal—and via openness it is even richer and more versatile. Most importantly, it is compact closed, which introduces a simple but crucial duality into computation—input-output interchange—which is impossible in conventional cartesian closed computation, and actually brings the paradigm closer to quantum computation.

Petri nets represent processes in an intuitive, consistent, and decentralized way. These will be multi-layered via the notion of operad and a resourceful use of Petri net tokens, representing the interacting levels of a system. Compositionality makes exploring their state space much easier: the state space of a big process can be constructed from those of smaller ones, a technique that more often than not avoids state space explosion, a long-standing problem in Petri net analysis. The correspondence between open Petri nets and a logical calculus, called place/transition calculus, allows the user to perform queries on the Petri net, and a revolutionary technique called information-gain computing greatly reduces response time.
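To make the Petri net idea concrete, here is a toy illustration (emphatically not the Statebox implementation) of the standard firing rule: a marking assigns tokens to places, and a transition fires by consuming tokens from its input places and producing tokens on its output places. The "task/worker" workflow is an invented example.

```python
# A Petri net as a multiset-rewriting system: places hold tokens,
# and an enabled transition consumes input tokens and produces outputs.
from collections import Counter

def can_fire(marking, inputs):
    """A transition is enabled when every input place has enough tokens."""
    return all(marking[p] >= n for p, n in inputs.items())

def fire(marking, inputs, outputs):
    """Fire a transition, returning the new marking."""
    if not can_fire(marking, inputs):
        raise ValueError("transition not enabled")
    new = Counter(marking)
    new.subtract(inputs)   # consume input tokens
    new.update(outputs)    # produce output tokens
    return new

# A tiny workflow: a 'task' plus a free 'worker' become a completed task,
# and the worker is released again.
marking = Counter({"task": 2, "worker": 1})
t_inputs, t_outputs = {"task": 1, "worker": 1}, {"done": 1, "worker": 1}
marking = fire(marking, t_inputs, t_outputs)
print(dict(marking))  # -> {'task': 1, 'worker': 1, 'done': 1}
```

Composing two such nets along shared places is exactly the kind of gluing that the open Petri net formalism makes precise, and is what lets a big state space be assembled from small ones.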

Dependently typed functional programming is the exoskeleton of this beautiful beast; in particular, the underlying language is Idris. Dependent types arose out of both theoretical mathematics and computer science, and they are beginning to be recognized as very general, powerful, and natural in practice. Functional programming is a similarly pure and elegant paradigm for “open” computation. They are fascinating and inherently categorical, and deserve whole blog posts in the future.

Even economics has opened its mind to categories. Statebox is very fortunate to have several of these pioneers—open game theory is a categorical, compositional version of game theory, which allows the user to dynamically analyze and optimize code. Jules’ choice of the term “teleological category” is prescient; it is about more than just efficiency—it introduces the possibility of building principles into systems, by creating game-theoretical incentives which can guide people to cooperate for the greater good, and gradually lessen the influence of irrational, selfish priorities.

Categories are the language by which Petri nets, functional programming, and open games can communicate—and amazingly, all of these theories are unified in an elegant representation called string diagrams. These allow the user to forget the formalism, and reason purely in graphical terms. All the complex mathematics goes under the hood, and the user only needs to work with nodes and strings, which are guaranteed to be formally correct.

Category theory also models the data structures that are used by Statebox: Typedefs is a very lightweight—but also very expressive—data structure, that is at the very core of Statebox. It is based on initial F-algebras, and can be easily interpreted in a plethora of pre-existing solutions, enabling seamless integration with existing systems. One of the core features of Typedefs is that serialization is categorically internalized in the data structure, meaning that every operation involving types can receive a unique hash and be recorded on the blockchain public ledger. This is one of the many components that make Statebox fail-resistant: every process and event is accounted for on the public ledger, and the whole history of a process can be rolled back and analyzed thanks to the blockchain technology.
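The idea of categorically internalized serialization can be gestured at with a small sketch. This is purely illustrative and not the actual Typedefs format: the point is only that a type definition, encoded canonically, determines a unique hash that can serve as its ledger identifier.

```python
# Illustrative only -- not Typedefs. A type definition is serialized
# canonically (sorted keys), so structurally equal definitions always
# hash to the same identifier.
import hashlib
import json

def type_hash(typedef):
    canonical = json.dumps(typedef, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

pair_of_ints = {"constructor": "pair", "fields": ["int", "int"]}
h = type_hash(pair_of_ints)
print(h[:16])  # a stable identifier for this type definition
```

Because the hash depends only on the definition's content, any two parties agree on the identifier without coordination, which is what makes recording type-level operations on a public ledger coherent.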

The Statebox team is currently working on a monograph that will neatly present how all the pertinent categorical theories work together in Statebox. This is a formidable task that will take months to complete, but will also be the cleanest way to understand how Statebox works, and which mathematical questions have still to be answered to obtain a working product. It will be a thorough document that also considers important aspects such as our guiding ethics.

The team members are devoted to creating something positive and different, explicitly and solely to better the world. The business paradigm is based on the principle that innovation should be open and collaborative, rather than competitive and exclusive. We want to share ideas and work with you. There are many blooming endeavors which share the ideals that have been described in this article, and we want them all to learn from each other and build off one another.

For example, Statebox contributor and visionary economist Viktor Winschel has a fantastic project called Oicos. The great proponent of applied category theory, David Spivak, has an exciting and impressive organization called Categorical Informatics. Mike Stay, a past student of Dr. Baez, has started a company called Pyrofex, which is developing categorical distributed computation. There are also somewhat related languages for blockchain, such as Simplicity, and innovative distributed systems such as Iota and RChain. Even Ethereum is beginning to utilize categories, with Casper. And of course there are research groups, such as Network Theory and Mathematically Structured Programming, as well as so many important papers, such as Algebraic Databases. This is just a slice of everything going on; as far as I know there is not yet a comprehensive account of all the great applied category theory and distributed innovations being developed. Inevitably these endeavors will follow the principle they share, and come together in a big way. Statebox is ready, willing, and able to help make this a reality.

If you are interested in Statebox, you are welcomed with open arms. You can contact Jelle at jelle@statebox.io, Fabrizio at fabrizio@statebox.org, Emi at emi@statebox.io, Anton at anton@statebox.io; they can provide more information, connect you to the discussion, or anything else. There will be a second summit in about six months, in 2018; details are to be determined. We hope to see you there. Future posts will keep you updated, and explain more of the theory and design of Statebox. Thank you very much for reading.

P.S. Found unexpected support in Šibenik! Great bar—once a reservoir.

This article is very interesting:

• Ed Yong, Brain cells share information with virus-like capsules, *Atlantic*, January 12, 2018.

Your brain needs a protein called Arc. If you have trouble making this protein, you’ll have trouble forming new memories. The neuroscientist Jason Shepherd noticed something weird:

He saw that these Arc proteins assemble into hollow, spherical shells that look uncannily like viruses. “When we looked at them, we thought: What are these things?” says Shepherd. They reminded him of textbook pictures of HIV, and when he showed the images to HIV experts, they confirmed his suspicions. That, to put it bluntly, was a huge surprise. “Here was a brain gene that makes something that looks like a virus,” Shepherd says.

That’s not a coincidence. The team showed that Arc descends from an ancient group of genes called gypsy retrotransposons, which exist in the genomes of various animals, but can behave like their own independent entities. They can make new copies of themselves, and paste those duplicates elsewhere in their host genomes. At some point, some of these genes gained the ability to enclose themselves in a shell of proteins and leave their host cells entirely. That was the origin of retroviruses—the virus family that includes HIV.

It’s worth pointing out that gypsy is the name of a *specific kind* of retrotransposon. A retrotransposon is a gene that can make copies of itself by first transcribing itself from DNA into RNA and then converting itself back into DNA and inserting itself at other places in your chromosomes.

About 40% of your genome consists of retrotransposons! They seem to mainly be ‘selfish genes’, focused on their own self-reproduction. But some are also useful to you.

So, Arc genes are the evolutionary cousins of these viruses, which explains why they produce shells that look so similar. Specifically, Arc is closely related to a viral gene called gag, which retroviruses like HIV use to build the protein shells that enclose their genetic material. Other scientists had noticed this similarity before. In 2006, one team searched for human genes that look like gag, and they included Arc in their list of candidates. They never followed up on that hint, and “as neuroscientists, we never looked at the genomic papers so we didn’t find it until much later,” says Shepherd.

I love this because it confirms my feeling that viruses are deeply entangled with our evolutionary past. Computer viruses are just the latest phase of this story.

As if that wasn’t weird enough, other animals seem to have independently evolved their own versions of Arc. Fruit flies have Arc genes, and Shepherd’s colleague Cedric Feschotte showed that these descend from the same group of gypsy retrotransposons that gave rise to ours. But flies and back-boned animals co-opted these genes independently, in two separate events that took place millions of years apart. And yet, both events gave rise to similar genes that do similar things: Another team showed that the fly versions of Arc also send RNA between neurons in virus-like capsules. “It’s exciting to think that such a process can occur twice,” says Atma Ivancevic from the University of Adelaide.

This is part of a broader trend: Scientists have in recent years discovered several ways that animals have used the properties of virus-related genes to their evolutionary advantage. Gag moves genetic information between cells, so it’s perfect as the basis of a communication system. Viruses use another gene called env to merge with host cells and avoid the immune system. Those same properties are vital for the placenta—a mammalian organ that unites the tissues of mothers and babies. And sure enough, a gene called syncytin, which is essential for the creation of placentas, actually descends from env. Much of our biology turns out to be viral in nature.

Here’s something I wrote in 1998 when I was first getting interested in this business:

RNA reverse transcribing viruses are usually called retroviruses. They have a single-stranded RNA genome. They infect animals, and when they get inside the cell’s nucleus, they copy themselves into the DNA of the host cell using reverse transcriptase. In the process they often cause tumors, presumably by damaging the host’s DNA.

Retroviruses are important in genetic engineering because they raised for the first time the possibility that RNA could be transcribed into DNA, rather than the reverse. In fact, some of them are currently being deliberately used by scientists to add new genes to mammalian cells.

Retroviruses are also important because AIDS is caused by a retrovirus: the human immunodeficiency virus (HIV). This is part of why AIDS is so difficult to treat. Most usual ways of killing viruses have no effect on retroviruses when they are latent in the DNA of the host cell.

From an evolutionary viewpoint, retroviruses are fascinating because they blur the very distinction between host and parasite. Their genome often contains genetic information derived from the host DNA. And once they are integrated into the DNA of the host cell, they may take a long time to reemerge. In fact, so-called endogenous retroviruses can be passed down from generation to generation, indistinguishable from any other cellular gene, and evolving along with their hosts, perhaps even from species to species! It has been estimated that up to 1% of the human genome consists of endogenous retroviruses! Furthermore, not every endogenous retrovirus causes a noticeable disease. Some may even help their hosts.

It gets even spookier when we notice that once an endogenous retrovirus lost the genes that code for its protein coat, it would become indistinguishable from a long terminal repeat (LTR) retrotransposon—one of the many kinds of “junk DNA” cluttering up our chromosomes. Just how much of us is made of retroviruses? It’s hard to be sure.

For my whole article, go here:

It’s about the mysterious subcellular entities that stand near the blurry border between the living and the non-living—like viruses, viroids, plasmids, satellites, transposons and prions. I need to update it, since a lot of new stuff is being discovered!

Jason Shepherd’s new paper has a few other authors:

• Elissa D. Pastuzyn, Cameron E. Day, Rachel B. Kearns, Madeleine Kyrke-Smith, Andrew V. Taibi, John McCormick, Nathan Yoder, David M. Belnap, Simon Erlendsson, Dustin R. Morado, John A.G. Briggs, Cédric Feschotte and Jason D. Shepherd, The neuronal gene Arc encodes a repurposed retrotransposon gag protein that mediates intercellular RNA transfer, *Cell* **172** (2018), 275–288.

*Five* elements? Yes, besides earth, air, water and fire, he includes a fifth element that doesn’t feel the Earth’s gravitational pull: the ‘quintessence’, or ‘aether’, from which heavenly bodies are made.

In the same book he also tried to use the Platonic solids to explain the orbits of the planets:

The six planets are Mercury, Venus, Earth, Mars, Jupiter and Saturn. And the tetrahedron and cube, in case you’re wondering, sit outside the largest sphere shown above. You can see them in another picture from Kepler’s book:

These ideas may seem goofy now, but studying the exact radii of the planets’ orbits led him to discover that these orbits aren’t circular: they’re *ellipses!* By 1619 this led him to what we call Kepler’s laws of planetary motion. And those, in turn, helped Newton verify Hooke’s hunch that the force of gravity goes as the inverse square of the distance between bodies!

In honor of this, the problem of a particle orbiting in an inverse square force law is called the **Kepler problem**.

So, I’m happy that Greg Egan, Layra Idarani and I have come across a solid mathematical connection between the Platonic solids and the Kepler problem.

But this involves a detour into the 4th dimension!

It’s a remarkable fact that the Kepler problem has not just the expected conserved quantities—energy and the 3 components of angular momentum—but also 3 more: the components of the Runge–Lenz vector. To understand those extra conserved quantities, go here:

• Greg Egan, The ellipse and the atom.

Noether proved that conserved quantities come from symmetries. Energy comes from time translation symmetry. Angular momentum comes from rotation symmetry. Since the group of rotations in 3 dimensions, called **SO(3)**, is itself 3-dimensional, it gives 3 conserved quantities, which are the 3 components of angular momentum.

None of this is really surprising. But if we take the angular momentum *together with* the Runge–Lenz vector, we get 6 conserved quantities—and these turn out to come from the group of rotations in 4 dimensions, **SO(4)**, which is itself 6-dimensional. The obvious symmetries in this group just *rotate* a planet’s elliptical orbit, while the unobvious ones can also *squash* or *stretch* it, changing the eccentricity of the orbit.
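The dimension counts above come from the general fact that SO(n) has dimension n(n−1)/2, one independent rotation for each pair of coordinate axes. A trivial check:

```python
# dim SO(n) = n(n-1)/2: one rotation generator per plane of axes.
def dim_so(n):
    return n * (n - 1) // 2

print(dim_so(3))  # -> 3  (the three components of angular momentum)
print(dim_so(4))  # -> 6  (angular momentum plus the Runge-Lenz vector)
```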

(To be precise, all this is true only for the ‘bound states’ of the Kepler problem: the circular and elliptical orbits, not the parabolic or hyperbolic ones, which work in a somewhat different way. I’ll only be talking about bound states in this post!)

Why should the Kepler problem have symmetries coming from rotations in 4 dimensions? This is a fascinating puzzle—we know a lot about it, but I doubt the last word has been spoken. For an overview, go here:

• John Baez, Mysteries of the gravitational 2-body problem.

This SO(4) symmetry applies not only to the *classical mechanics* of the inverse square force law, but also the *quantum mechanics!* Nobody cares much about the quantum mechanics of two particles attracting gravitationally via an inverse square force law—but people care a lot about the quantum mechanics of hydrogen atoms, where the electron and proton attract each other via their electric field, which also obeys an inverse square force law.

So, let’s talk about hydrogen. And to keep things simple, let’s pretend the proton stays fixed while the electron orbits it. This is a pretty good approximation, and experts will know how to do things exactly right. It requires only a slight correction.

It turns out that wavefunctions for bound states of hydrogen can be reinterpreted as functions on the 3-sphere, S^{3}. The sneaky SO(4) symmetry then becomes obvious: it just rotates this sphere! And the Hamiltonian of the hydrogen atom is closely connected to the Laplacian on the 3-sphere. The Laplacian has eigenspaces of dimensions *n*^{2} where *n* = 1,2,3,…, and these correspond to the eigenspaces of the hydrogen atom Hamiltonian. The number *n* is called the principal quantum number, and the hydrogen atom’s energy is proportional to -1/*n*^{2}.
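The n² degeneracy is easy to verify numerically: at principal quantum number n, the orbital angular momentum ℓ ranges over 0,…,n−1, and each ℓ contributes 2ℓ+1 states. A quick check, in units where the energy is exactly −1/n² (and ignoring spin, as in the text):

```python
# The n-th energy level of hydrogen, and the dimension of its eigenspace:
# summing 2l+1 over l = 0, ..., n-1 gives n^2.
def energy(n):
    return -1.0 / n**2

def degeneracy(n):
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3, 4):
    print(n, energy(n), degeneracy(n))  # degeneracy(n) equals n**2
```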

If you don’t know all this jargon, don’t worry! All you need to know is this: if we find an eigenfunction of the Laplacian on the 3-sphere, it will give a state where the hydrogen atom has a definite energy. And if this eigenfunction is invariant under some subgroup of SO(4), so is the corresponding state of the hydrogen atom!

The biggest finite subgroup of SO(4) is the rotational symmetry group of the 600-cell, a wonderful 4-dimensional shape with 120 vertices and 600 tetrahedral cells. The rotational symmetry group of this shape has a whopping 7,200 elements! And here is a marvelous moving image, made by Greg Egan, of an eigenfunction of the Laplacian on S^{3} that’s invariant under this 7,200-element group:

We’re seeing the wavefunction on a moving slice of the 3-sphere, which is a 2-sphere. This wavefunction is actually real-valued. Blue regions are where this function is positive, yellow regions where it’s negative—or maybe the other way around—and black is where it’s almost zero. When the image fades to black, our moving slice is passing through a 2-sphere where the wavefunction is almost zero.

For a full explanation, go here:

• Greg Egan, In the chambers with seven thousand symmetries, 2 January 2018.

Layra Idarani has come up with a complete classification of *all* eigenfunctions of the Laplacian on S^{3} that are invariant under this group… or more generally, eigenfunctions of the Laplacian on a sphere of *any* dimension that are invariant under the even part of *any* Coxeter group. For the details, go here:

• Layra Idarani, SG-invariant polynomials, 4 January 2018.

All that is a continuation of a story whose beginning is summarized here:

• John Baez, Quantum mechanics and the dodecahedron.

So, there’s a lot of serious math under the hood. But right now I just want to marvel at the fact that we’ve found a wavefunction for the hydrogen atom that not only has a well-defined energy, but is also invariant under this 7,200-element group. This group includes the usual 60 rotational symmetries of a dodecahedron, but also other much less obvious symmetries.

I don’t have a good picture of what these less obvious symmetries do to the wavefunction of a hydrogen atom. I understand them a bit better *classically*—where, as I said, they squash or stretch an elliptical orbit, changing its eccentricity while not changing its energy.

We can have fun with this using the old quantum theory—the approach to quantum mechanics that Bohr and Sommerfeld developed from 1913 to 1925, before Schrödinger introduced wavefunctions.

In the old Bohr–Sommerfeld approach to the hydrogen atom, the quantum states with specified energy, total angular momentum and angular momentum about a fixed axis were drawn as elliptical orbits. In this approach, the symmetries that squash or stretch elliptical orbits are a bit easier to visualize:

This picture by Pieter Kuiper shows some orbits at the 5th energy level, *n = 5*: namely, those with different eigenvalues of the total angular momentum, ℓ.

While the old quantum theory was superseded by the approach using wavefunctions, it’s possible to make it mathematically rigorous for the hydrogen atom. So, we can draw elliptical orbits that rigorously correspond to a basis of wavefunctions for the hydrogen atom. Thus, I believe we can draw the orbits corresponding to the basis elements whose linear combination gives the wavefunction shown as a function on the 3-sphere in Greg’s picture above!

We should get a bunch of ellipses forming a complicated picture with dodecahedral symmetry. This would make Kepler happy.

As a first step in this direction, Greg drew the collection of orbits that results when we take a circle and apply all the symmetries of the 600-cell:

For more details, read this:

• Greg Egan, Kepler orbits with the symmetries of the 600-cell.

To do this really right, one should learn a bit about ‘old quantum theory’. I believe people have been getting it a bit wrong for quite a while—starting with Bohr and Sommerfeld!

If you look at the ℓ = 0 orbit in the picture above, it’s a long skinny ellipse. But I believe it really should be *a line segment straight through the proton*: that’s what an orbit with *no* angular momentum looks like.

There’s a paper about this:

• Manfred Bucher, Rise and fall of the old quantum theory.

Matt McIrvin had some comments on this:

This paper from 2008 is the kind of thing I really like: an exploration of an old, incomplete theory that takes it further than anyone actually did at the time.

It has to do with the Bohr-Sommerfeld “old quantum theory”, in which electrons followed definite orbits in the atom, but these were quantized–not all orbits were permitted. Bohr managed to derive the hydrogen spectrum by assuming circular orbits, then Sommerfeld did much more by extending the theory to elliptical orbits with various shapes and orientations. But there were some problems that proved maddeningly intractable with this analysis, and it eventually led to the abandonment of the “orbit paradigm” in favor of Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics, what we know as modern quantum theory.

The paper argues that the old quantum theory was abandoned prematurely. Many of the problems Bohr and Sommerfeld had came not from the orbit paradigm per se, but from a much simpler bug in the theory: namely, their rejection of orbits in which the electron moves entirely radially and goes right through the nucleus! Sommerfeld called these orbits “unphysical”, but they actually correspond to the s orbital states in the full quantum theory, with zero angular momentum. And, of course, in the full theory the electron in these states does have some probability of being inside the nucleus.

So Sommerfeld’s orbital angular momenta were always off by one unit. The hydrogen spectrum came out right anyway because of the happy accident of the energy degeneracy of certain orbits in the Coulomb potential.

I guess the states they really should have been rejecting as “unphysical” were Bohr’s circular orbits: no radial motion would correspond to a certain zero radial momentum in the full theory, and we can’t have that for a confined electron because of the uncertainty principle.

In quantum mechanics, the position of a particle is not a definite thing: it’s described by a ‘wavefunction’. This says how probable it is to find the particle at any location… but it also contains other information, like how probable it is to find the particle moving at any *velocity*.

Take a hydrogen atom, and look at the wavefunction of the electron.

**Question 1.** Can we make the electron’s wavefunction have all the rotational symmetries of a dodecahedron—that wonderful Platonic solid with 12 pentagonal faces?

Yes! In fact it’s too easy: you can make the wavefunction look like whatever you want.

So let’s make the question harder. Like everything else in quantum mechanics, angular momentum can be uncertain. In fact you can never make all 3 components of angular momentum take definite values simultaneously! However, there are lots of wavefunctions where the *magnitude* of an electron’s angular momentum is completely definite.

This leads naturally to the next question, which was first posed by Gerard Westendorp:

**Question 2.** Can an electron’s wavefunction have a definite magnitude for its angular momentum while having all the rotational symmetries of a dodecahedron?

Yes! And there are *infinitely many ways for this to happen!* This is true even if we neglect the radial dependence of the wavefunction—that is, how it depends on the distance from the proton. Henceforth I’ll always do that, which lets us treat the wavefunction as a function on a sphere. And by the way, I’m also ignoring the electron’s spin! So, whenever I say ‘angular momentum’ I mean *orbital* angular momentum: the part that depends only on the electron’s position and velocity.

Question 2 has a trivial solution that’s too silly to bother with. It’s the spherically symmetric wavefunction! That’s invariant under *all* rotations. The real challenge is to figure out the simplest nontrivial solution. Egan figured it out, and here’s what it looks like:

The rotation here is just an artistic touch. Really the solution should be just sitting there, or perhaps changing colors while staying the same shape.

In what sense is this the simplest nontrivial solution? Well, the magnitude of the angular momentum is equal to

√(ℓ(ℓ+1)) ℏ

where the number ℓ is *quantized*: it can only take values ℓ = 0, 1, 2, 3,… and so on.

The trivial solution to Question 2 has ℓ = 0. The first nontrivial solution has ℓ = 6. Why 6? That’s where things get interesting. We can get it using the 6 lines connecting opposite faces of the dodecahedron!

I’ll explain later how this works. For now, let’s move straight on to a harder question:

**Question 3.** What’s the smallest choice of ℓ where we can find *two linearly independent* wavefunctions that both have the same ℓ and both have all the rotational symmetries of a dodecahedron?

It turns out to be $\ell = 30$. And Egan created an image of a wavefunction oscillating between these two possibilities!

But we can go a lot further:

**Question 4.** For each $\ell$, how many linearly independent functions on the sphere have that value of $\ell$ and all the rotational symmetries of a dodecahedron?

For $\ell$ ranging from 0 to 29 there are either none or one. There are none for these numbers:

1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 14, 17, 19, 23, 29

and one for these numbers:

0, 6, 10, 12, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28

The pattern continues as follows. For $\ell$ ranging from 30 to 59 there are either one or two. There is one for these numbers:

31, 32, 33, 34, 35, 37, 38, 39, 41, 43, 44, 47, 49, 53, 59

and two for these numbers:

30, 36, 40, 42, 45, 46, 48, 50, 51, 52, 54, 55, 56, 57, 58

The numbers in these two lists are just 30 more than the numbers in the first two lists! And it continues on like this forever: there’s always one more linearly independent solution for $\ell + 30$ than there is for $\ell$.

**Question 5.** What’s special about these numbers from 0 to 29?

0, 6, 10, 12, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28

You don’t need to know tons of math to figure this out—but I guess it’s a sort of weird pattern-recognition puzzle unless you know which patterns are likely to be important here. So I’ll give away the answer.

Here’s the answer: these are the numbers below 30 that can be written as sums of the numbers 6, 10 and 15.

But the real question is *why?* Also: what’s so special about the number 30?

The short, cryptic answer is this. The dodecahedron has 6 axes connecting the centers of opposite faces, 10 axes connecting opposite vertices, and 15 axes connecting the centers of opposite edges. The least common multiple of these numbers is 30.
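Before unpacking this, it’s worth noting that the pattern is easy to verify by brute force. Here’s a quick Python sketch (the function name is my own) counting the ways of writing $\ell$ as a sum of 6’s, 10’s and at most one 15; the ‘at most one’ restriction only starts to matter at 30, as we’ll see:

```python
def num_solutions(ell):
    """Ways to write ell as 6j + 10k + 15m with j, k >= 0 and m = 0 or 1."""
    return sum(1
               for m in (0, 1)
               for j in range(ell // 6 + 1)
               for k in range(ell // 10 + 1)
               if 6 * j + 10 * k + 15 * m == ell)

# The numbers below 30 with at least one solution:
print([ell for ell in range(30) if num_solutions(ell) > 0])
# [0, 6, 10, 12, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28]

# From 30 on, the story repeats with one extra solution every 30 steps:
assert all(num_solutions(ell + 30) == num_solutions(ell) + 1 for ell in range(90))
```

This reproduces both lists from Question 4 and the period-30 growth of the number of solutions.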

But this requires more explanation!

For this, we need more math. You may want to get off here. But first, let me show you the solutions for $\ell = 10$ and $\ell = 15$, as drawn by Greg Egan. I’ve already shown you $\ell = 6$, which we could call the **quantum dodecahedron**:

Here is $\ell = 10$, which looks like a **quantum icosahedron**:

And here is $\ell = 15$:

Maybe this deserves to be called a **quantum Coxeter complex**, since the Coxeter complex for the group of rotations and reflections of the dodecahedron looks like this:

The dodecahedron and icosahedron have the same symmetries, but for some reason people talk about the icosahedron when discussing symmetry groups, so let me do that.

So far we’ve been looking at the *rotational* symmetries of the icosahedron. These form a group called the **icosahedral group**, or $\mathrm{I}$ for short, with 60 elements. We’ve been looking for certain functions on the sphere that are invariant under the action of this group. To get them all, we’ll first get ahold of all polynomials on $\mathbb{R}^3$ that are invariant under the action of this group $\mathrm{I}$. Then we’ll restrict these to the sphere.

To save time, we’ll use the work of Claude Chevalley. He looked at *rotation and reflection* symmetries of the icosahedron. These form the group $\mathrm{I} \times \mathbb{Z}/2$, also known as the Coxeter group $\mathrm{H}_3$, but let’s call it $\hat{\mathrm{I}}$ for short. It has 120 elements, but never confuse it with two other groups with 120 elements: the symmetric group $\mathrm{S}_5$ on 5 letters, and the binary icosahedral group.

Chevalley found all polynomials on $\mathbb{R}^3$ that are invariant under the action of this bigger group $\hat{\mathrm{I}}$. These invariant polynomials form an algebra, and Chevalley showed that this algebra is freely generated by 3 homogeneous polynomials:

• $P = x^2 + y^2 + z^2$, of degree 2.

• $Q$, of degree 6. To get this we take the dot product of $(x,y,z)$ with each of the 6 vectors joining antipodal vertices of the icosahedron, and multiply them together.

• $R$, of degree 10. To get this we take the dot product of $(x,y,z)$ with each of the 10 vectors joining antipodal face centers of the icosahedron, and multiply them together.

So, linear combinations of products of these give all polynomials on $\mathbb{R}^3$ invariant under all rotation and reflection symmetries of the icosahedron.

But we want the polynomials that are invariant under just *rotational* symmetries of the icosahedron! To get all these, we need an extra generator:

• $S$, of degree 15. To get this we take the dot product of $(x,y,z)$ with each of the 15 vectors joining antipodal edge centers of the icosahedron, and multiply them together.

You can check that this is invariant under rotational symmetries of the icosahedron. But unlike our other polynomials, this one is not invariant under reflection symmetries! Because 15 is an odd number, $S$ switches sign under ‘total inversion’—that is, replacing $(x,y,z)$ with $(-x,-y,-z)$. This is a product of three reflection symmetries of the icosahedron.
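If you’d like to check these symmetry claims numerically, here is a small pure-Python sketch (the coordinates, helper names and test point are my own choices). It builds $Q$ from the 6 vertex axes and $S$ from the 15 edge-center axes of an icosahedron with vertices at cyclic permutations of $(0, \pm 1, \pm \varphi)$, then tests a 72° rotation about a vertex axis and total inversion:

```python
import itertools
from math import sqrt, pi, sin, cos, isclose

phi = (1 + sqrt(5)) / 2

# The 12 icosahedron vertices: cyclic permutations of (0, ±1, ±phi).
verts = [p
         for s1 in (1, -1) for s2 in (1, -1)
         for p in [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]]

def neg(v):
    return tuple(-x for x in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def one_per_antipodal_pair(points):
    """Keep one representative from each pair {p, -p}."""
    reps = []
    for p in points:
        if not any(dist(neg(p), q) < 1e-9 for q in reps):
            reps.append(p)
    return reps

vertex_axes = one_per_antipodal_pair(verts)    # 6 axes through opposite vertices
edge_mids = [tuple((a + b) / 2 for a, b in zip(u, w))
             for u, w in itertools.combinations(verts, 2)
             if isclose(dist(u, w), 2)]        # this icosahedron's edge length is 2
edge_axes = one_per_antipodal_pair(edge_mids)  # 15 axes through opposite edge centers

def Q(v):  # degree-6 polynomial from the 6 vertex axes
    out = 1.0
    for a in vertex_axes:
        out *= dot(v, a)
    return out

def S(v):  # degree-15 polynomial from the 15 edge-center axes
    out = 1.0
    for a in edge_axes:
        out *= dot(v, a)
    return out

def rotate(axis, theta, v):
    """Rodrigues' formula: rotate v by theta about the given axis."""
    n = sqrt(dot(axis, axis))
    k = tuple(x / n for x in axis)
    kxv = (k[1]*v[2] - k[2]*v[1], k[2]*v[0] - k[0]*v[2], k[0]*v[1] - k[1]*v[0])
    kdv = dot(k, v)
    return tuple(v[i]*cos(theta) + kxv[i]*sin(theta) + k[i]*kdv*(1 - cos(theta))
                 for i in range(3))

v = (0.3, -0.7, 0.45)                    # an arbitrary test point
gv = rotate(verts[0], 2 * pi / 5, v)     # 72° turn about a vertex axis

assert isclose(Q(gv), Q(v))              # Q is invariant under this rotation...
assert isclose(Q(neg(v)), Q(v))          # ...and under total inversion (even degree)
assert isclose(S(gv), S(v))              # S is invariant under this rotation...
assert isclose(S(neg(v)), -S(v))         # ...but flips sign under inversion (odd degree)
print("all symmetry checks pass")
```

The sign flip of $S$ under inversion is automatic from its odd degree; the rotation invariance is the less obvious part, and the numerics confirm it.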

Thanks to Egan’s extensive computations, I’m completely convinced that $P$, $Q$, $R$ and $S$ generate the algebra of all $\mathrm{I}$-invariant polynomials on $\mathbb{R}^3$. I’ll take this as a fact, even though I don’t have a clean, human-readable proof. But someone must have proved it already—do you know where?

Since we now have 4 polynomials on the 3-dimensional space $\mathbb{R}^3$, they must obey a relation. Egan figured it out: it expresses $S^2$, which has degree 30, as a linear combination of the monomials $P^a Q^b R^c$ with $2a + 6b + 10c = 30$. The exact coefficients depend on some normalization factors used in defining $Q$, $R$ and $S$. Luckily the details don’t matter much. All we’ll really need is that this relation expresses $S^2$ in terms of the other generators. And this fact is easy to see without any difficult calculations!

How? Well, we’ve seen $S$ is unchanged by rotations, while it changes sign under total inversion. So, the most any rotation or reflection symmetry of the icosahedron can do to $S$ is change its sign. This means that $S^2$ is invariant under all these symmetries. So, by Chevalley’s result, it must be a polynomial in $P$, $Q$ and $R$.

So, we now have a nice description of the $\mathrm{I}$-invariant polynomials on $\mathbb{R}^3$ in terms of generators and relations. Each of these gives an $\mathrm{I}$-invariant function on the sphere. And Leo Stein, a postdoc at Caltech who has a great blog on math and physics, has kindly created some images of these.

The polynomial $P$ is spherically symmetric, so it’s too boring to draw. The polynomial $Q$, of degree 6, looks like this when restricted to the sphere:

Since it was made by multiplying linear functions, one for each axis connecting opposite vertices of an icosahedron, it shouldn’t be surprising that we see blue blobs centered at these vertices.

The polynomial $R$, of degree 10, looks like this:

Here the blue blobs are centered on the icosahedron’s 20 faces.

Finally, here’s $S$, of degree 15:

This time the blue blobs are centered on the icosahedron’s 30 edges.

Now let’s think a bit about functions on the sphere that arise from polynomials on $\mathbb{R}^3$. Let’s call them **algebraic functions** on the sphere. They form an algebra, and it’s just the algebra of polynomials on $\mathbb{R}^3$ modulo the relation $P = 1$, since the sphere is the set $\{(x,y,z) : \, x^2 + y^2 + z^2 = 1\}$.

It makes no sense to talk about the ‘degree’ of an algebraic function on the sphere, since the relation $P = 1$ equates polynomials of different degree. What makes sense is the number $\ell$ that I was talking about earlier!

The group $\mathrm{SO}(3)$ acts by rotation on the space of algebraic functions on the sphere, and we can break this space up into irreducible representations of $\mathrm{SO}(3)$. It’s a direct sum of irreps, one of each ‘spin’ $\ell = 0, 1, 2, \dots$

So, we can’t talk about the degree of a function on the sphere, but we can talk about its $\ell$ value. On the other hand, it’s very convenient to work with homogeneous polynomials on $\mathbb{R}^3$, which have a definite degree—and these *restrict to* functions on the sphere. How can we relate the degree and the quantity $\ell$?

Here’s one way. The polynomials on $\mathbb{R}^3$ form a graded algebra. That means it’s a direct sum of vector spaces consisting of homogeneous polynomials of fixed degree, and if we multiply two homogeneous polynomials their degrees add. But the algebra of polynomials restricted to the sphere is merely a *filtered* algebra.

What does this mean? Let $F$ be the algebra of all algebraic functions on the sphere, and let $F_d \subseteq F$ consist of those that are restrictions of polynomials of degree at most $d$. Then:

1) $F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots$

and

2) $F = \bigcup_{d \ge 0} F_d$

and

3) if we multiply a function in $F_d$ by one in $F_e$, we get one in $F_{d+e}$.

That’s what a filtered algebra amounts to.

But starting from a filtered algebra, we can get a graded algebra! It’s called the associated graded algebra.

To do this, we form the quotient spaces

$$G_d = F_d / F_{d-1}$$

(where we set $F_{-1} = \{0\}$) and let

$$G = \bigoplus_{d \ge 0} G_d$$

Then $G$ has a product where multiplying a guy in $G_d$ and one in $G_e$ gives one in $G_{d+e}$. So, it’s indeed a graded algebra! For the details, see Wikipedia, which manages to make it look harder than it is. The basic idea is that we multiply in $F$ and then ‘ignore terms of lower degree’. That’s what the quotient $F_d/F_{d-1}$ is all about.

Now I want to use two nice facts. First, $G_\ell$ is the spin-$\ell$ representation of $\mathrm{SO}(3)$. Second, there’s a natural map from any filtered algebra to its associated graded algebra, which is an isomorphism of vector spaces (though not of algebras). So, we get a natural isomorphism of vector spaces

$$F \; \cong \; \bigoplus_{\ell = 0}^{\infty} G_\ell$$

from the algebraic functions on the sphere to the direct sum of all the spin-$\ell$ representations!
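We can verify the dimension bookkeeping behind this isomorphism in a few lines of Python (helper names are mine). A polynomial of degree at most $d$ vanishes on the sphere exactly when it’s a multiple of $x^2 + y^2 + z^2 - 1$, so $\dim F_d$ is a difference of binomial coefficients, and it should come out to $1 + 3 + \cdots + (2d+1) = (d+1)^2$:

```python
from math import comb

def dim_poly_leq(d):
    """Dimension of the space of polynomials of degree <= d in x, y, z."""
    return comb(d + 3, 3) if d >= 0 else 0

def dim_F(d):
    """dim F_d: polynomials of degree <= d restricted to the sphere.
    The kernel of restriction is (x^2 + y^2 + z^2 - 1) times the
    polynomials of degree <= d - 2."""
    return dim_poly_leq(d) - dim_poly_leq(d - 2)

for d in range(40):
    assert dim_F(d) == (d + 1) ** 2              # = 1 + 3 + 5 + ... + (2d + 1)
    assert dim_F(d) - dim_F(d - 1) == 2 * d + 1  # G_d = F_d/F_{d-1}: spin-d size
print("each graded piece G_d has dimension 2d + 1, as a spin-d irrep should")
```

So each quotient $G_d$ has exactly the dimension of the spin-$d$ irrep, which is the numerical shadow of the first nice fact above.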

Now to the point: because this isomorphism is *natural*, it commutes with symmetries, so we can also use it to study algebraic functions on the sphere that are invariant under a group of linear transformations of $\mathbb{R}^3$.

Before tackling the group we’re really interested in, let’s try the group of rotation *and reflection* symmetries of the icosahedron, $\hat{\mathrm{I}}$. As I mentioned, Chevalley worked out the algebra of polynomials on $\mathbb{R}^3$ that are invariant under this bigger group. It’s a graded commutative algebra, and it’s free on three generators: $P$ of degree 2, $Q$ of degree 6, and $R$ of degree 10.

Starting from here, to get the algebra of $\hat{\mathrm{I}}$-invariant algebraic functions on the sphere, we mod out by the relation $P = 1$. This gives a filtered algebra which I’ll call $F^{\hat{\mathrm{I}}}$. (It’s common to use a superscript with the name of a group to indicate that we’re talking about the stuff that’s invariant under some action of that group.) From this we can form the associated graded algebra

$$G^{\hat{\mathrm{I}}} = \bigoplus_{\ell \ge 0} G^{\hat{\mathrm{I}}}_\ell$$

where

$$G^{\hat{\mathrm{I}}}_\ell = F^{\hat{\mathrm{I}}}_\ell / F^{\hat{\mathrm{I}}}_{\ell - 1}$$

If you’ve understood everything I’ve been trying to explain, you’ll see that $G^{\hat{\mathrm{I}}}_\ell$ is the space of all functions on the sphere that transform in the spin-$\ell$ representation and are invariant under the rotation and reflection symmetries of the icosahedron.

But now for the fun part: what is this space like? By the work of Chevalley, the algebra $F^{\hat{\mathrm{I}}}$ is spanned by products

$$P^i Q^j R^k$$

but since we have the relation $P = 1$ and no other relations, it has a basis given by products

$$Q^j R^k$$

So, the space $F^{\hat{\mathrm{I}}}_\ell$ has a basis of products like this whose degree is at most $\ell$, meaning

$$6j + 10k \le \ell$$

Thus, the space we’re really interested in:

$$G^{\hat{\mathrm{I}}}_\ell = F^{\hat{\mathrm{I}}}_\ell / F^{\hat{\mathrm{I}}}_{\ell - 1}$$

has a basis consisting of equivalence classes

$$[Q^j R^k]$$

where

$$6j + 10k = \ell$$

**Theorem 1.** The dimension of the space of functions on the sphere that lie in the spin-$\ell$ representation of $\mathrm{SO}(3)$ and are invariant under the rotation and reflection symmetries of the icosahedron equals the number of ways of writing $\ell$ as an unordered sum of 6’s and 10’s.

Let’s see how this goes:

$\ell = 0$: dimension 1, with basis $1$

$\ell = 1$: dimension 0

$\ell = 2$: dimension 0

$\ell = 3$: dimension 0

$\ell = 4$: dimension 0

$\ell = 5$: dimension 0

$\ell = 6$: dimension 1, with basis $Q$

$\ell = 7$: dimension 0

$\ell = 8$: dimension 0

$\ell = 9$: dimension 0

$\ell = 10$: dimension 1, with basis $R$

$\ell = 11$: dimension 0

$\ell = 12$: dimension 1, with basis $Q^2$

$\ell = 13$: dimension 0

$\ell = 14$: dimension 0

$\ell = 15$: dimension 0

$\ell = 16$: dimension 1, with basis $QR$

$\ell = 17$: dimension 0

$\ell = 18$: dimension 1, with basis $Q^3$

$\ell = 19$: dimension 0

$\ell = 20$: dimension 1, with basis $R^2$

$\ell = 21$: dimension 0

$\ell = 22$: dimension 1, with basis $Q^2 R$

$\ell = 23$: dimension 0

$\ell = 24$: dimension 1, with basis $Q^4$

$\ell = 25$: dimension 0

$\ell = 26$: dimension 1, with basis $Q R^2$

$\ell = 27$: dimension 0

$\ell = 28$: dimension 1, with basis $Q^3 R$

$\ell = 29$: dimension 0

$\ell = 30$: dimension 2, with basis $Q^5, R^3$

So, the story starts out boring, with long gaps. The odd numbers are completely uninvolved. But it heats up near the end, and reaches a thrilling climax at $\ell = 30$. At this point we get *two* linearly independent solutions, because 30 is the least common multiple of the degrees of $Q$ and $R$.

It’s easy to see that from here on the story ‘repeats’ with period 30, with the dimension growing by 1 each time: for even $\ell$ we have

$$\dim G^{\hat{\mathrm{I}}}_{\ell + 30} \; = \; \dim G^{\hat{\mathrm{I}}}_{\ell} + 1$$

while the odd spins stay at dimension zero forever.
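The table above can be reproduced in a few lines of Python (the function name is my own), just by counting unordered sums of 6’s and 10’s:

```python
def dim_invariant(ell):
    """Ways to write ell as an unordered sum of 6's and 10's (Theorem 1)."""
    return sum(1
               for j in range(ell // 6 + 1)
               for k in range(ell // 10 + 1)
               if 6 * j + 10 * k == ell)

print([ell for ell in range(31) if dim_invariant(ell) > 0])
# [0, 6, 10, 12, 16, 18, 20, 22, 24, 26, 28, 30]
print(dim_invariant(30))   # 2, spanned by Q^5 and R^3

# Even spins gain a dimension every 30 steps; odd spins never appear:
assert all(dim_invariant(ell + 30) == dim_invariant(ell) + 1 for ell in range(0, 60, 2))
assert all(dim_invariant(ell) == 0 for ell in range(1, 90, 2))
```

The absence of odd spins matches the fact that odd-spin functions change sign under total inversion, so they can’t be invariant under the full symmetry group.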

Now, finally, we are ready to tackle Question 4 from the first part of this post: for each $\ell$, how many linearly independent functions on the sphere have that value of $\ell$ and all the rotational symmetries of a dodecahedron?

We just need to repeat our analysis with the group $\mathrm{I}$ of rotational symmetries of the dodecahedron replacing the bigger group $\hat{\mathrm{I}}$.

We start with the algebra of polynomials on $\mathbb{R}^3$ that are invariant under $\mathrm{I}$. As we’ve seen, this is a graded commutative algebra with *four* generators: $P$, $Q$ and $R$ as before, but also $S$ of degree 15. To make up for this extra generator there’s an extra relation, which expresses $S^2$ in terms of the other generators.

Starting from here, to get the algebra of $\mathrm{I}$-invariant algebraic functions on the sphere, we mod out by the relation $P = 1$. This gives a filtered algebra I’ll call $F^{\mathrm{I}}$. Then we form the associated graded algebra

$$G^{\mathrm{I}} = \bigoplus_{\ell \ge 0} G^{\mathrm{I}}_\ell$$

where

$$G^{\mathrm{I}}_\ell = F^{\mathrm{I}}_\ell / F^{\mathrm{I}}_{\ell - 1}$$

What we really want to know is the dimension of $G^{\mathrm{I}}_\ell$, since this is the space of functions on the sphere that transform in the spin-$\ell$ representation and are invariant under the rotational symmetries of the icosahedron.

So, what’s this space like? The algebra $F^{\mathrm{I}}$ is *spanned* by products

$$P^i Q^j R^k S^m$$

but since we have the relation $P = 1$ and a relation expressing $S^2$ in terms of the other generators, it has a *basis* given by products

$$Q^j R^k S^m$$

where

$$m = 0 \textrm{ or } 1$$

So, the space $F^{\mathrm{I}}_\ell$ has a basis of products like this whose degree is at most $\ell$, meaning

$$6j + 10k + 15m \le \ell$$

and

$$m = 0 \textrm{ or } 1$$

Thus, the space we’re really interested in:

$$G^{\mathrm{I}}_\ell = F^{\mathrm{I}}_\ell / F^{\mathrm{I}}_{\ell - 1}$$

has a basis consisting of equivalence classes

$$[Q^j R^k S^m]$$

where

$$6j + 10k + 15m = \ell$$

and

$$m = 0 \textrm{ or } 1$$

So, we get:

**Theorem 2.** The dimension of the space of functions on the sphere that lie in the spin-$\ell$ representation of $\mathrm{SO}(3)$ and are invariant under the rotational symmetries of the icosahedron equals the number of ways of writing $\ell$ as an unordered sum of 6’s, 10’s and at most one 15.

Let’s work out these dimensions explicitly, and see how the extra generator $S$ changes the story! Since it has degree 15, it contributes some solutions for odd values of $\ell$. But when we reach the magic number 30, this extra generator loses its power: $S^2$ has degree 30, but it’s a linear combination of other things.

$\ell = 0$: dimension 1, with basis $1$

$\ell = 1$: dimension 0

$\ell = 2$: dimension 0

$\ell = 3$: dimension 0

$\ell = 4$: dimension 0

$\ell = 5$: dimension 0

$\ell = 6$: dimension 1, with basis $Q$

$\ell = 7$: dimension 0

$\ell = 8$: dimension 0

$\ell = 9$: dimension 0

$\ell = 10$: dimension 1, with basis $R$

$\ell = 11$: dimension 0

$\ell = 12$: dimension 1, with basis $Q^2$

$\ell = 13$: dimension 0

$\ell = 14$: dimension 0

$\ell = 15$: dimension 1, with basis $S$

$\ell = 16$: dimension 1, with basis $QR$

$\ell = 17$: dimension 0

$\ell = 18$: dimension 1, with basis $Q^3$

$\ell = 19$: dimension 0

$\ell = 20$: dimension 1, with basis $R^2$

$\ell = 21$: dimension 1, with basis $QS$

$\ell = 22$: dimension 1, with basis $Q^2 R$

$\ell = 23$: dimension 0

$\ell = 24$: dimension 1, with basis $Q^4$

$\ell = 25$: dimension 1, with basis $RS$

$\ell = 26$: dimension 1, with basis $Q R^2$

$\ell = 27$: dimension 1, with basis $Q^2 S$

$\ell = 28$: dimension 1, with basis $Q^3 R$

$\ell = 29$: dimension 0

$\ell = 30$: dimension 2, with basis $Q^5, R^3$

From here on the story ‘repeats’ with period 30, with the dimension growing by 1 each time:

$$\dim G^{\mathrm{I}}_{\ell + 30} \; = \; \dim G^{\mathrm{I}}_{\ell} + 1$$

So, we’ve more or less proved everything that I claimed in the first part. So we’re done!

But I can’t resist saying a bit more.

First, there’s a very different and somewhat easier way to compute the dimensions in Theorems 1 and 2. It uses the theory of characters, and Egan explained it in a comment on the blog post on which this is based.

Second, if you look in these comments, you’ll also see a lot of material about harmonic polynomials on $\mathbb{R}^3$—that is, those obeying the Laplace equation. These polynomials are very nice when you’re trying to decompose the space of functions on the sphere into irreps of $\mathrm{SO}(3)$. The reason is that the *harmonic* homogeneous polynomials of degree $\ell$, when restricted to the sphere, give you exactly the spin-$\ell$ representation!

If you take *all* homogeneous polynomials of degree $\ell$ and restrict them to the sphere, you get a lot of ‘redundant junk’. You get the spin-$\ell$ rep, plus the spin-$(\ell-2)$ rep, plus the spin-$(\ell-4)$ rep, and so on. The reason is the polynomial

$$P = x^2 + y^2 + z^2$$

and its powers: if you have a polynomial living in the spin-$\ell$ rep and you multiply it by $P$, you get another one living in the spin-$\ell$ rep, but you’ve boosted the degree by 2.
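Here’s a quick Python check of this bookkeeping (function names are mine): the homogeneous degree-$\ell$ polynomials in three variables form a space of dimension $(\ell+1)(\ell+2)/2$, and that exactly matches the total size of the spin-$\ell$, spin-$(\ell-2)$, spin-$(\ell-4)$, … tower:

```python
def dim_homogeneous(ell):
    """Dimension of the homogeneous degree-ell polynomials in x, y, z."""
    return (ell + 1) * (ell + 2) // 2

def dim_spin_tower(ell):
    """Total dimension of the spin-ell, spin-(ell-2), spin-(ell-4), ... irreps."""
    return sum(2 * m + 1 for m in range(ell, -1, -2))

for ell in range(50):
    assert dim_homogeneous(ell) == dim_spin_tower(ell)
print("degree-ell polynomials restrict to spins ell, ell-2, ... exactly once each")
```

So the dimensions leave room for exactly one copy of each of those irreps, consistent with the multiplication-by-$P$ picture.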

Layra Idarani pointed out that this is part of a nice general theory. But I found all this stuff slightly distracting when I was trying to prove Theorems 1 and 2, assuming that we had explicit presentations of the algebras of $\mathrm{I}$- and $\hat{\mathrm{I}}$-invariant polynomials on $\mathbb{R}^3$. So, instead of introducing facts about harmonic polynomials, I decided to use the ‘associated graded algebra’ trick. This is a more algebraic way to ‘eliminate the redundant junk’ in the algebra of polynomials and chop the space of functions on the sphere into irreps of $\mathrm{SO}(3)$.

Also, Egan and Idarani went ahead and considered what happens when we replace the icosahedron by another Platonic solid. It’s enough to consider the cube and tetrahedron. These cases are actually subtler than the icosahedron! For example, when we take the dot product of $(x,y,z)$ with each of the 3 vectors joining antipodal face centers of the cube, and multiply them together, we get a polynomial that’s not invariant under rotations of the cube! Up to a constant it’s just $xyz$, and this changes sign under some rotations.

People call this sort of quantity, which gets multiplied by a number under transformations instead of staying the same, a **semi-invariant**. The reason we run into semi-invariants for the cube and tetrahedron is that their rotational symmetry groups, $\mathrm{S}_4$ and $\mathrm{A}_4$, have nontrivial abelianizations, namely $\mathbb{Z}/2$ and $\mathbb{Z}/3$. The abelianization of $\mathrm{A}_5$ is trivial.

Egan summarized the story as follows:

Just to sum things up for the cube and the tetrahedron, since the good stuff has ended up scattered over many comments:

For the cube, we define:

• $A$ of degree 4 from the cube’s vertex-axes, a full invariant

• $B$ of degree 6 from the cube’s edge-centre-axes, a semi-invariant

• $C$ of degree 3 from the cube’s face-centre-axes, a semi-invariant

We have full invariants:

• $A$ of degree 4

• $C^2$ of degree 6

• $BC$ of degree 9

$B^2$ can be expressed in terms of $A$, $C$ and $P$, so we never use it, and we use $BC$ at most once. So the number of copies of the trivial rep of the rotational symmetry group of the **cube** in spin ℓ is the number of ways to write ℓ as an unordered sum of 4, 6 and at most one 9.

For the tetrahedron, we embed its vertices as four vertices of the cube. We then define:

• $V$ of degree 4 from the tet’s vertices, a full invariant

• $E$ of degree 3 from the tet’s edge-centre axes, a full invariant

And the $B$ we defined for the embedding cube serves as a full invariant of the tet, of degree 6. $B^2$ can be expressed in terms of $V$, $E$ and $P$, so we use $B$ at most once. So the number of copies of the trivial rep of the rotational symmetry group of the **tetrahedron** in spin ℓ is the number of ways to write ℓ as a sum of 3, 4 and at most one 6.
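Egan’s recipes are easy to verify by brute force. Here’s a Python sketch (function names are mine) that counts the relevant sums and lists the spins below 16 that admit an invariant function for the cube and for the tetrahedron:

```python
def count(ell, parts, optional=None):
    """Ways to write ell as an unordered sum of `parts`, plus at most one `optional`."""
    def ways(n, ps):
        if n == 0:
            return 1
        if n < 0 or not ps:
            return 0
        # Either use ps[0] at least once, or never use it again.
        return ways(n - ps[0], ps) + ways(n, ps[1:])
    total = ways(ell, tuple(parts))
    if optional is not None:
        total += ways(ell - optional, tuple(parts))
    return total

cube = [ell for ell in range(16) if count(ell, (4, 6), optional=9) > 0]
tet = [ell for ell in range(16) if count(ell, (3, 4), optional=6) > 0]
print(cube)  # [0, 4, 6, 8, 9, 10, 12, 13, 14, 15]
print(tet)   # [0, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```

So for the tetrahedron only spins 1, 2 and 5 are missing entirely, while the cube skips all of 1, 2, 3, 5, 7 and 11.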

All of this stuff reminds me of a baby version of the theory of modular forms. For example, the algebra of modular forms is graded by ‘weight’, and it’s the free commutative algebra on a guy of weight 4 and a guy of weight 6. So, the dimension of the space of modular forms of weight $w$ is the number of ways of writing $w$ as an unordered sum of 4’s and 6’s. Since the least common multiple of 4 and 6 is 12, we get a pattern that ‘repeats’, in a certain sense, mod 12. Here I’m talking about the simplest sort of modular forms, based on the group $\mathrm{SL}(2,\mathbb{Z})$. But there are lots of variants, and I have the feeling that this post is secretly about some sort of variant based on *finite* subgroups of $\mathrm{SU}(2)$ instead of infinite discrete subgroups.
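This analogy can at least be checked numerically: the count of unordered sums of 4’s and 6’s agrees with the classical dimension formula for level-one modular forms of even weight. A quick Python sketch (function names are mine):

```python
def ways_4_6(w):
    """Unordered sums of 4's and 6's adding up to w."""
    return sum(1 for a in range(w // 4 + 1) if (w - 4 * a) % 6 == 0)

def dim_modular_forms(k):
    """dim M_k(SL(2,Z)) for even k >= 0: the classical formula."""
    return k // 12 if k % 12 == 2 else k // 12 + 1

for w in range(0, 100, 2):
    assert ways_4_6(w) == dim_modular_forms(w)
print("counting 4's and 6's matches dim M_k(SL(2,Z)) for even k < 100")
```

Of course this just restates that the algebra of modular forms is freely generated in weights 4 and 6, but it’s pleasant to see the two descriptions agree.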

There’s a lot more to say about all this, but I have to stop or I’ll never stop. Please ask questions, and let me know if you want me to say more!
