*guest post by Tobias Fritz*

In Part 1 and Part 2, we learnt about ordered commutative monoids and how they formalize theories of resource convertibility and combinability. In this post, I would like to say a bit about the applications that have been explored so far. The study of resource theories has become a popular subject in quantum information theory, and many of the ideas in my paper actually originate there; I'll list some references at the end, and I hope that the toolbox of ordered commutative monoids will turn out to be useful in that field as well. But here I would like to talk about an example application that is much easier to understand, but no less difficult to analyze: graph theory and the resource theory of zero-error communication.

A graph consists of a bunch of nodes connected by a bunch of edges. A simple example is the *pentagon graph*, or *5-cycle*: five nodes arranged in a circle, with each node connected to its two neighbours.

To give it some resource-theoretic interpretation, think of it as the *distinguishability graph* of a communication channel, where the nodes are the symbols that can be sent across the channel, and two symbols share an edge if and only if the receiver can unambiguously tell them apart. For example, the pentagon graph roughly corresponds to the distinguishability graph of my handwriting, when restricted to five letters only.

So my ‘w’ is distinguishable from my ‘u’, but it may be confused with my ‘m’. In order to communicate unambiguously, it looks like I should restrict myself to using only two of those letters, since any third letter may be mistaken for one of the first two. But alternatively, I could use a block code to create context around each letter which allows for perfect disambiguation. This is what happens in practice: I write in natural language, where an entire word is usually not ambiguous even when individual letters are.

One can now also consider graph homomorphisms, which are maps like this:

The numbers on the nodes indicate where each node on the left gets mapped to. Formally, a graph homomorphism is a function taking nodes to nodes such that adjacent nodes get mapped to adjacent nodes. If a homomorphism exists from a graph $G$ to a graph $H$, then we also write $G \le H$; in terms of communication channels, we can interpret this as saying that $H$ simulates $G$, since the homomorphism provides a map between the symbols which preserves distinguishability. A ‘code’ for a communication channel is then just a homomorphism from a complete graph $K_n$, in which all nodes share an edge, to the graph which describes the channel. With this ordering structure, the collection of all finite graphs forms an ordered set. This ordered set has an intricate structure which is intimately related to some big open problems in graph theory.
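To make this concrete, here is a small brute-force check (my own sketch, not from the post) of the homomorphism condition, under the convention that a homomorphism from $G$ to $H$ witnesses that $H$ can simulate $G$. The first assertion shows that a perfect three-symbol channel simulates the pentagon channel, while the exhaustive search shows that no homomorphism from the pentagon to a single edge exists:

```python
# Sketch (not from the post): checking graph homomorphisms by brute force.
from itertools import product

# The pentagon C5: nodes 0..4, edges of the 5-cycle.
C5_edges = [(i, (i + 1) % 5) for i in range(5)]

def is_homomorphism(f, edges, adjacent_H):
    # adjacent nodes of G must be mapped to adjacent nodes of H
    return all(adjacent_H(f[u], f[v]) for u, v in edges)

# In a complete graph K_n, two nodes are adjacent iff they are distinct.
complete = lambda x, y: x != y

# A homomorphism C5 -> K3 is just a proper 3-colouring of the pentagon:
assert is_homomorphism({0: 0, 1: 1, 2: 0, 3: 1, 4: 2}, C5_edges, complete)

# But no homomorphism C5 -> K2 exists: an odd cycle is not 2-colourable.
all_maps = (dict(enumerate(vals)) for vals in product(range(2), repeat=5))
assert not any(is_homomorphism(f, C5_edges, complete) for f in all_maps)
```

The second search only has $2^5 = 32$ candidate maps to rule out, so brute force is entirely adequate here.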

We can also combine two communication channels to form a compound one. Going back to the handwriting example, we can consider the new channel in which the symbols are pairs of letters. Two such pairs are distinguishable if and only if either the first letters of each pair are distinguishable or the second letters are.

When generalized to arbitrary graphs, this yields the definition of the *disjunctive product* of graphs. It is not hard to show that this product equips the ordered set of graphs with a binary operation compatible with the ordering (we write it additively, as $G + H$), so that we obtain an ordered commutative monoid denoted **Grph**. It mathematically formalizes the resource theory of zero-error communication.
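To see the disjunctive product at work, here is a brute-force verification (my own sketch, not from the post) of Shannon's classic pentagon example: the pentagon itself has clique number 2, but its disjunctive square contains 5 mutually distinguishable pairs, so two channel uses can carry one of 5 messages with zero error:

```python
# Sketch (not from the post): the disjunctive square of the pentagon
# contains 5 mutually distinguishable symbol pairs (Shannon's example).
from itertools import combinations, product

def adj(u, v):
    # distinguishability in the pentagon: neighbours on the 5-cycle
    return (u - v) % 5 in (1, 4)

def adj2(p, q):
    # disjunctive product: a pair is distinguishable if the first
    # components are, OR the second components are
    return adj(p[0], q[0]) or adj(p[1], q[1])

def has_clique(vertices, adjacent, k):
    return any(all(adjacent(u, v) for u, v in combinations(sub, 2))
               for sub in combinations(vertices, k))

pairs = list(product(range(5), repeat=2))

assert has_clique(range(5), adj, 2) and not has_clique(range(5), adj, 3)
assert has_clique(pairs, adj2, 5)      # e.g. the pairs (i, 2*i % 5)
assert not has_clique(pairs, adj2, 6)  # and 5 is optimal
```

The 5-clique found here, the pairs $(i, 2i \bmod 5)$, is exactly Shannon's pentagon code.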

Using the toolbox of ordered commutative monoids combined with some concrete computations on graphs, one can show that **Grph** is not cancellative: writing $K_{11}$ for the complete graph on 11 nodes, there are graphs $X$ and $Y$ involving $K_{11}$ with $X \not\le Y$, yet there exists a graph $G$ such that $X + G \le Y + G$.

The graph $G$ turns out to have 136 nodes. This result seems to be new. But if you happen to have seen something like this before, please let me know!

Last time, we also talked about rates of conversion. In **Grph**, it turns out that some of these correspond to famous graph invariants! For example, the rate of conversion from a graph $G$ to the single-edge graph $K_2$ is the Shannon capacity $\Theta(\bar{G})$, where $\bar{G}$ is the complement graph. This is no surprise, since $\Theta$ was originally defined by Shannon with precisely this rate in mind, although he did not use the language of ordered commutative monoids. In any case, the Shannon capacity is a graph invariant notorious for its complexity: it is not known whether there exists an algorithm to compute it! But an application of the Rate Theorem from Part 2 gives us a formula for the Shannon capacity:

$$\Theta(\bar{G}) \;=\; \inf_f f(G),$$

where $f$ ranges over all graph invariants which are monotone under graph homomorphisms, multiplicative under the disjunctive product, and normalized such that $f(K_2) = 2$. Unfortunately, this formula still does not produce an algorithm for computing $\Theta$. But it nonconstructively proves the existence of many new graph invariants which approximate the Shannon capacity from above.
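One classical instance of such an invariant (a worked example of mine, using Lovász's 1979 theorem rather than anything from the post) is $f(G) = \vartheta(\bar{G})$, the Lovász number of the complement: it is monotone under homomorphisms, multiplicative under the disjunctive product, and satisfies $f(K_2) = 2$. For the pentagon, which is self-complementary, this single invariant already determines the capacity:

```latex
% Worked example using standard facts: take f(G) = \vartheta(\bar{G}).
% The pentagon is self-complementary, \bar{C_5} \cong C_5, so
\Theta(C_5) \;\le\; \vartheta(C_5) \;=\; \sqrt{5},
\qquad
\Theta(C_5) \;\ge\; \sqrt{\alpha(C_5 \boxtimes C_5)} \;=\; \sqrt{5},
\qquad\text{hence}\qquad
\Theta(C_5) \;=\; \sqrt{5}.
```

The matching lower bound is exactly Shannon's 5-element code in the disjunctive square of the pentagon.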

Although my story ends here, I also feel that the whole project has barely started. There are lots of directions to explore! For example, it would be great to fit Shannon's noisy channel coding theorem into this framework, but this has turned out to be technically challenging. If you happen to be familiar with rate-distortion theory and you want to help out, please get in touch!

#### References

Here is a haphazard selection of references on resource theories in quantum information theory and related fields:

• Igor Devetak, Aram Harrow and Andreas Winter, A resource framework for quantum Shannon theory.

• Gilad Gour, Markus P. Müller, Varun Narasimhachar, Robert W. Spekkens and Nicole Yunger Halpern, The resource theory of informational nonequilibrium in thermodynamics.

• Fernando G.S.L. Brandão, Michał Horodecki, Nelly Huei Ying Ng, Jonathan Oppenheim and Stephanie Wehner, The second laws of quantum thermodynamics.

• Iman Marvian and Robert W. Spekkens, The theory of manipulations of pure state asymmetry: basic tools and equivalence classes of states under symmetric operations.

• Elliott H. Lieb and Jakob Yngvason, The physics and mathematics of the second law of thermodynamics.

Thanks for the posts. I am not sure whether your meaning of ‘adjacent’ is the same as mine. I understand the above as saying that a triangle with vertices A, B, C can't be mapped to a single edge via a graph homomorphism.

There are many different definitions of ‘graph’, each with its own concept of ‘homomorphism’, but I think Tobias is using this one here:

A **(simple) graph** is a finite set $V$ together with a set $E$ of 2-element subsets of $V$. We call the elements of $V$ **nodes** and we call the elements of $E$ **edges**. If $\{x,y\} \in E$ we say the nodes $x$ and $y$ are **adjacent**, and we say $\{x,y\}$ is an edge **from** $x$ **to** $y$. So, there can't be an edge from a node to itself. There can be at most one edge from $x$ to $y$. If there's an edge from $x$ to $y$, it's also an edge from $y$ to $x$.
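In code, this definition is almost a one-liner. The following sketch (mine, not from the comment) represents edges as 2-element frozensets, which makes them automatically undirected and loop-free:

```python
# Sketch: a simple graph is a node set plus a set of 2-element subsets.
V = {1, 2, 3, 4, 5}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]}

def adjacent(x, y):
    # an edge is an unordered 2-element subset of V
    return frozenset((x, y)) in E

assert all(e <= V for e in E)              # edges join nodes of V
assert all(len(e) == 2 for e in E)         # every edge has two distinct ends
assert adjacent(1, 2) and adjacent(2, 1)   # edges go both ways
assert not adjacent(1, 1)                  # frozenset((1, 1)) = {1}: no loops
```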

It can be helpful to call this a ‘simple graph’ to distinguish it from all the other kinds of graphs. I hardly ever use simple graphs: I prefer directed multigraphs, because they’re actually simpler! But a lot of people call simple graphs just ‘graphs’. A lot of people call directed multigraphs just ‘graphs’, too.

Yes, that's also what I understood from the above explanation, but I wasn't sure, so I asked. That is, if you convert a triangle to an edge with vertices x, y, then, because either x or y has to be labeled by two letters (say x = A, B and y = C), the conversion can no longer be a homomorphism as described above.

I have so far not really understood what the Shannon capacity of a graph is, but by the above I now understand that the rate of conversion to an edge can't be used as a definition of the Shannon capacity of a triangle. So what would be an easy description of the Shannon capacity of a triangle?

(where I had understood that “conversions” are homomorphisms ?)

Nad wrote:

> So what would be an easy description for the Shannon capacity of a triangle?

I'll let Tobias answer that, but I'll note that an easy *description* of the Shannon capacity still doesn't make it easy to *compute* the Shannon capacity. Nobody knows the Shannon capacity of a 7-cycle: that is, a graph that looks like a heptagon!

Yep, you're both completely correct! Sorry if it wasn't clear to begin with.

So here’s a general vague question. How are your ideas on the Shannon capacity of a graph related to Tom Leinster’s?

• Tom Leinster, The Shannon capacity of a graph, 1, The n-Category Café, 10 August 2013.

• Tom Leinster, The Shannon capacity of a graph, 2, The n-Category Café, 18 August 2013.

Maybe there's nothing much to say… but some people, including Nad, might like the first part, since it gives a nice explanation of the idea of Shannon capacity.

Indeed, Tom's explanation is much better than the Wikipedia article, at least I find it so. At first glance it seems that for a triangle (a 3-cycle) in Tom's sense, the Shannon capacity would be one, because for the triangle and for all strong products of it with itself, the largest independent set has just one element. However, Tobias used the complementary type of graph (in Tom's graphs an edge means "rather indistinguishable", while for Tobias an edge means "distinguishable" within a communication). I guess the complementary graph of a 3-cycle (Tobias's graph) would be three distinct points (Tom's graph). Those 3 points all belong to one independent set. If I understood correctly, taking the strong product of this graph of three points with itself gives a graph with 9 separate points; taking the square root gives again 3, and so on, so the Shannon capacity should be 3. Is that right?

Well done — that’s exactly right!

Having to take the complementary graph is a bit of an annoyance. But there's a reason that I work with ‘distinguishability graphs’ instead of the usual ‘confusability graphs’: converting one communication channel into another corresponds to a graph homomorphism of distinguishability graphs, but not of confusability graphs. When using the channel to communicate, we're interested in converting it into a perfect channel, which distinguishes all input symbols perfectly. So we're interested in independent sets in the confusability graph, or equivalently in cliques in the distinguishability graph. When using the same channel repeatedly, the maximal throughput in bits per channel use is described by the Shannon capacity.
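Nad's computation above can be checked mechanically. In the following sketch (mine, not from the thread), the confusability graph of a perfect three-symbol channel is the empty graph on 3 nodes; its strong product with itself is the empty graph on 9 nodes, so the largest independent set is the whole vertex set and the capacity is $\sqrt{9} = 3$:

```python
# Sketch: the strong product of the empty graph on 3 nodes with itself
# (the confusability graph of a perfect three-symbol channel).
from itertools import combinations, product

def strong_product(adj_G, adj_H, nodes_G, nodes_H):
    nodes = list(product(nodes_G, nodes_H))
    def adj(p, q):
        # adjacent or equal in each coordinate, and not the same pair
        ok1 = p[0] == q[0] or adj_G(p[0], q[0])
        ok2 = p[1] == q[1] or adj_H(p[1], q[1])
        return p != q and ok1 and ok2
    return nodes, adj

empty = lambda x, y: False  # no two symbols are confusable
nodes, adj = strong_product(empty, empty, range(3), range(3))

assert len(nodes) == 9
assert not any(adj(p, q) for p, q in combinations(nodes, 2))  # still empty
# so the whole vertex set is independent: alpha = 9, and 9 ** 0.5 == 3.
```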

I think that I found a small error in the post. In your counterexample to cancellativity in **Grph**, you say the graph $G$ has 26 vertices, but in your paper the corresponding graph has 136 vertices.

I think that I have a smaller counterexample. Let $q$ be a prime power such that $q \equiv 1 \pmod{4}$, and let $P_q$ be the Paley graph, or quadratic residue graph, on $q$ vertices. Paley graphs are vertex-transitive and self-complementary, so by Theorem 12 in Lovász's paper "On the Shannon capacity of a graph", $\vartheta(P_q) = \sqrt{q}$. The web page at http://www.research.ibm.com/people/s/shearer/indpal.html lists independence numbers (which equal clique numbers in this case) for small Paley graphs. In particular, $\alpha(P_{17}) = 3$. Thus $K_4 \not\le P_{17}$, while $K_4 + K_4 = K_{16} \le K_{17} \le P_{17} + P_{17}$.
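cullina's observation can be verified by brute force. The sketch below (mine, with the monoid operation read as the disjunctive product) confirms that the Paley graph $P_{17}$ has clique number 3, yet its disjunctive square contains a clique on 17 symbol pairs:

```python
# Sketch: checking the Paley-graph observation by brute force.
from itertools import combinations

q = 17
squares = {(x * x) % q for x in range(1, q)}  # quadratic residues mod 17

def adj(x, y):
    # Paley graph P_17: x ~ y iff x - y is a nonzero quadratic residue
    return x != y and (x - y) % q in squares

# P_17 contains triangles but no K_4, so its clique number is 3:
def has_clique(k):
    return any(all(adj(a, b) for a, b in combinations(s, 2))
               for s in combinations(range(q), k))
assert has_clique(3) and not has_clique(4)

# Yet the disjunctive square contains a 17-clique: pair each symbol x with
# 3x mod 17 (3 is a non-residue, so it swaps residues and non-residues).
code = [(x, (3 * x) % q) for x in range(q)]
assert all(adj(u[0], v[0]) or adj(u[1], v[1])
           for u, v in combinations(code, 2))
```

The 17-clique works because whenever $x - y$ is a non-residue, $3(x - y)$ is a residue, so each pair of code words is distinguishable in at least one coordinate.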

Very cool!! David Roberson and I are working on exploring the ordered commutative monoid of graphs in more detail. Since your observation also goes in that direction, maybe you’re interested in helping out? If so, just send me an email. (It seems that we’re both working on other projects first, so we’re likely to be quite slow.)

You’re right, I should have said 136 vertices in the original post…

In order to get the subscripts right on this blog, you can write P_{17} instead of P_17. The 7’s in cullina’s second comment all belong to the subscript.

I have fixed the subscripts in cullina’s second comment. It would be great if this post of yours could lead to a collaboration!

I will also fix the original post so that it says the graph has 136 vertices. Not everybody reads the comments to find errors!

The paper has now appeared on the arXiv:

http://arxiv.org/abs/1504.03661