I’m trying to finish off a paper that Tobias Fritz and I have been working on, which gives a category-theoretic (and Bayesian!) characterization of relative entropy. It’s a kind of sequel to our paper with Tom Leinster, in which we characterized entropy.
That earlier paper was developed in conversations on the n-Category Café. It was a lot of fun; I sort of miss that style of working. Also, to get warmed up, I need to think through some things I’ve thought about before. So, I might as well write them down here.
The idea
There are many categories related to probability theory, and they’re related in many ways. Last summer—on the 24th of August 2012, according to my notes here—Jamie Vicary, Brendan Fong and I worked through a bunch of these relationships. I need to write them down now, even if they’re not all vitally important to my paper with Tobias. They’re sort of buzzing around my brain like flies.
(Tobias knows this stuff too, and this is how we think about probability theory, but we weren’t planning to stick it in our paper. Maybe we should.)
Let’s restrict attention to probability measures on finite sets, and related structures. We could study these questions more generally, and we should, but not today. What we’ll do is give a unified purely algebraic description of:
• finite sets
• measures on finite sets
• probability measures on finite sets
and various kinds of maps between these:
• functions
• bijections
• measure-preserving functions
• stochastic maps
Finitely generated free [0,∞)-modules
People often do linear algebra over a field, which is—roughly speaking—a number system where you can add, subtract, multiply and divide. But algebraists have long realized that a lot of linear algebra still works with a commutative ring, where you can’t necessarily divide. It gets more complicated, but also a lot more interesting.
But in fact, a lot still works with a commutative rig, where we can’t necessarily subtract either! Something I keep telling everyone is that linear algebra over rigs is a good idea for studying things like probability theory, thermodynamics, and the principle of least action.
Today we’ll start with the rig of nonnegative real numbers with their usual addition and multiplication; let’s call this $[0,\infty)$. The idea is that measure theory, and probability theory, are closely related to linear algebra over this rig.
Let $[0,\infty)\mathrm{Mod}$ be the category with finitely generated free $[0,\infty)$-modules as objects, and module homomorphisms as morphisms. I’ll call these morphisms maps.
Puzzle. Do we need to say ‘free’ here? Are there finitely generated modules over $[0,\infty)$ that aren’t free?
Every finitely generated free $[0,\infty)$-module is isomorphic to $[0,\infty)^S$ for some finite set $S$. In other words, it’s isomorphic to $[0,\infty)^n$ for some $n = 0, 1, 2, \dots$ So, $[0,\infty)\mathrm{Mod}$ is equivalent to the category where objects are natural numbers, a morphism from $m$ to $n$ is an $n \times m$ matrix of numbers in $[0,\infty)$, and composition is done by matrix multiplication. I’ll also call this equivalent category $[0,\infty)\mathrm{Mod}$.
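To make this concrete, here’s a little sketch (in Python, with names of my own choosing, not from anything above) of the matrix description: a morphism from $m$ to $n$ is an $n \times m$ nonnegative matrix, and composition is matrix multiplication.

```python
# A morphism from m to n is an n x m matrix of nonnegative reals,
# stored as a list of rows; composition is matrix multiplication.
# (Function names here are illustrative, not from the post.)

def compose(g, f):
    """Compose f : m -> n with g : n -> p, giving g.f : m -> p."""
    n, m = len(f), len(f[0])
    p = len(g)
    assert len(g[0]) == n, "dimensions must match"
    return [[sum(g[i][k] * f[k][j] for k in range(n)) for j in range(m)]
            for i in range(p)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

f = [[1.0, 2.0],
     [0.5, 0.0],
     [0.0, 3.0]]          # a morphism from 2 to 3

print(compose(identity(3), f) == f)   # the identity law holds
```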
We can take tensor products of finitely generated free $[0,\infty)$-modules, and this makes $[0,\infty)\mathrm{Mod}$ into a symmetric monoidal †-category. This means we can draw maps using string diagrams in the usual way. However, I’m feeling lazy so I’ll often write equations when I could be drawing diagrams.
One of the rules of the game is that all these equations will make sense in any symmetric monoidal †-category. So we could, if we wanted, generalize ideas from probability theory this way. If you want to do this, you’ll need to know that $[0,\infty)$ is the unit for the tensor product in $[0,\infty)\mathrm{Mod}$. We’ll be seeing this guy a lot. So if you want to generalize, replace $[0,\infty)\mathrm{Mod}$ by any symmetric monoidal †-category, and replace $[0,\infty)$ by the unit for the tensor product.
Finite sets
There’s a way to see the category of finite sets lurking in $[0,\infty)\mathrm{Mod}$, which we can borrow from this paper:
• Bob Coecke, Dusko Pavlovic and Jamie Vicary, A new description of orthogonal bases.
For any finite set $S$, we get a finitely generated free $[0,\infty)$-module, namely $[0,\infty)^S$. This comes with some structure:

• a multiplication $m : [0,\infty)^S \otimes [0,\infty)^S \to [0,\infty)^S$, coming from pointwise multiplication of $[0,\infty)$-valued functions on $S$

• the unit for this multiplication, an element of $[0,\infty)^S$, which we can write as a morphism $i : [0,\infty) \to [0,\infty)^S$

• a comultiplication $\Delta : [0,\infty)^S \to [0,\infty)^S \otimes [0,\infty)^S$, obtained by taking the diagonal map $S \to S \times S$ and promoting it to a linear map

• a counit $e : [0,\infty)^S \to [0,\infty)$, obtained by taking the unique map from $S$ to the terminal set and promoting it to a linear map
These morphisms make $[0,\infty)^S$ into a commutative Frobenius algebra in $[0,\infty)\mathrm{Mod}$. That’s a thing where the unit, counit, multiplication and comultiplication obey these laws:

$m(m \otimes 1) = m(1 \otimes m), \qquad m(i \otimes 1) = 1 = m(1 \otimes i),$

$(\Delta \otimes 1)\Delta = (1 \otimes \Delta)\Delta, \qquad (e \otimes 1)\Delta = 1 = (1 \otimes e)\Delta,$

$(1 \otimes m)(\Delta \otimes 1) = \Delta m = (m \otimes 1)(1 \otimes \Delta).$
(I drew these as string diagrams back when I was feeling less lazy.) This Frobenius algebra is also ‘special’, meaning it obeys this:

$m \circ \Delta = 1.$
And it’s also a †-Frobenius algebra, meaning that the counit and comultiplication are obtained from the unit and multiplication by ‘flipping’ them using the †-category structure: $e = i^\dagger$ and $\Delta = m^\dagger$. (If we think of a morphism in $[0,\infty)\mathrm{Mod}$ as a matrix, its dagger is its transpose.)
Conversely, suppose we have any special commutative †-Frobenius algebra $A$ in $[0,\infty)\mathrm{Mod}$. Then using the ideas in the paper by Coecke, Pavlovic and Vicary we can recover a basis for $A$, consisting of the nonzero vectors $x$ with

$\Delta(x) = x \otimes x.$
This basis forms a set $S$ such that

$A \cong [0,\infty)^S$

for some specified isomorphism in $[0,\infty)\mathrm{Mod}$. Furthermore, this is an isomorphism of special commutative †-Frobenius algebras!

In case you’re wondering, these vectors correspond to the functions on $S$ that are zero everywhere except at one point, where they equal 1.
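If you like, you can check the ‘special’ law concretely with matrices. Here’s a sketch (my own indexing conventions, not from the post) that builds the multiplication for $[0,\infty)^S$, takes its transpose as the comultiplication, and verifies $m \circ \Delta = 1$:

```python
# For S = {0,...,n-1}, in matrix form:
#   multiplication   m : n*n -> n   (pointwise product on basis vectors)
#   unit             i : 1 -> n     (the all-ones function)
#   comultiplication = m-dagger, counit = i-dagger (dagger = transpose)

def mult(n):
    # m sends the basis vector e_j (x) e_k to e_j if j == k, else to 0;
    # we index the tensor basis by j*n + k
    M = [[0.0] * (n * n) for _ in range(n)]
    for j in range(n):
        M[j][j * n + j] = 1.0
    return M

def unit(n):
    return [[1.0] for _ in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def compose(g, f):
    return [[sum(g[i][k] * f[k][j] for k in range(len(f)))
             for j in range(len(f[0]))] for i in range(len(g))]

n = 3
m, Delta = mult(n), transpose(mult(n))
# the 'special' law: m composed with Delta is the identity
print(compose(m, Delta) == [[1.0 if i == j else 0.0 for j in range(n)]
                            for i in range(n)])
```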
In short, a special commutative †-Frobenius algebra in $[0,\infty)\mathrm{Mod}$ is just a fancy way of talking about a finite set. This may seem silly, but it’s a way to start describing probability theory using linear algebra very much as we do with quantum theory. This analogy between quantum theory and probability theory is so interesting that it deserves a book.
Functions and bijections
Now suppose we have two special commutative †-Frobenius algebras in $[0,\infty)\mathrm{Mod}$, say $A$ and $B$.
Suppose $f : A \to B$ is a Frobenius algebra homomorphism: that is, a map preserving all the structure—the unit, counit, multiplication and comultiplication. Then it comes from an isomorphism of finite sets. This lets us find the groupoid of finite sets and bijections inside $[0,\infty)\mathrm{Mod}$.
Alternatively, suppose $f : A \to B$ is just a coalgebra homomorphism: that is, a map preserving just the counit and comultiplication. Then it comes from an arbitrary function between finite sets. This lets us find the category of finite sets and functions inside $[0,\infty)\mathrm{Mod}$.
But what if $f$ preserves just the counit? This sounds like a dry, formal question. But it’s not: the answer is something useful, a ‘stochastic map’.
Stochastic maps
A stochastic map from a finite set $S$ to a finite set $T$ is a map sending each point of $S$ to a probability measure on $T$.
We can think of this as a $T \times S$-shaped matrix of numbers in $[0,\infty)$, where a given column gives the probability that a given point in $S$ goes to each point in $T$. The sum of the numbers in each column will be 1. And conversely, any $T \times S$-shaped matrix of numbers in $[0,\infty)$ where each column sums to 1 gives a stochastic map from $S$ to $T$.
But now let’s describe this idea using the category $[0,\infty)\mathrm{Mod}$. We’ve seen a finite set is the same as a special commutative †-Frobenius algebra. So, say we have two of these, $A$ and $B$. Our matrix of numbers in $[0,\infty)$ is just a map

$f : A \to B.$
So, we just need a way to state the condition that each column in the matrix sums to 1. And this condition simply says that $f$ preserves the counit:

$e_B \circ f = e_A,$

where $e_A$ is the counit for $A$, and similarly for $e_B$.
To understand this, note that if we use the canonical isomorphism

$A \cong [0,\infty)^S,$

the counit $e_A$ can be seen as the map

$[0,\infty)^S \to [0,\infty)$

that takes any $S$-tuple of numbers and sums them up. In other words, it’s integration with respect to counting measure. So, the equation

$e_B \circ f = e_A$

says that if we take any $S$-tuple of numbers, multiply it by the matrix $f$ and then sum up the entries of the resulting $T$-tuple, it’s the same as if we summed up the original $S$-tuple. But this says precisely that each column of the matrix sums to 1.
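In matrix terms the counit is just the all-ones row vector, so this equivalence is easy to check by hand. A small sketch (illustrative names, dyadic entries to keep the arithmetic exact):

```python
# The counit e : [0,∞)^n -> [0,∞) is the 1 x n all-ones row vector,
# i.e. "sum the entries".  A map f preserves the counit iff
# e_B . f = e_A, i.e. every column of f sums to 1.

def counit(n):
    return [[1.0] * n]

def compose(g, f):
    return [[sum(g[i][k] * f[k][j] for k in range(len(f)))
             for j in range(len(f[0]))] for i in range(len(g))]

f = [[0.5, 0.25],
     [0.25, 0.75],
     [0.25, 0.0]]         # a stochastic map from a 2-point set to a 3-point set

lhs = compose(counit(3), f)    # e_B composed with f
print(lhs == counit(2))        # True: each column of f sums to 1
```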
So, we can use our formalism to describe the category with finite sets as objects and stochastic maps as morphisms. We’ve seen this category is equivalent to the category with special commutative †-Frobenius algebras in $[0,\infty)\mathrm{Mod}$ as objects and counit-preserving maps as morphisms.
Finite measure spaces
Now let’s use our formalism to describe finite measure spaces—by which, beware, I mean finite sets equipped with measures! To do this, we’ll use a special commutative †-Frobenius algebra $A$ in $[0,\infty)\mathrm{Mod}$, together with any map

$\mu : [0,\infty) \to A.$
Starting from these, we get a specified isomorphism

$A \cong [0,\infty)^S,$

and $\mu$ sends the number 1 to a vector in $[0,\infty)^S$: that is, a function on $S$ taking values in $[0,\infty)$. Multiplying this function by counting measure, we get a measure on $S$.
Puzzle. How can we describe this measure without the annoying use of counting measure?
Conversely, any measure on a finite set $S$ gives a special commutative †-Frobenius algebra $[0,\infty)^S$ in $[0,\infty)\mathrm{Mod}$, equipped with a map from $[0,\infty)$.
So, we can say a finite measure space is a special commutative †-Frobenius algebra $A$ in $[0,\infty)\mathrm{Mod}$ equipped with a map $\mu : [0,\infty) \to A.$
And given two of these,

$\mu : [0,\infty) \to A \quad \mathrm{and} \quad \nu : [0,\infty) \to B,$

and a coalgebra morphism

$f : A \to B$

obeying the equation

$f \circ \mu = \nu,$

we get a measure-preserving function between finite measure spaces! If you’re a category theorist, you’ll draw this equation as a commutative triangle, with $[0,\infty)$ at the top, $\mu$ and $\nu$ as its two slanted sides, and $f$ along the bottom.
Conversely, any measure-preserving function between finite measure spaces obeys this equation. So, we get an algebraic way of describing the category with finite measure spaces as objects and measure-preserving maps as morphisms.
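Concretely, a measure on $S$ is the vector $\mu(1)$, and a function between finite sets, promoted to a linear map, is a 0/1 matrix with a single 1 in each column; the equation $f \circ \mu = \nu$ is then the pushforward condition. A sketch (names and encoding are my own):

```python
# A measure on S is a map from [0,∞) into [0,∞)^S, i.e. the column
# vector mu(1).  A function f : S -> T, promoted to a linear map, is
# the T x S matrix with a single 1 in each column.  The function is
# measure-preserving iff applying that matrix to mu gives nu.

def promote(f, n_T):
    """Promote a function f : S -> T (a list of images) to a matrix."""
    n_S = len(f)
    F = [[0.0] * n_S for _ in range(n_T)]
    for j, i in enumerate(f):
        F[i][j] = 1.0
    return F

def apply(F, mu):
    return [sum(F[i][j] * mu[j] for j in range(len(mu))) for i in range(len(F))]

mu = [0.5, 0.25, 0.25]      # a measure on a 3-point set S
f  = [0, 0, 1]              # a function S -> T collapsing the first two points
nu = apply(promote(f, 2), mu)
print(nu)                   # [0.75, 0.25]: the pushforward measure on T
```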
Finite probability measure spaces
I’m mainly interested in probability measures. So suppose $A$ is a special commutative †-Frobenius algebra in $[0,\infty)\mathrm{Mod}$ equipped with a map

$\mu : [0,\infty) \to A.$
We’ve seen this gives a finite measure space. But this is a probability measure space if and only if

$e \circ \mu = 1,$

where

$e : A \to [0,\infty)$

is the counit for $A$. The equation simply says the total integral of our measure is 1.
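Since the counit just sums the entries, this condition is a one-liner to check. A trivial sketch (illustrative names, dyadic entries so the sums are exact):

```python
# A measure mu is a probability measure iff e . mu = 1: summing the
# entries of the vector mu(1) gives 1.

def is_probability(mu):
    return sum(mu) == 1.0

print(is_probability([0.5, 0.25, 0.25]))   # True
print(is_probability([2.0, 1.0]))          # False: total mass is 3
```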
So, we get a way of describing the category which has finite probability measure spaces as objects and measure-preserving maps as morphisms. Given finite probability measure spaces described this way:

$\mu : [0,\infty) \to A, \qquad \nu : [0,\infty) \to B,$
a measure-preserving function is a coalgebra morphism

$f : A \to B$

such that the obvious triangle commutes:

$f \circ \mu = \nu.$
Measure-preserving stochastic maps
Say we have two finite measure spaces. Then we can ask whether a stochastic map from one to the other is measure-preserving. And we can answer this question in the language of $[0,\infty)\mathrm{Mod}$.
Remember, a finite measure space is a special commutative †-Frobenius algebra $A$ in $[0,\infty)\mathrm{Mod}$ together with a map

$\mu : [0,\infty) \to A.$
Say we have another one:

$\nu : [0,\infty) \to B.$
A stochastic map is just a map

$f : A \to B$

that preserves the counit:

$e_B \circ f = e_A.$
But it’s a measure-preserving stochastic map if also

$f \circ \mu = \nu.$
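Putting the two conditions together gives a quick concrete test: the columns of $f$ must sum to 1, and $f$ applied to $\mu$ must give $\nu$. A sketch (names my own, entries chosen dyadic so the checks are exact):

```python
# A map f : A -> B is a measure-preserving stochastic map iff
#   (1) it preserves the counit: every column of f sums to 1, and
#   (2) it carries mu to nu: applying f to mu gives nu.

def columns_sum_to_one(f):
    return all(sum(row[j] for row in f) == 1.0 for j in range(len(f[0])))

def apply(f, mu):
    return [sum(f[i][j] * mu[j] for j in range(len(mu))) for i in range(len(f))]

f  = [[0.5, 1.0],
      [0.5, 0.0]]
mu = [0.5, 0.5]
nu = apply(f, mu)

print(columns_sum_to_one(f))   # True: f is stochastic
print(nu)                      # [0.75, 0.25]: the measure f carries mu to
```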
Addendum
In this article I didn’t get anywhere near to talking about what Tobias and I were actually doing! But it was good to get this basic stuff written down.
Here’s the paper we eventually wrote:
• John Baez and Tobias Fritz, A Bayesian characterization of relative entropy, Theory and Applications of Categories 29 (2014), 421–456.
And here’s my whole series of blog articles about it:
• Relative Entropy (Part 1): how various structures important in probability theory arise naturally when you do linear algebra using only the nonnegative real numbers.
• Relative Entropy (Part 2): a category related to statistical inference, and how relative entropy defines a functor on this category.
• Relative Entropy (Part 3): how to characterize relative entropy as a functor from $\mathrm{FinStat}$ to $[0,\infty]$.
• Relative Entropy (Part 4): wrap-up, and an invitation to read more about the underlying math at the n-Category Café.
I usually use the rig $[0,\infty]$ for measure theory (with $0 \cdot \infty = 0$, so that it is a rig; see also nLab: lower real number#arithmetic for a constructive treatment that makes this happen automatically) rather than $[0,\infty)$. This doesn’t really matter for probability measures, but normally one allows positive measures (even on finite measure spaces) to be infinite.
Good point! I don’t feel a pressing need for $\infty$ here, but I think everything would work just as well with it… and when we get to relative entropy, we’ll see it really needs to take the value $\infty$ whenever you have a prior that assigns zero probability to an event that turns out to happen, leaving you ‘infinitely surprised’. I hadn’t thought that that use of $\infty$ could be extended to supplant the use of $[0,\infty)$ here, since I (implicitly) thought of entropy and probability as being different ‘types of thing’. But maybe using $[0,\infty]$ everywhere will make things nicer somehow.
It’s great to see this written down! Typo: $\dagger $-category is missing the “latex”.
Concerning the first puzzle on finitely generated $[0,\infty)$-modules: most of them are not free, and these are among my favorite mathematical objects: polyhedral convex cones! The simplest non-free example is a 3-dimensional convex cone spanned by four rays, such as the cone over a square. Including these in the category will take us into the realm of general probabilistic theories, where the cones are non-classical state spaces. I don’t see how to retain the †-structure, though…
There are other $[0,\infty)$-modules which are not even embeddable into a vector space. The most notorious ones are semilattices with a bottom element, i.e. commutative idempotent monoids. Such a gadget becomes a $[0,\infty)$-module if one takes multiplication by positive scalars to act trivially.
Concerning the second puzzle, what about using rather than ? I’m probably missing something, it’s early morning ;)
Thanks! Are you trying to keep your identity secret? I’d like to let people know who you are.
More later… thanks a lot for help on that first puzzle. The second puzzle I think I understand.
I’m not trying to keep my identity secret, I just need to get to grips with WordPress ;) So this comment is just an experiment for trying to see whether it puts my name down now…
I actually think it’s better to describe a measure on a finite set using a map $\mu : [0,\infty) \to A$ rather than a map $A \to [0,\infty)$. Of course using the †-category structure we can freely turn one into the other, but still…
The reason is that measures are covariant; we want to be able to take a measure $\mu : [0,\infty) \to A$ and push it forward along $f : A \to B$ and get a new measure on $B$. And in this approach, it’s easy: just compose them and get $f \circ \mu : [0,\infty) \to B$. Note this works when $f$ is actually a function between finite sets, but also more generally when $f$ is a stochastic map, or even more generally when it’s what I was calling a ‘map’, meaning any homomorphism of $[0,\infty)$-modules.
Anyway, here’s a slick way to see how a map $\mu : [0,\infty) \to A$ gives a measure on $S$ when $A = [0,\infty)^S$, without mentioning counting measure. First you note that a map $g : A \to [0,\infty)$ is the same as a $[0,\infty)$-valued function on $S$. Then you note that the composite map

$g \circ \mu : [0,\infty) \to [0,\infty)$

amounts to multiplying by some number in $[0,\infty)$. We can think of this number as the integral of the function $g$ with respect to the measure corresponding to $\mu$.
In short, measures are maps $\mu : [0,\infty) \to A$. Functions are maps $g : A \to [0,\infty)$. Measures and functions are dual, both in the sense that we can compose them and get numbers, and in the sense that we can ‘turn one around’ using the †-structure and get the other!
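In coordinates this duality is just the pairing of a row vector with a column vector: the composite $g \circ \mu$ multiplies by the integral of $g$ against $\mu$. A tiny sketch (illustrative names, dyadic numbers so the arithmetic is exact):

```python
# Measures are column vectors (maps [0,∞) -> A); functions are row
# vectors (maps A -> [0,∞)).  Composing g . mu multiplies by one
# number: the integral of g with respect to mu.  The dagger
# (transpose) turns a column into a row and vice versa.

def integral(g, mu):
    """The number that the composite g . mu multiplies by."""
    return sum(g_x * mu_x for g_x, mu_x in zip(g, mu))

mu = [0.5, 0.25, 0.25]     # a probability measure on a 3-point set
g  = [1.0, 2.0, 4.0]       # a [0,∞)-valued function on the same set

print(integral(g, mu))     # 2.0  (= 0.5*1 + 0.25*2 + 0.25*4)
```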
That’s very neat!
I think the simplest nonfree example is 1d, isn’t it? I’m thinking about:

$X = [0,\infty)^2 / ((x,x) = 0).$

As a space, this has to be 1-dimensional, as one can easily check, and the unique 1d free module is spanned by one element. Yet no one element of $X$ spans it.
Isn’t this just the real line, considered as a $[0,\infty)$-module? Your quotient sounds a lot like the construction of formally turning an abelian monoid into an abelian group by throwing in inverses. This can also be applied to $[0,\infty)$-modules, in which case it turns every module into a vector space.
So yes, the simplest non-free module is 1-dimensional, namely just the real line. D’oh!
In fact, the “simplest” example is probably just $\{0,1\}$, considered as a join semilattice…
Hey, you’re right! I didn’t think of that because I wasn’t sure if the real line would be finitely generated, but of course it is! It’s generated by 1 and -1. And that’s how I thought of my example: two generators such that the sum is zero.
Your second example is more interesting, though, because who ever thinks of $\{0,1\}$ as a $[0,\infty)$-module?
Actually what’s really going on here is that $\{0,1\}$ is the Boolean semiring (rig) with $1 + 1 = 1$, which comes with a rig homomorphism $[0,\infty) \to \{0,1\}$. This is how the Boolean rig becomes a $[0,\infty)$-module.
More generally, the $\{0,1\}$-modules are precisely the join-semilattices, hence all these become $[0,\infty)$-modules via the above homomorphism.
(Writing $\{0,1\}$ for the Boolean rig may be a bit misleading, since the addition is not the usual one!)
Sorry, I should have explained that the homomorphism is the map $[0,\infty) \to \{0,1\}$ which sends $0$ to $0$ and every positive number to $1$.
You probably know this, but I should point out: if you think of $[0,\infty)$ as the rig that governs probability theory, then its quotient $\{0,1\}$ is the rig that governs possibility theory.
Numbers in $[0,\infty)$ are relative probabilities. In $\{0,1\}$, 0 means ‘impossible’ and 1 means ‘possible’. So, the quotient map $[0,\infty) \to \{0,1\}$ sends nonzero probabilities to ‘possible’, and zero probability to ‘impossible’.
So what’s really really going on is that $\{0,1\}$ is the rig of truth values under the operations OR and AND. This makes sense, since probability theory is generalized logic.
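The quotient map described above is easy to check as a rig homomorphism: it sends $0$ to $0$ and everything positive to $1$, and it respects addition and multiplication because a sum is possible iff some summand is (OR), and a product is possible iff both factors are (AND). A sketch (illustrative names):

```python
# The Boolean rig {0,1}: multiplication is AND, addition is OR (1+1=1).
# The rig homomorphism [0,∞) -> {0,1} sends 0 to 0 and every positive
# number to 1, turning probabilities into mere possibilities.

def possible(p):
    """The quotient map [0,∞) -> {0,1}."""
    return 1 if p > 0 else 0

def bool_add(a, b):   # addition in the Boolean rig: OR
    return a | b

def bool_mul(a, b):   # multiplication in the Boolean rig: AND
    return a & b

# The homomorphism property: it respects + and *
p, q = 0.3, 0.0
print(possible(p + q) == bool_add(possible(p), possible(q)))  # True
print(possible(p * q) == bool_mul(possible(p), possible(q)))  # True
```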
Hey—great to see you here, John! This example

$X = [0,\infty)^2 / ((x,x) = 0)$

is confusing to me. Is that quotient a quotient of sets, or of $[0,\infty)$-modules? Since you’re trying to create a $[0,\infty)$-module I’ll assume the latter.
If so, you’re taking $[0,\infty)^2$, viewed as a free $[0,\infty)$-module with two generators, and you’re creating a quotient $[0,\infty)$-module in which all elements of the form $(x,x)$ are set equal to zero.
But doing that forces

$(x + c, y + c) = (x, y) + (c, c) = (x, y).$

That is, all elements along diagonal lines get identified. So, every element becomes equivalent to one of the form $(x, 0)$ or one of the form $(0, y)$. And

$(x, 0) + (0, x) = (x, x) = 0,$

so every element has a negative. Now I’m thinking Tobias was right and your gadget is isomorphic to $\mathbb{R}$ as a $[0,\infty)$-module. Let me just check that addition works correctly: I want $(x, 0)$ plus $(0, y)$ to equal $(x - y, 0)$ if $x$ is bigger, and $(0, y - x)$ if $y$ is bigger. I’ll just do the first case:

$(x, 0) + (0, y) = (x, y) = (x - y, 0) + (y, y) = (x - y, 0).$
Yup! So, you’ve got a topologically 1-dimensional module that requires two generators, and it’s isomorphic to $\mathbb{R}$.
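The calculation above can be checked numerically by encoding the class of $(x, y)$ as the real number $x - y$, which is exactly the isomorphism with $\mathbb{R}$. A sketch (my own encoding):

```python
# The quotient of [0,∞)^2 by setting (x,x) = 0: every class has a
# representative (x,0) or (0,y), and the class of (x,y) is determined
# by the real number x - y.

def cls(x, y):
    """The class of (x, y) in the quotient module, encoded as x - y."""
    return x - y

# (x,0) + (0,y) should be (x-y, 0) when x is bigger:
x, y = 5.0, 2.0
print(cls(x, 0) + cls(0, y) == cls(x - y, 0))   # True
# and every element has a negative: (x,0) + (0,x) = (x,x) = 0
print(cls(x, 0) + cls(0, x) == 0.0)             # True
```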
Nice.