In the last of my Oxford talks I explain how entropy and relative entropy can be understood using certain categories related to probability theory… and how these categories also let us understand Bayesian networks!
The first two parts are explanations of these papers:
• John Baez, Tobias Fritz and Tom Leinster, A characterization of entropy in terms of information loss
• John Baez and Tobias Fritz, A Bayesian characterization of relative entropy.
Somewhere around here the talk was interrupted by a fire drill, waking up the entire audience!
By the way, in my talk I mistakenly said that relative entropy is a continuous functor; in fact it’s just lower semicontinuous. I’ve fixed this in my slides.
The third part of my talk was my own interpretation of Brendan Fong’s master’s thesis:
• Brendan Fong, Causal Theories: a Categorical Perspective on Bayesian Networks.
I took a slightly different approach, saying that a causal theory is the free category with finite products on certain objects and morphisms coming from a directed acyclic graph. In his thesis he said it was the free symmetric monoidal category where each generating object is equipped with a cocommutative comonoid structure. This is close to a category with finite products, though perhaps not quite the same: a symmetric monoidal category where every object is equipped with a cocommutative comonoid structure in a natural way (i.e., making a bunch of squares commute) is a category with finite products. It would be interesting to see whether this difference hurts or helps.
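To give the flavor of the construction, here is a minimal sketch in code (my own illustration, not taken from Fong's thesis): from a directed acyclic graph we read off the generating morphisms of the causal theory, one generator from the product of a node's parents to that node, for each node. The function name and the example network are my own choices.

```python
def causal_generators(dag):
    """Given a DAG as a dict mapping each node to the list of its parents,
    return one generating morphism per node: a map from the (finite) product
    of its parents to the node itself, written here as (domain, codomain)."""
    return {node: (tuple(parents), node) for node, parents in dag.items()}

# Example: a small rain/sprinkler/wet-grass Bayesian network.
dag = {
    "rain": [],                          # no parents: a generator 1 -> rain
    "sprinkler": ["rain"],               # rain -> sprinkler
    "wet_grass": ["rain", "sprinkler"],  # rain x sprinkler -> wet_grass
}

gens = causal_generators(dag)
# gens["wet_grass"] is (("rain", "sprinkler"), "wet_grass"):
# a morphism from the product of the two parents to "wet_grass".
```

The free category with finite products on these generators then contains all the composites, pairings, duplications and deletions one can build from them; a Bayesian network is a product-preserving functor from this category to a category of probabilistic maps.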
By making this slight change, I am claiming that causal theories can be seen as algebraic theories in the sense of Lawvere. This would be a very good thing, since we know a lot about those.
You can also see the slides of this talk. Click on any picture in the slides, or any text in blue, to get more information!