## The Monoidal Grothendieck Construction

24 April, 2020

My student Joe Moeller gave a talk at the MIT Categories Seminar today! People discussed his talk at the Category Theory Community Server, and if you join that you can see the discussion here:

https://categorytheory.zulipchat.com/#narrow/stream/229457-MIT-Categories.20Seminar/topic/April.2023.20-.20Joe.20Moeller’s.20talk

You can see his slides here, and watch a video of his talk here:

The monoidal Grothendieck construction

Abstract. The Grothendieck construction gives an equivalence between fibrations and indexed categories. We will begin with a review of the classical story. We will then lift this correspondence to two monoidal variants, a global version and a fibre-wise version. Under certain conditions these are equivalent, so one can transfer fibre-wise monoidal structures to the total category. We will give some examples demonstrating the utility of this construction in applied category theory and categorical algebra.

The talk is based on this paper:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

This, in turn, had its roots in our work on network models, a setup for the compositional design of networked systems:

• John Baez, John Foley, Joe Moeller and Blake Pollard, Network models.

## Star-Autonomous Envelopes

21 April, 2020

In the fourth talk of the ACT@UCR seminar, Michael Shulman told us how to create nice string diagrams for any closed symmetric monoidal category.

Mike had to teach right after his talk, but he rejoined us for discussions later at the Category Theory Community Server, here:

https://categorytheory.zulipchat.com/#narrow/stream/229966-ACT.40UCR-seminar/topic/April.2022nd.3A.20Michael.20Shulman

You can view or join the conversation there if you sign in.

You can see his slides here, or download a video of his talk here, or watch the video here:

• April 22, Michael Shulman, Star-autonomous envelopes.

Abstract. Symmetric monoidal categories with duals, a.k.a. compact monoidal categories, have a pleasing string diagram calculus. In particular, any compact monoidal category is closed with [A,B] = (A* ⊗ B), and the transpose of A ⊗ B → C to A → [B,C] is represented by simply bending a string. Unfortunately, a closed symmetric monoidal category cannot even be embedded fully-faithfully into a compact one unless it is traced; and while string diagram calculi for closed monoidal categories have been proposed, they are more complicated, e.g. with “clasps” and “bubbles”. In this talk we obtain a string diagram calculus for closed symmetric monoidal categories that looks almost like the compact case, by fully embedding any such category in a star-autonomous one (via a functor that preserves the closed structure) and using the known string diagram calculus for star-autonomous categories. No knowledge of star-autonomous categories will be assumed.

His talk is based on this paper:

• Michael Shulman, Star-autonomous envelopes.

This subject is especially interesting to me since Mike Stay and I introduced string diagrams for closed monoidal categories in a somewhat ad hoc way in our Rosetta Stone paper—but the resulting diagrams required clasps and bubbles:

This is the string diagram for beta reduction in the cartesian closed category coming from the lambda calculus.

## Bigness (Part 3)

14 April, 2020

Last time I talked about a 200-terabyte proof, completed in 2016, that the journal Nature called the “largest ever”. But Michael Nielsen pointed me to a vastly longer proof from 2012. This one required about 60,000,000 terabytes—that is, 60 exabytes! So, they were unable to store the whole proof.

Let’s pause to contemplate what 60 exabytes amounts to. It’s been estimated that all words ever spoken by human beings, if transcribed into text, would take just 5 exabytes to store. As of 2020, the genome of all the humans on Earth would take about 10 exabytes to store (if one foolishly didn’t do any data compression). On the other hand, in 2011 the total capacity of all hard drives sold by Seagate Technology was 330 exabytes.

So what proof was 60 exabytes long?

A semigroup is a set with a binary associative operation, which we write as multiplication. We say two semigroups $S$ and $T$ are isomorphic if there is a bijection $f\colon S \to T$ such that

$f(ss') = f(s) f(s')$

for all $s,s' \in S$, and anti-isomorphic if there’s a bijection with

$f(ss') = f(s') f(s)$

for all $s,s' \in S$. We say two semigroups are equivalent if they are isomorphic or anti-isomorphic.
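These brute-force definitions translate directly into code. Here is a minimal Python sketch, with my own encoding: `table[a][b]` is the product $ab$ on elements numbered $0,\dots,n-1$ (this is just an illustration, not anything from the papers below):

```python
from itertools import permutations

def is_associative(table):
    """Check (ab)c == a(bc) for all triples; table[a][b] is the product ab."""
    n = len(table)
    return all(table[table[a][b]][c] == table[a][table[b][c]]
               for a in range(n) for b in range(n) for c in range(n))

def equivalent(s, t):
    """Brute-force test: are two semigroups isomorphic or anti-isomorphic?"""
    n = len(s)
    if len(t) != n:
        return False
    for f in permutations(range(n)):
        if all(f[s[a][b]] == t[f[a]][f[b]] for a in range(n) for b in range(n)):
            return True  # f is an isomorphism
        if all(f[s[a][b]] == t[f[b]][f[a]] for a in range(n) for b in range(n)):
            return True  # f is an anti-isomorphism
    return False

# The left-zero semigroup (ab = a) and the right-zero semigroup (ab = b)
# on two elements are anti-isomorphic, hence equivalent.
left_zero  = [[0, 0], [1, 1]]
right_zero = [[0, 1], [0, 1]]
print(equivalent(left_zero, right_zero))  # True
```

Of course, checking all $n!$ permutations is hopeless for large $n$; the real enumerations use far cleverer constraint-programming methods.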

In 2012, Andreas Distler, Chris Jefferson, Tom Kelsey, and Lars Kotthoff showed that up to equivalence, there are

12,418,001,077,381,302,684 ≈ 1.2 · 10¹⁹

semigroups with 10 elements. That’s the fact that took 60 exabytes to establish! They also checked their method against previous calculations of the number of semigroups with 7, 8, or 9 elements.

Upon hearing of the much shorter proof called the “largest ever” by Nature, Chris Jefferson wrote:

Darn, I had no idea one could get into the media with this kind of stuff. I had a much larger “proof”, where we didn’t bother storing all the details, in which we enumerated 718,981,858,383,872 semigroups, towards counting the semigroups of size 10.

Uncompressed, it would have been about 63,000 terabytes just for the semigroups, and about a thousand times that to store the “proof”, which is just the tree of the search.

Of course, it would have compressed extremely well, but also I’m not sure it would have had any value, you could rebuild the search tree much faster than you could read it from a disc, and if anyone re-verified our calculation I would prefer they did it by a slightly different search, which would give us much better guarantees of correctness.

As I mentioned, his team found a total of 12,418,001,077,381,302,684 semigroups of size 10. But they took advantage of a formula, also published in 2012, that counts most of the semigroups of a given size, namely those that are ‘nilpotent of degree 3’. Thus, they only had to find the remaining 718,981,858,383,872 semigroups by a brute-force search, a mere 0.006% of the total:

• Andreas Distler, Chris Jefferson, Tom Kelsey, and Lars Kotthoff, The semigroups of order 10, in Principles and Practice of Constraint Programming, Springer Lecture Notes in Computer Science 7514, Springer, Berlin, 2012, pp. 883–899.

What does it mean for a semigroup to be nilpotent of degree 3? It means that if you multiply any three elements of the semigroup you always get the same answer! Here’s the paper that showed how to count those:

• Andreas Distler and James D. Mitchell, The number of nilpotent semigroups of degree 3, Electron. J. Combin. 19 (2012) 51.
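The degree-3 condition is easy to check directly from a multiplication table. Here's a small sketch, using the same table encoding as before; the example semigroup is my own, not from the paper:

```python
def nilpotent_degree_3(table):
    """True if every product of three elements is the same,
    with table[a][b] encoding the product ab."""
    n = len(table)
    triples = {table[table[a][b]][c]
               for a in range(n) for b in range(n) for c in range(n)}
    return len(triples) == 1

# A 3-element example: ab = 0 for every pair except 1·1 = 2.  All triple
# products collapse to 0, so this semigroup is nilpotent of degree 3,
# though not of degree 2 (since 1·1 = 2 is nonzero).
table = [[0, 0, 0],
         [0, 2, 0],
         [0, 0, 0]]
print(nilpotent_degree_3(table))  # True
```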

It’s a piece of semigroup folklore that as the size approaches infinity, the fraction of semigroups of that size that are nilpotent of degree 3 approaches 1.

Puzzle. Can you prove this?

Puzzle. Assuming the folklore is true, what’s the rough growth rate of the number of semigroups with $n$ elements, counted up to equivalence? The formulas given by Distler and Mitchell are very complicated, but they may have simple asymptotics.

Here they count the number of semigroups that are nilpotent of degree 3 with various numbers of elements, up to equivalence:

## Open Games: the Long Road to Practical Applications

13 April, 2020

In the third talk of the ACT@UCR seminar, Jules Hedges spoke about open games!

He gave his talk on Wednesday April 15th. Afterwards we discussed it at the Category Theory Community Server, here:

https://categorytheory.zulipchat.com/

You can view or join the conversation there if you sign in.

You can see his slides here, or download a video here, or watch the video here:

• April 15, Jules Hedges: Open games: the long road to practical applications.

Abstract. I will talk about open games, and the closely related concepts of lenses/optics and open learners. My goal is to report on the successes and failures of an ongoing effort to try to realise the often-claimed benefits of categories and compositionality in actual practice. I will introduce what little theory is needed along the way. Here are some things I plan to talk about:

— Lenses as an abstraction of the chain rule

— Comb diagrams

— Surprising applications of open games: Bayesian inference, value function iteration

— The state of tool support

— Open games in their natural habitat: microeconomics

— Sociological aspects of working with economics

## Bigness (Part 2)

13 April, 2020

In the 1980s, the mathematician Ronald Graham asked if it’s possible to color each positive integer either red or blue, so that no triple of integers $a,b,c$ obeying Pythagoras’ famous equation:

$a^2 + b^2 = c^2$

all have the same color. He offered a prize of $100.

It was solved in 2016! The answer is no. You can do it for numbers up to 7824, and a solution is shown here:

(White squares can go either way.)

But you can’t do it for numbers up to 7825. To prove this, you could try all the ways of coloring these numbers and show that nothing works. Unfortunately that would require trying

3 628 407 622 680 653 855 043 364 707 128 616 108 257 615 873 380 491 654 672 530 751 098 578 199 115 261 452 571 373 352 277 580 182 512 704 196 704 700 964 418 214 007 274 963 650 268 320 833 348 358 055 727 804 748 748 967 798 143 944 388 089 113 386 055 677 702 185 975 201 206 538 492 976 737 189 116 792 750 750 283 863 541 981 894 609 646 155 018 176 099 812 920 819 928 564 304 241 881 419 294 737 371 051 103 347 331 571 936 595 489 437 811 657 956 513 586 177 418 898 046 973 204 724 260 409 472 142 274 035 658 308 994 441 030 207 341 876 595 402 406 132 471 499 889 421 272 469 466 743 202 089 120 267 254 720 539 682 163 304 267 299 158 378 822 985 523 936 240 090 542 261 895 398 063 218 866 065 556 920 106 107 895 261 677 168 544 299 103 259 221 237 129 781 775 846 127 529 160 382 322 984 799 874 720 389 723 262 131 960 763 480 055 015 082 441 821 085 319 372 482 391 253 730 679 304 024 117 656 777 104 250 811 316 994 036 885 016 048 251 200 639 797 871 184 847 323 365 327 890 924 193 402 500 160 273 667 451 747 479 728 733 677 070 215 164 678 820 411 258 921 014 893 185 210 250 670 250 411 512 184 115 164 962 089 724 089 514 186 480 233 860 912 060 039 568 930 065 326 456 428 286 693 446 250 498 886 166 303 662 106 974 996 363 841 314 102 740 092 468 317 856 149 533 746 611 128 406 657 663 556 901 416 145 644 927 496 655 933 158 468 143 482 484 006 372 447 906 612 292 829 541 260 496 970 290 197 465 492 579 693 769 880 105 128 657 628 937 735 039 288 299 048 235 836 690 797 324 513 502 829 134 531 163 352 342 497 313 541 253 617 660 116 325 236 428 177 219 201 276 485 618 928 152 536 082 354 773 892 775 152 956 930 865 700 141 446 169 861 011 718 781 238 307 958 494 122 828 500 438 409 758 341 331 326 359 243 206 743 136 842 911 727 359 310 997 123 441 791 745 020 539 221 575 643 687 646 417 117 456 946 996 365 628 976 457 655 208 423 130 822 936 961 822 716 117 367 694 165 267 852 307 626 092 080 279 836 122 376 918 659 101 107 919 099 514 855 113 769 846 
184 593 342 248 535 927 407 152 514 690 465 246 338 232 121 308 958 440 135 194 441 048 499 639 516 303 692 332 532 864 631 075 547 542 841 539 848 320 583 307 785 982 596 093 517 564 724 398 774 449 380 877 817 714 717 298 596 139 689 573 570 820 356 836 562 548 742 103 826 628 952 649 445 195 215 299 968 571 218 175 989 131 452 226 726 280 771 962 970 811 426 993 797 429 280 745 007 389 078 784 134 703 325 573 686 508 850 839 302 112 856 558 329 106 490 855 990 906 295 808 952 377 118 908 425 653 871 786 066 073 831 252 442 345 238 678 271 662 351 535 236 004 206 289 778 489 301 259 384 752 840 495 042 455 478 916 057 156 112 873 606 371 350 264 102 687 648 074 992 121 706 972 612 854 704 154 657 041 404 145 923 642 777 084 367 960 280 878 796 437 947 008 894 044 010 821 287 362 106 232 574 741 311 032 906 880 293 520 619 953 280 544 651 789 897 413 312 253 724 012 410 831 696 803 510 617 000 147 747 294 278 502 175 823 823 024 255 652 077 422 574 922 776 413 427 073 317 197 412 284 579 070 292 042 084 295 513 948 442 461 828 389 757 279 712 121 164 692 705 105 851 647 684 562 196 098 398 773 163 469 604 125 793 092 370 432

possibilities. But in 2016, three mathematicians cleverly figured out how to eliminate most of the options. That left fewer than a trillion to check!

So they spent 2 days on a supercomputer, running 800 processors in parallel, and checked all the options. None worked. They verified their solution on another computer.
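For much smaller bounds, a naive backtracking search is enough to exhibit a valid coloring. Here's a toy Python sketch, just to make the problem concrete; it has nothing to do with the cube-and-conquer method the authors actually used:

```python
def pythagorean_triples(n):
    """All (a, b, c) with a < b, c <= n and a^2 + b^2 = c^2."""
    squares = {i * i: i for i in range(1, n + 1)}
    return [(a, b, squares[a * a + b * b])
            for a in range(1, n + 1)
            for b in range(a + 1, n + 1)
            if a * a + b * b in squares]

def two_color(n):
    """Backtracking search for a red/blue coloring of the numbers that
    appear in some Pythagorean triple with c <= n, such that no triple
    is monochromatic.  Returns a dict, or None if no coloring exists."""
    triples = pythagorean_triples(n)
    relevant = sorted({x for t in triples for x in t})
    touching = {x: [t for t in triples if x in t] for x in relevant}
    color = {}

    def monochromatic(t):
        cols = [color.get(v) for v in t]
        return None not in cols and len(set(cols)) == 1

    def search(i):
        if i == len(relevant):
            return True
        x = relevant[i]
        for c in ("red", "blue"):
            color[x] = c
            if not any(monochromatic(t) for t in touching[x]) and search(i + 1):
                return True
        del color[x]
        return False

    return dict(color) if search(0) else None

print(two_color(30) is not None)  # True: colorings exist below 7825
```

Run this with $n = 7825$ and, per the theorem, `search` would eventually return `None`, but only after an astronomically long wait; that is exactly the search-space explosion the cube-and-conquer method was designed to tame.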

This is one of the world’s biggest proofs: it’s 200 terabytes long! That’s about equal to all the digitized text held by the US Library of Congress. There’s also a 68-gigabyte digital signature—sort of a proof that a proof exists—if you want to skim it.

It’s interesting that these 200 terabytes were used to solve a yes-or-no question, whose answer takes a single bit to state: no.

I’m not sure breaking the world’s record for the longest proof is something to be proud of. Mathematicians prize short, insightful proofs. I bet a shorter proof of this result will eventually be found. Still, it’s fun that we can do such things. Here’s a story about the proof:

• Evelyn Lamb, Two-hundred-terabyte maths proof is largest ever, Nature, May 26, 2016.

and here’s the actual paper:

• Marijn J. H. Heule, Oliver Kullmann and Victor W. Marek, Solving and verifying the Boolean Pythagorean triples problem via cube-and-conquer.

The “cube-and-conquer” paradigm is a “hybrid SAT method for hard problems, employing both look-ahead and CDCL solvers”. The actual benefit of this huge proof is developing such methods for solving big problems! Roberto Bayardo explained:

CDCL is “conflict-directed clause learning”: when you hit a dead end in backtrack search, this is the process of recording a (hopefully small) clause that prevents you from making the same mistake again… a type of memoization, essentially.

Look-ahead: in backtrack search, you repeat the process of picking an unassigned variable and then picking an assignment for that variable until you reach a “dead end” (upon deriving a contradiction). Look-ahead involves doing some amount of processing on the remaining formula after each assignment in order to simplify it. This includes identifying variables for which one of its assignments can be quickly eliminated. “Unit propagation” is a type of look-ahead, though I suspect in this case they mean something quite a bit more sophisticated.
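To make unit propagation concrete, here's a minimal Python sketch using the usual SAT-solver convention: a clause is a list of nonzero integers, with $-v$ meaning variable $v$ negated. This is only an illustration, not the solver from the paper:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly find clauses whose literals are all false except one
    unassigned literal, and force that literal; report a conflict if a
    clause becomes entirely false."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                value = assignment.get(abs(lit))
                if value is None:
                    unassigned.append(lit)
                elif (lit > 0) == value:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return assignment, False        # clause falsified: conflict
            if len(unassigned) == 1:
                forced = unassigned[0]
                assignment[abs(forced)] = forced > 0  # forced assignment
                changed = True
    return assignment, True

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x3): deciding x1 = true forces x3 = true,
# which falsifies the clause (¬x3), so propagation detects a conflict.
print(unit_propagate([[1, 2], [-1, 3], [-3]], {1: True}))
```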

In fact, despite Evelyn Lamb’s title, this proof was not the longest ever, even in 2016. But more about that next time!

(In case you’re wondering, I am starting to collect and polish a bunch of articles I have written about large numbers and other big things. You may have seen some of them before.)

## Bigness (Part 1)

13 April, 2020

The French mathematicians who wrote under the pseudonym Nicolas Bourbaki did a lot of good things—but not so much in the foundations of mathematics. In this paper, Adrian Mathias showed that their formalism was incredibly inefficient:

• Adrian R. D. Mathias, A term of length 4,523,659,424,929, Synthese 133 (2002), 75–86.

He proved that the definition of the number 1 in Bourbaki’s 1954 text on set theory requires 4,523,659,424,929 symbols and also 1,179,618,517,981 links connecting symbols, “without which, of course, the formula would be unreadable.”

One reason is that their definition of the number 1 is complicated in the first place. But worse, they don’t take $\exists$, “there exists”, as primitive. Instead they define it.

They use a version of Hilbert’s “choice operator”. For any formula $\Phi(x)$ they define a quantity $\tau_x(\Phi)$ that’s a choice of $x$ making $\Phi(x)$ true if such a choice exists, and just anything otherwise. Then they define $\exists x \Phi(x)$ to mean $\Phi$ holds for this choice. That is, $\Phi(\tau_x(\Phi))$.

This builds the axiom of choice into the definition of $\exists$. That’s a terrible idea (which goes back to Hilbert).

On top of that, their implementation of this idea leads to huge formulas. First, in $\tau_x(\Phi)$ they replace each appearance of $x$ in $\Phi$ with a box, $\square$, and draw “links” from each box to the $x$ subscript in $\tau_x(\Phi)$. This creates an ungainly mess. But more importantly, in the expression $\Phi(\tau_x(\Phi))$, each original appearance of $x$ in $\Phi(x)$ has been replaced by $\tau_x(\Phi)$. Thus, if the variable $x$ appears $n$ times in $\Phi(x)$ and the variable $y$ appears $m$ times, then the expression $\Phi(\tau_x(\Phi))$ contains the variable $y$ a total of $m + nm$ times.

This creates a combinatorial explosion when we have nested quantifiers. And since they define $\forall$ (“for all”) in terms of $\exists$, both kinds of quantifiers have this explosive effect. Their definition of “1” is rather long, with nested quantifiers, so it becomes huge when you write it out in detail.
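To see how fast this compounds, here's a toy calculation of the recurrence: if each eliminated quantifier's variable occurs just twice, every remaining variable's occurrence count triples at each step.

```python
def occurrences_after_exists(n_x, m_y):
    """If x occurs n_x times and y occurs m_y times in Phi(x), then in
    Phi(tau_x(Phi)) the variable y occurs m_y + n_x * m_y times."""
    return m_y + n_x * m_y

# Ten nested quantifiers, each over a variable occurring twice, multiply
# a remaining variable's occurrence count by 3 at every elimination:
count = 1
for _ in range(10):
    count = occurrences_after_exists(2, count)
print(count)  # 3**10 = 59049
```

So even modest nesting depth yields exponential growth, which is why Bourbaki's fully expanded definition of “1” is so absurdly long.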

And in the 1970 edition of their book, it got much longer! Here they defined ordered pairs in terms of sets rather than taking them as primitive, which is basically a good thing. But this makes their definition of the number 1 much longer. Robert Solovay calculated that this new definition requires

2409875496393137472149767527877436912979508338752092897
≈ 2.4 · 10⁵⁴

symbols and

871880233733949069946182804910912227472430953034182177
≈ 8.7 · 10⁵³ links.

At 80 symbols per line, 50 lines per page, 1,000 pages per book, the shorter version would occupy more than a million books, and the longer, 6 · 10⁴⁷ books.

If each book weighed a kilogram, these books would weigh about 3 · 10¹⁷ times the mass of the Sun. Since our galaxy weighs about 1.5 trillion times as much as the Sun, these books would weigh about 200,000 times as much as the Milky Way.
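These estimates are straightforward to reproduce. A quick sanity check in Python, assuming a 1-kilogram book and a solar mass of roughly 2 · 10³⁰ kg:

```python
symbols_1970 = 2.4e54               # symbols in the 1970 definition of "1"
symbols_per_book = 80 * 50 * 1000   # 80 per line, 50 lines/page, 1000 pages
books = symbols_1970 / symbols_per_book

sun_kg = 2e30                 # rough mass of the Sun in kilograms
suns = books * 1.0 / sun_kg   # total mass in solar masses, at 1 kg per book
milky_ways = suns / 1.5e12    # our galaxy is ~1.5 trillion solar masses

print(f"{books:.0e} books, {suns:.0e} Suns, {milky_ways:.0e} Milky Ways")
# 6e+47 books, 3e+17 Suns, 2e+05 Milky Ways
```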

Puzzle. Using Bourbaki’s later formalism, if you wrote a proof that 1 + 1 = 2 on paper, could it fit inside the observable Universe?