I meant to ask: What syntax do you use to get hyperlink text?

Half the time I use HTML:

`<a href = "http://math.ucr.edu/home/baez/">my home page</a>`

and half the time I use Markdown, in the following manner:

`[my home page](http://math.ucr.edu/home/baez/)`

But Markdown hasn’t really been standardized, and several subtly different versions are floating around, so saying “You can use Markdown” isn’t very clear. Is there a guide to the specific version that you use on this blog?

You should be able to find one starting from here, since I’m using some sort of default version of Markdown provided by WordPress for its blogs. But I don’t know what it is. I don’t do anything fancy, so everything I try to do with Markdown seems to work on this blog, the Azimuth Forum (which runs on forum software called “Vanilla”) and also the *n*-Category Café (which runs on software called “Movable Type”, together with a lot of homebrewed extra features by Jacques Distler).

You’re welcome to try stuff and see if it works; I don’t mind spending 5 minutes a day fixing formatting problems and discussing them!

Two addenda:

My very last remark was about not preserving all resources.

I’m pretty sure that’s nonsense, in that a resource-aware type system would do exactly this: count how often each *variable* is used. The combinators aren’t variables, so they can safely be forgotten.

And secondly, I also wanted to clarify what I meant by these constraints “preserving composition”:

Take any non-reduced (but reducible, i.e. it shouldn’t be an infinite loop) small subsection Q of any given (reducible) program P and look at it separately. It will be a valid program on its own.

Now, reduce Q, turning it into Q_r.

Put Q_r back into the hole you cut into P, keeping all the wires in the same order. (Globular will also enforce the wiring order), obtaining a new program P’.

Reducing P will yield P_r.

Reducing P’ will yield P’_r.

My claim is that (once again, being careful with infinite loops and such) P_r = P’_r.

At least in behavior. (I believe that means they are eta-equivalent?)

Different execution order *may* yield different exact layouts.
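To make the claim concrete on plain SKI terms (ignoring the string-diagram encoding entirely): here’s a minimal sketch in Python, with application written as nested 2-tuples – my own throwaway representation, not the Globular construction. Reducing a subterm Q first and plugging the result back yields the same normal form as reducing the whole program directly.

```python
def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                   # I x -> x
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':       # K x y -> x
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y, z = f[0][1], f[1], a                 # S x y z -> x z (y z)
            return ((x, z), (y, z)), True
        f2, changed = step(f)                          # otherwise recurse
        if changed:
            return (f2, a), True
        a2, changed = step(a)
        return ((f, a2), True) if changed else (t, False)
    return t, False

def normalize(t, limit=10_000):
    """Reduce until no redex remains (guarding against infinite loops)."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form within limit")

# Q = K x y reduces to x; plugging Q_r back into P = I Q gives the same
# normal form as reducing P in one go.
Q = (('K', 'x'), 'y')
P = ('I', Q)
assert normalize(P) == normalize(('I', normalize(Q))) == 'x'
```

This only checks the claim on examples, of course; the general statement is a confluence-style property.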

Some of that ought to be fixable as well, but I’m not sure all is:

I did not, as yet, add certain evidently true restructuring rules to Copy: since it simply copies an input, arbitrary trees of Copies hanging off the same input will give the exact same outcomes. Only the number of Copies is important.

So Commutativity and Associativity will hold for them.

Adding these would be quite trivial. Maybe I’ll make another update later just to get those relations as well.

I’m not sure whether adding just those laws would be enough to strengthen the above claim into saying the programs are *exactly* the same (in that shuffling around the way Copies are branched may transform one into the other).

That might well be the case though.

At any rate, my real point is that, if you keep this information around, you can plug in input-side the same stuff before and after reduction, and ultimately the behavior will be unchanged.

By deleting this information (especially the Eats which normally just disappear entirely), this could no longer be guaranteed. A reduced program would have a different number of inputs from an unreduced one, so you’d need to plug in different programs to begin with. Composition of programs would no longer necessarily even type-check, and if it did, behaviors wouldn’t have to be the same either, because, effectively, the wrong wires end up with the wrong inputs.

All this concerns vertical composition. I think there are no problems at all with horizontal composition which simply corresponds to tensoring together two programs to obtain a new program with the combined inputs and outputs side by side.

I know some versions of Markdown have a neat and tidy syntax for that. I could, of course, also use HTML to accomplish it.

But Markdown hasn’t really been standardized, and several subtly different versions are floating around, so saying “You can use Markdown” isn’t very clear. Is there a guide to the specific version that you use on this blog?

Whether Globular has the expressiveness to generate everything you want from these generators, and derive all the rules you want from a finite set of relations, is an open question.

I’d love for somebody to tackle that question, because as far as I can tell, the answer would necessarily have to clarify these combinator calculi in some subtle way:

I faithfully built all the objects and all the rules. I don’t see what could possibly be missing. If anything *is* missing, it ought to be something really subtle.

At best, the whole thing may be underconstrained because certain actions the current construction allows may lead to invalid states. – But most of that is the typing problem. The actual untyped version of these calculi would allow those exact same problematic states.

The one exception still is that instance of Copy in the Communication rule, as I already established.

But in the SKI calculus? I’m pretty sure there’s literally nothing. – Essentially by construction. The pieces come together to generate the exact same behavior as the actual calculus would. No deviation is possible. The biggest difference is that I have to do several extra steps (rearranging different combinators) to get everything in position to be able to apply stuff in the first place. But as long as we’re dealing with finite programs – no infinite rearrangement moves are required – this ought to work out.

I guess *that* could ultimately be the limitation? But infinitely large programs (in terms of symbols, not in terms of rewrites until halting, i.e. infinite loops) seem like a relatively minor edge case not to have.

Or if there is no limitation of any kind, that ought to have some rather fundamental implications within category theory, right?

I’m not sure how to prove any of this though. All I can definitely do is: Show me an arbitrary valid string of SKI combinators. I can implement it in Globular and arrive at the same final form (assuming the string halts) as any other correct implementation would. (And if it doesn’t halt, I’ll run into the exact same loop. Also, where execution order matters, it’ll matter for me as well.)
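For what it’s worth, the “arbitrary valid string of SKI combinators” part is mechanical. A small sketch, assuming fully parenthesized binary application as the input format (an assumption about notation on my part, nothing more):

```python
def parse(s):
    """Parse terms like '((SK)K)': '(' t u ')' is application of t to u,
    and any other single character is a combinator or variable leaf."""
    pos = 0

    def term():
        nonlocal pos
        if s[pos] == '(':
            pos += 1          # consume '('
            f = term()        # function part
            a = term()        # argument part
            pos += 1          # consume ')'
            return (f, a)
        leaf = s[pos]
        pos += 1
        return leaf

    return term()

assert parse('(((SK)K)a)') == ((('S', 'K'), 'K'), 'a')
```

Each leaf then becomes a combinator generator (or an open input wire), and each application node a Branch.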

I’m not quite sure what would have to be shown exactly. What conditions would I have to check to show that this Globular implementation is fully faithful? If you have more insight into that, do you think you could give me a rough outline? Maybe I could actually do at least parts of such a proof.

I should just emphasize, though, that “expressiveness” is not the same as “computational power”.

I know. I didn’t mean to suggest that. My point just was: if my goal is to compute arbitrary things, without any constraints on how easy or practical or whatever else it is, the SKI combinators alone would already be sufficient.

That was me saying I’d really like some simple, small, proper examples of using these combinators to accomplish something that would be really hard to do with just SKI combinators, just so I could get a feeling for how these rules actually work in practice.

Like, what’s the “Hello World” of rho-combinators?

You will be tempted to use finite products whenever you’re trying to duplicate or delete data: that’s what they’re good for. And that’s where you’re likely to run into a wall.

Oooh, ok, that’s actually helpful. This may well be it.

So for the *most* part, I do not have problems with duplication or deletion. That’s what Copy and Eat are for, after all.

I *do* run into what might be considered a problem at the very end though:

Globular always (sensibly) demands the same number of inputs. To accomplish that, I cannot copy raw inputs:

I can’t tell Globular to make this:

https://imgur.com/KDfWrLo

be the same as this:

https://imgur.com/1w7qbln

Similarly, with Eat, I can’t tell it to make this:

https://imgur.com/2koCBQB

be the same as this:

https://imgur.com/5gse58m

That being said, to me that’s almost a feature: It preserves composability.

If you have some arbitrary program P where a bunch of Eats and Copies are left at the very bottom, at the inputs, and I then take some *other* arbitrary program P’, the outputs of which become inputs of P, it’d still work. Globular can basically evaluate a program all the way to the point where it tells you how often each input is used. I think that’s actually useful to know?

Inputs that end up being just an Eat are ones that are used 0 times. – In the usual SKI combinator calculus, you would just drop these altogether.

Inputs that end up being a bunch of Copies are ones that are used n+1 times with n instances of Copy directly at that input.

In the usual SKI combinator calculus, each input would have to be written as a variable labelled “x” or something, and so copying it is as simple as writing it down multiple times in various spots.
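The bookkeeping here is just an occurrence count: 0 uses of an input corresponds to an Eat, and n+1 uses to n Copies at that input. A toy sketch, with leaves as variable names and application as a 2-tuple (again my own ad-hoc representation):

```python
def uses(term, var):
    """Count occurrences of the variable `var` in a term.
    Leaves are strings; application is a 2-tuple (f, a)."""
    if term == var:
        return 1
    if isinstance(term, tuple):
        f, a = term
        return uses(f, var) + uses(a, var)
    return 0

t = (('x', ('x', 'y')), 'z')     # informally: x (x y) z
assert uses(t, 'x') == 2         # 2 uses: one Copy at that input
assert uses(t, 'w') == 0         # 0 uses: that input ends in an Eat
```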

A fully reduced program in my Globular implementation might end up looking something like this:

https://imgur.com/5LoYmw5

(Usually you’d also have some combinators, but this is a possible outcome.)

which is equivalent to something like

(((((x((((x(((x((x(x)))))))))))))))

whereas normal SKI combinator calculus would be like this:

https://imgur.com/10pMxzC

(and all the inputs have the same label “x”)

So if the limitation of this is that I can’t just forget about resources (similar, given my current understanding, to what some linear type system would actually be designed to enforce), then that certainly is a limitation, but, for many use cases, a useful one.

(Note, I do forget *some* resources though: In particular, once the diagram is reduced, or equivalently, the program is run, there is no way to tell how many combinators I started with to get to this behavior. I effectively only keep close track of unknown/variable inputs. Because Globular requires me to do so.)

So in this example, would the generators be the combinators, and the relations their various rules?

That’s the idea. Whether Globular has the expressiveness to generate everything you want from these generators, and derive all the rules you want from a finite set of relations, is an open question. It can only generate new things from your generators using the operations of a semistrict n-category, and similarly it can only derive new rules from the relations you state using the rules of a semistrict n-category.

There are many different kinds of logical system, with varying amounts of expressiveness. Globular is intentionally close to the bottom of the expressiveness hierarchy except in one respect, namely it allows a higher “n” (as in n-category) than almost any other system. So, I’d be surprised if the expressiveness of an ordinary computer programming language, even a simplified one like the lambda-calculus or SKI combinator calculus, can be captured by Globular. But who knows? Maybe I’m confused.

Like, the SKI combinators themselves are already Turing complete, right?

Right. I should just emphasize, though, that “expressiveness” is not the same as “computational power”. You can click the link to learn more about expressiveness, though probably not the really good math stuff. Suffice it to say that expressiveness is one reason most people prefer C++ to machine language, even though both are Turing complete.

Jamie wrote:

Globular lets you work with any finitely-presented algebraic signature up to dimension 4, as long as the terms are purely compositional.

I’m not sure I fully understand every implication of this, but it seems to me that a large set of combinator calculi fit this description.

Don’t jump to that conclusion: the phrases “algebraic signature” and “purely compositional” are crucial here.

Unrelatedly, I stumbled over this: https://ncatlab.org/nlab/show/Lawvere+theory Which may give a more general answer to Jamie? Lawvere theories are categories with finite products.

Right—and n-categories don’t have finite products unless you demand that they do, and Globular doesn’t allow you to demand that, so Globular probably can’t do all sorts of stuff that Lawvere theories (or graph-enriched Lawvere theories) can do.

Finite products are a key aspect of “expressiveness” that Jamie is deliberately *not* trying to include. You will be tempted to use finite products whenever you’re trying to duplicate or delete data: that’s what they’re good for. And that’s where you’re likely to run into a wall.

I would have to really carefully examine what you’re doing, to see if you’re somehow getting around this wall. Unfortunately I don’t have time.

As Jamie noted, you can use Globular to handle finitely presented PROs, and I believe you can also use it to handle finitely presented PROPs. PROPs are in the same ball-park as Lawvere theories, but they’re less expressive. For example there’s a Lawvere theory for groups, but not a PROP for groups, because groups have an axiom

namely g·g⁻¹ = 1, in which the letter g is duplicated on the left side and deleted on the right side.

This is a long, fun story but I’ve told it too many times in my life to tell it again now.

A random other issue:

What would the j be here? Simply the cell level? Or perhaps one less than the cell level?

The same as the cell level. A “j-morphism” is a “j-cell” is a “cell of dimension j”, where j = 0, 1, 2, 3, …

As far as I can tell, the paper *does* require products, but it only explicitly shows them in one place (minimal paraphrasing):

is shorthand for

Which clearly features a bunch of products.

But I believe all that is, is a different way to write down the following diagram:

Read the picture from bottom to top and, in parallel, the above equation from left to right:

At first, I have two inputs T×T (i.e. composed side by side).

Then, left of that, I introduce a combinator, making it 1×T×T

It’s the K combinator, which has a single output wire, so now it’s T×T×T – three wires side by side.

Next up, I hit a branch of type T×T->T (or, curried, T->T->T), combining the two wires on the left, so as I cross that, I end up with T×T again.

Finally, I hit another branch, combining the remaining wires into one, giving me just T.

Here I did an overlay, trying to line up how the equation and the diagram correspond. Due to the way Globular lays out things, it got a little cramped in the bottom half.

https://imgur.com/0A9hfbw

So if I’m right, I essentially emulated products, to the extent they are needed here, by horizontal composition.

(Or it would be horizontal composition, if I hadn’t skipped two cell levels to get better-behaved wires. – Throughout the paper, they write that it only requires 2-categories to model all this. And if I manually added all the wire interaction rules, like interchange laws and such, which Globular gives me for 4-categories, already at the 2-cell level, it’d be true for my implementation as well.)

Horizontal composition is the 2-category version of the composition already available at the 1-category level, right? If I’m not mistaken, it’s the vertical composition that’s new. Or did I get those turned around?

Regardless, meanwhile, vertical composition is used for introducing all the relevant structures which, in the above equation, show up as various morphisms.

In the overlay version of the image, horizontal composition is blue, vertical composition is red.

The entire rest of the paper doesn’t reference products at all, as far as I could tell. It’s all hidden behind the various strings of combinators and parentheses as, apparently, a form of syntactic sugar.

So in this example, would the generators be the combinators, and the relations their various rules?

As far as I can tell, I have:

An object env

(implicitly) the identity morphisms

1_2: Id(Id(env))

1: Id(Id(Id(env)))

A morphism of wires T: 1_2 -> 1_2

a bunch of morphisms of type 1->T, which are my combinators

three extra morphisms

Eat: T -> 1

Copy: T -> T×T

Branch: T×T -> T

from which all valid (and as yet some invalid) programs can be built (I think)

and, defined with all these, morphisms one level up, that give me the relations between them, from which all other valid equations can be derived via appropriate usage of composition and identity.
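For illustration, here’s the same signature written as an arity table, plus a check that stacking layers of generators type-checks. A layer is a horizontal tensor of generators, with `wire` standing for the identity on T; the names and the layer format are invented for this sketch, not Globular’s actual representation.

```python
SIGNATURE = {              # generator: (number of input wires, number of output wires)
    'S': (0, 1), 'K': (0, 1), 'I': (0, 1),   # combinators: morphisms 1 -> T
    'Eat':    (1, 0),                        # T -> 1
    'Copy':   (1, 2),                        # T -> T x T
    'Branch': (2, 1),                        # T x T -> T
    'wire':   (1, 1),                        # identity on T
}

def wires(layer):
    """Total (inputs, outputs) of a horizontal tensor of generators."""
    return (sum(SIGNATURE[g][0] for g in layer),
            sum(SIGNATURE[g][1] for g in layer))

def composable(layers):
    """Vertical composition type-checks iff each layer's outputs
    match the next layer's inputs."""
    return all(wires(a)[1] == wires(b)[0] for a, b in zip(layers, layers[1:]))

# K tensored with a wire gives 2 outputs, which Branch then merges into 1.
assert composable([['K', 'wire'], ['Branch']])
# Copy only yields 2 wires, but two Branches would need 4 inputs.
assert not composable([['Copy'], ['Branch', 'Branch']])
```

This captures only the wire-counting part of validity, of course – it can’t see the relations.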

What would the j be here? Simply the cell level? Or perhaps one less than the cell level? (I.e. do you count what dimension the morphism lives in, or objects of what dimension the morphism connects?)

Or is that something entirely else?

Because I wasn’t sure, I dropped the j in the above.

Quick googling found me this as first/best match: https://ncatlab.org/nlab/show/J-homomorphism but I don’t know if that’s related.

Unrelatedly, I stumbled over this:

https://ncatlab.org/nlab/show/Lawvere+theory

Which may give a more general answer to Jamie?

Lawvere theories are categories with finite products.

The rough idea is to define an algebraic theory as a category with finite products and possessing a “generic algebra” (e.g., a generic group), and then define a model of that theory (e.g., a group) as a product-preserving functor out of that category.

It might be that the group in question must itself be finitely presentable for it to work? But either way, that at least *sounds* like it will mostly satisfy Globular’s constraints.

That being said, we’re dealing here with Graph-enriched theories, about which the paper by Michael Stay and L.G. Meredith ( https://arxiv.org/abs/1704.03080 ) states:

Gph is the category of graphs and graph homomorphisms. Gph has finite products.

and

A Gph-enriched category has finite products if the underlying category does.

Meaning, we’re still dealing with finite products.

We’re also dealing with multisorted Lawvere theories, and that’s a tougher nut to crack. To get the full, correctly typed (/sorted) theory, we seemingly need infinitely many sorts.

My upcoming approach is to try to make that work, too, by exploiting the fact that we are only talking about two generator sorts here, W and N, so perhaps I can finitely represent all the required sorts. (The biggest challenge here will be the S combinator.)

Interestingly, about the sorts, the paper also states:

A multisorted Gph-enriched Lawvere theory, hereafter Gph-theory, is a Gph-enriched category with finite products Th equipped with a finite set S of sorts and a Gph-enriched functor θ : FinSet^op/S → Th that preserves products strictly.

The set of sorts is also finite. Presumably, things like N->W aren’t counted as sorts in that set, then.

Finally, the very first sentence of section 4 states:

Lawvere theories and their generalizations are categories with infinitely many objects and morphisms, but most theories of interest are finitely generated.

Which is vaguely related to my statement above that most combinator calculi should admit a similar embedding into Globular, because the most interesting ones are finitely generated.

Below that, it states:

A presentation of the underlying multisorted Lawvere theory of a finitely-generated Gph-theory is a signature for a term calculus, consisting of a set of sorts, a set of term constructors, and a set of equations, while the edges in the hom graphs of the theory encode the reduction relation.

And after that, it presents the SKI combinator calculus as a Gph-theory, doing, as far as I can tell, pretty much 1:1 what I did.

(My single wire is the mentioned set of sorts, the various combinators and the three special morphisms are the set of term constructors, and after that I have the set of equations and reduction relations)

So while I absolutely might be missing something, I think that should pretty much answer Jamie’s questions about whether this work can be represented in Globular.

At a guess that’s because combinator calculi very often are finitely representable (everything can be done with just the finite set of combinators),

Jamie said “finitely presentable”, which means something precise.

It’s a bit tiring to say *exactly* what a finitely presented *n*-category is, but it roughly means:

1) There are finitely many **generators** – *j*-morphisms for various *j*, from which all others can be generated using the operations that every *n*-category has.

2) There are finitely many **relations** – equations between *j*-morphisms, from which other equations can be deduced.

In general, a **presentation** is a collection of generators for some algebraic structure, together with a collection of relations. We say an algebraic structure is “finitely presentable” if it has a presentation with finitely many generators and finitely many relations.

The classic example is a finitely presented group, and whenever people talk about “finitely presented” algebraic structures this is the paradigm they have in mind. In a group, the ways you get to “generate” new elements from old are multiplying two elements and taking the inverse of an element. In other structures, you get to do other things, which must be carefully specified.
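For instance, the cyclic group of order 3 has the presentation ⟨a | a³ = 1⟩: one generator and one relation. A toy sketch of computing normal forms in this group, writing ‘A’ for the inverse of a (an ad-hoc encoding chosen just for illustration):

```python
def normal_form(word, n=3):
    """Normal form of a word in the group <a | a^n = 1>, written over
    the alphabet 'a' (the generator) and 'A' (its inverse): every word
    reduces to a power a^k with 0 <= k < n."""
    k = (word.count('a') - word.count('A')) % n
    return 'a' * k

assert normal_form('aaaa') == 'a'    # a^4 = a, since a^3 = 1
assert normal_form('aA') == ''       # a a^-1 = identity (empty word)
```

The relation a³ = 1 is what lets every word be rewritten into one of the finitely many normal forms; that’s the sense in which finitely many relations generate all the others.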

At the moment general feature development of Globular is frozen, while we develop the next iteration, which will allow computation in infinite-dimensions. (I’m working on this with Christoph Dorn, Christopher Douglas, and David Reutter.)

That’s more or less what I figured, and I’m very much looking forward to it! Thanks for your work! Even in its current state, it’s a great help to me, for understanding things simply by building them. And it’s quite incredible, what’s already possible.

As I said, many of the features I’d like to see have actually already been suggested by others. But I’ll be sure to add new suggestions as I spot them. I’ll have to read through the tracker again first, though. (Or is there another place to put suggestions besides Github? – That page apparently hasn’t been updated in a while.)

Globular lets you work with any finitely-presented algebraic signature up to dimension 4, as long as the terms are purely compositional.

I’m not sure I fully understand every implication of this, but it seems to me that a large set of combinator calculi fit this description.

Generally, combinators appear to map quite nicely to Globular as string diagrams similar to how I defined them. All kinds of combinator calculi are possible, actually, as long as the rules don’t become too crazy. (For instance, I once tried to make SF combinators work, but the F-combinator is *really* challenging, due to its complicated, case-sensitive application rules. I think it might be possible, and I did get part of the way there, but I’m missing edge cases.)

At a guess, that’s because combinator calculi very often are finitely representable (everything can be done with just the finite set of combinators) and compositional (any – if required, well-typed – string of combinators and balanced parentheses will be a valid program).

I believe the three helper objects I need to make it work – Branch, Copy, and Eat – are rarely avoidable. Almost any combinator calculus should be realizable using only those three in addition to the actual combinators. (BCI combinators, being linear, would only require Branch. I’m not sure that one could ever be gotten rid of in a practical manner.)

Unless you have really exotic rules, which certainly is possible.

(There surely is a way to make that mathematically precise. What rules can or can’t be done? What combinator calculi admit these constraints?)

As previously explained, Branch is really the same as parentheses.

If I got that right, it should have the type:

Branch: forall X, Y : (X -> Y) -> X -> Y

Meanwhile, Copy and Eat come up in the S- and K-combinators. Since those form an almost minimal set of combinators, presumably any universal non-linear combinator calculus will need these.

These have the types:

Copy: forall X : X -> X×X

Eat: forall X: X -> 1

I believe those are the two morphisms a comonoid would have, right?
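If so, the relations I’d need would be the usual comonoid axioms – coassociativity and the two counit laws. In standard notation (these are the textbook laws, not something I’ve verified holds in my Globular build):

```latex
% coassociativity of Copy
(\mathrm{Copy} \otimes \mathrm{id}_T) \circ \mathrm{Copy}
  = (\mathrm{id}_T \otimes \mathrm{Copy}) \circ \mathrm{Copy}

% counit laws: Eat cancels either branch of a Copy
(\mathrm{Eat} \otimes \mathrm{id}_T) \circ \mathrm{Copy}
  = \mathrm{id}_T
  = (\mathrm{id}_T \otimes \mathrm{Eat}) \circ \mathrm{Copy}
```

These are exactly the Commutativity/Associativity-style restructuring rules for Copy trees mentioned earlier, plus the rule that an Eat eats a redundant Copy.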

The 4-dimensional part is also nice in that, usually, I’ll want strings in such a diagram to pass through each other freely, which is best done in 4-cells, where all the relevant rules to manipulate them already are present.

It’s possible that Globular can only encode some fragment of the theory, which may still be interesting; in that case, it would be good to be clear exactly what fragment this is.

Well, as I said, if the Rho combinators can do it (and my understanding is that they are a full translation of the Rho calculus), what I did thus far will be able to do it as well.

I also already mentioned the caveat in the communication rule. There is an input-side Copy instruction there, which copies an input in order to declare it the same. This is brittle, because it relies on you *not* copying something, in order to instead apply the more complicated, and thus perhaps easily overlooked, communication rule. As yet I don’t know how I might prevent somebody from applying Copy when they probably would like to do a communication instead.

(* more later)

Specifically, what’s happening is this:

The communication rule states:

((| C) ((| ((for (&P)) Q)) (( ! (&P)) R)) )

->

((| C) (Q (&R)) )

Or, with fewer parentheses:

C | for (& P) Q | ! (&P) R

->

C | Q (&R)

Where P is a process and &P is P lifted to a name.

So

! (&P) R

I think, basically calls out “I have something called P!”

and

for (&P) Q

says “If I get something called P, I can interpret it!”

And so, R can be communicated as input to Q, which interprets it into another process Q(&R).

The problem is that (&P) comes up twice. The way I implemented this, inputs are essentially anonymous (hence, say, S can be applied without caring about what’s below at all!) – so the only way I can think of to tell Globular that two inputs are actually *the same input* is by copying from a single input.

And presumably, without typing you could build nonsense, such as, I don’t know,

((| C) (& P)) = C | &P

which the types would rule out, because &P has type N (and P has type W), but | only takes things of type W to produce another W.

That’s clearly a flaw with the current design. But one that’ll be really hard to address without polymorphism support (as per the previous discussion).

Other than those two issues, my current understanding is that this is an entirely faithful, complete translation of rho-combinators, and therefore of the rho calculus.

I might be mistaken though.

In summary, it should have the full power of rho combinators, but it lacks a few necessary constraints to make possible only legal trees of them, and to transform legal trees into ones that still are legal (i.e., they can still reach the same normal form).

Finally, I suppose, since inputs can’t be completely deleted, only “marked for deletion”, it could be argued that the translation isn’t so perfect after all, even if the above problems were sorted out. But that seems like a non-issue to me. If *really* necessary, I could always just cap these up on the bottom (input-side) as well, thereby allowing me to actually delete inputs. Mostly that just seems like making things less clean, though.

And arguably, knowing how much was thrown away could also be of interest.

(*)

There may be better ways to do the communication rule, but I’d have to have more of an intuition for when any of this even comes up.

Like, the SKI combinators themselves are already Turing complete, right? In principle, I could literally just ignore all the fancy new combinators, and build arbitrary programs in SKI combinators.

And I have a vague intuition for what programs look like when using SKI combinators.

I don’t really have any idea what the same stuff looks like in pi or rho calculus, or how you’d typically use any of the new elements.

Until I have that, it’s very hard for me to gauge the actual needs which, then, could further inform the design.

For instance, if it turned out that the availability of communication can easily be gauged, I could potentially constrain Copy-nodes to be sensitive to that. You then could no longer make a premature copying error.

The most obvious idea I have to that effect is to simply not allow copying &-combinators until communication can be ruled out. But I’m not sure that’s either necessary or sufficient for avoiding these issues. The types ought to allow it, at least.
