But, I haven’t been able to find any good account of this. Is this a known/standard thing? I want to know just how far the Veblen construction can be pushed — like, beyond the large Veblen ordinal, is there an “ultimate Veblen ordinal” somewhere; and might it be equal to an already-named / well-known one, such as Bachmann–Howard? (I suppose a negative answer here would be if arbitrarily large computable ordinals could be obtained this way, making the “ultimate Veblen ordinal” just the Church–Kleene ordinal ω_1^{CK}.)

Now I can certainly think of ways of continuing beyond the large Veblen ordinal myself… but I don’t really want to reinvent the wheel here when I expect others have likely already done it better. So, is there a good account of this somewhere?
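For context, here is a quick sketch of the standard Veblen setup being pushed past (my summary of the usual definitions, not part of the original question):

```latex
\begin{aligned}
\varphi_0(\beta) &= \omega^\beta,\\
\varphi_{\alpha+1} &= \text{the derivative of } \varphi_\alpha,
  \text{ enumerating the fixed points of } \varphi_\alpha,\\
\varphi_\lambda(\beta) &= \text{the } \beta\text{-th common fixed point of the }
  \varphi_\alpha \text{ for } \alpha < \lambda \quad (\lambda \text{ a limit}).
\end{aligned}
```

The Feferman–Schütte ordinal Γ_0 is then the least α with φ_α(0) = α, and the small and large Veblen ordinals come from extending φ to finitely and then transfinitely many arguments.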

But since you had such fun with ordinals here (and here and here), I’d better add that Ketonen and Solovay later gave a proof based on the ε_{0} stuff and the hierarchy of fast-growing functions. (The variation due to Loebl and Nešetřil is nice and short.) We should talk about this sometime! I wish I understood all the connections better. (Stillwell’s Roads to Infinity offers a nice entry point, though he does like to gloss over details.)
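For readers who want that hierarchy spelled out, here is the standard definition, with λ[n] a fixed fundamental sequence for each limit λ (my addition, not part of the comment):

```latex
F_0(n) = n + 1, \qquad
F_{\alpha+1}(n) = F_\alpha^{n}(n) \ \ (n\text{-fold iteration}), \qquad
F_\lambda(n) = F_{\lambda[n]}(n).
```

The Ketonen–Solovay analysis runs this hierarchy up to ε_{0}, the proof-theoretic ordinal of Peano arithmetic.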

I think a justification for the word ‘derivative’ might be the following: you are deriving f’ from f, and, besides, f’ tells you information about the speed of f, as you get to know its fixed points!
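In standard terms (my gloss, not the commenter’s): for a normal function f on the ordinals (strictly increasing and continuous at limits), the derivative f’ enumerates the fixed points of f in increasing order:

```latex
f'(\alpha) = \text{the } \alpha\text{-th ordinal } \beta \text{ such that } f(\beta) = \beta.
```

For example, if f(α) = ω^α, then f’(α) = ε_α.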

Problem #26

Despite its stark, last-century plain HTML style, the TLCA list of open problems is something of a goldmine. As far as I know, it’s one of the few repositories of “Theory B” open problems. Theory A deals with combinatorics of datatypes and complexity theory, and has avalanches of open problems, including the-problem-who-shall-not-be-named, for which several generations of computer scientists would give 10 years of their life (I’d actually consider the offer myself).

But Theory B is concerned with more philosophical issues, such as “why is writing correct software so darn hard?” or “can we write code that can be extended without modifying it?”. It’s nice every once in a while to have a hard, concrete, mathematical question to gnaw away at, even if the actual concrete applications of a solution might be a bit, well, nonexistent (as is tradition for juicy mathematical problems).

The TLCA list of open problems concentrates on problems in so-called type theory, which has a strong overlap with logic and “combinatory logic” (which is more about computation than logic). It’s got a few nasty problems in there, some of which have been solved, but some of which are still very open. I’ve personally got my eye on one, #9, though I don’t expect to have it nailed down anytime soon. It’s good to have stretch goals.

The one I’m going to bring up today came up in a discussion with John Baez about descriptions of large ordinals. He mentioned the connection with descriptions of large numbers, and I brought up the connection with weak systems of logic, which is the essential starting point of ordinal analysis, at least historically.

The connection is this: we can use our intuition of the well-foundedness of a concrete “finitist” description of an ordinal to justify the consistency of some system of reasoning. In general, each ordinal corresponds to a certain logical system, and as a happy consequence, we get a well-ordering of logical systems along their consistency strength.

Somewhat ironically, ordinal descriptions, invented with the goal of making consistency of logical systems intuitive, get really hard to understand as they get large, and even reasonably weak logics have ordinals with nightmarish descriptions. Whether this means that ordinal analysis has somewhat failed at its task, or that we put too much trust in our stronger logical systems, is left as an exercise to the reader.

While the connection between ordinal size and theory consistency is technically fleshed out, the details are unfortunately quite strenuous, and it would be nice to have a more intuitive understanding. The connection between logical systems and programming languages, on the other hand, is quite simple and intuitive: proofs are programs, and statements are types. Easy peasy! It might therefore be useful to see what ordinals have to say about the programming languages themselves.

To make a long(ish) story short, the real property we want to show is that the programming language corresponding to the logic only allows total programs; in other words, a program of type A actually defines a value of type A after reduction/computation, rather than just an empty promise. Delivery on this promise corresponds to 1-consistency in logic: if the logic says a number exists, then it “delivers”, i.e. the number “actually exists”. This implies consistency, of course.

So we want to prove that certain typed programs only have finite reductions. What better tool than ordinals, which are the very definition of having only finite “computations”, or decreasing sequences? It turns out to be very tough to get a “natural” mapping from programs to ordinals, which is exactly what problem #26 asks for. In principle, there always is a mapping from programs to ordinals, even ordinals smaller than ω: a program maps to the largest number of possible reductions. But this is not “natural”, because showing that this number exists involves the usual proof that such a number exists, and that proof doesn’t involve ordinals whatsoever. Note that problem #26 restricts the question to simply typed terms, but the question remains valid for all sorts of more complicated things.
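To make that “unnatural” mapping concrete, here is a small sketch in Python (my own illustration, using de Bruijn indices; none of these names come from the problem statement). It sends a term to the length of its longest β-reduction sequence, the very number the paragraph above says a program “maps to”; the recursion terminates for simply typed terms but may diverge on untypable ones:

```python
# Terms: ('var', i) with de Bruijn index i, ('lam', body), ('app', f, a).

def shift(t, d, cutoff=0):
    """Add d to every free variable of t (indices >= cutoff)."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, s, j=0):
    """Replace variable j in t by s (s is pre-shifted by the caller)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], shift(s, 1), j + 1))
    return ('app', subst(t[1], s, j), subst(t[2], s, j))

def one_step(t):
    """Yield every term reachable from t in one beta step."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':  # the redex (lam b) a -> b[0 := a]
            yield shift(subst(f[1], shift(a, 1)), -1)
        for f2 in one_step(f):
            yield ('app', f2, a)
        for a2 in one_step(a):
            yield ('app', f, a2)
    elif tag == 'lam':
        for b in one_step(t[1]):
            yield ('lam', b)

def max_reductions(t):
    """Length of the longest reduction sequence starting at t.
    This maximum exists for (simply) typable terms."""
    return max((1 + max_reductions(u) for u in one_step(t)), default=0)
```

Running it on (K I) I, with K = λx.λy.x and I = λx.x, gives 2: one step to reduce K I, one more to apply the result.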

Now the mystery is a bit in defining what “natural” means. Ideally, it would be a structural-ish map on well-typed terms (say, an induction on type derivations) to ordinals, using not-too-crazy intermediate notions or operations on ordinals. Ultimately, it comes down to a personal preference, as with the definitions of computability, where Turing’s proposal was well received by Gödel, as opposed to the lambda calculus, which at the time didn’t seem to capture the intuition of general computation (so there’s a cultural component as well).

There have been a few nice attempts at resolving this question for simple types, including some old work by Turing, Howard and Tait that gives partial solutions, but a full solution still awaits, and certainly needs to be cleaned up and simplified by whoever comes next.

Ah, but what if the empty set (as ordinal zero) is the smallest set and ordinal that can be defined without self-reference?

Also, what if it can’t?

Some don’t necessarily have well-foundedness (restriction of comprehension) as axiomatized. Then “sets” as they are (as defined by their members and membership) are “purely logical” to fulfill sets that are, and sets that aren’t.

You might notice that the identity f(x) = x is “continuous”, but the countable ordinals don’t have a fixed point for it (or rather, it would be a countable ordinal).

You might also be interested in Scott’s box and circle notation(s), then for notions of collapse as via model collapse or Skolem/Levy. This has collapse implementing the same notion as a “point at infinity” (or fixed point), and box and circle for variously the monotone and uniform.

As you can see, the mathematically infinite lends itself to many and varied extensions (and for each extension, collapse), where then the point is to figure out what to make of the infinite both in terms of the unbounded and large, or, effectively infinite, or, the non-finite and truly infinite, for applications. As some will have a mathematical theory of physics, the “truly infinite” as mathematical is relevant to the applied.

Here all of the trans-finite and large cardinals have all their application collapsed to a countable infinite fixed point, at infinity, or “\infty”. A usual goal is understanding the variously physical or mathematical significance of “infinity” in the equations (or of increases without bound of relations), and the replacement of that value (or process) with an approximant (usually the first few terms of an equivalent series expansion). The requirement for application of infinity is how all the terms drop out as they algebraically reach identity, equality, or tautology, not just the leading terms, which provide very accurate estimations of results of combined measurements.

Thank you for letting me post to your blog; I hope your readership finds it valid and of some use.

On the topic of those four levels (quickly growing sequences of large integers, computable ordinals, large countable ordinals, cardinals) and the correspondences between them, I’d like to link to the blog entry of David Madore where he writes about that topic, because it’s not easy to find from here:

• David Madore, Qui peut dire le nombre le plus grand? (“Who can name the biggest number?”)

I found a couple of answers to that online. One is the definition of a recursive ordinal, and the other is Kleene’s O. However both seemed pretty unsatisfactory to me; I wanted something that could naturally express operations like addition/multiplication/exponentiation, as well as expressing finite ordinals.

Actually I should expand on my comment a bit, to make it clearer.

In a model of arithmetic every number except zero is a successor, so every nonstandard natural number n gives an *infinite* descending sequence n > n − 1 > n − 2 > ⋯

(If this sequence reached zero after finitely many steps, n would be standard.) But as explained earlier on this blog, there are no infinite strictly decreasing sequences of ordinals.

Note that we study models of arithmetic within set theory, where we have a way to say what the ‘standard’ natural numbers are. This allows us to say things like “if the sequence reached zero after finitely many steps” and have it mean something: it means after a standard number of steps.

After reading your recent posts my guess is that these are not ordinal numbers, and if they exist in your model they’re included in the set

Right. More precisely, we have a limited ability to work with ordinals within Peano arithmetic. We cannot do much with ε_0 or larger ordinals in this theory, because ε_0 is the proof-theoretic ordinal of Peano arithmetic. But we can work with any smaller ordinal α by putting its members in one-to-one correspondence with natural numbers and defining a new relation on the natural numbers to describe the ordering of α. (For α = ω, which is the case you’re mainly talking about, we can use the identity map as our one-to-one correspondence and the ‘new relation’ is the ordinary < relation. But let’s be more general.)
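Here is a toy instance of that coding, for the ordinal ω² (my own illustration; the particular bijection is an arbitrary choice). An ordinal below ω² has the form ω·a + b; we code it by a single natural number and the “new relation” compares codes as ordinals:

```python
# Code the ordinal omega*a + b (a, b natural numbers) as one natural number,
# via the bijection (a, b) -> 2^a * (2b + 1) - 1 from pairs to naturals.

def code(a, b):
    """Natural-number code of the ordinal omega*a + b."""
    return 2 ** a * (2 * b + 1) - 1

def decode(n):
    """Recover (a, b) from a code: a is the 2-adic valuation of n + 1."""
    n += 1
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a, (n - 1) // 2

def less(m, n):
    """The 'new relation' on naturals: compare codes as ordinals below omega^2.
    Lexicographic order on the pairs (a, b) is exactly the ordinal order."""
    return decode(m) < decode(n)
```

Every natural number codes exactly one ordinal below ω², so `less` well-orders the naturals with order type ω², even though as a relation on numbers it is just another arithmetically definable ordering.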

If we now take a nonstandard model of Peano arithmetic, everything we can do with the ordinal α has a nonstandard interpretation. It will have more members, just as you say. These will be nonstandard natural numbers, reinterpreted as members of α.

That makes sense, thanks!
