Enormous Integers

Sometimes math problems have unexpected answers. For example, consider the integral of this infinite product:

$\displaystyle{ \int_0^\infty \cos(2x) \cos(x) \cos(x/2) \cos(x/3) \cos(x/4) \cdots \, dx }$

The answer matches $\pi/8$ up to its 43rd digit. But it’s not actually $\pi/8.$ Weird, huh? But it’s not a coincidence; there’s an explanation.

Here’s a puzzle due to the logician Harvey Friedman. It too has an unexpected answer.

Say you have a finite alphabet to write with. How long can a word be if no block of letters in this word, from the nth letter to the 2nth, is allowed to appear as a subsequence of a bigger block from the mth letter to the 2mth?

If you have just one letter, this is the longest it can be:

AAA

If you have two, this is the longest it can be:

ABBBAAAAAAA

Puzzle: How long can the word be if you have three letters in your alphabet?

Friedman showed there’s still a finite upper bound on how long it can be. But, he showed it’s incomprehensibly huge!
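To make the rule concrete, here’s a little Python check I’ve added (it’s not from Friedman’s paper, just a direct transcription of the condition): it tests whether a word obeys the rule, confirms the two examples above, and brute-forces the fact that no 12-letter word over a 2-letter alphabet works.

```python
from itertools import combinations, product

def is_subsequence(s, t):
    """True if s occurs in t as a (not necessarily contiguous) subsequence."""
    it = iter(t)
    return all(c in it for c in s)

def valid(word):
    """Friedman's condition: no block word[i..2i] (1-indexed) may occur
    as a subsequence of a later block word[j..2j] with i < j."""
    blocks = [word[i - 1:2 * i] for i in range(1, len(word) // 2 + 1)]
    return not any(is_subsequence(a, b) for a, b in combinations(blocks, 2))

assert valid("AAA") and not valid("AAAA")        # so n(1) = 3
assert valid("ABBBAAAAAAA")                      # an 11-letter word over {A, B}
# No binary word of length 12 obeys the rule, so n(2) = 11:
assert not any(valid("".join(w)) for w in product("AB", repeat=12))
```

This kind of exhaustive search is hopeless for 3 letters, of course, which is the whole point of the puzzle.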

Now Friedman is one of the world’s experts on large cardinals—large infinite numbers. So when he says a finite number is incomprehensibly huge, you sit up and listen. It’s like seeing a seasoned tiger hunter running through the jungle with his shotgun, yelling “Help! It’s a giant ant!”

The answer to the puzzle involves the 7th Ackermann function. The first Ackermann function is just doubling:

$A_1(n) = 2n$

The second Ackermann function of n is 2 to the nth power:

$A_2(n) = 2^n$

To calculate the third Ackermann function of n, you write down a tower of n 2's. For example:

$A_3(4) = 2^{2^{2^{2}}} = 65536$

And so on: to calculate the (k+1)st Ackermann function applied to n, you apply the kth Ackermann function n times to the number 1:

$A_{k+1}(n) = A_k A_k \cdots A_k(1)$

where there are n of these $A_k$'s.

For example, with a little work you can show

$A_4(4) = 2^{2^{2^{2^{\cdot^{\cdot^\cdot}}}}}$

where the number of 2's in the tower is 65,536.
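For tiny inputs these functions can be computed directly. Here’s a sketch (my own code, implementing the doubling-based hierarchy defined above, not the classical two-argument Ackermann function):

```python
def A(k, n):
    """The k-th Ackermann function as defined above:
    A_1(n) = 2n, and A_{k+1}(n) = A_k applied n times to 1."""
    if k == 1:
        return 2 * n
    x = 1
    for _ in range(n):
        x = A(k - 1, x)  # iterate the previous function
    return x

assert A(1, 5) == 10        # doubling
assert A(2, 10) == 1024     # 2^10
assert A(3, 4) == 65536     # the tower 2^2^2^2
```

Even A(4, 3) works out to 65536, but A(4, 4) is the tower of 65,536 2’s mentioned above, far beyond anything a computer can store.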

But this is still puny, compared to the answer to the puzzle!

In 1998 Friedman showed that the longest word you can write with 3 letters, following the rule I described, is at least $A_7(184)$ letters long.

As he notes, when we reach numbers this large:

… a profound level of incomprehensibility emerges. The definitions are not incomprehensible, but the largeness is incomprehensible. These higher levels of largeness blur, where one is unable to sense one level of largeness from another.

I don’t understand his proof, but you can see it here:

• Harvey M. Friedman, Long finite sequences, 8 October 1998.

When I posted about this on Google+, Andrés Caicedo helped me notice that later in the paper, Friedman goes further. In fact, $A_7(184)$ is a ridiculous underestimate of the true answer! And apparently now there’s an upper bound on the answer, too.

When Andrés Caicedo heard me say the word could be at least $A_7(184)$ letters long, he wrote:

Much much longer!

I wrote a bit on this:

A lower bound for n(3), 14 February 2012.

Using some computer assistance from Randall Dougherty, Friedman shows that a lower bound is $A_{7198}(158386)$. I asked Dougherty, and got a copy of his files. I am hoping to find some decent time in the not too distant future to examine them in detail and see how far this can be pushed.

Friedman also found an upper bound, $A_n(n)$, where $n = A_5(5)$. He mentions this in a note from 2001, but the note is a series of announcements with no proofs. Actually, it is exciting stuff, in spite of the lack of proofs. He discusses some other simply defined combinatorial constructions that give numbers much, much larger than this one:

• Harvey M. Friedman, Lecture notes on enormous integers, 22 November 2011.

So, we have a well-defined enormous integer, and we know—or at least Friedman claims—that it’s somewhere between $A_{7198}(158386)$ and $A_{A_5(5)}(A_5(5)).$ That’s an impressively large spread. I wonder how accurately we will ever know this number.

I should add that beside the sheer shock value of seeing such a huge number show up as the answer to a simply stated puzzle, there’s some deep math lurking in these games. It gets rather scary. Near the end of his lecture notes, Friedman mentions a number so big that any proof of its existence—in a certain system of mathematics—has an incomprehensibly large number of symbols.

But if all this talk of enormous integers isn’t scary enough for you, just wait until you read about dark integers:

• Greg Egan, Dark integers, 2007.

Dark matter, dark energy . . . dark integers. They’re all around us, but we don’t usually see them, because they don’t quite play by the rules.

46 Responses to Enormous Integers

1. glorkspangle says:

Friedman’s proof concerns subsequences, not consecutive subsequences. He shows n(2)=11 with the example ABBBAAAAAAA: of the [i,2i] consecutive subsequences of this {AB, BBB, BBAA, BAAAA, AAAAAA}, none is a subsequence of another. I don’t believe (but can’t show) that his theorem works for consecutive subsequences.

• John Baez says:

Thanks, I fixed that mistake. I make no claims to understanding this stuff, not even slightly! But I don’t want my blog post to spread misinformation…

2. dave tweed says:

Related to the first linked paper, which discusses the potential for errors in mathematical statements derived in one way or another from computer output, this page (with links to two more formal publications halfway down) describes the use of computer systems to aid in determining when mathematical statements derived from “human output” contain errors. (This isn’t via the approach of stating the theorem in a logical system and then producing an automatic proof tree.)

3. David says:

Think it’s a typo, but in

So, we have a well-defined enormous integer, and we know—or at least Friedman claims—that it’s somewhere between $A_{7189}(158386)$ and $A_{A_5(5)}(5)$

Shouldn’t the upper bound be much larger: A_{A_5(5)}(A_5(5)) ?

• David says:

(and when I say “much larger” – in this context, although it’s unimaginably larger, it’s actually not that different: the size of the subscript far outweighing the operand)

• John Baez says:

Thanks, I fixed that typo! And yes, the subscript matters more.

4. What is the largest finite number which has turned up in a “real” problem?

• glorkspangle says:

What is a “real” problem? To a mathematician, the problems producing the numbers described in this blog post are quite real. Such finite numbers include things like n(4), or TREE(4), or BB(10), all of which are huge beyond any comprehension. Quite possibly there are “real” problems which combine these into monsters such as BB(n(6)) or TREE(BB(5)), just as Friedman’s sequence problem creates things like A_{A_5(5)}(A_5(5)).

• Yes, these problems are quite real; I wasn’t implying that they aren’t!

• John Baez says:

While the problems described in this post are “real”, and so are the related ones alluded to by Herr Glorkspangle, I believe they’ve all been designed by mathematicians who are looking for problems with enormous answers. If we include problems like this, it becomes very hard to point to an integer and say it’s the largest one that’s come up in a real problem—because doing that will instantly make some mathematician find a much larger one!

You probably know some of this stuff, but…

Skewes’ number came up as an upper bound on the first number $x$ where the number of primes less than $x$ exceeds

$\displaystyle{ \int_0^x \frac{d t}{\ln t} }$

Namely, in 1955 Skewes showed this has to happen before

$\displaystyle{ 10^{10^{10^{963}}} }$

Later the bound was vastly decreased.

Graham’s number is famous because in 1977, Martin Gardner wrote that “in an unpublished proof, Graham has recently established … a bound so vast that it holds the record for the largest number ever used in a serious mathematical proof.” It’s vastly larger than Skewes’ number. However, it shows up in Ramsey theory, and as far as I can tell, people who work on Ramsey theory are sort of asking for trouble: it’s a famous source of insanely large numbers.

The problem was:

Consider an n-dimensional hypercube, and connect each pair of vertices to obtain a complete graph on $2^n$ vertices. Then colour each of the edges of this graph either red or blue. What is the smallest value of n for which every such colouring contains at least one single-coloured complete subgraph on 4 coplanar vertices?

For a while the best upper bound on this n was Graham’s number, namely

$g_{64}$

where

$g_1 = 3 \uparrow^4 3$

$g_{k+1} = 3 \uparrow^{g_k} 3$

and Knuth’s up-arrow notation is a way of talking about the operations that come after addition, multiplication, exponentiation, tetration, and so on. For example

$2 \uparrow^1 3 = 2 \times (2 \times 2) = 2^3$

is exponentiation but

$2 \uparrow^2 3 = 2 \uparrow^1 (2 \uparrow^1 2) = 2^{2^2}$

is tetration and

$2 \uparrow^3 3 = 2 \uparrow^2 (2 \uparrow^2 2)$
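For small arguments the up-arrow recursion is easy to compute; here’s a sketch (my own addition) that reproduces the examples above:

```python
def arrow(a, k, b):
    """Knuth's a ↑^k b: ↑^1 is exponentiation, and
    a ↑^{k+1} b = a ↑^k (a ↑^{k+1} (b - 1)), with a ↑^k 0 = 1."""
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, k - 1, arrow(a, k, b - 1))

assert arrow(2, 1, 3) == 2 ** 3        # exponentiation
assert arrow(2, 2, 3) == 2 ** 2 ** 2   # tetration: 16
assert arrow(2, 3, 3) == 65536         # 2 ↑^2 (2 ↑^2 2) = 2 ↑^2 4
```

Computing even $g_1 = 3 \uparrow^4 3$ this way is hopeless: already $3 \uparrow^2 3 = 7625597484987$, and $3 \uparrow^3 3$ is a power tower of threes of that height.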

Later Graham’s upper bound was much improved, but it’s still insanely large. The lower bound is, err, umm… 13.

So, one advantage of Friedman’s enormous integers over Skewes’ number and Graham’s number is that they’re showing up as lower bounds on answers to math problems, instead of upper bounds. Clearly you can get very high upper bounds just by being very stupid… though I’m certainly not accusing Skewes or Graham of that!

• Tom Leinster says:

John, I think your finger slipped: the n-dimensional hypercube has $2^n$ vertices, not $2n$, as of course you know.

• John Baez says:

Cut-and-paste loses superscripts. Will fix!

• “You probably know some of this stuff, but…”

I remember an essay by Isaac Asimov called “Skewered”. :-)

For large infinite numbers, Rudy Rucker’s Infinity and the Mind is a nice read.

• Klaus Draeger says:

Many “real” problems are actually parametric, so that the question perhaps should be about naturally occurring rapidly growing sequences, rather than huge individual numbers.

One area where such problems can occur is formal verification, see for example Philippe Schnoebelen’s paper on Ackermann-hardness of (among others) reachability in lossy counter machines: http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/phs-mfcs10.pdf

5. Peter Suter says:

You first quote Andrés Caicaido:

A_{n}(n), where n = A_5(5)

Then you write

it’s somewhere between $A_{7198}(158386)$ and $A_{A_5(5)}(5)$.

shouldn’t that be

it’s somewhere between $A_{7198}(158386)$ and $A_{A_5(5)}(A_5(5))$.

?

6. grlcowan says:

The answer matches pi over eight up to its 43rd digit? Presumably you mean tau over 16!

• John Baez says:

Whoops, sorry! I’m a tauist, but a closet tauist.

7. John Baez says:

Over on Google+, Philip Thrift linked us to this video on Graham’s number:

A number so epic it will collapse your brain into a black hole!

(TREE numbers are mentioned at the end.)

Maybe mathematicians have biggest-number envy. :)

• John Baez says:

I replied to Philip:

Most mathematicians don’t really care much about big numbers. For example, I don’t actually care about them at all, except as part of ‘life’s rich pageant’ along with 4-dimensional Platonic solids, the African caracal and 16 phases of ice. But there are a few mathematicians who really get into them—and then an interesting one-upsmanship develops.

We see the quest for really big numbers play out in three arenas: the natural numbers, the countable ordinals and the uncountable cardinals. The same people tend to get involved in all three.﻿

8. John Baez says:

Puzzle. Put an upper or lower bound on this number:

“The smallest natural number that cannot be specified in seventeen English words without use of self reference.”

• John Baez says:

The above puzzle is my attempt to find a finite analogue of the Feferman-Schütte ordinal. This is, very roughly, the smallest countable ordinal that cannot be defined without self-reference.

Indeed I’d like to take that as the definition of the Feferman-Schütte ordinal: since this definition is self-referential we don’t fall into Berry’s paradox. Unfortunately it’s hard to define ‘self-reference’ in a formal mathematical way.

The Feferman-Schütte ordinal is often described as the smallest ordinal that can only be defined impredicatively: an impredicative definition is one that mentions the thing being defined, or a set that contains the thing being defined.

However, even this way of describing the Feferman-Schütte ordinal is vague and controversial. The really precise definition of the Feferman-Schütte ordinal is that it’s the smallest ordinal $\alpha$ with

$\phi_\alpha(0) = \alpha$

Here $\phi$ is the Veblen function, a fairly powerful way of generating large countable ordinals. The idea is that we start by setting

$\phi_0(\kappa) = \omega^\kappa$

where $\omega$ is the first infinite ordinal, and then for $\alpha > 0$ we define $\phi_\alpha(\kappa)$ to be the κth fixed point of the functions $\phi_\beta$ for all $\beta < \alpha.$

(Here we count starting at $\kappa = 0$, so what you’d normally call the first fixed point, meaning the smallest one, we’re calling the 0th.)

For example, we have

$\phi_0(1) = \omega$

$\phi_0(2) = \omega^2$

$\phi_0(\omega) = \omega^\omega$

$\phi_0(\omega^\omega) = \omega^{\omega^\omega}$

and so on… but eventually we hit the smallest fixed point of $\phi_0,$ which is usually called $\epsilon_0.$ It’s the smallest ordinal such that

$\omega^{\epsilon_0} = \epsilon_0$

so heuristically speaking we may say

$\epsilon_0 = \omega^{\omega^{\omega^{.^{.^{.}}}}}$

Since this is the smallest fixed point of $\phi_0$ we have

$\phi_1(0) = \epsilon_0$

Then we define

$\phi_1(1) = \epsilon_1$

$\phi_1(2) = \epsilon_2$

$\phi_1(\epsilon_0) = \epsilon_{\epsilon_0}$

and so on, until we hit the smallest fixed point of $\phi_1,$ which is $\phi_2(0).$ So, heuristically, we may say

$\phi_2(0) = \epsilon_{\epsilon_{\epsilon_{._{._{.}}}}}$

And this goes on and on, giving larger and larger countable ordinals, but we’ll never reach the Feferman-Schütte ordinal this way. It’s the first one we won’t get this way…

…and it’s the start of another hierarchy of even larger countable ordinals defined using the Γ function.

There’s something very strange about this business: it’s like you’re running away from your own shadow, but the faster you run, the faster it follows, so you can never escape. Any systematic notation for indexing ordinals gives out when it hits a fixed point: the existence of all these fixed points follows from the usual axioms of set theory. But then we can make a new notation that takes this into account and gives even larger ordinals. The game goes on until we hit the Church–Kleene ordinal, which I’m not at liberty to discuss here.

9. F says:

Any word on n(4) and higher, and whether or not they’re finite?

• John Baez says:

Ah, good question! I was hoping someone would ask. Since I didn’t quite define n(k), I should come out and do so now:

Definition. Given an alphabet with k letters, n(k) is the maximum length a word can have if no block of letters in this word, from the ith letter to the 2ith, is allowed to appear as a subsequence of a bigger block from the jth letter to the 2jth.

If you read Harvey Friedman’s paper—even, like me, without truly understanding it—you’ll see that all these n(k)’s are finite. And in his Lecture notes on enormous integers he says (and I paraphrase for clarity):

The number $n(4)$ is a whole ‘nother kettle of fish. Define

$A(k) = A_k(k)$

where $A_k$ is the $k$th Ackermann function.

THEOREM 8.4. We have

$n(4) > A(A( \cdots A(1) \cdots ))$

where there are A(187196) A’s.

So n(4) makes n(3) look like a pathetic runt of a number.

But later come some fairly large numbers like n(n(4)), and so on…

10. Todd Trimble says:

John wrote:

We see the quest for really big numbers play out in three arenas: the natural numbers, the countable ordinals and the uncountable cardinals. The same people tend to get involved in all three.﻿

And I think that’s a major point. I admit that my brain fogs out when I see things like that lower bound given for $n(4)$ — it really means nothing to me, and part of me thinks that it’s all so much mental masturbation. However, I somewhere got the idea that what drives a lot of Friedman’s research here is something more serious, along the lines that within ordinary mathematics accessible to graduate students, there lurk numbers so large that to even prove them finite requires something equivalent in strength to super-large cardinal axioms in set theory. In effect, these super-large finite numbers are produced by taking advantage of the underlying combinatorics of super-large cardinals, but now Friedman’s point is that this same type of combinatorics is hidden, in disguise, amongst items that you and I as ordinary mathematicians (NB: not set theorists) might be actually seriously interested in, like: trees, vector spaces, Hilbert spaces, etc. Thus, statements in ordinary mathematics are shot through with set theory considerations in some fairly unexpected ways.

(I’m not sure I’ve said this right — this is just the idea I got.)

It’s certainly true that these people know a bunch of tricks, like how to use large countable ordinals to produce large integers (see for example here).

• John Baez says:

Todd wrote:

It’s certainly true that these people know a bunch of tricks like: how to use large countable ordinals to produce large integers …

I don’t understand that yet, but Friedman alludes to it in his Lecture notes on enormous integers, and that’s the sort of thing I want to understand better. I’m imagining an as above, so below kind of thing, where some of the same structures appear in the worlds of large natural numbers, large countable ordinals and large cardinals. Another example might be how Bachmann uses uncountable cardinals to index large countable ordinals! It’s mentioned near the end of the following very gentle introductory paper. However, now I’m trying to reach the point of understanding these issues in a bit more detail.

• Hilbert Levitz, Transfinite ordinals and their notations: for the uninitiated.

(What a great first name for a logician!)

• Todd Trimble says:

(What a great first name for a logician!)

Off-topic, but from my salad days as a chessplayer, I remember there was a fellow in the local club whose name was Hilbert (“Hibby”) Bernstein. I’ve always thought that would have been a great name for a mathematician, if only he was one!

• John Baez says:

It must only be math fans who name their kids ‘Hilbert’, right? It’s not usually a first name, is it?

(And does anyone really name their kid ‘Dilbert’?)

• Berényi Péter says:

That indexing by cardinals is at the heart of Goodstein sequences. Goodstein’s theorem (1944) proves, using elementary set theory, that all these sequences are finite (they get back to zero in finitely many steps).

Later on (1982), Paris and Kirby showed that Goodstein’s theorem is unprovable in Peano arithmetic.

Otherwise these are rather ordinary sequences; I remember writing a short and elegant bc script that calculated them. The catch, of course, is that the script could barely run: after the first few steps it halts on any computer with a memory overflow.

Members of Goodstein sequences grow so fast (before they start to decrease) that there is simply not enough space in the physical world to hold their representation (not even on an ideal storage device with some 10⁷⁰ bit/m² information density).
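For the smallest starting values the sequences do stay manageable. A minimal Python version of such a script (a sketch added here for concreteness; the bc script itself isn’t shown) implements the standard hereditary-base bumping step:

```python
def bump(n, b):
    """Write n in hereditary base-b notation, then replace every b by b + 1."""
    if n == 0:
        return 0
    result, e = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            # exponents are themselves rewritten hereditarily
            result += digit * (b + 1) ** bump(e, b)
        e += 1
    return result

def goodstein(m):
    """Yield the Goodstein sequence starting at m (finite, by Goodstein's theorem)."""
    b = 2
    while m:
        yield m
        m = bump(m, b) - 1
        b += 1
    yield 0

assert list(goodstein(3)) == [3, 3, 3, 2, 1, 0]
```

Don’t try `list(goodstein(4))`: that sequence takes on the order of $10^{121210694}$ steps to reach zero, as noted below.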

• John Baez says:

Proving that Goodstein sequences go to zero uses induction on ordinals up to $\epsilon_0$… not using uncountable cardinals to index countable ordinals. I explained this in week236 of This Week’s Finds. In the Addendum there, Kevin Buzzard shows that the Goodstein sequence starting with the number 4 takes

$\sim 6.89 \times 10^{121210694}$

steps to reach zero.

• Jesse C. McKeown says:

Indeed, one may well view the whole study of ordinal notations as the study of exotic-yet-manageable well-orderings on $\mathbb{N}$. One can then consider huge collections of disjoint families corresponding to $\lambda + n$ for notable — er, notatable — limit ordinals $\lambda$; of necessity, most of these will tend to become large rather quickly as $n$ increases.

11. TEST. Unimportant comment/story lost 2nd time. Giving up.

• John Baez says:

Hey, that’s nice, Tom. Let me show it here:

12. [...] Bird submitted a link to Enormous Integers, saying, “It’s still a common enough misconception that pure mathematics research must be about larger and larger numbers, but it’s still nice to sometimes play up to this stereotype, as John Baez’s blogpost on Azimuth about ‘Enormous Integers’ does. Comments are worth a look too.” [...]

13. isomorphismes says:

I don’t understand the statement of the puzzle. What are n and m?

• John Baez says:

n is any natural number and m is any larger natural number. That’s implicit in this statement:

How long can a word be if no block of letters in this word, from the nth letter to the 2nth, is allowed to appear as a subsequence of a bigger block from the mth letter to the 2mth?

By saying “no block” we mean n can be anything, and by saying “no bigger block” we mean m can be anything bigger than n.

14. Keith Thompson says:

The URL for Greg Egan’s “Dark Integers” story doesn’t work. Try http://www.asimovs.com/_issue_0710/Dark.shtml

15. Jeff says:

When looking at large numbers constructed differently, like n(3), Graham’s number, or the “Moser”, how do you even approach determining which is larger?

• John Baez says:

I’m not an expert on this subject, so I don’t know. If anyone knows a good example of a proof like this, please point us to it!

I imagine some sort of inductive argument could work, showing that some sort of construction will beat some other one at each step.

It should help that if you take any two of the numbers, one is usually much, much larger than the other, so an argument can make lots of approximations and still succeed.

16. By the way, this post is a kind of followup to my post on enormous integers. Insanely long proofs are related to enormous integers: if you know a quick way to describe an enormous integer, you can write down a short theorem with an enormous proof.

17. Ikosarakt says:

n(x) grows as fast as linear arrays with arbitrary length in Jonathan Bowers’ array notation…