If you followed the business about Edward Nelson’s claim to have proved the inconsistency of arithmetic, you might have heard people mention this paper:
• Shira Kritchman and Ran Raz, The surprise examination paradox and the second incompleteness theorem, AMS Notices 57 (December 2010), 1454–1458.
It’s got a great idea in it, which I’d like to explain in a really sloppy way so everyone can understand it. Logic is cool, but most people never get to the cool part because they can’t fight their way through the rather dry technicalities.
You all know the surprise examination paradox, right? The teacher says one day he’ll give a quiz and it will be a surprise. So the kids think “well, it can’t be on the last day then—we’d know it was coming.” And then they think “well, so it can’t be on the day before the last day, either!—we’d know it was coming.” And so on… and they convince themselves it can’t happen at all.
But then the teacher gives it the very next day, and they’re completely surprised.
People still argue a lot about how to settle this paradox. But anyway, Kritchman and Raz use a rigorous version of this paradox together with Chaitin’s incompleteness theorem to demonstrate Gödel’s second incompleteness theorem—which says, very roughly, that:
If math can prove itself consistent, it’s not.
If you’re a logician, I bet this sort of sloppy statement really annoys you. Yeah? Does it? Take a chill pill, dude: this post isn’t for you—it’s for ordinary mortals. I said I want to summarize Kritchman and Raz’s argument in a really sloppy way. If you want precision and details, read their paper.
Okay, here we go:
Chaitin’s theorem, which is already astounding, says there’s some length L such that you can’t prove any particular string of bits needs a program longer than L to print it out. At least, this is so if math is consistent. If it’s not consistent, you can prove anything!
On the other hand, there are only finitely many programs of length ≤ L, and each one prints at most one number. So if you take a list with more numbers than that, say 1, 2, …, N, there’s got to be at least one that needs a program longer than L to print it out.
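Here’s a toy model of that counting step, just to see the pigeonhole at work. Everything here is my own made-up illustration, not Kritchman and Raz’s setup: a “program” is just a bit string, and my hypothetical toy interpreter makes a program print the number it encodes in binary. Real Kolmogorov complexity uses a universal machine, but the counting works the same way:

```python
# Toy model: a "program" is a bit string; our hypothetical interpreter
# makes it print the number the string encodes in binary.
from itertools import product

L = 4  # maximum program length in the toy model

# All bit-string programs of length 1..L: there are 2^(L+1) - 2 of them.
programs = [bits for k in range(1, L + 1) for bits in product("01", repeat=k)]
assert len(programs) == 2**(L + 1) - 2

# Each program prints exactly one number, so the short programs can
# print at most len(programs) distinct numbers between them.
printable = {int("".join(p), 2) for p in programs}

# Take a list with more numbers than there are short programs: 1, 2, ..., N.
N = len(programs) + 1
culprits = [n for n in range(1, N + 1) if n not in printable]
print(culprits)  # in this toy model: exactly the numbers needing more than L bits
```

With L = 4 there are 30 short programs, so among the 31 numbers 1, …, 31 at least one must be left over. (In this degenerate toy model it’s the numbers 16 through 31, since each needs five bits.)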
Okay: there’s got to be at least one. How many? Suppose just one. Then we can go through all programs of length ≤ L, find those that print all the other numbers on our list… and thus, by a process of elimination, find the culprit.
But that means we’ve proved that this culprit is a number that can only be computed by a program of length > L. But Chaitin’s theorem says that’s impossible! At least, if math is consistent.
So there can’t be just one. At least, not if math is consistent.
Okay: suppose there are just two. Well, we can pull the same trick and find out who they are! So there can’t be just two, either. At least, not if math is consistent.
We can keep playing this game until we rule out all the possibilities… and then we’re really stuck. We get a contradiction. At least, if math is consistent.
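The mechanical part of this game can be sketched in the same toy model as before (my own invention, with a hypothetical interpreter that makes a bit-string program print its binary value): run every short program, cross off what it prints, and whatever survives on the list is identified by elimination. In the toy model this terminates happily; in genuine arithmetic, each successful identification is exactly what Chaitin’s theorem forbids, which is where the contradiction comes from:

```python
# The elimination game in the toy model: run every program of length <= L,
# cross off the numbers they print, and whatever remains on the list 1..N
# has been found "by a process of elimination". In real arithmetic each
# such identification would contradict Chaitin's theorem; here we just
# watch the mechanical part of the argument succeed.
from itertools import product

def toy_run(bits):
    """Hypothetical toy interpreter: a program prints its binary value."""
    return int("".join(bits), 2)

L = 4
programs = [b for k in range(1, L + 1) for b in product("01", repeat=k)]
N = len(programs) + 1  # one more number than there are short programs

remaining = set(range(1, N + 1))
for p in programs:
    remaining.discard(toy_run(p))  # cross off everything a short program prints

# "Suppose just one... suppose just two...": test the hypotheses k = 1, 2, ...
# until one matches the true number of leftovers.
for k in range(1, N + 1):
    if k == len(remaining):
        print(f"exactly {k} numbers need programs longer than {L} bits")
        break
```

Nothing paradoxical happens here, of course: the toy interpreter isn’t a universal machine, and Python isn’t proving anything about itself. The paradox only bites when “find the culprits” means “prove, inside arithmetic, which numbers they are.”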
So if we could prove math is consistent, we’d know it’s not!