Could we grow the whole universe with all its seeming complexity starting from a little seed? How much can you do with just a little information?
People have contests about this. My programmer friend Bruce Smith points out this animation, produced using a program less than 4 kilobytes long:
As he notes:
…to be fair, the complexity of some of the OS, graphics drivers, and hardware should be included, but this is a lot less than you might think if you imagine rewriting it purely for compactness rather than for speed, and only including what this sort of program needs to produce output.
Sampo Syreeni pointed me to this video, all generated from under 64 kilobytes of x86 code:
As he points out, one trick is to use symmetry.
These fancy images produced from tiny amounts of information are examples of the ‘demoscene’, a computer art subculture where people produce demos: non-interactive audio-visual computer presentations that run in real time.
According to the Wikipedia article on the demoscene:
Recent computer hardware advancements include faster processors, more memory, faster video graphics processors, and hardware 3D acceleration. With many of the past’s challenges removed, the focus in making demos has moved from squeezing as much out of the computer as possible to making stylish, beautiful, well-designed real time artwork […]
The old tradition still lives on, though. Demo parties have competitions with varying limitations in program size or platform (different series are called compos). On a modern computer the executable size may be limited to 64 kB or 4 kB. Programs of limited size are usually called intros. In other compos the choice of platform is restricted; only old computers, like the 8-bit Commodore 64, or the 16-bit Amiga, or Atari ST, or mobile devices like handheld phones or PDAs are allowed. Such restrictions provide a challenge for coders, musicians and graphics artists and bring back the old motive of making a device do more than was intended in its original design.
What else can you do with just a little information? Bruce listed a couple more things:
• Bill Gates’ first commercial success was an implementation of a useful version of BASIC in about 4000 bytes;
• the complete genetic code of an organism can be as short as a few hundred thousand bytes, and that has to be encoded in a way that doesn’t allow for highly clever compression schemes.
So if quite complex things can be compressed into fairly little information, you can’t help but wonder: how complex can something be?
The answer: arbitrarily complex! At least that’s true if we’re talking about the Kolmogorov complexity of a string of bits: namely, the length of the shortest computer program that prints it out. Lots of long strings of bits can’t be compressed. You can’t print most of them out using short programs, since there aren’t enough short programs to go around.
Of course, we need to fix a computer language ahead of time, so this is well-defined. And we need to make sure the programs are written in binary, so the comparison is fair.
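The counting argument behind "not enough short programs to go around" is easy to check numerically. Here is a quick sketch in Python (just arithmetic, no actual programs involved):

```python
# There are 2^n bit strings of length n, but only 2^0 + 2^1 + ... + 2^(n-1)
# = 2^n - 1 binary programs shorter than n bits. So for every n, at least one
# string of length n has Kolmogorov complexity >= n: no shorter program prints it.

def programs_shorter_than(n):
    """Count the distinct binary programs of length 0, 1, ..., n-1."""
    return sum(2**k for k in range(n))  # equals 2**n - 1

def strings_of_length(n):
    return 2**n

for n in (8, 64, 1000):
    deficit = strings_of_length(n) - programs_shorter_than(n)
    assert deficit == 1  # exactly one program short, for every n
```

The same count shows most strings are nearly incompressible: there are fewer than 2^(n−9) programs of length at most n − 10, so fewer than one string in 512 of length n can be compressed by even ten bits.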
So, things can be arbitrarily complex. But here’s a more interesting question: how complex can we prove something to be?
The answer is one of the most astounding facts I know. It’s called Chaitin’s incompleteness theorem. It says, very roughly:
There’s a number L such that we can’t prove the Kolmogorov complexity of any specific string of bits is bigger than L.
Make sure you understand this. For any number L, we can prove there are infinitely many bit strings with Kolmogorov complexity bigger than L. But we can’t point to any particular bit string and prove its Kolmogorov complexity is bigger than L!
Over on Google+, Allen Knutson wrote:
That’s an incredibly disturbing theorem, like driving to the edge of the universe and finding a wall.
I call L the complexity barrier. So one question is, how big is L? It’s hard, or perhaps even impossible, to find the smallest L that does the job. But we can certainly find numbers L that work. And they’re surprisingly small!
My friend Bruce estimates that the complexity barrier is a few kilobytes.
I’d like to see a program a few kilobytes long that produces a video showing a big bang, the formation of stars and galaxies, then planets, including one where life evolves, then intelligent life, then the development of computers… and finally someone writing the very same program.
I can’t prove it’s possible… but you can’t prove it isn’t!
Let’s see why.
For starters, we need to choose some axioms for our system of math, so we know what ‘provable’ means. We need a system that’s powerful enough to prove a bunch of basic facts about arithmetic, but simple enough that a computer program can check if a proof in this system is valid.
When you have a system of math like this, Gödel’s first incompleteness theorem kicks in: if the system is consistent, it can’t be complete. In other words, there are some questions it leaves unsettled. This is why we shouldn’t be utterly shocked that while a bunch of bit strings have complexity more than L, we can’t prove this.
Gödel’s second incompleteness theorem also kicks in: if the system can prove that it’s consistent, it’s not! (If it’s not consistent, it can prove anything, so you shouldn’t trust it.) So, there’s a sense in which we can never be completely sure that our system of math is consistent. But let’s assume it is.
Given this, Chaitin’s theorem says:
There exists a constant L such that no string of bits has Kolmogorov complexity that provably exceeds L.
How can we get a number L that does the job? Any number L with

U + log₂(L) + C < L

will do. Here:
• U is the length of a program where if you input a natural number N, it will search through all proofs in Peano arithmetic until it finds one that proves some bit string has Kolmogorov complexity more than N. If it finds one, then it outputs this bit string. If it never finds one, it grinds on endlessly. (Of course, if N ≥ L, it will never find one!)
• C is a small overhead cost: the length of the extra ‘glue’ to create a bigger program that takes the number L, written out in binary, and feeds it into the program described above.
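To make the shape of this searcher concrete, here is a hypothetical Python sketch. The proof checker and statement recognizer are stand-ins passed as parameters, not real implementations; the point is just the structure: enumerate candidate proofs forever, and output a string as soon as some proof certifies its complexity exceeds N.

```python
# Hypothetical sketch of the searcher whose length is U. `is_valid_proof` and
# `claims_complexity_exceeds` are stand-ins for a genuine Peano-arithmetic
# proof checker and statement recognizer -- writing those is the real work.

def all_bit_strings():
    """Enumerate all bit strings (candidate proofs) in order of length."""
    length = 0
    while True:
        for i in range(2**length):
            yield format(i, f'0{length}b') if length else ''
        length += 1

def search(n, is_valid_proof, claims_complexity_exceeds):
    """Grind through every candidate proof, forever if need be.

    Returns a bit string s as soon as some valid proof shows that the
    Kolmogorov complexity of s exceeds n; otherwise never returns.
    """
    for p in all_bit_strings():
        if is_valid_proof(p):
            s = claims_complexity_exceeds(p, n)
            if s is not None:
                return s
```

If n ≥ L, the loop in `search` simply never terminates, exactly as the description above says.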
The length of L written out in binary is about log₂(L). This bigger program thus has length about

U + log₂(L) + C

and for the proof of Chaitin’s incompleteness theorem to work, we need this to be smaller than L. Obviously we can accomplish this by making L big enough, since log₂(L) grows slower than L.
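With made-up numbers for U and C (pure assumptions, just to see the sizes involved), the smallest qualifying L can be found by direct search:

```python
import math

# Toy figures, not measurements: suppose the searcher program is
# U = 4000 bytes = 32000 bits long, and the glue costs C = 100 bits.
U = 32000
C = 100

# Find the smallest L with U + log2(L) + C < L by stepping upward.
L = U + C + 1
while U + math.log2(L) + C >= L:
    L += 1

# L ends up only slightly bigger than U + C, since log2(L) is
# tiny by comparison -- about 15 bits here.
print(L)
```

This is the sense in which L is "just a bit bigger than U": the log₂(L) correction is a rounding error next to the size of the proof-searching program itself.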
Given all the stuff I’ve told you, the proof of Chaitin’s theorem almost writes itself! You run this bigger program I just described. If there were a bit string whose Kolmogorov complexity is provably greater than L, this program would print one out. But that’s a contradiction, because this program has length less than L.
So, we just need to pick a computer language and a suitable system of math, and estimate U and, less importantly because it’s so much smaller, C. Then L will be just a bit bigger than U.
I picked the language C and Peano arithmetic and asked Bruce if he could guess, roughly, what answer we get for L. He replied:
I don’t think it can be done in C, since C semantics are not well-defined unless you specify a particular finite machine size. (Since C programs can do things like convert pointers to integers and back, tell you the size of any datatype, and convert data of any specified datatype to bytes and back.) On a finite machine of N bits, all programs either finish in time less than about 2^N or take forever.
But if you take “C without size-specific operations”, or a higher level language like Python, or for that matter a different sort of low-level language like a Turing machine, then that’s not an issue — you can define a precise semantics that allows it to run a program for an arbitrarily long time and allocate an arbitrary number of objects in memory which contain pointers to each other. (To stick with the spirit of the question, for whatever language you choose, you’d want to disallow use of any large external batch of information like a “standard library”, except for whatever is so basic that you think of it as part of the native language. This is not a serious handicap for this problem.)
The main things that the program ‘U’ (I’d rather call the program itself ‘U’ than call its length ‘U’) needs to do are:
• recognize a syntactically correct statement or proof;
• check the validity of a purported proof;
• recognize certain statements as saying or implying “The Kolmogorov complexity of s is more than N” for some s and N. (It’s not necessary to recognize all such statements, just at least one for each s and N; so it can just recognize a statement that consists of some template with specific values of s and N inserted into it at certain places.)
Assuming that U expresses the proofs it wants to check in a practical proof language (which will be more like what a practical theorem-prover like Coq uses than like what a traditional logician would recognize as “straight Peano arithmetic”, but which will not be excessively powerful in the spirit of this question), I’d estimate that the most complex part is checking proof validity, but that that can still be expressed in at most a few dozen syntactic rules, each expressible in a few lines of code. (The authors of a system like Coq, which includes code to actually do that, would know better, as long as they remember that the vast majority of their system’s actual code is not needed for this problem.)
This makes me think that even without trying to compact it much, in a reasonable language we could write U in a few hundred lines of code, or (after a bit of simple compression) a few thousand bytes. (And perhaps much less if we tried hard to compact the whole program in clever ways.)
So L will also be “a few thousand” (bytes or digits), or perhaps less, rather than some number you can never possibly count to.
Does anyone know if Chaitin or someone else actually went ahead and wrote a program that showed a specific value of L would work?