You may know this institute under their previous name: the Singularity Institute. It seems to be the brainchild of Eliezer Yudkowsky, a well-known advocate of ‘friendly artificial intelligence’, whom I interviewed in week311, week312 and week313 of This Week’s Finds. He takes an approach to artificial intelligence that’s heavily influenced by mathematical logic, and I got invited to the workshop because I blogged about a paper he wrote with Mihaly Barasz, Paul Christiano and Marcello Herreshoff on probability theory and logic.
I only have the energy to lay the groundwork for a good explanation of what happened in the workshop. So, after you read my post, please read this:
• Benja Fallenstein, Results from MIRI’s December workshop, Less Wrong, 28 December 2013.
The workshop had two main themes, so let me tell you what they were.
Scientific induction in mathematics
The first theme is related to that paper I just mentioned. How should a rational agent assign probabilities to statements in mathematics? Of course an omniscient being could assign
• probability 1 to every mathematical statement that’s provable,
• probability 0 to every statement whose negation is provable,
• some probability strictly between 0 and 1 to every statement that is neither provable nor disprovable.
But a real-world rational agent will never have time to check all proofs, so there will always be lots of statements it’s not sure about. Actual mathematicians always have conjectures, like the Twin Prime Conjecture, that we consider plausible even though nobody has proved them. And whenever we do research, we’re constantly estimating how likely it is for statements to be true, and changing our estimates as new evidence comes in. In other words, we use scientific induction in mathematics.
How could we automate this? Most of us don’t consciously assign numerical probabilities to mathematical statements. But maybe an AI mathematician should. If so, what rules should it follow?
It’s natural to try a version of Solomonoff induction, where our probability estimate, before any evidence comes in, favors statements that are simple. However, this runs up against problems. If you’re interested in learning more about this, try:
• Jeremy Hahn, Scientific induction in probabilistic mathematics.
It’s a summary of ideas people came up with during the workshop. I would like to explain them sometime, but for now I should move on.
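To make the idea of a simplicity-weighted prior concrete, here is a minimal sketch in Python. The encoding (statement length in characters) and the example statements are my illustrative assumptions, not the proposal from Hahn’s write-up; real versions use the length of a program or formal description, in the spirit of Solomonoff induction.

```python
# A toy "simplicity prior" over mathematical statements:
# shorter statements get higher prior probability, weight 2^(-length).
# The length-in-characters encoding is an illustrative stand-in for
# a real description-length measure.

def simplicity_prior(statements):
    # Weight each statement by 2^(-length of its encoding) ...
    weights = {s: 2.0 ** -len(s) for s in statements}
    # ... then normalize so the weights sum to 1.
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

conjectures = [
    "P = NP",
    "there are infinitely many twin primes",
]
prior = simplicity_prior(conjectures)
```

Evidence (say, numerical checks of small cases) would then update these prior probabilities by ordinary conditioning; the hard part, discussed in the write-up, is doing this coherently for logically interrelated statements.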
The Löbian obstacle
The second main theme was the ‘Löbian obstacle’. Löb’s theorem is the flip side of Gödel’s first incompleteness theorem, less famous but just as shocking. It seems to put limitations on how much a perfectly rational being can trust itself.
Since it’s the day after Christmas, let’s ease our way into these deep waters with the Santa Claus paradox, also known as Curry’s paradox.
If you have a child who is worried that Santa Claus might not exist, you can reassure them using this sentence:

If this sentence is true, Santa Claus exists.

Call it P, for short.
Assume, for the sake of argument, that P is true. Then what it says is true: “If P is true, Santa Claus exists.” And we’re assuming P is true. So, Santa Claus exists.
So, we’ve proved that if P is true, Santa Claus exists.
But that’s just what P says!
So, P is true.
So, Santa Claus exists!
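The argument above can be checked mechanically. Here is a sketch in Lean 4: if we *assume* a proposition P that is literally equivalent to “P implies Santa Claus exists”, the Curry argument goes through and Lean accepts it. The hypothesis `h` packages the suspicious self-reference; the proof term is exactly the informal argument above.

```lean
-- Curry's paradox, abstractly: if some proposition P is equivalent to
-- "P → Q", then Q follows — for *any* Q, e.g. "Santa Claus exists".
-- The self-referential sentence is modeled by the hypothesis `h`.
theorem curry (P Q : Prop) (h : P ↔ (P → Q)) : Q :=
  -- Step 1: prove P, by assuming P and deriving Q (the "assume P is true" step).
  have hp : P := h.mpr (fun p => h.mp p p)
  -- Step 2: P says "P → Q", and we have P, so Q.
  h.mp hp hp
```

The moral is that no consistent logic can admit a sentence satisfying the hypothesis `h`: the blame falls on the self-referential sentence, not on the reasoning.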
There must be something wrong about this argument, even if Santa Claus does exist, because if it were valid you could use it to prove anything at all. The self-reference is obviously suspicious. The sentence in question is a variant of the Liar Paradox:

This sentence is false.

since we can rewrite the Liar Paradox as

If this sentence is true, then 0 = 1.

and then replace “0 = 1” by any false statement you like.
However, Gödel figured out a way to squeeze solid insights from these dubious self-referential sentences. He did this by creating a statement in the language of arithmetic, referring to nothing but numbers, which nonetheless manages to effectively say:

This sentence is unprovable.
If it were provable, you’d get a contradiction! So, either arithmetic is inconsistent or this sentence is unprovable. But if it’s unprovable, it’s true. So, there are true but unprovable statements in arithmetic… unless arithmetic is inconsistent! This discovery shook the world of mathematics.
Here I’m being quite sloppy, just to get the idea across.
For one thing, when I’m saying ‘provable’, I mean provable given some specific axioms for arithmetic, like the Peano axioms. If we change our axioms, different statements will be provable.
For another, the concept of ‘true’ statements in arithmetic is often shunned by logicians. That may sound shocking, but there are many reasons for this: for example, Tarski showed that the truth of statements about arithmetic is undefinable in arithmetic. ‘Provability’ is much easier to deal with.
So, a better way of thinking about Gödel’s result is that he constructed a statement that is neither provable nor disprovable from Peano’s axioms of arithmetic, unless those axioms are inconsistent (in which case we can prove everything, but it’s all worthless).
Furthermore, this result applies not just to Peano’s axioms but to any stronger set of axioms, as long as you can write a computer program to list those axioms.
In 1952, the logician Leon Henkin flipped Gödel’s idea around and asked about a sentence in the language of arithmetic that says:

This sentence is provable.
He asked: is this provable or not? The answer is much less obvious than for Gödel’s sentence. Play around with it and see what I mean.
But in 1954, Martin Hugo Löb showed that Henkin’s sentence is provable!
And Henkin noticed something amazing: Löb’s proof shows much more.
At this point it pays to become a bit more precise. Let us write □P to mean the statement P is provable from the Peano axioms of arithmetic. Gödel figured out how to encode statements in arithmetic as numbers, so let’s write #P for the Gödel number of any statement P. And Gödel figured out how to write a statement in arithmetic, say

Provable(n)

which says that the statement with Gödel number n is provable using the Peano axioms.
Using this terminology, what Henkin originally did was find a number n such that the sentence

Provable(n)

has Gödel number n. So, this sentence says:

I am provable.
What Löb did was show

PA ⊢ Provable(n)

In other words, he showed that Henkin’s sentence really is provable from the Peano axioms!
What Henkin then did is prove that for any sentence P in the language of arithmetic, if

PA ⊢ Provable(#P) → P

then

PA ⊢ P

In other words, suppose we can prove that the provability of P implies P. Then we can prove P!
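In fact, Löb’s proof is essentially the Santa Claus argument with ‘true’ replaced by ‘provable’. It rests on three standard facts about □, the Hilbert–Bernays–Löb derivability conditions; these are textbook material on provability, stated here for orientation rather than taken from the workshop write-ups.

```latex
% Derivability conditions for $\Box$ ("provable in PA"):
\begin{itemize}
  \item[(D1)] If $\mathrm{PA} \vdash P$, then $\mathrm{PA} \vdash \Box P$.
  \item[(D2)] $\mathrm{PA} \vdash \Box(P \to Q) \to (\Box P \to \Box Q)$.
  \item[(D3)] $\mathrm{PA} \vdash \Box P \to \Box\Box P$.
\end{itemize}
% Löb's theorem: if $\mathrm{PA} \vdash \Box P \to P$, then $\mathrm{PA} \vdash P$.
% Proof idea: take a fixed point $L$ with
%   $\mathrm{PA} \vdash L \leftrightarrow (\Box L \to P)$
% — the Santa Claus sentence with "true" replaced by "provable" —
% and run the Curry argument inside $\Box$, using (D1)–(D3).
```

Because □ only satisfies (D1)–(D3) and not full “provability implies truth”, the argument yields a theorem instead of a paradox.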
At first this merely sounds nightmarishly complicated. But if you think about it long enough, you’ll see it’s downright terrifying! For example, suppose P is some famous open question in arithmetic, like the Twin Prime Conjecture. You might hope to prove

**If the Twin Prime Conjecture is provable, then the Twin Prime Conjecture is true.**

Indeed, that seems like a perfectly reasonable thing to want. But it turns out that proving this is as hard as proving the Twin Prime Conjecture itself! Why? Because if we can prove the boldface sentence above, Löb and Henkin’s work instantly gives us a proof of the Twin Prime Conjecture!
What does all this have to do with artificial intelligence?
Well, what I just said is true not only for Peano arithmetic, but for any set of axioms containing Peano arithmetic that a computer program can list. Suppose your highly logical AI mathematician has some such set of axioms, say A. You might want it to trust itself. In other words, you might want

A ⊢ Provable_A(#P) → P

for every sentence P. This says, roughly, that whatever the AI can prove is actually true.
But then Löb’s theorem would kick in and give

A ⊢ P

for every sentence P. And this would be disastrous: our AI would be inconsistent, because it could prove everything!
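The collision can be displayed in a few lines of Lean 4. This is an abstract sketch, not a faithful formalization: `Box` is just an arbitrary predicate on propositions standing in for “provable in the AI’s axioms”, and the two hypotheses package Löb’s theorem and the self-trust schema as object-level rules.

```lean
-- Sketch of the Löbian obstacle. `Box P` stands in for "the AI's axioms
-- prove P"; both hypotheses are assumptions modeling meta-theorems, not
-- a real provability predicate.
theorem lob_explosion
    (Box : Prop → Prop)
    (loeb : ∀ P : Prop, (Box P → P) → P)   -- Löb's rule, internalized
    (trust : ∀ P : Prop, Box P → P)        -- self-trust: "what I prove is true"
    (P : Prop) : P :=
  -- Self-trust hands Löb's rule exactly the premise it needs, so every
  -- proposition P follows — including False.
  loeb P (trust P)
```

Since `P` here ranges over all propositions, including `False`, a system with both rules proves everything; any consistent design has to give up or weaken one of them.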
This is just the beginning of the problems. It gets more involved when we consider AIs that spawn new AIs and want to trust them. For more, see:
• Eliezer Yudkowsky and Marcello Herreshoff, Tiling agents for self-modifying AI, and the Löbian obstacle.
At the workshop, various people made progress on this issue, which is recorded in these summaries:
• Eliezer Yudkowsky, The procrastination paradox.
Abstract. A theorem by Marcello Herreshoff, Benja Fallenstein, and Stuart Armstrong shows that if there exists an infinite sequence of theories T₀, T₁, T₂, … extending Peano arithmetic, where each Tₙ proves the soundness of Tₙ₊₁, then all the Tₙ must have only nonstandard models. We call this the Procrastination Theorem for reasons which will become apparent.
• Benja Fallenstein, An infinitely descending sequence of sound theories each proving the next consistent.
Here Fallenstein constructs a different sequence of theories Tₙ extending Peano arithmetic such that each Tₙ proves the consistency of Tₙ₊₁, and all the theories are sound for Π₁ sentences—that is, sentences with only universal quantifiers in front of a quantifier-free part.
The following summaries would take more work to explain:
• Nate Soares, Fallenstein’s monster.
• Nisan Stiennon, Recursively-defined logical theories are well-defined.
• Benja Fallenstein, The 5-and-10 problem and the tiling agents formalism.
• Benja Fallenstein, Decreasing mathematical strength in one formalization of parametric polymorphism.
Again: before reading any of these summaries, I urge you to read Benja Fallenstein’s post, which will help you understand them!