Probability Theory and the Undefinability of Truth

31 March, 2013

In 1936 Tarski proved a fundamental theorem of logic: the undefinability of truth. Roughly speaking, this says there’s no consistent way to extend arithmetic so that it talks about ‘truth’ for statements about arithmetic. Why not? Because if we could, we could cook up a statement that says “I am not true.” This would lead to a contradiction, the Liar Paradox: if this sentence is true then it’s not, and if it’s not then it is.

This is why the concept of ‘truth’ plays a limited role in most modern work on logic… surprising as that might seem to novices!

However, suppose we relax a bit and allow probability theory into our study of arithmetic. Could there be a consistent way to say, within arithmetic, that a statement about arithmetic has a certain probability of being true?

We can’t let ourselves say a statement has a 100% probability of being true, or a 0% probability of being true, or we’ll get in trouble with the undefinability of truth. But suppose we only let ourselves say that a statement has some probability greater than a and less than b, where 0 < a < b < 1. Is that okay?

Yes it is, according to this draft of a paper:

• Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff and Mihaly Barasz, Definability of “Truth” in Probabilistic Logic (early draft), 28 March 2013.

But there’s a catch, or two. First, there are many self-consistent ways to assess the probability of truth of arithmetic statements. This suggests that the probability is somewhat ‘subjective’. But that’s fine if you think probabilities are inherently subjective—for example, if you’re a subjective Bayesian.

A bit more problematic is this: their proof that there exists a self-consistent way to assess probabilities is not constructive. In other words, you can’t use it to actually get your hands on a consistent assessment.

Fans of game theory will be amused to hear why: the proof uses Kakutani’s fixed point theorem! This is the result that John Nash used to prove games have equilibrium solutions, where nobody can improve their expected payoff by changing their strategy. And this result is not constructive.

In game theory, we use Kakutani’s fixed point theorem by letting each player update their strategy, improving it based on everyone else’s, and showing this process has a fixed point. In probabilistic logic, the process is instead that the thinker reflects on what they know, and updates their assessment of probabilities.
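To get a feel for the fixed-point idea, here’s a toy sketch in Python (a hypothetical example of mine, not from the paper). In a simple Cournot duopoly the best-response map happens to be a contraction, so naive iteration actually finds the equilibrium; Kakutani’s theorem makes no such promise in general, which is why the paper’s existence proof gives no algorithm.

```python
# Toy illustration of an equilibrium as a fixed point of best-response
# updates.  This only converges because this particular update map is a
# contraction; Kakutani's theorem guarantees existence in far more
# general settings without providing any such iteration.
#
# Cournot duopoly: two firms choose quantities q1, q2; each firm's best
# response to the other's quantity is q_i = (A - q_j) / 2, where A is a
# demand parameter (a hypothetical choice, A = 12).

A = 12.0

def best_response(q_other):
    """Optimal quantity given the opponent's quantity."""
    return max(0.0, (A - q_other) / 2)

q1, q2 = 0.0, 0.0
for _ in range(100):
    # Both players simultaneously update to their best responses.
    q1, q2 = best_response(q2), best_response(q1)

# The iteration converges to the Nash equilibrium q1 = q2 = A/3.
print(q1, q2)  # → 4.0 4.0
```

At the fixed point, neither firm can improve its payoff by deviating, which is exactly the equilibrium property Nash extracted from Kakutani’s theorem.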

The statement

I have not yet carefully checked the proof of Barasz, Christiano, Herreshoff and Yudkowsky’s result. Some details have changed in the draft since I last checked, so it’s probably premature to become very nitpicky. But just to encourage technical discussions of this subject, let me try stating the result a bit more precisely. If you don’t know Tarski’s theorem, go here:

Tarski’s undefinability theorem, Wikipedia.

I’ll assume you know that and are ready for the new stuff!

The context of this work is first-order logic. So, consider any language L in first-order logic that lets us talk about natural numbers and also rational numbers. Let L' be the language L with an additional function symbol \mathbb{P} thrown in. We require that \mathbb{P}(n) be a rational number whenever n is a natural number. We want \mathbb{P}(n) to stand for the probability of the truth of the sentence whose Gödel number is n. This will give a system that can reflect on the probability that what it’s saying is true.
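To make the self-reference plumbing concrete, here’s a toy sketch in Python (the encoding and all names are hypothetical stand-ins of mine, not the paper’s): sentences are strings, ‘Gödel numbers’ are just positions in an enumeration, and the thinker’s probability function assigns a rational to each code—including codes of sentences that themselves talk about that function.

```python
from fractions import Fraction

# A tiny enumeration of sentences; the third one refers to the
# probability function's value on the code of the second one.
sentences = [
    "0 = 0",
    "0 = 1",
    "P(1) < 1/2",
]

def godel_number(s):
    """A stand-in for Goedel numbering: position in the enumeration."""
    return sentences.index(s)

# The thinker's rational-valued probability assignment on codes.
P = {0: Fraction(1), 1: Fraction(0), 2: Fraction(1)}

# Sentence 2 asserts that P's value on code 1 is below 1/2.  Since
# P[1] = 0 < 1/2 actually holds, assigning it probability 1 is
# consistent -- this is the kind of self-referential talk the
# reflection principle is meant to govern.
assert P[godel_number("0 = 1")] < Fraction(1, 2)
print(P[godel_number("P(1) < 1/2")])  # → 1
```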

So, suppose T is some theory in the language L'. How can we say that the probability function \mathbb{P} has ‘reasonable opinions’ about truth, assuming that the axioms of T are true?

The authors have a nice way of answering this. First they consider any function P assigning a probability to each sentence of L'. They say that P is coherent if there is a probability measure on the set of models of L' such that P(\phi) is the measure of the set of models in which \phi is satisfied. They show that P is coherent iff these three conditions hold:

1) P(\phi) = P(\phi \wedge \psi) + P(\phi \wedge \lnot \psi) for all sentences \phi, \psi.

2) P(\phi) = 1 for each tautology.

3) P(\phi) = 0 for each contradiction.
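Here’s a minimal sketch in Python of what coherence means, in a toy propositional setting standing in for the first-order one: with finitely many models (truth assignments for two atoms) and an arbitrary measure on them, defining P(\phi) as the measure of the satisfying models makes conditions 1)–3) hold automatically.

```python
from itertools import product

# A "model" here is a truth assignment to the two atoms a, b.
atoms = ['a', 'b']
models = list(product([False, True], repeat=len(atoms)))  # 4 models

# An arbitrary probability measure on the set of models
# (weights chosen to be exact binary fractions, summing to 1).
measure = dict(zip(models, [0.125, 0.125, 0.25, 0.5]))

def prob(sentence):
    """P(phi) = total measure of the models satisfying phi."""
    return sum(w for m, w in measure.items()
               if sentence(dict(zip(atoms, m))))

phi = lambda v: v['a']
psi = lambda v: v['b']

# Condition 1): P(phi) = P(phi & psi) + P(phi & not psi).
assert prob(phi) == (prob(lambda v: phi(v) and psi(v))
                     + prob(lambda v: phi(v) and not psi(v)))

# Condition 2): tautologies get probability 1.
assert prob(lambda v: phi(v) or not phi(v)) == 1.0

# Condition 3): contradictions get probability 0.
assert prob(lambda v: phi(v) and not phi(v)) == 0.0
```

The interesting direction of the authors’ result is the converse: any P satisfying 1)–3) arises from some measure on models in this way.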

(By the way, it seems to me that 1) and 2) imply P(\phi) + P(\lnot \phi) = 1 and thus 3). So either they’re giving a slightly redundant list of conditions because they feel in the mood for it, or they didn’t notice this list was redundant, or it’s not and I’m confused. It’s good to always say a list of conditions is redundant if you know it is. You may be trying to help your readers a bit, and it may seem obvious to you, but if you don’t come out and admit the redundancy, you’ll make some of your readers doubt their sanity.)
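For what it’s worth, here is one way that argument might go—under the extra assumption, which I haven’t checked carefully, that P assigns equal probabilities to propositionally equivalent sentences:

```latex
\begin{align*}
  &\text{Put } \varphi = \top \text{ (any fixed tautology) in 1):} \\
  &\qquad P(\top) \;=\; P(\top \wedge \psi) \;+\; P(\top \wedge \lnot\psi). \\
  &\text{By 2), } P(\top) = 1 \text{, and } \top \wedge \psi \equiv \psi \text{, so} \\
  &\qquad 1 \;=\; P(\psi) + P(\lnot\psi). \\
  &\text{If } \psi \text{ is a contradiction, then } \lnot\psi \text{ is a tautology,} \\
  &\text{so 2) gives } P(\lnot\psi) = 1 \text{ and hence } P(\psi) = 0 \text{, which is 3).}
\end{align*}
```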

(Also by the way, they don’t say how they’re making the set of all models into a measurable space, but I bet they’re using the σ-algebra where all subsets are measurable. I think there’s no problem with the fact that this set is very large: a proper class, I guess! If you believe in the axiom of universes, you can just restrict attention to ‘small’ models. And since an uncountable sum of positive numbers always diverges, your probability measure will be supported on a countable set of models, so the largeness of the set of models is largely irrelevant.)

So, let’s demand that P be coherent. And let’s demand that P(\phi) = 1 whenever the sentence \phi is one of the axioms of T.

At this point, we’ve got this thing P that assigns a probability to each sentence in our language. We’ve also got this thing \mathbb{P} in our language, such that \mathbb{P}(n) is trying to be the probability of the truth of the sentence whose Gödel number is n. But so far these two things aren’t connected.

To connect them, they demand a reflection principle: for any sentence \phi and any rational numbers 0 < a < b < 1,

a < P(\phi) < b \implies P(a < \mathbb{P}(\ulcorner \phi \urcorner) < b) = 1

Here \ulcorner \phi \urcorner is the Gödel number of the sentence \phi. So, this principle says that if a sentence has some approximate probability of being true, the thinker—as described by \mathbb{P}—knows this. They can’t know precise probabilities, or we’ll get in trouble. Also, making the reflection principle into an if and only if statement:

a < P(\phi) < b \iff P(a < \mathbb{P}(\ulcorner \phi \urcorner) < b) = 1

is too strong. It leads to a contradiction, very much as in Tarski’s original theorem on the undefinability of truth! However, in the latest draft of the paper, the authors seem to have added a weak version of the converse to their formulation of the reflection principle.
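To see roughly why, here is my paraphrase of the kind of diagonal argument involved, assuming the diagonal lemma applies to L' and that coherence transfers provable implications to probabilities:

```latex
\begin{align*}
  &\text{By the diagonal lemma, build a sentence } G \text{ with} \\
  &\qquad G \;\leftrightarrow\; \mathbb{P}(\ulcorner G \urcorner) < \tfrac{1}{2}. \\
  &\text{The open-interval reflection principle forces } P(G) = \tfrac{1}{2}\text{:} \\
  &\quad \text{if } P(G) < \tfrac{1}{2} \text{, pick } a < P(G) < b \le \tfrac{1}{2}
    \text{; then } P\big(a < \mathbb{P}(\ulcorner G \urcorner) < b\big) = 1, \\
  &\quad \text{and } a < \mathbb{P}(\ulcorner G \urcorner) < b \text{ implies }
    \mathbb{P}(\ulcorner G \urcorner) < \tfrac{1}{2} \text{, i.e. } G
    \text{, so } P(G) = 1 \text{: contradiction;} \\
  &\quad \text{if } P(G) > \tfrac{1}{2} \text{, a symmetric argument gives }
    P(G) = 0 \text{: contradiction.} \\
  &\text{But exact self-knowledge would be fatal: if }
    P\big(\mathbb{P}(\ulcorner G \urcorner) = \tfrac{1}{2}\big) = 1 \text{,} \\
  &\text{then since } \mathbb{P}(\ulcorner G \urcorner) = \tfrac{1}{2}
    \text{ implies } \lnot G \text{, we would get } P(G) = 0 \neq \tfrac{1}{2}.
\end{align*}
```

So the thinker can consistently locate P(G) in any open interval around 1/2, but can never pin it down exactly—which is just what the one-directional, open-interval form of the reflection principle allows.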

Anyway, the main theorem they’re claiming is this:

Theorem (Barasz, Christiano, Herreshoff and Yudkowsky). There exists a function P assigning a probability to each sentence of L', such that

1) P is coherent,

2) P(\phi) = 1 whenever the sentence \phi is one of the axioms of T,

and

3) the reflection principle holds.