Probability Theory and the Undefinability of Truth

31 March, 2013

In 1936 Tarski proved a fundamental theorem of logic: the undefinability of truth. Roughly speaking, this says there’s no consistent way to extend arithmetic so that it talks about ‘truth’ for statements about arithmetic. Why not? Because if we could, we could cook up a statement that says “I am not true.” This would lead to a contradiction, the Liar Paradox: if this sentence is true then it’s not, and if it’s not then it is.

This is why the concept of ‘truth’ plays a limited role in most modern work on logic… surprising as that might seem to novices!

However, suppose we relax a bit and allow probability theory into our study of arithmetic. Could there be a consistent way to say, within arithmetic, that a statement about arithmetic has a certain probability of being true?

We can’t let ourselves say a statement has a 100% probability of being true, or a 0% probability of being true, or we’ll get in trouble with the undefinability of truth. But suppose we only let ourselves say that a statement has some probability greater than a and less than b, where 0 < a < b < 1. Is that okay?

Yes it is, according to this draft of a paper:

• Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff and Mihaly Barasz, Definability of “Truth” in Probabilistic Logic (early draft), 28 March 2013.

But there’s a catch, or two. First, there are many self-consistent ways to assess the probability of truth of arithmetic statements. This suggests that the probability is somewhat ‘subjective’. But that’s fine if you think probabilities are inherently subjective—for example, if you’re a subjective Bayesian.

A bit more problematic is this: their proof that there exists a self-consistent way to assess probabilities is not constructive. In other words, you can’t use it to actually get your hands on a consistent assessment.

Fans of game theory will be amused to hear why: the proof uses Kakutani’s fixed point theorem! This is the result that John Nash used to prove games have equilibrium solutions, where nobody can improve their expected payoff by changing their strategy. And this result is not constructive.

In game theory, we use Kakutani’s fixed point theorem by letting each player update their strategy, improving it based on everyone else’s, and showing this process has a fixed point. In probabilistic logic, the process is instead that the thinker reflects on what they know, and updates their assessment of probabilities.
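Since I can’t resist a toy example: here’s a sketch (entirely mine, not from the paper) of fictitious play in matching pennies. Strictly speaking, a 2×2 zero-sum game only needs the minimax theorem, not Kakutani, but it shows the flavor of ‘everyone updates against everyone else, and we look for a fixed point’:

```python
# A toy illustration (mine, not from the paper): fictitious play in
# matching pennies.  Each round, each player best-responds to the
# opponent's empirical frequencies so far.  By Robinson's theorem the
# empirical frequencies converge to the mixed Nash equilibrium (1/2, 1/2),
# the kind of fixed point Kakutani's theorem guarantees in general.
counts1 = [1, 0]   # player 1's past plays: [heads, tails]
counts2 = [0, 1]   # player 2's past plays: [heads, tails]

for _ in range(100_000):
    # Player 1 wants to match, so copies player 2's more frequent action;
    # player 2 wants to mismatch, so plays the opposite of player 1's.
    a1 = 0 if counts2[0] >= counts2[1] else 1
    a2 = 1 if counts1[0] >= counts1[1] else 0
    counts1[a1] += 1
    counts2[a2] += 1

freq1 = counts1[0] / sum(counts1)
freq2 = counts2[0] / sum(counts2)
print(freq1, freq2)   # both should be close to 0.5
```

Note the limit here is the fixed point itself; what’s non-constructive in general is Kakutani’s guarantee that a fixed point exists, not any particular way of searching for one.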

The statement

I have not yet carefully checked the proof of Barasz, Christiano, Herreshoff and Yudkowsky’s result. Some details have changed in the draft since I last checked, so it’s probably premature to become very nitpicky. But just to encourage technical discussions of this subject, let me try stating the result a bit more precisely. If you don’t know Tarski’s theorem, go here:

Tarski’s undefinability theorem, Wikipedia.

I’ll assume you know that and are ready for the new stuff!

The context of this work is first-order logic. So, consider any language L in first-order logic that lets us talk about natural numbers and also rational numbers. Let L' be the language L with an additional function symbol \mathbb{P} thrown in. We require that \mathbb{P}(n) be a rational number whenever n is a natural number. We want \mathbb{P}(n) to stand for the probability of the truth of the sentence whose Gödel number is n. This gives a system that can reflect on the probability that what it’s saying is true.
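To make the bookkeeping concrete, here’s a toy sketch (mine, not the paper’s, with made-up sentences): number a few sentences by listing them, and let a lookup table play the role that the function symbol \mathbb{P} plays inside the language.

```python
from fractions import Fraction

# Toy sketch (mine, not the paper's): Goedel-number a handful of sentences
# by just listing them, and let `prob` play the role of the symbol P,
# sending the number of a sentence to a rational "probability of truth".
# In the real setup P is a symbol *inside* the language L'; here it's
# just an external table, which is precisely what the reflection
# principle later has to reconnect.
sentences = ["0 = 0", "0 = 1", "forall n: n + 0 = n"]
godel = {s: n for n, s in enumerate(sentences)}   # sentence -> its number

prob = {godel["0 = 0"]: Fraction(1),
        godel["0 = 1"]: Fraction(0),
        godel["forall n: n + 0 = n"]: Fraction(99, 100)}

print(prob[godel["forall n: n + 0 = n"]])   # 99/100
```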

So, suppose T is some theory in the language L'. How can we say that the probability function \mathbb{P} has ‘reasonable opinions’ about truth, assuming that the axioms of T are true?

The authors have a nice way of answering this. First they consider any function P assigning a probability to each sentence of L'. They say that P is coherent if there is a probability measure on the set of models of L' such that P(\phi) is the measure of the set of models in which \phi is satisfied. They show that P is coherent iff these three conditions hold:

1) P(\phi) = P(\phi \wedge \psi) + P(\phi \wedge \lnot \psi) for all sentences \phi, \psi.

2) P(\phi) = 1 for each tautology.

3) P(\phi) = 0 for each contradiction.
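Here’s a miniature version of coherence (my own toy, not the paper’s): take ‘models’ to be truth assignments to two propositional atoms, put a probability measure on the four models, define P(\phi) as the measure of the models where \phi holds, and check that conditions 1)–3) come out true.

```python
from itertools import product

# A toy version (mine, not the paper's) of coherence: a "model" is a
# truth assignment to two atoms A and B, `measure` is a probability
# measure on the four models, and P(phi) is the total measure of the
# models in which phi is satisfied.
measure = {m: p for m, p in zip(product([False, True], repeat=2),
                                [0.1, 0.4, 0.2, 0.3])}

def P(phi):
    """Probability of phi: measure of the set of models satisfying it."""
    return sum(p for (a, b), p in measure.items() if phi(a, b))

phi = lambda a, b: a          # the sentence "A"
psi = lambda a, b: b          # the sentence "B"

# 1) P(phi) = P(phi & psi) + P(phi & ~psi)
lhs = P(phi)
rhs = P(lambda a, b: a and b) + P(lambda a, b: a and not b)
assert abs(lhs - rhs) < 1e-9

# 2) a tautology, e.g. "A or not A", gets probability 1
assert abs(P(lambda a, b: a or not a) - 1) < 1e-9

# 3) a contradiction, e.g. "A and not A", gets probability 0
assert P(lambda a, b: a and not a) == 0
```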

(By the way, it seems to me that 1) and 2) imply P(\phi) + P(\lnot \phi) = 1 and thus 3). So either they’re giving a slightly redundant list of conditions because they feel in the mood for it, or they didn’t notice this list was redundant, or it’s not redundant and I’m confused. It’s good to always say a list of conditions is redundant if you know it is. You may be trying to help your readers a bit, and it may seem obvious to you, but if you don’t come out and admit the redundancy, you’ll make some of your readers doubt their sanity.)
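For what it’s worth, here’s the little computation I have in mind. It uses the extra fact that P doesn’t distinguish logically equivalent sentences—automatic for a coherent P, since equivalent sentences hold in the same models, but perhaps not derivable from 1) and 2) alone, which could be exactly where I’m confused. Let \chi be any contradiction and \tau any tautology, so P(\tau) = 1 by 2). Then 1) with \phi = \tau and \psi = \chi gives

1 = P(\tau) = P(\tau \wedge \chi) + P(\tau \wedge \lnot \chi) = P(\chi) + P(\lnot \chi)

using \tau \wedge \chi \equiv \chi and \tau \wedge \lnot \chi \equiv \lnot \chi. Since \lnot \chi is a tautology, 2) gives P(\lnot \chi) = 1, and thus P(\chi) = 0, which is 3).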

(Also by the way, they don’t say how they’re making the set of all models into a measurable space. But I bet they’re using the σ-algebra where all subsets are measurable, and I think there’s no problem with the fact that this set is very large: a proper class, I guess! If you believe in the axiom of universes, you can just restrict attention to ‘small’ models… and your probability measure will be supported on a countable set of models, since an uncountable sum of positive numbers always diverges, so the largeness of the set of these models is largely irrelevant.)
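The countable-support claim is the usual pigeonhole argument, which might be worth spelling out. If \mu(\{x\}) > 0 for every x in some set S of models, then

S = \bigcup_{n=1}^{\infty} \{ x : \mu(\{x\}) > 1/n \}

so if S were uncountable, at least one of these pieces would be infinite, and the total measure would exceed 1. Hence a probability measure puts positive mass on at most countably many models.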

So, let’s demand that P be coherent. And let’s demand that P(\phi) = 1 whenever the sentence \phi is one of the axioms of T.

At this point, we’ve got this thing P that assigns a probability to each sentence in our language. We’ve also got this thing \mathbb{P} in our language, such that \mathbb{P}(n) is trying to be the probability of the truth of the sentence whose Gödel number is n. But so far these two things aren’t connected.

To connect them, they demand a reflection principle: for any sentence \phi and any rational numbers 0 < a < b < 1,

a < P(\phi) < b \implies P(a < \mathbb{P}(\ulcorner \phi \urcorner) < b) = 1

Here \ulcorner \phi \urcorner is the Gödel number of the sentence \phi. So, this principle says that if a sentence has some approximate probability of being true, the thinker—as described by \mathbb{P}—knows this. They can’t know precise probabilities, or we’ll get in trouble. Also, making the reflection principle into an if and only if statement:

a < P(\phi) < b \iff P(a < \mathbb{P}(\ulcorner \phi \urcorner) < b) = 1

is too strong. It leads to a contradiction, very much as in Tarski’s original theorem on the undefinability of truth! However, in the latest draft of the paper, the authors seem to have added a weak version of the converse to their formulation of the reflection principle.
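To get a feel for how the open intervals dodge the Liar, here’s my back-of-envelope version of the natural first test case—any slips here are mine, not the authors’. By diagonalization there’s a sentence G with

T \vdash G \iff \mathbb{P}(\ulcorner G \urcorner) < 1/2

saying, in effect, “my probability of truth is less than 1/2”. Note that a coherent P giving every axiom of T probability 1 satisfies P(\psi) \le P(\phi) whenever T proves \psi \implies \phi, since its measure is then supported on models of T. Now suppose 0 < P(G) < 1/2, and pick rationals a, b with 0 < a < P(G) < b < 1/2. The reflection principle gives

P(a < \mathbb{P}(\ulcorner G \urcorner) < b) = 1

but T proves a < \mathbb{P}(\ulcorner G \urcorner) < b \implies \mathbb{P}(\ulcorner G \urcorner) < 1/2 \implies G, so P(G) \ge 1: a contradiction. Symmetrically, 1/2 < P(G) < 1 would force P(G) = 0. So reflection squeezes P(G) into the set \{0, 1/2, 1\}, and 1/2 is the natural fixed point: there the principle only asserts P(a < \mathbb{P}(\ulcorner G \urcorner) < b) = 1 for intervals with a < 1/2 < b, so the thinker never commits to which side of 1/2 the probability lies on. And that commitment is exactly what the Liar needs.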

Anyway, the main theorem they’re claiming is this:

Theorem (Barasz, Christiano, Herreshoff and Yudkowsky). There exists a function P assigning a probability to each sentence of L', such that

1) P is coherent,

2) P(\phi) = 1 whenever the sentence \phi is one of the axioms of T,

3) the reflection principle holds.

Mathematics 1001

6 December, 2011

Just in time for your holiday gift-giving, here’s my suggestion:

• Richard Elwes, Mathematics 1001, Quercus Books, 2010.

The idea: 1001 bite-sized, juicy explanations of math topics ranging from the elementary:

• how to do long division
• why multiplying two negative numbers gives a positive one
• why 0.9999… equals 1

to the slightly less elementary:

• what’s the equation of a straight line?
• what are the basic identities that trig functions obey?
• how do you prove the product rule in calculus?

to the fun and easy:

• What’s Goldbach’s conjecture and how much evidence do we have for it?
• What’s the golden ratio?
• What’s a Koch snowflake?

to the fun and harder:

• What’s the abc conjecture?
• What’s a Julia set?
• What’s Chaitin’s number Ω?
• What’s Tarski’s geometric decidability theorem?

to the stuff all mathematicians need to know, but ordinary folks don’t:

• What’s the Cauchy-Schwarz inequality?
• What’s a Fourier series?
• What are grad, div and curl?
• What’s the classification of closed surfaces?
• What’s a permutation group?

to the stuff all mathematicians want to understand, but find intimidating:

• What is a scheme?
• What’s the Langlands program?
• What’s forcing?
• What’s a derived category?

The more advanced topics are covered in a sketchy way, so you will not walk out knowing complete answers to all these questions. And that’s just as well, because otherwise the book would be a lot longer than 409 pages, and nobody would be able to read it all!

What you will get is some rough sense of what all these things are… and you’ll have a lot of fun in the process. You can start reading the book anywhere: it’s not a treatise, it’s a bunch of tasty appetizers. So, you can set it by your bed or somewhere in the living room, and dip into it whenever you have some spare time.

I really, really wish I’d had a book like this when I was younger. Math is a huge enchanted kingdom with many highways and byways, dark forests and dragon-infested mountain ranges. It took me decades to wander across it, extricate myself from various tar pits, and get some sense of the overall lay of the land. This book would have sped that process immensely, in a very pleasant way.

So if you’d like to know more about math, get this book! And if you know a kid, or friend, who is interested in math but not yet an expert, give them this book!

As a professional showoff and know-it-all, I was chagrined (though also secretly pleased) that there are some things in this book I did not know about. They’re mainly open problems in number theory. I guess this subject holds a lot of the most easily stated and hard-to-prove conjectures, so Elwes included a bunch, including these I’d never heard of:

• Legendre’s conjecture
• de Polignac’s conjecture
• Hypothesis H
• Andrica’s conjecture
• Beal’s conjecture

By the way, if you saw the recent post Richard Elwes and I wrote on Babylonian mathematics, you may think it’s some sort of conflict of interest for me to be touting his book now. However, that would be putting Descartes before the horse! In fact what happened is this. He sent me a copy of his book, I liked it, I wanted to post an article about it, I saw an interesting post of his about a Babylonian clay tablet on Google+, I decided to expand it with him here on this blog, and finally I’m talking about his book!