Tarski’s definition of truth is based on a metalanguage. Formally, it means truth defined relative to a meta-theory (for PA this is usually set theory). So something is “true” (in PA) if it is provable in set theory.

“True” may mean true in every model of PA (so a lot of sentences are “non-true”, which means something completely different from “not true” or false). Among these non-true sentences are some that are provable in set theory but not in PA. This is far from intuition…

Or “true” may mean true in the informal language we use in everyday life, if one takes the philosophical position that natural language is the ultimate level of the language hierarchy. In that case we have no tools to check or decide whether something really is true, other than by finding examples or counterexamples for various statements. Of course, in this case we are far from deductive reasoning.

There are other theories of truth, not necessarily compatible with, and at least different from, Tarski’s theory of truth. In fact there are a lot of them. They were built because Tarski’s approach, while useful for ordinary mathematical practice, is far from satisfying in various situations (see *** below). I cannot say what their relation to PA truth is, but probably there is one.

*** First of all, Tarski’s approach eliminates a lot of circular (self-referencing) sentences. This is the price of eliminating some known paradoxes, but it costs a lot: we throw away a lot of perfectly good statements. Take for example the sentence “This sentence is true.” It does no harm, but in Tarski’s approach it is not a valid logical sentence!

Another issue is related to the conception of a language hierarchy: the Tarski language–metalanguage hierarchy has some problems. Let N be the level of (meta)language. That is: the formal language of a theory (say PA) is on level N=0, the metalanguage of this theory (say set theory) is on level N=1, the meta-metalanguage is on level N=2, and so on. Now consider the sentence: “For every level N of (meta)language, this sentence is not provable.” The question is: what is the level of (meta)language we use here? It looks like either we are creating another hierarchy, or this sentence has no logical meaning. But if so, what is the hierarchy level of Tarski’s theory itself?

First question: I believe that logicians are empirically discovering an ordering of interesting additional axioms for arithmetic, where the “better” axiom proves all the arithmetic statements that the “worse” one does. The ordering of these axioms, called “proof-theoretic strength”, can be measured using countable ordinals. For a good introduction to this idea, see:

• Wikipedia, Ordinal analysis.

Over in set theory, a vaguely analogous trend seems to be emerging, where large cardinal axioms—axioms asserting the existence of various kinds of large cardinals—let us prove new things about set theory and analysis. These axioms seem to be approximately linearly ordered, but this is more mysterious to me, since while cardinals are well-ordered, I don’t see how that puts a linear ordering on *large cardinal axioms*.

Now let’s assume that this trend will at some point actually be made into a theorem. (In particular, each of the above words in double quotes, except for “real”, will now have a precise, settled meaning.) My first question is, would this suffice to determine the “real” model up to essential equivalence? By which I mean, is there some understood way to “take the (co?)limit” of this ordered class of equivalence classes of models, or would this be a whole other new problem in need of a separate solution?

I believe this is a whole other new problem in need of a separate solution. I believe people aren’t ready for this problem yet. (I’d love to be wrong—I’ve wondered about this.)

Second question: it seems that the continuum falls outside the general trend: neither the continuum hypothesis nor its negation has greater arithmetic proof-theoretic strength. A closely related fact is that neither the continuum hypothesis nor its negation implies, or is implied by, known large cardinal axioms. Joel David Hamkins has some papers on this. This seems to be a less technical one:

• Joel David Hamkins, Is the dream solution to the continuum hypothesis attainable?

If I correctly understood the gist of this 25-point summary, for each “intensely set theoretic” statement which is undecidable in ZFC, it has always happened so far that, up to some notion of “essential equivalence”, there are just two “natural” choices of candidate for the phantom “real” model: one in which the statement is true, and one in which it is false. And in general, the experts don’t agree on whether the statement should be held to be true or false.

On the other hand, it’s always turned out to be the case, so far, that one of the models is interpretable in the other or vice versa, and hence that one of the two models proves more arithmetic statements than the other. (I’m not sure I got the relevant points (18)–(21) right, though, so forgive me if the questions below don’t make sense.)

So of course this pattern suggests that, once we’ve quotiented out essential equivalence, all such natural set-theoretic models will form a total ordering, and that by following this ordering we will be pointed in the direction of the Holy Grail, the “real” model. I suppose the experts all agree that we want to go towards greater arithmetic proof strength, right? At least for the purpose of deciding which set theory is the most natural and desirable as a foundation.

Now let’s assume that this trend will at some point actually be made into a theorem. (In particular, each of the above words in double quotes, except for “real”, will now have a precise, settled meaning.) My first question is, would this suffice to determine the “real” model up to essential equivalence? By which I mean, is there some understood way to “take the (co?)limit” of this ordered class of equivalence classes of models, or would this be a whole other new problem in need of a separate solution?

My second question is: according to this arithmetic proof-strength criterion, how should we settle CH? By choosing Gödel’s model, or Cohen’s?

If my questions reveal some misunderstanding, I would greatly appreciate being corrected!

Sam, thanks for this important insight.

As it happens, this notion is useful in NSA (Nelson’s IST) as follows:

Call a total Turing machine “standard total” if, for standard input, it halts after a standard number of steps with standard output. Call it “standard partial” otherwise.
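The predicate “standard” comes from IST and is not something a program can test, so the definition above is not executable as stated. What can be sketched is the classical bookkeeping it relies on: running a machine on an input while counting its steps. Below is a minimal, hypothetical Turing-machine simulator in Python; the machine `succ` (a unary successor) and all names are invented for illustration and are not from the comment:

```python
def run_tm(transitions, tape_str, blank="_", start="q0", halt="qH", max_steps=10_000):
    """Run a Turing machine and return (output, number of steps taken).

    transitions maps (state, symbol) -> (symbol_to_write, move 'L'/'R', next_state).
    The explicit step counter is the classical stand-in for the step count
    that the nonstandard definition asks to be "standard".
    """
    tape = dict(enumerate(tape_str))      # sparse tape: position -> symbol
    head, state, steps = 0, start, 0
    while state != halt:
        if steps >= max_steps:
            raise RuntimeError("step bound exceeded; machine may not be total")
        sym = tape.get(head, blank)
        write, move, state = transitions[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    lo, hi = min(tape), max(tape)
    out = "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)
    return out, steps

# A total machine: unary successor (append one more '1', then halt).
succ = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "qH"),
}

print(run_tm(succ, "111"))  # ('1111', 4): an input of n ones takes n+1 steps
```

The step count of `succ` grows with the input, so on a nonstandard (unlimited) input it would take nonstandard many steps; the “standard total” condition asks for more than totality, namely that standard inputs never trigger such behavior.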

One can then prove theorems about “standard partial” objects, but this is more than a curiosity: from such theorems, one can extract relative computability results from fragments of IST using the term-extraction algorithm in [1].

Reverse mathematics results concerning the Gandy–Hyland functional (not involving NSA) have been obtained in this way in [2].

[1] van den Berg, Briseid, Safarik, “A functional interpretation for nonstandard arithmetic”, Annals of Pure and Applied Logic, 2012.

[2] Sanders, “The Gandy–Hyland functional and NSA”, arXiv preprint.

“…ideally you’d just be able to see that things are true.”

Ideally Peano Arithmetic would be complete and all theorems would have proofs in one page.

Hard to see how this kind of wishful thinking helps understand anything.

Unless you think this is practicable, in which case, let me know when you can “just see” any of several intrinsically quantitative statements, like the prime number theorem, or that the Margulis constructions really are expanders, or that the largest clique in a random graph on n vertices almost surely has size (2+o(1))·log_2(n).

It is true that the standard “rulebook” for mathematical rigor – standard f.o.m. – is almost always only in the background. This is because anybody capable of being a professional (pure) mathematician must absorb, as second nature, what must go into a rigorous proof in order to function reasonably. For this, they do not need to consult the “rulebook”. The “rulebook” has been implicitly absorbed from their teachers and the general mathematical atmosphere, since the “rulebook” is so very intuitive and simple.

Nevertheless, the great importance of having this “rulebook” in the archives cannot be overestimated. There were periods of great confusion, reaching a head around 1800, before the great pieces of the “rulebook” began to emerge and work together properly during the 1800s and early 1900s.

Of course, the great David Hilbert needed a “rulebook” even to formulate his doomed program, (more or less) destroyed by Gödel. And the great further advances in incompleteness depend on the “rulebook”. If you want to prove that something or other cannot be proved by rigorous mathematical argument, you are going to have to have a “rulebook”, even if you and your friends are not personally consulting it.

Of course, the great great great foundational issue is whether the rulebook allows us to do the interesting math that we are interested in. Experience shows that incompleteness is most disturbing to people when it involves mathematical statements that people are strongly compelled to consider “matters of fact”. E.g., you can argue that the continuum hypothesis is not, or at least is not clearly, a matter of fact: what sets of reals are is maybe subject to interpretation – i.e., relative to what means you have for constructing arbitrary sets of reals. Evidence for this point of view might be that if you only consider Borel measurable sets of reals, then the continuum hypothesis is a theorem, with no foundational difficulties.

However, if the incompleteness involves only, say, finite sets of rational numbers – which ultimately means, through encodings, only natural numbers – then it gets harder to question the matter-of-factness, and incompleteness becomes more serious.

Even clearer still would be, say, sets of 1000-tuples of rational numbers of height at most 2^1000. There are only finitely many of these, and it gets hard to deny their matter-of-factness.
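One can get a feel for this finiteness by counting the objects at toy scale. The sketch below is illustrative only: it assumes the “height” of a reduced fraction p/q is max(|p|, |q|) (a common convention, though the comment does not fix one), and shrinks the parameters from 1000-tuples of height at most 2^1000 to 2-tuples of height at most 2:

```python
from fractions import Fraction

def rationals_of_height_at_most(h):
    """All rationals p/q in lowest terms with max(|p|, |q|) <= h (so q >= 1)."""
    seen = set()
    for q in range(1, h + 1):
        for p in range(-h, h + 1):
            f = Fraction(p, q)  # Fraction normalizes to lowest terms
            if max(abs(f.numerator), abs(f.denominator)) <= h:
                seen.add(f)
    return seen

rats = rationals_of_height_at_most(2)   # {-2, -1, -1/2, 0, 1/2, 1, 2}
num_tuples = len(rats) ** 2             # 2-tuples over these rationals
num_sets = 2 ** num_tuples              # sets of such tuples
print(len(rats), num_tuples)            # 7 49
```

With the actual parameters (1000-tuples, height up to 2^1000) the same count is astronomically large but still finite, which is why enumeration settles any statement about such objects in principle while being hopeless in practice.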

Now an interesting mathematical statement – or any statement – in this realm is definitely going to be provable or refutable in ZFC by enumeration of all of the possibilities.

However, such a proof by enumeration is not going to literally be given in ZFC with any reasonable size – reasonable size meaning, say, the size of the proof of FLT.

So such a result would cast serious doubt on the adequacy of ZFC at a new matter-of-factness level.

So in conclusion we have a very nice result, but one which is much more important for arithmetic than for computability.
