Meta-Rationality

On his blog, Eli Dourado writes something that’s very relevant to the global warming debate, and indeed most other debates.

He’s talking about Paul Krugman, but I think with small modifications we could substitute the name of almost any intelligent pundit. I don’t care about Krugman here, I care about the general issue:

Nobel laureate, Princeton economics professor, and New York Times columnist Paul Krugman is a brilliant man. I am not so brilliant. So when Krugman makes strident claims about macroeconomics, a complex subject on which he has significantly more expertise than I do, should I just accept them? How should we evaluate the claims of people much smarter than ourselves?

A starting point for thinking about this question is the work of another Nobelist, Robert Aumann. In 1976, Aumann showed that under certain strong assumptions, disagreement on questions of fact is irrational. Suppose that Krugman and I have read all the same papers about macroeconomics, and we have access to all the same macroeconomic data. Suppose further that we agree that Krugman is smarter than I am. All it should take, according to Aumann, for our beliefs to converge is for us to exchange our views. If we have common “priors” and we are mutually aware of each other’s views, then if we do not agree ex post, at least one of us is being irrational.

It seems natural to conclude, given these facts, that if Krugman and I disagree, the fault lies with me. After all, he is much smarter than I am, so shouldn’t I converge much more to his view than he does to mine?

Not necessarily. One problem is that if I change my belief to match Krugman’s, I would still disagree with a lot of really smart people, including many people as smart as or possibly even smarter than Krugman. These people have read the same macroeconomics literature that Krugman and I have, and they have access to the same data. So the fact that they all disagree with each other on some margin suggests that very few of them behave according to the theory of disagreement. There must be some systematic problem with the beliefs of macroeconomists.

In their paper on disagreement, Tyler Cowen and Robin Hanson grapple with the problem of self-deception. Self-favoring priors, they note, can help to serve other functions besides arriving at the truth. People who “irrationally” believe in themselves are often more successful than those who do not. Because pursuit of the truth is often irrelevant in evolutionary competition, humans have an evolved tendency to hold self-favoring priors and self-deceive about the existence of these priors in ourselves, even though we frequently observe them in others.

Self-deception is in some ways a more serious problem than mere lack of intelligence. It is embarrassing to be caught in a logical contradiction, as a stupid person might be, because it is often impossible to deny. But when accused of disagreeing due to a self-favoring prior, such as having an inflated opinion of one’s own judgment, people can and do simply deny the accusation.

How can we best cope with the problem of self-deception? Cowen and Hanson argue that we should be on the lookout for people who are “meta-rational,” honest truth-seekers who choose opinions as if they understand the problem of disagreement and self-deception. According to the theory of disagreement, meta-rational people will not have disagreements among themselves caused by faith in their own superior knowledge or reasoning ability. The fact that disagreement remains widespread suggests that most people are not meta-rational, or—what seems less likely—that meta-rational people cannot distinguish one another.

We can try to identify meta-rational people through their cognitive and conversational styles. Someone who is really seeking the truth should be eager to collect new information through listening rather than speaking, construe opposing perspectives in their most favorable light, and offer information of which the other parties are not aware, instead of simply repeating arguments the other side has already heard.

All this seems obvious to me, but it’s discussed much too rarely. Maybe we can figure out ways to encourage this virtue that Cowen and Hanson call ‘meta-rationality’? There are already too many mechanisms that reward people for aggressively arguing for fixed positions. If Krugman really were ‘meta-rational’, he might still have his Nobel Prize, but he probably wouldn’t be a popular newspaper columnist.

The Azimuth Project, and this blog, are already doing a lot of things to prevent people from getting locked into fixed positions and filtering out evidence that goes against their views. Most crucial seems to be the policy of forbidding insults, bullying, and overly repetitive restatement of the same views. These behaviors increase what I call the ‘heat’ in a discussion, and I’ve decided that, all things considered, it’s best to keep the heat fairly low.

Heat attracts many people, so I’m sure we could get a lot more people to read this blog by turning up the heat. A little heat is a good thing, because it engages people’s energy. But heat also makes it harder for people to change their minds. When the heat gets too high, changing one’s mind is perceived as a defeat, to be avoided at all costs. Even worse, people form ‘tribes’ who back each other up in every argument, regardless of the topic. Rationality goes out the window. And meta-rationality? Forget it!

Some Questions

Dourado talks about ways to “identify meta-rational people.” This is very attractive, but I think it’s better to talk about “identifying when people are behaving meta-rationally”. I don’t think we should spend too much of our time looking around for paragons of meta-rationality. First of all, nobody is perfect. Second of all, as soon as someone gets a big reputation for rationality, meta-rationality, or any other virtue, it seems they develop a fan club that runs a big risk of turning into a cult. This often makes it harder rather than easier for people to think clearly and change their minds!

I’d rather look for customs and institutions that encourage meta-rationality. So, my big question is:

How can we encourage rationality and meta-rationality, and make them more popular?

Of course science, and academia, are institutions that have been grappling with this question for centuries. Universities, seminars, conferences, journals, and so on—they all put a lot of work into encouraging the search for knowledge and examining the conditions under which it thrives.

And of course these institutions are imperfect: everything humans do is riddled with flaws.

But instead of listing cases where existing institutions failed to do their job optimally, I’d like to think about ways of developing new customs and institutions that encourage meta-rationality… and linking these to the existing ones.

Why? Because I feel the existing institutions don’t reach out enough to the ‘general public’, or ‘laymen’. The mere existence of these terms is a clue. There are a lot of people who consider academia as an ‘ivory tower’, separate from their own lives and largely irrelevant. And there are a lot of good reasons for this.

There’s one you’ve heard me talk about a lot: academia has let its journals get bought by big multimedia conglomerates, who then charge high fees for access. So, we have scientific research on global warming paid for by our tax dollars, and published by prestigious journals such as Science and Nature… which unfortunately aren’t available to the ‘general public’.

That’s like a fire alarm you have to pay to hear.

But there’s another problem: institutions that try to encourage meta-rationality seem to operate by shielding themselves from the broader sphere that favors ‘hot’ discussions. Meanwhile, the hot discussions don’t get enough input from ‘cooler’ forums… and vice versa!

For example: we have researchers in climate science who publish in refereed journals, which mostly academics read. We have conferences, seminars and courses where this research is discussed and criticized. These are again attended mostly by academics. Then we have journalists and bloggers who try to explain and discuss these papers in more easily accessed venues. There are some blogs written by climate scientists, who try to short-circuit the middlemen a bit. Unfortunately the heated atmosphere of some of these blogs makes meta-rationality difficult. There are also blogs by ‘climate skeptics’, many from outside academia. These often criticize the published papers, but—it seems to me—rarely get into discussions with the papers’ authors in conditions that make it easy for either party to change their mind. And on top of all this, we have various think tanks who are more or less pre-committed to fixed positions… and of course, corporations and nonprofits paying for advertisements pushing various agendas.

Of course, it’s not just the global warming problem that suffers from a lack of public forums that encourage meta-rationality. That’s just an example. There have got to be some ways to improve the overall landscape a little. Just a little: I’m not expecting miracles!

Details

Here’s the paper by Aumann:

• Robert J. Aumann, Agreeing to disagree, The Annals of Statistics 4 (1976), 1236-1239.

and here’s the one by Cowen and Hanson:

• Tyler Cowen and Robin Hanson, Are disagreements honest?, 18 August 2004.

Personally I find Aumann’s paper uninteresting, because he’s discussing agents that are not only rational Bayesians, but rational Bayesians that share the same priors to begin with! It’s unsurprising that such agents would have trouble finding things to argue about.

His abstract summarizes his result quite clearly… except that he calls these idealized agents ‘people’, which is misleading:

Abstract. Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on.

Theorem. If two people have the same priors, and their posteriors for an event A are common knowledge, then these posteriors are equal.
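
To make Dourado’s claim that ‘all it should take is for our beliefs to converge is for us to exchange our views’ concrete, here is a minimal sketch in Python of the back-and-forth behind Aumann’s theorem, in the style of Geanakoplos and Polemarchakis. Everything specific in it (the nine states, the two partitions, the event A, the true state) is my own toy example, not something from Aumann’s paper. Two agents share a uniform prior but hold different private information; they alternately announce their posterior probability of A, each announcement publicly refines what both know, and the announced values converge to a common number.

from fractions import Fraction

# A toy "exchange of views": two agents with a common uniform prior but
# different private information alternately announce P(A); each public
# announcement refines both agents' knowledge until they agree.
STATES = frozenset(range(1, 10))      # Omega = {1,...,9}, uniform common prior
A = frozenset({3, 4})                 # the event both agents care about
TRUE_STATE = 1

# Each agent's private information is a partition of Omega; at the true
# state, agent 1 knows {1,2,3} and agent 2 knows {1,2,3,4}.
partitions = {
    1: [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})],
    2: [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})],
}

def cell(partition, w):
    """The partition cell containing state w."""
    return next(c for c in partition if w in c)

def posterior(info):
    """P(A | info) under the uniform common prior."""
    return Fraction(len(A & info), len(info))

def refine(partition, public_event):
    """Split every cell of a partition by a publicly learned event."""
    return [piece for c in partition
            for piece in (c & public_event, c - public_event) if piece]

for round_number in range(3):
    for speaker in (1, 2):
        q = posterior(cell(partitions[speaker], TRUE_STATE))
        print(f"round {round_number}: agent {speaker} announces P(A) = {q}")
        # Everyone learns the set of states at which the speaker would have
        # announced exactly this value, and refines their information by it.
        announced = frozenset(w for w in STATES
                              if posterior(cell(partitions[speaker], w)) == q)
        partitions = {i: refine(p, announced) for i, p in partitions.items()}

Running it, the first announcements differ (1/3 from agent 1, 1/2 from agent 2), but by the next round both agents announce 1/3 and stay there: once the posteriors are common knowledge they must be equal, which is all the theorem says. With genuinely different priors, as discussed in the comments below, none of this machinery applies.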

Cowen and Hanson’s paper is more interesting to me. Here are some key sections relevant to what we’re discussing:

How Few Meta-rationals?

We can call someone a truth-seeker if, given his information and level of effort on a topic, he chooses his beliefs to be as close as possible to the truth. A non-truth seeker will, in contrast, also put substantial weight on other goals when choosing his beliefs. Let us also call someone meta-rational if he is an honest truth-seeker who chooses his opinions as if he understands the basic theory of disagreement, and abides by the rationality standards that most people uphold, which seem to preclude self-favoring priors.

The theory of disagreement says that meta-rational people will not knowingly have self-favoring disagreements among themselves. They might have some honest disagreements, such as on values or on topics of fact where their DNA encodes relevant non-self-favoring attitudes. But they will not have dishonest disagreements, i.e., disagreements directly on their relative ability, or disagreements on other random topics caused by their faith in their own superior knowledge or reasoning ability.

Our working hypothesis for explaining the ubiquity of persistent disagreement is that people are not usually meta-rational. While several factors contribute to this situation, a sufficient cause that usually remains when other causes are removed is that people do not typically seek only truth in their beliefs, not even in a persistent rational core. People tend to be hypocritical in having self-favoring priors, such as priors that violate indexical independence, even though they criticize others for such priors. And they are reluctant to admit this, either publicly or to themselves.

How many meta-rational people can there be? Even if the evidence is not consistent with most people being meta-rational, it seems consistent with there being exactly one meta-rational person. After all, in this case there never appears a pair of meta-rationals to agree with each other. So how many more meta-rationals are possible?

If meta-rational people were common, and able to distinguish one another, then we should see many pairs of people who have almost no dishonest disagreements with each other. In reality, however, it seems very hard to find any pair of people who, if put in contact, could not identify many persistent disagreements. While this is an admittedly difficult empirical determination to make, it suggests that there are either extremely few meta-rational people, or that they have virtually no way to distinguish each other.

Yet it seems that meta-rational people should be discernible via their conversation style. We know that, on a topic where self-favoring opinions would be relevant, the sequence of alternating opinions between a pair of people who are mutually aware of both being meta-rational must follow a random walk. And we know that the opinion sequence between typical non-meta-rational humans is nothing of the sort. If, when responding to the opinions of someone else of uncertain type, a meta-rational person acts differently from an ordinary non-meta-rational person, then two meta-rational people should be able to discern one another via a long enough conversation. And once they discern one another, two meta-rational people should no longer have dishonest disagreements. (Aaronson (2004) has shown that, regardless of the topic or their initial opinions, any two Bayesians have less than a 10% chance of disagreeing by more than 10% after exchanging about a thousand bits, and less than a 1% chance of disagreeing by more than 1% after exchanging about a million bits.)

Since most people have extensive conversations with hundreds of people, many of whom they know very well, it seems that the fraction of people who are meta-rational must be very small. For example, given N people, a fraction f of whom are meta-rational, let each person participate in C conversations with random others that last long enough for two meta-rational people to discern each other. If so, there should be on average f^2CN/2 pairs who no longer disagree. If, across the world, two billion people, one in ten thousand of whom are meta-rational, have one hundred long conversations each, then we should see one thousand pairs of people with only honest disagreements. If, within academia, two million people, one in ten thousand of whom are meta-rational, have one thousand long conversations each, we should see ten agreeing pairs of academics. And if meta-rational people had any other clues to discern one another, and preferred to talk with one another, there should be far more such pairs. Yet, with the possible exception of some cult-like or fan-like relationships, where there is an obvious alternative explanation for their agreement, we know of no such pairs of people who no longer disagree on topics where self-favoring opinions are relevant.

We therefore conclude that unless meta-rationals simply cannot distinguish each other, only a tiny non-descript percentage of the population, or of academics, can be meta-rational. Either few people have truth-seeking rational cores, and those that do cannot be readily distinguished, or most people have such cores but they are in control infrequently and unpredictably. Worse, since it seems unlikely that the only signals of meta-rationality would be purely private signals, we each seem to have little grounds for confidence in our own meta-rationality, however much we would like to believe otherwise.
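
The arithmetic in the passage above is easy to check. Here is a short sanity check in Python of the f^2CN/2 estimate and the two scenarios Cowen and Hanson describe; the formula and the numbers are theirs, the code is just bookkeeping.

from fractions import Fraction

def expected_agreeing_pairs(N, f, C):
    """Expected number of meta-rational pairs who discern each other, if a
    fraction f of N people each have C long conversations with randomly
    chosen others: the f^2 C N / 2 estimate quoted above."""
    return f * f * C * N / 2

print(expected_agreeing_pairs(N=2_000_000_000, f=Fraction(1, 10000), C=100))   # world: prints 1000
print(expected_agreeing_pairs(N=2_000_000, f=Fraction(1, 10000), C=1000))      # academia: prints 10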

Personally, I think the failure to find ‘ten agreeing pairs of academics’ is not very interesting. Instead of looking for people who are meta-rational in all respects, which seems futile, I’m more interested in looking for contexts and institutions that encourage people to behave meta-rationally when discussing specific issues.

For example, there’s surprisingly little disagreement among mathematicians when they’re discussing mathematics and they’re on their best behavior—for example, talking in a classroom. Disagreements show up, but they’re often dismissed quickly when one or both parties realize their mistake. The same people can argue bitterly and endlessly over politics or other topics. They are not meta-rational people: I doubt such people exist. They are people who have been encouraged by an institution to behave meta-rationally in specific limited ways… because the institution rewards this behavior.

Moving on:

Personal policy implications

Readers need not be concerned about the above conclusion if they have not accepted our empirical arguments, or if they are willing to embrace the rationality of self-favoring priors, and to forgo criticizing the beliefs of others caused by such priors. Let us assume, however, that you, the reader, are trying to be one of those rare meta-rational souls in the world, if indeed there are any. How guilty should you feel when you disagree on topics where self-favoring opinions are relevant?

If you and the people you disagree with completely ignored each other’s opinions, then you might tend to be right more if you had greater intelligence and information. And if you were sure that you were meta-rational, the fact that most people were not might embolden you to disagree with them. But for a truth-seeker, the key question must be how sure you can be that you, at the moment, are substantially more likely to have a truth-seeking, in-control, rational core than the people you now disagree with. This is because if either of you have some substantial degree of meta-rationality, then your relative intelligence and information are largely irrelevant except as they may indicate which of you is more likely to be self-deceived about being meta-rational.

One approach would be to try to never assume that you are more meta-rational than anyone else. But this cannot mean that you should agree with everyone, because you simply cannot do so when other people disagree among themselves. Alternatively, you could adopt a “middle” opinion. There are, however, many ways to define middle, and people can disagree about which middle is best (Barns 1998). Not only are there disagreements on many topics, but there are also disagreements on how to best correct for one’s limited meta-rationality.

Ideally we would want to construct a model of the process of individual self-deception, consistent with available data on behavior and opinion. We could then use such a model to take the observed distribution of opinion, and infer where lies the weight of evidence, and hence the best estimate of the truth. [Ideally this model would also satisfy a reflexivity constraint: when applied to disputes about self-deception it should select itself as the best model of self-deception. If people reject the claim that most people are self-deceived about their meta-rationality, this approach becomes more difficult, though perhaps not impossible.]

A more limited, but perhaps more feasible, approach to relative meta-rationality is to seek observable signs that indicate when people are self-deceived about their meta-rationality on a particular topic. You might then try to disagree only with those who display such signs more strongly than you do. For example, psychologists have found numerous correlates of self-deception. Self-deception is harder regarding one’s overt behaviors, there is less self-deception in a galvanic skin response (as used in lie detector tests) than in speech, the right brain hemisphere tends to be more honest, evaluations of actions are less honest after those actions are chosen than before (Trivers 2000), self-deceivers have more self-esteem and less psychopathology, especially less depression (Paulhus 1986), and older children are better than younger ones at hiding their self-deception from others (Feldman & Custrini 1988). Each correlate implies a corresponding sign of self-deception.

Other commonly suggested signs of self-deception include idiocy, self-interest, emotional arousal, informality of analysis, an inability to articulate supporting arguments, an unwillingness to consider contrary arguments, and ignorance of standard mental biases. If verified by further research, each of these signs would offer clues for identifying other people as self-deceivers.

Of course, this is easier said than done. It is easy to see how self-deceiving people, seeking to justify their disagreements, might try to favor themselves over their opponents by emphasizing different signs of self-deception in different situations. So looking for signs of self-deception need not be an easier approach than trying to overcome disagreement directly by further discussion on the topic of the disagreement.

We therefore end on a cautionary note. While we have identified some considerations to keep in mind, were one trying to be one of those rare meta-rational souls, we have no general recipe for how to proceed. Perhaps recognizing the difficulty of this problem can at least make us a bit more wary of our own judgments when we disagree.

46 Responses to Meta-Rationality

  1. Uhm… Those strong assumptions made by Aumann really *are* strong: they concern a bunch of truthful Bayesians under a certain peculiar truth-preserving belief update rule. Arguably the most natural one of them, but not the only one, especially in a crowd larger than N=2. In that crowd, any self-serving prior would be updated to be not so self-serving, so that eventually the fixed point theorem underlying Aumann’s result takes hold: you will end up with a fully shared belief consistent with the original (unfounded) priors plus all extra evidence offered in between by anybody, in any order (that commutativity is essentially what makes the update rule unique).

    (BTW, what easily escapes the reader in these kinds of theorems is how far reaching they were intended to be, or could be. I mean, truly hardcore Bayesians will claim all of human knowledge and makeup, right down to physical makeup and its sequelae such as persona, really can be reduced to a (really big) probability distribution. That’s your prior when you enter a debate, and then under the proper conditions and update rules you will agree to that level. Silly? Obviously, but what the description doesn’t cover still mostly follows the rules under the assumptions. Including how you think about yourself. But see below.)

    Hanson &al.’s result is a nice, robust extension of the theory to where some of the assumptions are broken, but obviously not the same ballgame at all. It modifies the assumptions enough to be considered wholly separate, and not just a perturbation or a sensitivity analysis of the original.

    Most crucially it makes at least two separate, rather lifelike and then OTOH mathematically funky, anti-Aumannian assumptions:

    1) what “I am” is somehow separate from what “I” “know/externalize/am” (breaks all of the usual linearity and continuity assumptions, at the very least), and

    2) what I know about myself is somehow separate from the rest of the “total probability density function of the universe of discourse”; that invokes a kind of self-reference foreign to Bayesian reasoning, and it admits logical inconsistency where you usually *must* start with self-consistency in both priors and the information yielded; yet now we’re near to building a probabilistic version of the hierarchy problem innate in Russell’s antinomy (breaks pretty much the whole Bayesian machinery as such).

    Originally the game being analysed was a pure Bayesian one, with no hint of (at least) those two kinks. It’s still being analysed properly, but you shouldn’t mix the two games with each other in the least.

    (A nice topic by the way since I promised to talk about consequentialist morality just tomorrow. The analysis of lying/disingenuity might well prove valuable there. :)

  2. How can we encourage rationality and meta-rationality

    If nothing else helps: Shame and ridicule. Nobody wants to be publicly exposed as a liar or as stupid. Especially the Very Serious People. By proxy, nobody then wants to take them seriously.

    This seems quite difficult in U.S. “debate culture”. Just look at poor Mr. Obama. What happens in U.S. congress is mind-boggling for Europeans. Inhofe, Shimkus, Ryan, etc. treated as serious people. Congressional hearings with Monty Python characters like Viscount Monckton. Professional liars like Pat Michaels, his profession called “advocacy science”. Yuck! Plus, renowned journalists spreading their lies (e.g. Andy Revkin asking Pat Michaels for comment) for the sake of giving each “side” credit.

    Sometimes it is necessary to name a rose a rose. Or use the technical term BS when encountering BS. Otherwise my heat goes up. But if you want to discuss instead and use many words, then be evil: Trap the bonehead into a crash against the wall of reality and logic, so the spectators have at least some fun.

  3. Meta-rationality may be encouraged by two rules:
    1) Make predictions where possible that can be checked reasonably soon. Let your argument stand or fall on that. Those who won’t predict will have less credibility.
    2) If you oppose a measure, you must apply the same standards to all others’ proposed measures, and your own. Where useful, use quantitative comparisons.

    • Allan Erskine says:

      These rules are pretty spot-on. Even if rule 1) above were weakened (strengthened?) to “make propositions that can be checked reasonably soon” it would go a long way.

      The only thing missing is an element of feedback, the absence of which allows people to have long enough arguments to polarise the debate in the first place.

      If we could keep your rule 1) but with instantaneous highlighting of said predictions, in pink say, meaning “probably wrong”, or pale green for “could be right, you know” then I subscribe.

      On a related note, http://hypothes.is/ is an interesting organization I learned about through the Azimuth blog which aims to provide a consensus driven mechanism for judging credibility.

    • TheOtherHobbes says:

      But testing predictions is hardly an assumption-free process.

      Even allowing for personal or financial bias, it’s pretty damn hard to get definitive data about certain kinds of research from empirical testing.

      While we’d all like to live in a perfect empirical world, the reality is that some experiments or fields aren’t funded or investigated for political or ideological reasons, or because they’re academically unfashionable, or because it’s too expensive to investigate them properly, or because it takes too long to collect the data.

      The other issue is that politics is driven by rhetoric and power, not fact. It’s perfectly possible to have the most ridiculous multiply-falsified nonsense driving policy if you spend enough money on promoting it.

      Pretty much everything in economic theory as practiced by Western governments has worked like this since at least the 1970s. The causes and remedies for recessions and depressions aren’t a mystery. But because policy is owned by people who benefit hugely from denying the truth, and because democracy is rhetorical not rational, those in power continue to destroy both collective wealth and collective intelligence for petty personal gain.

  4. There is nothing new to meta-rationality; it is basically just a call for intellectual honesty. I think it was put better (and much more succinctly, without needless jargon and rationality assumptions) by Bertrand Russell. Popper also advocates for intellectual honesty and critical discussion both in science and philosophy:

    Yet criticism will be fruitful only if we state our problem as clearly as we can and put our solution in a sufficiently definite form — a form in which it can be critically discussed.

    In general, this idea of rationality or meta-rationality leading to agreement is a bit silly. It presupposes an objective reality over which our beliefs have no effect. Sure, when I have my physicist-hat on, it is a convenient assumption. However, for an economist, anthropologist, or sociologist? Not so much. Our social world is shaped as much by our beliefs (and whatever irrationality they are based on) as by some ‘objective’ reality. In the words of the historian J. M. Roberts:

    what humans do is so much a matter of what they believe they can do … it is the making of a culture that is its pulse, not the making of a nation or an economy

    Further, there is a need to distinguish between the concepts of subjective and objective rationality. The assumption of identical prior knowledge is completely unreasonable. However, the evolutionary comments of why one might disagree or be irrational are interesting; such suggestions should not be left to words, but should be modelled.

  5. I hope to find time to read your article carefully in the future, but I wanted to draw attention to something in the article you posted:

    “If we have common “priors” and we are mutually aware of each other’s views, then if we do not agree ex post, at least one of us is being irrational.”

    But of course, in general there isn’t any particular reason why you would have common priors. A “prior” is just a way to formalise someone’s prior beliefs, and we all believe different things, due to having experienced different lives with different predispositions. (There are cases where there is a “correct” prior for a given inference problem, which might be obtained from maximum entropy or symmetry considerations for example, but if we’re using Bayesian theory to model debates about climate change or macroeconomics I think we’re far outside the realm of applicability of such techniques.)

    This is really important, because if two agents have different priors then it’s perfectly rational for them to disagree. I encourage anyone interested in this stuff to read this lovely book chapter by Edwin Jaynes: http://www-biba.inrialpes.fr/Jaynes/cc05e.pdf

    In it, he gives an explicit example where two Bayesian agents rationally draw different conclusions from the same data. The example is quite relevant to the climate debate: one of the agents tends towards skepticism (regarding extra-sensory perception in Jaynes’ example) and the other tends toward believing that ESP does exist but is covered up by a conspiracy. Neither is completely hard-line, in that they both assign probabilities strictly between 0 and 1. They are then both told that a new study has concluded that ESP does not occur. The skeptic, quite rationally, takes this as further evidence against ESP and becomes more skeptical. But the believer, *entirely rationally*, takes the same data as further evidence for a cover-up.

    This is not relativism – in the end one of them is right and the other wrong, and given *enough* data the prior will eventually be overridden and they’ll converge to the same views. But it shows – with a vivid, numerically worked-out example – that we can’t conclude someone is being irrational just because they disagree with us.
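
    To make the direction of the updates concrete, here is a toy numerical version in Python. The numbers are my own illustrative choices, not Jaynes’; the point is only that the same report, processed by the same Bayes rule, moves the two agents in different directions because their priors over the joint hypotheses (ESP, cover-up) differ.

    # Hypotheses about the world, and how likely each one makes the report
    # D = "a new study concludes ESP does not exist" (hypothetical numbers).
    hypotheses = ["no ESP", "ESP, no cover-up", "ESP + cover-up"]
    likelihood = {"no ESP": 0.7, "ESP, no cover-up": 0.1, "ESP + cover-up": 0.99}

    # Two different priors over the same hypotheses.
    priors = {
        "skeptic":  {"no ESP": 0.98, "ESP, no cover-up": 0.015, "ESP + cover-up": 0.005},
        "believer": {"no ESP": 0.30, "ESP, no cover-up": 0.20,  "ESP + cover-up": 0.50},
    }

    def posterior(prior):
        """Bayes rule: multiply by the likelihood of D and renormalize."""
        unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
        total = sum(unnorm.values())
        return {h: p / total for h, p in unnorm.items()}

    def p_esp(dist):
        return dist["ESP, no cover-up"] + dist["ESP + cover-up"]

    for name, prior in priors.items():
        post = posterior(prior)
        print(f"{name:8s}  P(ESP): {p_esp(prior):.3f} -> {p_esp(post):.3f}   "
              f"P(cover-up): {prior['ESP + cover-up']:.3f} -> {post['ESP + cover-up']:.3f}")

    With these numbers the skeptic’s probability of ESP drops (0.020 to about 0.009) while the believer’s probability of a cover-up rises (0.500 to about 0.683), and the believer even becomes slightly more confident in ESP itself. Same data, same updating rule, different priors, rationally diverging conclusions.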

    Coming back to the example you posted, if Eli Dourado and Paul Krugman are both rational and are both working from the same prior then they can’t help but agree. But if their priors are different then in general we can’t say anything about whether they’ll agree or not. They might just have to disagree. The relevant “prior” here is not just the probability assigned to a particular “hypothesis” but instead must cover every existing belief relating to anything that might impact one’s opinions on macroeconomics. To me it seems inconceivable that two people could share the exact same prior in this sense, and so the existence of disagreements – even among these hypothesised “meta-rational” people – seems not only expected but essentially inevitable.

    • John Baez says:

      Hi, Nathaniel! Thanks for detailing how unlikely it is for two people, even devout Bayesians, to share the same priors. This is why later in the article I wrote:

      Personally I find Aumann’s paper uninteresting, because he’s discussing agents that are not only rational Bayesians, but rational Bayesians that share the same priors to begin with! It’s unsurprising that such agents would have trouble finding things to argue about.

      His abstract summarizes his result quite clearly… except that he calls these idealized agents ‘people’, which is misleading:

      Abstract. Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on.

      Theorem. If two people have the same priors, and their posteriors for an event A are common knowledge, then these posteriors are equal.

      I’m amazed that people consider this result worth discussing, since it’s a bit like saying that two identical particles obeying the same equations of motion with the same initial conditions will do the same thing.

      • OTOH, the theorem isn’t too sensitive wrt the extent that the priors are shared. Wikipedia knows of Scott Aaronson’s work there, but there’s much more out there; Bayesian agreement under the assumption of “fair and truthful” partakers is still a rather well-behaved notion. And again, as even Robin Hanson’s earlier work says and Wikipedia of all things knows, that shared basis/prior — broadly speaking — just pretty much has to be there.

        So, while the analysis is far from conclusive, it does shift the debate a bit from “agreeing to disagree” towards “if you disagree you’re prolly doing it wrong”.

      • Firstly, many apologies for not spotting that part of the article before posting. (For better or worse, I was pressed for time and wanted to get the thought down before it evaporated.)

        But I think there’s another important point to be made from Jaynes’ example. It has to do with the difficulty of knowing whether other agents are meta-rational. I know that this is touched on in your quotation from Cowen and Hanson’s paper, but I think Jaynes’ argument points toward a reason why identifying other meta-rational agents might be difficult. This in turn might lead to an explanation for some people’s meta-irrational behaviour, and perhaps to concrete strategies for dealing with the problem.

        I might be mis-remembering the set-up of Jaynes’ example slightly, but to expand on the version of it I described, the reason the two agents’ posteriors are able to diverge is that there are two different hypotheses that have to be evaluated in order to explain the data (the “data” in this case being the publication of a paper that refutes ESP). Hypothesis A is that ESP does not exist, and an agent that completely trusted the paper would update her posterior in the usual way, according to the probability of the data given A and given not-A. But there is also hypothesis B, which is that ESP does exist but the paper is part of a conspiracy designed to conceal this fact. The skeptic assigns a low prior probability (but not 0) to B, and as a result sees the data as evidence against B. But the believer has a high prior probability (but not 1) for B, and therefore sees the data as evidence in favour of B. If you believe that such a conspiracy exists then the publication of such a paper is exactly what you’d expect, so it lends support to the conspiracy hypothesis.

        I think something similar can happen when it comes to identifying meta-rational agents. Let’s say that I’m meta-rational, and that I suspect you are too. Nonetheless my prior assigns a small non-zero probability to the possibility that you either harbour self-serving beliefs, or have simply been misinformed. Let this probability be p(B).

        Now let’s say you make a statement A that I find surprising. From my point of view there are two possible explanations for this. The first is that the statement is a true inference from data you have access to but I don’t, and the second is that you’re not being meta-rational after all.

        Let the symbol S represent the fact of you making the statement A. Now, in order to calculate my posterior, I have to calculate not only p(H_i | S) for every hypothesis H_i that could lead to A being true, but also p(B | S), the posterior probability of you being meta-irrational.

        For the sake of simplicity, let’s say I think you will definitely say statement A if you are not meta-rational. Then p(B | S) = p(S|B)p(B)/p(S) = p(B)/p(S), where p(S) = p(S | B)p(B) + p(S | A)p(A) = p(A) + p(B) under this assumption. In this case, if p(A)+p(B)<1 then my posterior probability for B is greater than my prior, i.e. I will rationally (but falsely) take your statement as evidence that you are irrational, rather than as evidence for the proposition A.

        Of course, by symmetry we also have p(A | S) = p(A)/(p(A)+p(B)), so my level of belief in the statement A also increases. But if p(A) is much smaller than p(B) then my posterior for A will still be tiny, and my posterior for B could be nearly one. If you make a number of different statements A_1, A_2, etc. to which my prior assigns very low probabilities, I could rapidly end up distrusting you entirely. I don’t have to start out distrusting you very much in order for this to happen, as long as the statement you make is very unlikely according to my prior.
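
        To see how quickly this runs away, here is a small Python sketch of the update above, reusing the simplification implicit in the calculation that p(S | B) = p(S | A) = 1, and treating each new surprising statement as an independent piece of evidence (both simplifications are mine, made only to illustrate the effect).

        p_B = 0.05        # mild initial suspicion that you are not meta-rational
        p_A_each = 0.01   # each statement you make looks very unlikely to me a priori

        for i in range(1, 4):
            p_S = p_A_each + p_B   # p(S) = p(S|A)p(A) + p(S|B)p(B), with both likelihoods 1
            p_B = p_B / p_S        # posterior suspicion after hearing statement i
            print(f"after statement {i}: p(B) = {p_B:.3f}")

        Starting from only 5% distrust, three surprising statements push p(B) past 0.98: almost all of the surprise is absorbed as evidence that the speaker is not meta-rational, rather than as evidence for what they said.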

        I would like to put forward the tentative hypothesis that often, when someone seems to be behaving meta-irrationally in a conversation such as the climate debate, it is for essentially this reason. They have more-or-less rationally (but perhaps wrongly) decided the other person is not behaving meta-rationally, and as a consequence have switched to a strategy of trying to expose the error or deception in what they're saying.

        If this is right then, if we want to encourage meta-rational behaviour, we should focus on avoiding this situation. I don’t have any particularly good ideas for how to go about this, but this reasoning does suggest that a central issue is trust. If we want people to behave meta-rationally towards us, perhaps we need to find ways to help them see that we are being meta-rational ourselves.

        More concretely, it suggests that starting out by making strong, forceful arguments might be counterproductive. Maybe we need to work out which parts of the argument sound the most plausible to someone with a climate-skepticism-leaning prior and say those things first. Although in the Bayesian analysis it doesn’t matter in what order we make our statements, someone is more likely to continue listening to us if they believe our arguments are not self-serving, and for that reason it probably matters quite a lot.

        • This really rings true, and more so since it is a little counter-intuitive. I know I find it very hard not to lead off with my strongest, newest, most surprising argument! It is common sense that looking to describe common ground is a good way to begin a productive debate, but it is very hard to go even further and begin by admitting weaknesses in my own reasoning. Isn’t that going backwards, wasting valuable time? Maybe not.

        • srp says:

          Nice formal statement of the importance of ethos in persuasion, as noted by Aristotle. Logos and pathos are not sufficient, in general.

  6. Another serious problem with this is that people are not good at recognizing superior skills or knowledge in others. Here is an excellent summary of the problem:
    http://arstechnica.com/science/2012/05/revisiting-why-incompetents-think-theyre-awesome/

    It’s never enough to say “let’s get rid of those other, dishonest guys” because we are all “those other, dishonest guys”. The system of review, promotion, and recognition has to be robust enough that the collective effect of everyone’s ego-powered blinders do not distort the ultimate results.

    Actually, we have a name for such a robust system — science.

    I’m currently writing a letter/article about exactly this problem in the interface between mathematics and natural science, and I was planning to send you a draft once it is complete in the coming weeks, since I reference Azimuth in a footnote as an example of things working well overall.

    • John Baez says:

      I look forward to your draft, Abraham! I agree that science works quite well within its sphere of influence… what bothers me is that it often seems isolated from the realms of discourse where decisions get made. Perhaps that’s necessary, but actually I think the situation could be improved.

    • Interesting, I’d like to see a draft too if possible, or even read the article once it’s ready.

  7. Once you leave science and enter the realm of politics, and that is what you do with your project, you should not overly rely on the fact that there is an objective truth upon which agents can agree. In a way disagreement is necessary to efficiently deal with situations in which we do not have full information. Meta-rational behaviour is then just another strategy to manipulate people.

    • John Baez says:

      Metarationality consists of taking other people’s views seriously, and trying not to take one’s own views more seriously just because they’re one’s own. Why is this ‘just another strategy to manipulate people’? This seems overly cynical. Pretending to take other people’s views seriously might be just another strategy to manipulate people, and it can be very hard to distinguish between faking it and the real thing, but I believe there’s a difference.

  8. John Roe says:

    Brief comment: “All it should take, according to Aumann, for our beliefs to converge is for us to exchange our views”

    I found this hard to understand. Two exchanges will leave our views the same as they were before. Why should the process converge?

  9. It is very important to learn to recognize and avoid the automatic “fight or flight” response that takes over your brain in situations of perceived emergencies.

    This probably should be taught in school, as it also has deep ramifications and interconnections with other things such as the ability to delay gratification and the ability to focus / de-focus attention at will. All this also correlates strongly with “success in life”, loosely and roughly speaking.

    Courses based on books like this one, for example, have become standard in many companies at least for customer-facing people. They explain, among other things, how to put your emotions on the table in ways that are constructive and won’t hijack the conversation towards unproductive or damaging places.

    They probably should be a mandatory read in interest groups, or in blogs.

  10. Lee Bloomquist says:

    “…I’d like to think about ways of developing new customs and institutions that encourage meta-rationality… and linking these to the existing ones.”

    There is something that might be related, which is supported by the 1st Vice President of the European Parliament, Gianni Pittella. Professor Baez, for his answer to your question (should you be interested, because of his already having introduced something totally new in custom and institution to Europe), here is his website with contact information:

    http://www.giannipittella.it/

    Vice President Pittella was instrumental in introducing to the EU Parliament the “Pledge to Peace” on this website:

    http://www.associazionepercorsi.com/?page_id=3431&lang=en

    I suspect he is greatly concerned with the topics now being discussed in the Azimuth Forum and does have experience in what it takes to establish a new institution – one that might, in some way, be similar to the one that you are thinking about.

  11. nad says:

    In their paper on disagreement, Tyler Cowen and Robin Hanson grapple with the problem of self-deception. Self-favoring priors, they note, can help to serve other functions besides arriving at the truth. People who “irrationally” believe in themselves are often more successful than those who do not.

    I think a crucial point to discuss here is what counts as being “successful”. In that Cowen and Hanson paper, for instance, the example of a sales assistant is mentioned. And I guess a lot of people often render “economically successful” equal to “successful”, which is very debatable.

    I don’t think we should spend too much of our time looking around for paragons of meta-rationality.

    I agree. I could imagine that the ability to desist from one’s own viewpoints, if they are clearly wrong (…and this is of course also debatable), is linked to the emotional intelligence of the person and their environment. In this context you may perhaps also want to read this little essay:

    Why we overestimate our competence.
    So it seems “meta-rationality” is a gradual capability, which is by the way also very dependent on context. In particular, not being meta-rational may also be due to blatant hypocrisy.

    That is, as pointed out here already a couple of times (I hope though that this isn’t an “overly repetitive restatement of the same view”), the “navigation space” for arguments is linked to a person’s social situation. The question of how much you are willing or able to give up (financially, in terms of social environment, etc.) for your views may play a big role. And that doesn’t mean that “Everybody is corrupt” (a headline I had read somewhere recently). This appears to me as oversimplifying to the extent of being wrong. That is, there is a huge difference between the hypocrisy of someone who is about to starve or faces jail, and the hypocrisy or corruptness of someone who needs a new private jet.

    But instead of listing cases where existing institutions failed to do their job optimally, I’d like to think about ways of developing new customs and institutions that encourage meta-rationality… and linking these to the existing ones.

    I think it is important to find out why existing institutions failed to do their job optimally. Sure, one could develop new customs in that direction, but there are already quite a few existing means, which apparently “didn’t seep enough into the public mind”.
    And I am not sure whether forming “institutions” is the best way to address these problems; in particular it shouldn’t be the only way.

    • nad says:

      I could imagine that the ability to desist from one’s own viewpoints, if they are clearly wrong (…and this is of course also debatable), is linked to the emotional intelligence of the person and their environment.

      For completeness I want to add that of course if a person constantly changes his/her mind regarding relevant questions then this may of course also be problematic – even if the mind change may not happen out of opportunism.

      Unfortunately politicians especially are often pressured into making final decisions; that is, a longer period of fact-finding and argument-weighing may often be mistaken for indecisiveness by the public. This may lead to premature decisions which, especially if they concern relevant questions with immediate consequences, may be hard to revise.

  12. Arrow says:

    Bias towards one’s own views is very important and greatly facilitates the collective process of finding and testing new knowledge.

    It is much better to have different people champion different interpretations of the data than to have everyone stick to the one view that is currently considered the most likely (however that would be established). Putting egos on the line is what gives people extra motivation to try to find support for their own views and poke holes in opposing ones. Provided everyone involved subscribes to the scientific method and people have a threshold after which they abandon their views if enough empirical data contradicts them, the process will converge as such data accumulates. (It is actually enough if young people are more likely to pick better empirically supported views to champion; death will do the rest.) Judging by our scientific progress this process works exceptionally well.

    Also the fact that different people champion different views gives us an easy way to assess the relative strength of different views without having to reexamine all the supporting empirical evidence. Of course this argument by consensus is not bulletproof as climate science clearly shows ;) but it’s still very useful.

    So overall the bias towards one’s own views is very important and serves the common good. It is only harmful when people are dishonest and champion views they do not believe in for personal gain. In the case of single individuals such noise doesn’t matter, but if enough people pick a particular view for that reason it may skew the overall perception of the issue. Science will still converge to the right answer as more empirical data becomes available, but it may take longer.

    • srp says:

      Thank you for writing this so I didn’t have to. Developing evidence is hard work, and many an important discovery has come out of an “I’ll show them” mentality. Furthermore, assessing evidence is cognitively costly, so recruiting the motivated reasoning, debate-winning part of the brain can be a powerful way to unearth flaws in others’ reasoning.

      Self-deception problems can then be addressed by a dialectical debating process witnessed by a passive audience whose motivation is being right rather than being seen to have been right.

      My short tagline for this line of reasoning is “disinterested is uninterested.” (It works better for older people who recall that usage distinction.)

  13. I too regard meta-rationality as equivalent to intellectual honesty. Mindfulness/biofeedback are tools that can be taught in various institutions (educational, medical), and would then keep working outside those institutions, continuing to assist people in observing their own intellectuality/emotionality.

    At one time, washing hands before surgery was considered daft, so something as ‘weird’ as mindfulness may some day be more commonplace.

    http://en.wikipedia.org/wiki/Jon_Kabat-Zinn

  14. This “meta-rationality” strikes me as in line with what I call Cowen’s / Hanson’s / Caplan’s tendency to “arguments by plausibility”. Which amounts to forcing one’s own priors down others’ throats rather than gathering, chewing, digesting facts.

    Stating the problem as “meta-rationality” gives it too much academic or logical tone, in my opinion. Part of life is figuring out whom to trust, who is a liar, who is self-deceiving on what issues, who is an expert, and so on.

    Other than “nifty” results, I don’t think there will be some generally useful formula for uncovering who is trustworthy on what. At least, it won’t do better than instinct–so “Educating the public (laymen)” can and does reduce to [a] presenting carefully-researched facts in a coherent narrative, [b] debunking the arguments of another person. Attacks on character can be useful sometimes (“This is the same woman who claimed the world would end in 1997!”) but not all the time (“Even though I didn’t rebut what he said, Professor Baez is a jerk and doesn’t shower.”)

    Relatedly, I think this rubs up against “mathematical proof” versus “legal proof”. Judges are handed facts (or “facts”) in the real world, not by axiom, and have spent hundreds of years developing Tests for (to use the academic term) satisficing in conditions of liars, self-deceivers, and self-interested information-twisters. I would look to them before I’d look to economists (who, in the common joke, “assume a ladder”).

  15. Shwell Thanksh says:

    I don’t think there will be some generally useful formula for uncovering who is trustworthy on what.

    General usefulness is a slippery creature, but one intriguing candidate for such a formula lies in the axiom, “follow the money”.

  16. Charles says:

    Your search for “contexts that encourage meta-rationality” reminded me of Hardy & Littlewood’s rules for collaboration:

    When one wrote to the other, it was completely indifferent whether what they wrote was right or wrong. When one received a letter from the other, he was under no obligation to read it, let alone answer it. Although it did not really matter if they both simultaneously thought about the same detail, still it was preferable that they should not do so. It was quite indifferent if one of them had not contributed the least bit to the contents of a paper under their common name.

    Enjoyed your thoughts on the Cowen and Hanson paper!

    Ref:
    http://www-history.mcs.st-andrews.ac.uk/Biographies/Littlewood.html

  17. I spend probably too much time arguing with AGW skeptics on climate science blogs and see meta-irrationality.

    Here is my problem: I often come to an impasse during an extended discussion. I can no longer tell if the arguer is actually taking a real stance or is simply trying to prank the argument, either by becoming increasingly preposterous or silly.

    I don’t know what to call this other than a kind of trolling prank that seems to be in vogue. There is certainly evidence that this exists based on ridiculous survey responses. It also occurs on TV shows such as Jay Leno where people intentionally appear as clueless or embarrassing as possible when asked questions on current events.

    Watch for it and you will see what I mean. For some reason much of it comes out of Australia, where mocking authority seems to be the national pastime.

  18. Graham Jones says:

    I don’t think there is much point in trying to define or measure ‘rationality’ if the process stops at knowledge or belief (or in Bayesian terms, if it stops at the posterior). You have to complete the process, to where someone acts or behaves in some way. In Bayesian terms, that means adding a utility function to evaluate possible actions.

    A utility function is allowed to be self-serving. It is a way of formalizing what matters to you. You can think of it as a set of personal answers to questions like: if you acted in a certain way X, and it later turned out that the true situation was Y, how good or bad would the result be?

    I see the advantages as:

    1. You avoid arguing with people who come to the same decision as you for different reasons.

    2. You avoid spending energy convincing people of some statement, only to find they don’t care about the consequences anyway.

    3. If someone claims to be rational, you can test their beliefs against their actions.

    4. By providing an explicit place for people to express their self-serving impulses, you reduce the temptation to smuggle them into the prior.

    5. You don’t end up saying things like this:

    “We can call someone a truth-seeker if, given his information and level of effort on a topic, he chooses his beliefs to be as close as possible to the truth. A non-truth seeker will, in contrast, also put substantial weight on other goals when choosing his beliefs.”

    In general, we don’t know the truth about a topic. (If we do, we can stop thinking about it, rationally, irrationally, or any other way.) If we do not know the truth how are we supposed to judge the closeness of a belief to the truth? If we do know the truth (but the person we are judging does not yet), what measure of closeness are we supposed to use? For example, were the people who developed the theory of epicycles getting closer to the truth as their predictions became more accurate?

    Utility functions circumvent these difficulties. They allow decisions to be taken, and those decisions to be judged, even when the truth is never known.
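
    As a minimal sketch of the point (all numbers hypothetical): a utility function is just a table u[(action, true situation)], and a decision can be taken, and later judged, by maximizing expected utility under one’s beliefs, without anyone ever learning the truth for certain. Two people with identical beliefs but different utility tables can rationally act differently.

    # Shared beliefs about which situation Y is the true one (hypothetical numbers).
    beliefs = {"Y1": 0.6, "Y2": 0.4}

    # Personal utility tables: how good or bad is acting in way X (or not),
    # if the true situation later turns out to be Y1 or Y2?
    utilities = {
        "person 1": {("X", "Y1"): 0, ("X", "Y2"): -1, ("not-X", "Y1"): -10, ("not-X", "Y2"): 0},
        "person 2": {("X", "Y1"): 0, ("X", "Y2"): -5, ("not-X", "Y1"): -3,  ("not-X", "Y2"): 2},
    }

    for person, u in utilities.items():
        expected = {a: sum(beliefs[y] * u[(a, y)] for y in beliefs) for a in ("X", "not-X")}
        best = max(expected, key=expected.get)
        print(person, "chooses", best, "with expected utilities", expected)

    Here both people hold exactly the same beliefs, yet person 1 rationally chooses X and person 2 rationally chooses not-X; the difference lives entirely in the utility table, which is where point 4 above suggests self-serving impulses belong, rather than in the prior.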

  19. I’d agree with Abraham Smith’s point above: meta-rationality is very much akin to the bag of tricks and rules of thumb that is the scientific method. Why? Because both of them consider the open-ended empirical facts of human interaction, rather than what can be logically shown to be true in a narrow axiomatic framework. The latter — including Cowen and Hanson’s paper — is a useful model which points out certain problems in communication, to be solved. The former is our best current estimate of how we can/should deal with all of said problems — many of which we don’t even know about yet, much less the optimal response to them — via an heuristic toolchain which can actually be followed by real people.

    Or at least that’s the kind of thinking I just advocated as the basis of morality and ethics as a whole, in my lecture on consequentialist ethics being all there really is. :)

    Hanson & Cowen’s paper then brings in all of the haziness from without the formal setup utilized by Aumann. It tries to limit its scope to something that can still be formally analyzed, but it doesn’t succeed there like Aumann originally did in his work. Not coincidentally it then fails to rise beyond a discussion piece, into a bona fide result/proof.

    Essentially what the paper does is show you that “if you have an extrinsic incentive to disagree, you as a rationalist/’Bayesian wannabe’ will optimally disagree”. Which hardly rises to the level of a new result. Essentially what is being said is that “Aumann assumed people were truthful, people are not truthful in practice even to themselves, and the dissonance between these two facts can be resolved by assuming that people have an external reason to be dishonest, even to themselves”.

    Perhaps you could formalize this notion by requiring that, in an extensive form game, the transfer of information must explicitly be modelled as a move against an external incentive (i.e. Nature as a third participant, playing in a way which punishes you, and not the other player, for divulging private information contrary to your reputation in other games, or perhaps even against the basic sociobiological tendency to knock a person down in the social pecking order for certain honest admissions). Yet that is not done in the paper, even as an example, which I think leaves it somewhat lacking, if still useful as a starting point for further inquiry in moral philosophy.
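
    As a toy version of that suggestion (my own construction, not anything from the paper): give the agent a proper accuracy score plus an external bonus for self-favoring reports, and the report that maximizes its payoff immediately diverges from its honest belief. All numbers below are made up.

```python
# A toy formalization of "an extrinsic incentive to disagree makes disagreement
# optimal": an agent reports a belief r about a binary question, is scored for
# accuracy, and Nature also pays a bonus that grows with how self-favoring r is.

import numpy as np

p_honest = 0.5       # the agent's honest posterior (hypothetical)
bonus_weight = 0.4   # strength of the external incentive (hypothetical)

def payoff(r):
    accuracy = -(r - p_honest) ** 2        # quadratic (Brier-style) score
    reputation_bonus = bonus_weight * r    # reward for a self-favoring report
    return accuracy + reputation_bonus

reports = np.linspace(0, 1, 1001)
best = reports[np.argmax([payoff(r) for r in reports])]
print(best)   # ≈ 0.7, not the honest 0.5: the optimal report diverges
```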

    I take the viewpoint that a) you should take the idea of “priors” as seriously as you can, b) even a so-called “Bayesian” cannot be just that, because he’s rife with all sorts of unrelated intrapersonal “stuff” he cannot control even if he wanted to and really tried, and so c) even the best approximation to a real-life Bayesian diverges hugely, consistently and irreparably from the axiomatic kind, in more than one way. I also take the view that this is one of the most pressing problems this sort of moral philosophy ought to be solving, however messy its formal analysis may then become.

    From that starting point, Hanson & Cowen’s analysis once again falls apart. There is nothing intrinsically irrational, or perhaps sometimes even “wrong”, in 1) condemning something you don’t yourself do, or even 2) doing something, knowing you do it, knowing everybody else does it and calls it wrong too, and still condemning it just as everybody else does. “It’s all part of the moral game, some of which is about signaling, self-deception, and how you want to do two contrary things at the same time, given that you’re not built to be rational…”

    (For those among us who like their math games, I’d even claim moral reasoning shouldn’t be attempted under the usual Aristotelian logic. At the very least we should start with axioms which are both intuitionistic and paraconsistent at the same time: no unrestricted provability, which leads to empiricism and a Popperian view of things, and also a limited tolerance of logical inconsistency. Because, well, where else did deontic, temporal and all of the other alternative logics start, if not precisely with the practical complexity, inextricable from real life, that moral philosophy buys you?)

    The article tries to bring in all kinds of variegated “stuff” from the human sciences which the writers know so well, and does so interestingly, but, seriously, does it even formalize or justify its basic tenet that disagreement about what is rational, or about what is good for me only as the yardstick, makes a disagreement dishonest (and by extension somehow bad)? Above and beyond that, why should you be rational to begin with in these kinds of things; doesn’t that kind of make you a bit autistic and thus worthy of exclusion from society as a whole? (Again, I’m not the one who brought in these extraneous, messy factors; it was the original article which started talking about them, with no stated limits on what could be considered and/or why.)

    (There’s a lot more lying under the logical fixture of the article. The best single example might be: “Imagine that John believes that he reasons better than Mary, independent of any evidence for such superiority, and that Mary similarly believes that she reasons better than John.” Well, you just invoked the nastiest problem underlying all of Bayesianism, twice in the same sentence: http://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors . Personally I’d claim the idea that an uninformative prior ought to be flat is just hogwash, and moreover that its closer analysis shows the whole Bayesian fixture to only apply to what you really do know, so that it cannot model at *all* anything you do not.)
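
    To see the reparameterization worry behind that concretely, here is a quick numerical illustration of my own (not from the article or the linked page): a prior that is flat on a probability p is far from flat on the log-odds of p, so “flat” cannot by itself define ignorance.

```python
# A quick numerical illustration of why "uninformative = flat" is slippery:
# a flat prior on a probability p is not flat after a change of variable.

import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 100_000)      # "uninformative" flat prior on p
log_odds = np.log(p / (1 - p))      # the same belief, reparameterized

# The histogram of the log-odds is peaked around 0, not flat: flatness is not
# preserved under reparameterization.
hist, edges = np.histogram(log_odds, bins=20, range=(-5, 5), density=True)
print(np.round(hist, 3))
```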

    The idea of a pre-prior doesn’t really work, because if you can deceive yourself in other ways, you can deceive yourself with such a prior as well. So can everybody else, even with the pre-pre-prior that everybody is logical and all useful information comes from a truthful source. (Think of gods who tell you to kill your children horribly, for whatever reason.) And of course in the original theory there is no distinction: your genes, your upbringing, physical laws, everything you know, everything which ever influenced your ideas in any way, or could knowingly have done so, is part of your prior, as far as you know. Part of the update rule in Aumann’s theory is that when somebody points such a blind spot out in you, you immediately become cognizant of it and all its sequelae.

    So, why should we as rational Bayesians believe the article? Each other even? And so on…

    I’m a huge libertarian fan of Cowen and Hanson, but seriously now… :)

  20. Lee Bloomquist says:

    On mathematicians agreeing about a proof but disagreeing on other things–

    I imagine one saying to the other about a theorem, “Don’t believe what I or anybody else is saying, know this fact for yourself.”

    By “know for yourself” is meant: apply for yourself the tools of logic which have become an institution in the profession of mathematics and see where they lead you.

    Since these tools are an institution in mathematics, the two have the know-how to correct each other should they make mistakes in using the tools.

    In Bertolt Brecht’s play about Galileo a similar conversation occurs, where Galileo urges a Cardinal not to believe him, but see for himself by looking through the telescope. Of course the Cardinal refuses because he “knows” this tool by which one can know for oneself that there are shadows of peaks on the moon is a tool of the devil, and thus will control his mind as it has done to Galileo should the Cardinal look through it.

    IMO this is really what Zeno’s paradox is about.

    Parmenides tells of a path to a door to a land within. Apparently some felt that Parmenides was asking listeners to be true believers. In response Zeno told the story of Achilles and the tortoise, which to me illustrates that Parmenides was in fact urging the listener– not to believe him or anybody else– but instead to know this path for one’s self.

    Zeno’s story (retold): Say that the night before the race, Achilles and the tortoise are at a bar and make a bar bet. First the tortoise asks Achilles if he believes that between every two points on the race path there is a midpoint. Achilles agrees. So then, like Bart Simpson, the tortoise asks Achilles again and again if he believes he can get to the endpoint without passing through yet another midpoint, once he has reached a midpoint. Then the tortoise bets that because of this, Achilles will lose the race.

    Is it even remotely plausible that Achilles will pay off this bar bet to the tricky tortoise without even running the race? Simply because of what he and his group of friends at the bar might, beforehand, believe?

    No matter what beliefs those at the bar impress upon Achilles, and no matter how many drinks the tortoise buys him, he knows for himself that he will win this race against the tortoise. The beliefs of others do not shake him. He knows that he has the tools. And he knows that he has the know-how to use those tools to win. No sweat.
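
    For completeness, here is a back-of-the-envelope version of the “tools” Achilles is relying on, with made-up speeds and head start: the infinitely many midpoint stages add up to a finite time, so the race does end and Achilles passes the tortoise.

```python
# The infinitely many "midpoint" stages take only a finite total time.
# Hypothetical numbers: Achilles 10 m/s, tortoise 1 m/s, 100 m head start.

achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 100.0

total_time, gap = 0.0, head_start
for _ in range(60):                     # 60 stages is plenty at double precision
    stage_time = gap / achilles_speed   # time to reach the tortoise's old spot
    total_time += stage_time
    gap = tortoise_speed * stage_time   # the tortoise's new, smaller lead

print(total_time)   # ≈ 11.11 s = 100 / (10 - 1): a finite catch-up time
```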

    Parmenides was not looking for true believers. To me Zeno’s story illustrates that, instead, Parmenides urged each individual to know for one’s self that the path he described exists. And you can’t get there simply by being a true believer. You have to know for yourself.

    In this context the agreement between mathematicians about proofs comes– not from beliefs required to be accepted by a group– but from the tools they have which enable each to “know for one’s self” a mathematical fact in question.

    Tools that enable an individual to “know for one’s self” might be a bridge to new institutions.

  21. David Lyon says:

    I think that climate change deniers and other dishonest agents understand this issue better than honest people would like, and perhaps even better than intellectually honest people do. These agents are not interested in finding truth, but in obscuring it. The problem they are solving is then how to ensure that a consensus about the truth solid enough to cause them trouble never arises. The answer to this problem comes from asking: What counts as information? What evidence do people use to update their beliefs? That the answers to these questions are known by large non-governmental organizations is perhaps the largest danger facing those of us truly worried about climate change and the fate of the vast bulk of humanity now extant.

    In the theory of human communication, it is well known that one’s effective target audience is those who disagree only a small amount with one’s position. People seem to naturally reject evidence with high surprisal, that is, new information which would cause a large update to their prior beliefs. This is the key fact which renders the meta-rationality conclusions false. In order to cause people’s beliefs to cluster around a certain area, one simply has to provide a coherent body of information confirming those beliefs which is both large enough and far enough away from the actual truth to form an independent basin of attraction. The underlying veracity of this story is not important, only that it, as a whole, acts as an attractor for people whose beliefs would otherwise wander towards the truth. This is essentially the “Big Lie” theory set out by Adolf Hitler in 1925 and, ironically, still true.
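
    A minimal sketch of that mechanism (my own toy model, with made-up numbers, not taken from any of the sources here): an agent that rejects any message more surprising than some threshold cannot be moved by a single large truth, but can be walked step by step into a big lie.

```python
# An agent rejects any message whose surprise (distance from its current belief)
# exceeds a threshold, so a chain of small lies can move it where one big truth cannot.

def update(belief, message, tolerance=1.0, weight=0.5):
    """Accept a message only if it is not too surprising."""
    if abs(message - belief) <= tolerance:
        return belief + weight * (message - belief)
    return belief  # high-surprisal evidence is simply rejected

truth, big_lie = 0.0, 10.0

# Agent A hears the plain truth repeatedly but starts far from it: it never moves.
a = big_lie
for _ in range(20):
    a = update(a, truth)

# Agent B starts at the truth but is fed a coherent sequence of small steps
# toward the big lie: it walks most of the way into the other basin.
b = truth
for step in range(1, 21):
    b = update(b, step * 0.5)

print(a, round(b, 2))   # a stays at 10.0; b has walked to roughly 9.5
```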

    Here’s a map of political blogs in the USA, c. 2009. http://politicosphere.net/map/

    Note that the red and blue factions have very few direct links between each other. Users typically follow links between pages of the same color, which tend to confirm what they’ve read on previous pages. The beliefs of people in both basins become sharper over time as they encounter more confirming evidence. This is a model for how multiple attractors can form, even though at most one of them contains the objective truth. The prior, the first information encountered about the issue, dominates the future trajectory rather than fading over time with exposure to more information, because the next information one encounters on the web is not uncorrelated with the prior but is in fact almost completely determined by it.

    The brilliance of this scheme is that these clusters become self-sustaining once formed. If enough fake evidence is created, a nucleus of people who believe the evidence begins to spontaneously add information which supports the story. Scientists are not immune to this phenomenon. There was a great story in The New Yorker in 2010 about the “decline effect” which said, in effect, that scientists who want to be published cannot disagree too much with orthodoxy, no matter what the truth eventually turns out to be. Editors of scientific journals also want to tell a coherent story and only want to publish articles whose surprisal falls within a narrow but nonzero range.

    http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

  22. JenniferRM says:

    It appears that most of the authors here either lack an explicit understanding of language pragmatics or want to pretend that they do. If readers are interested in patching ignorance in this area, try googling subjects like “Grice’s Maxims” and “Pinker Indirect Speech”. These aren’t enough to explain everything by a long shot but they can help you establish articulable standards to start recognizing certain kinds of tricksy communication gambits.

    As near as I can tell, in the most general case a full-steam “naive meta-rational conversation” is not a good thing to engage in without setting up pre-conditions to ensure the safety of both parties. If you don’t know who your interlocutor is in advance, nor who is in the audience, it seems like it could cause harm to self and/or others.

    QUOTING BAEZ(?): “institutions that try to encourage meta-rationality seem to operate by shielding themselves from the broader sphere that favors ‘hot’ discussions. Meanwhile, the hot discussions don’t get enough input from ‘cooler’ forums… and vice versa!”

    This is evidence. Are you smarter than these institutions? If not then maybe you should update in their direction? I think your characterization of conversations as “hot” and “cold” might be misleading you into thinking of them (1) as basically axiologically equivalent (like stoves and freezers) and (2) likely to equalize into something roughly in between.

    If you thought of them as “polluted” and “purified” (or maybe “dangerous” and “safe”?) I think you might find the proscriptive advice coming out differently. Mixing food with sewage is not generally a good plan. It reduces the total amount of food in a world that contains hunger.

    QUOTING COWEN AND HANSON: “Yet, with the possible exception of some cult-like or fan-like relationships, where there is an obvious alternative explanation for their agreement, we know of no such pairs of people who no longer disagree on topics where self-favoring opinions are relevant.”

    Hanson is clever and a half… I don’t think he’s writing in good faith here but is perhaps instead laying out a certain kind of bait? I mean seriously, the paper *already* proposed that discernment of a single meta-rational person by another meta-rational person is hard and likely to run into sampling problems. Then the authors just blithely assert that they’ve never sampled and discerned a meta-rational pair? Haha!!

    If they really want to report on the incidence of meta-rationalist congregations they should start thinking about Schelling points, and then go search out (or create?) some plausible meeting places, and then report their results like proper ivory tower empiricists…

  23. patrioticduo says:

    normal thinking will get you only so far in your personal milieu – but meta-thinking will take you afar.

  24. John Baez says:

    Thanks to everyone for their thoughtful comments here! Unfortunately I’ve been busy grading homework and exams for my game theory course, so I haven’t had time to join the fray. I am, however, updating my priors.

    • Lee Bloomquist says:

      Drat! I had hoped to inject an example–

      “Robotic Petri Nets 1 and 2.”

      RPN1 Classes:

      - Developing Robotic Petri Nets in the Google Earth iPad interface.

      - Writing Haskell scripts for robotic Petri nets which can use them to re-draw themselves on-the-fly.

      RPN1 Internship Requirement:
      - Sequential internships at an auditing firm, a ratings firm, a financial firm, and a financial regulating agency.

      RPN1 Internship Deliverable:
      - For each firm, a specific case of their capital budgeting process fully animated in Robotic Petri Nets and fully navigable using the Google Earth iPad interface.

      RPN2 Classes
      (Robotic Petri nets which can re-draw themselves on-the-fly using Haskell scripts are just as powerful as Turing machines.)

      - Using Haskell to prove theorems about Robotic Petri Nets
      - EconoEngineering Thermodynamics

      RPN2 Internship Requirement:
      - Design and develop micro-drones to sample input and output material of a plant specifically assigned for the project.

      RPN2 Internship Deliverable:
      - Full animations supported by the Google Earth iPad interface of the input and output consumable and waste streams of the target firm, based on drone data, writing theorems as needed in Haskell that are of high enough quality to be used as evidence in court.

      Career Opportunities:

      - Head Navigation Officer for Planet Earth.

      - Head Engineering Officer for Planet Earth.

      Typical Duties:

      - Report possible and probable course headings to Earth Head Pilot.

      - Maintain theorems in good order.

      - Develop situation-specific drones and theorem-proving robots, as required, based on formal specifications written in Haskell.

      (The best science fiction tells stories about institutions.)

  25. Nick Thompson says:

    How much this all reminds me of the “classical” pragmatism of C. S. Peirce and John Dewey. How much I long for the discourse of the ’50s, when meta-rationality was a value we all shared, rather than thinking it just another fetish of the ruling class.

  26. Wolfgang says:

    The search for “meta-rational” people is nothing more than the search for a new messiah, and seems to be a cornerstone of the creation of any new ideology. Even if people like Krugman are much smarter than the average, they do not own the truth, and there should be no attempt to invoke such thinking.

  27. Lee Bloomquist says:

    Robotic Petri nets 3:
    Apoptosis

    Class content

    It’s well known how to capture programmed cell death, i.e. “apoptosis”, in terms of Petri nets.

    The virtue of remodeling apoptosis in terms of robotic Petri nets (as would be well understood by the imagined graduates of Robotic Petri Nets 1 and 2 :) is that a robotic Petri net engineer knows how to “know for one’s self”, by generating mathematical theorems about robotic Petri nets using Haskell.

    We should expect that the imagined robotic Petri net engineers will find agreement with each other over theorems as much as mathematicians now find agreement with each other over theorems.

    Next: remodeling apoptosis in terms of robotic Petri nets brings to bear a fresh mathematical tool in the attack against the diseases now associated with apoptosis run amok, like cancer.

    When robotic Petri nets get involved, theorems about those robotic Petri nets (written in Haskell) get involved.

    It might be a new role in the world for theorem proving.

    Computer lab:

    Convert a given Petri net model of apoptosis into an animation performed by robotic Petri nets. Prove theorems about the animation using the functional programming language Haskell.
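
    For readers who have never met the formalism, here is a minimal sketch of an ordinary Petri net and its token-firing rule, written in Python rather than Haskell. The two-transition net below is only a cartoon with invented place names, not a real apoptosis model.

```python
# A minimal Petri net: a marking assigns token counts to places, and a
# transition fires by consuming one token from each input place and producing
# one token in each output place. Toy net only; not a real apoptosis model.

marking = {"healthy_cell": 1, "death_signal": 1, "caspase_active": 0, "dead_cell": 0}

transitions = {
    "receive_signal": {"in": ["healthy_cell", "death_signal"], "out": ["caspase_active"]},
    "execute_death":  {"in": ["caspase_active"],               "out": ["dead_cell"]},
}

def enabled(t):
    return all(marking[p] >= 1 for p in transitions[t]["in"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

fire("receive_signal")
fire("execute_death")
print(marking)   # one token ends up in 'dead_cell'; all other places are empty
```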
