This Week’s Finds (Week 311)

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

In 1958, the mathematician Stanislaw Ulam wrote about some talks he had with John von Neumann:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

In 1965, the British mathematician Irving John Good raised the possibility of an "intelligence explosion": if machines could improve themselves to get smarter, perhaps they would quickly become a lot smarter than us.

In 1983 the mathematician and science fiction writer Vernor Vinge brought the singularity idea into public prominence with an article in Omni magazine, in which he wrote:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

In 1993 Vinge wrote an essay in which he even ventured a prediction as to when the singularity would happen:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

You can read that essay here:

• Vernor Vinge, The coming technological singularity: how to survive in the post-human era, article for the VISION-21 Symposium, 30-31 March, 1993.

With the rise of the internet, the number of people interested in such ideas grew enormously: transhumanists, extropians, singularitarians and the like. In 2005, Ray Kurzweil wrote:

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one’s view of life in general and one’s particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a "singularitarian".

He predicted that the singularity will occur around 2045. For more, see:

• Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology, Viking, 2005.

Yudkowsky distinguishes three major schools of thought regarding the singularity:

Accelerating Change: technological change keeps speeding up, but in a way that is nonetheless somewhat predictable (e.g. Ray Kurzweil).

Event Horizon: after the rise of intelligence beyond our own, the future becomes absolutely unpredictable to us (e.g. Vernor Vinge).

Intelligence Explosion: a rapid chain reaction of self-amplifying intelligence until ultimate physical limits are reached (e.g. I. J. Good and Eliezer Yudkowsky).

Yudkowsky believes that an intelligence explosion could threaten everything we hold dear unless the first self-amplifying intelligence is "friendly". The challenge, then, is to design “friendly AI”. And this requires understanding a lot more than we currently do about intelligence, goal-driven behavior, rationality and ethics—and of course what it means to be “friendly”. For more, start here:

• The Singularity Institute for Artificial Intelligence, Publications.

Needless to say, there’s a fourth school of thought on the technological singularity, even more popular than those listed above:

Baloney: it’s all a load of hooey!

Most people in this school have never given the matter serious thought, but a few have taken time to formulate objections. Others think a technological singularity is possible but highly undesirable and avoidable, so they want to prevent it. For various criticisms, start here:

• Technological singularity: Criticism, Wikipedia.

Personally, what I like most about singularitarians is that they care about the future and recognize that it may be very different from the present, just as the present is very different from the pre-human past. I wish there were more dialog between them and other sorts of people—especially people who also care deeply about the future, but have drastically different visions of it. I find it quite distressing how people with different visions of the future do most of their serious thinking within like-minded groups. This leads to groups with drastically different assumptions, with each group feeling a lot more confident about their assumptions than an outsider would deem reasonable. I’m talking here about environmentalists, singularitarians, people who believe global warming is a serious problem, people who don’t, etc. Members of any tribe can easily see the cognitive defects of every other tribe, but not their own. That’s a pity.

And so, this interview:

JB: I’ve been a fan of your work for quite a while. At first I thought your main focus was artificial intelligence (AI) and preparing for a technological singularity by trying to create "friendly AI". But lately I’ve been reading your blog, Less Wrong, and I get the feeling you’re trying to start a community of people interested in boosting their own intelligence—or at least, their own rationality. So, I’m curious: how would you describe your goals these days?

EY: My long-term goals are the same as ever: I’d like human-originating intelligent life in the Solar System to survive, thrive, and not lose its values in the process. And I still think the best means is self-improving AI. But that’s a bit of a large project for one person, and after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity and the affect heuristic and the concept of marginal expected utility, so they can see why the intuitively more appealing option is the wrong one. So I know it sounds strange, but in point of fact, since I sat down and started explaining all the basics, the Singularity Institute for Artificial Intelligence has been growing at a better clip and attracting more interesting people.

Right now my short-term goal is to write a book on rationality (tentative working title: The Art of Rationality) to explain the drop-dead basic fundamentals that, at present, no one teaches; those who are impatient will find a lot of the core material covered in these Less Wrong sequences:

Map and territory.
How to actually change your mind.
Mysterious answers to mysterious questions.

though I intend to rewrite it all completely for the book so as to make it accessible to a wider audience. Then I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)

JB: I can think of lots of big questions at this point, and I’ll try to get to some of those, but first I can’t resist asking: why do you want to study math?

EY: A sense of inadequacy.

My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof. (Robin Hanson spends a lot of time usefully discussing which activities are most prestigious in academia, and it would be a Hansonian observation, even though he didn’t say it AFAIK, that complicated proofs are prestigious but it’s much more important to figure out which theorem to prove.) Even so, I was a spoiled math prodigy as a child—one who was merely amazingly good at math for someone his age, instead of competing with other math prodigies and training to beat them. My sometime coworker Marcello (he works with me over the summer and attends Stanford at other times) is a non-spoiled math prodigy who trained to compete in math competitions and I have literally seen him prove a result in 30 seconds that I failed to prove in an hour.

I’ve come to accept that to some extent we have different and complementary abilities—now and then he’ll go into a complicated blaze of derivations and I’ll look at his final result and say "That’s not right" and maybe half the time it will actually be wrong. And when I’m feeling inadequate I remind myself that having mysteriously good taste in final results is an empirically verifiable talent, at least when it comes to math. This kind of perceptual sense of truth and falsity does seem to be very much important in figuring out which theorems to prove. But I still get the impression that the next steps in developing a reflective decision theory may require me to go off and do some of the learning and training that I never did as a spoiled math prodigy, first because I could sneak by on my ability to "see things", and second because it was so much harder to try my hand at any sort of math I couldn’t see as obvious. I get the impression that knowing which theorems to prove may require me to be better than I currently am at doing the proofs.

On some gut level I’m also just embarrassed by the number of compliments I get for my math ability (because I’m a good explainer and can make math things that I do understand seem obvious to other people) as compared to the actual amount of advanced math knowledge that I have (practically none by any real mathematician’s standard). But that’s more of an emotion that I’d draw on for motivation to get the job done, than anything that really ought to factor into my long-term planning. For example, I finally looked up the drop-dead basics of category theory because someone else on a transhumanist IRC channel knew about it and I didn’t. I’m happy to accept my ignoble motivations as a legitimate part of myself, so long as they’re motivations to learn math.

JB: Ah, how I wish more of my calculus students took that attitude. Math professors worldwide will frame that last sentence of yours and put it on their office doors.

I’ve recently been trying to switch from pure math to more practical things. So I’ve been reading more about control theory, complex systems made of interacting parts, and the like. Jan Willems has written some very nice articles about this, and your remark about complicated proofs in mathematics reminds me of something he said:

… I have almost always felt fortunate to have been able to do research in a mathematics environment. The average competence level is high, there is a rich history, the subject is stable. All these factors are conducive for science. At the same time, I was never able to feel unequivocally part of the mathematics culture, where, it seems to me, too much value is put on difficulty as a virtue in itself. My appreciation for mathematics has more to do with its clarity of thought, its potential of sharply articulating ideas, its virtues as an unambiguous language. I am more inclined to treasure the beauty and importance of Shannon’s ideas on errorless communication, algorithms such as the Kalman filter or the FFT, constructs such as wavelets and public key cryptography, than the heroics and virtuosity surrounding the four-color problem, Fermat’s last theorem, or the Poincaré and Riemann conjectures.

I tend to agree. Never having been much of a prodigy myself, I’ve always preferred thinking of math as a language for understanding the universe, rather than a list of famous problems to challenge heroes, an intellectual version of the Twelve Labors of Hercules. But for me the universe includes very abstract concepts, so I feel "pure" math such as category theory can be a great addition to the vocabulary of any scientist.

Anyway: back to business. You said:

I’d like human-originating intelligent life in the Solar System to survive, thrive, and not lose its values in the process. And I still think the best means is self-improving AI.

I bet a lot of our readers would happily agree with your first sentence. It sounds warm and fuzzy. But a lot of them might recoil from the next sentence. "So we should build robots that take over the world???" Clearly there’s a long train of thought lurking here. Could you sketch how it goes?

EY: Well, there’s a number of different avenues from which to approach that question. I think I’d like to start off with a quick remark—do feel free to ask me to expand on it—that if you want to bring order to chaos, you have to go where the chaos is.

In the early twenty-first century the chief repository of scientific chaos is Artificial Intelligence. Human beings have this incredibly powerful ability that took us from running over the savanna hitting things with clubs to making spaceships and nuclear weapons, and if you try to make a computer do the same thing, you can’t because modern science does not understand how this ability works.

At the same time, the parts we do understand, such as that human intelligence is almost certainly running on top of neurons firing, suggest very strongly that human intelligence is not the limit of the possible. Neurons fire at, say, 200 hertz top speed; transmit signals at 150 meters/second top speed; and even in the realm of heat dissipation (where neurons still have transistors beat cold) a synaptic firing still dissipates around a million times as much heat as the thermodynamic limit for a one-bit irreversible operation at 300 Kelvin. So without shrinking the brain, cooling the brain, or invoking things like reversible computing, it ought to be physically possible to build a mind that works at least a million times faster than a human one, at which rate a subjective year would pass for every 31 sidereal seconds, and all the time from Ancient Greece up until now would pass in less than a day. This is talking about hardware because the hardware of the brain is a lot easier to understand, but software is probably a lot more important; and in the area of software, we have no reason to believe that evolution came up with the optimal design for a general intelligence, starting from incremental modification of chimpanzees, on its first try.
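To make the arithmetic concrete, here is a quick back-of-the-envelope check (a sketch, not a model of the brain; the million-fold speedup is the hypothetical figure from the paragraph above, and the thermodynamic limit is the Landauer bound k·T·ln 2):

```python
import math

# Wall-clock time corresponding to one subjective year, at a hypothetical
# million-fold speedup over human thought.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 seconds
speedup = 1e6                              # hypothesized speedup factor
print(SECONDS_PER_YEAR / speedup)          # ~31.6 wall-clock seconds per subjective year

# Landauer limit: minimum heat dissipated by one irreversible bit operation
# at temperature T -- the "thermodynamic limit" mentioned above.
k_B = 1.380649e-23                         # Boltzmann constant, J/K
T = 300.0                                  # kelvin
print(k_B * T * math.log(2))               # ~2.9e-21 joules per bit erased
```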

People say things like "intelligence is no match for a gun" and they’re thinking like guns grew on trees, or they say "intelligence isn’t as important as social skills" like social skills are implemented in the liver instead of the brain. Talking about smarter-than-human intelligence is talking about doing a better version of that stuff humanity has been doing over the last hundred thousand years. If you want to accomplish large amounts of good you have to look at things which can make large differences.

Next lemma: Suppose you offered Gandhi a pill that made him want to kill people. Gandhi starts out not wanting people to die, so if he knows what the pill does, he’ll refuse to take the pill, because that will make him kill people, and right now he doesn’t want to kill people. This is an informal argument that Bayesian expected utility maximizers with sufficient self-modification ability will self-modify in such a way as to preserve their own utility function. You would like me to make that a formal argument. I can’t, because if you take the current formalisms for things like expected utility maximization, they go into infinite loops and explode when you talk about self-modifying the part of yourself that does the self-modifying. And there’s a little thing called Löb’s Theorem which says that no proof system at least as powerful as Peano Arithmetic can consistently assert its own soundness, or rather, if you can prove a theorem of the form

□P ⇒ P

(if I prove P then it is true) then you can use this theorem to prove P. Right now I don’t know how you could even have a self-modifying AI that didn’t look itself over and say, "I can’t trust anything this system proves to actually be true, I had better delete it". This is the class of problems I’m currently working on—reflectively consistent decision theory suitable for self-modifying AI. A solution to this problem would let us build a self-improving AI and know that it was going to keep whatever utility function it started with.
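For reference, the standard statement of Löb’s theorem being invoked here is: for any sentence P, if the system (say Peano Arithmetic) proves

□P ⇒ P

then it proves P itself. So a system that proved its own soundness schema for every sentence P would thereby prove every sentence, true or false.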

There’s a huge space of possibilities for possible minds; people make the mistake of asking "What will AIs do?" like AIs were the Tribe that Lives Across the Water, foreigners all of one kind from the same country. A better way of looking at it would be to visualize a gigantic space of possible minds and all human minds fitting into one tiny little dot inside the space. We want to understand intelligence well enough to reach into that gigantic space outside and pull out one of the rare possibilities that would be, from our perspective, a good idea to build.

If you want to maximize your marginal expected utility you have to maximize on your choice of problem over the combination of high impact, high variance, possible points of leverage, and few other people working on it. The problem of stable goal systems in self-improving Artificial Intelligence has no realistic competitors under any three of these criteria, let alone all four.

That gives you rather a lot of possible points for followup questions so I’ll stop there.

JB: Sure, there are so many followup questions that this interview should be formatted as a tree with lots of branches instead of in a linear format. But until we can easily spin off copies of ourselves I’m afraid that would be too much work.

So, I’ll start with a quick point of clarification. You say "if you want to bring order to chaos, you have to go where the chaos is." I guess that at one level you’re just saying that if we want to make a lot of progress in understanding the universe, we have to tackle questions that we’re really far from understanding—like how intelligence works.

And we can say this in a fancier way, too. If we want models of reality that reduce the entropy of our probabilistic predictions (there’s a concept of entropy for probability distributions, which is big when the probability distribution is very smeared out), then we have to find subjects where our predictions have a lot of entropy.

Am I on the right track?

EY: Well, if we wanted to torture the metaphor a bit further, we could talk about how what you really want is not high-entropy distributions but highly unstable ones. For example, if I flip a coin, I have no idea whether it’ll come up heads or tails (maximum entropy) but whether I see it come up heads or tails doesn’t change my prediction for the next coinflip. If you zoom out and look at probability distributions over sequences of coinflips, then high-entropy distributions tend not to ever learn anything (seeing heads on one flip doesn’t change your prediction next time), while inductive probability distributions (where your beliefs about probable sequences are such that, say, 11111 is more probable than 11110) tend to be lower-entropy because learning requires structure. But this would be torturing the metaphor, so I should probably go back to the original tangent:
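Here is a tiny numerical illustration of that point (a sketch, assuming Laplace’s rule of succession, i.e. a uniform prior over the coin’s bias, as the stand-in for an “inductive” distribution):

```python
from itertools import product
from math import log2, comb

n = 5  # length of the coinflip sequence

def p_uniform(seq):
    # Maximum-entropy distribution: every length-n sequence equally likely.
    return 1 / 2 ** n

def p_inductive(seq):
    # Exchangeable distribution from a uniform prior over the coin's bias:
    # P(a particular sequence with k heads) = 1 / ((n + 1) * C(n, k)).
    k = sum(seq)
    return 1 / ((n + 1) * comb(n, k))

def entropy(p):
    return -sum(p(s) * log2(p(s)) for s in product([0, 1], repeat=n))

print(entropy(p_uniform))    # 5.0 bits, but seeing heads never changes its predictions
print(entropy(p_inductive))  # ~4.47 bits: lower entropy, and it learns from each flip
print(p_inductive((1, 1, 1, 1, 1)) / p_inductive((1, 1, 1, 1, 0)))  # 11111 is 5 times as probable as 11110
print((4 + 1) / (4 + 2))     # after four heads, the inductive predictor gives P(heads) = 5/6
```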

Richard Hamming used to go around annoying his colleagues at Bell Labs by asking them what were the important problems in their field, and then, after they answered, he would ask why they weren’t working on them. Now, everyone wants to work on "important problems", so why are so few people working on important problems? And the obvious answer is that working on the important problems doesn’t get you an 80% probability of getting one more publication in the next three months. And most decision algorithms will eliminate options like that before they’re even considered. The question will just be phrased as, "Of the things that will reliably keep me on my career track and not embarrass me, which is most important?"

And to be fair, the system is not at all set up to support people who want to work on high-risk problems. It’s not even set up to socially support people who want to work on high-risk problems. In Silicon Valley a failed entrepreneur still gets plenty of respect, which Paul Graham thinks is one of the primary reasons why Silicon Valley produces a lot of entrepreneurs and other places don’t. Robin Hanson is a truly excellent cynical economist and one of his more cynical suggestions is that the function of academia is best regarded as the production of prestige, with the production of knowledge being something of a byproduct. I can’t do justice to his development of that thesis in a few words (keywords: hanson academia prestige) but the key point I want to take away is that if you work on a famous problem that lots of other people are working on, your marginal contribution to human knowledge may be small, but you’ll get to affiliate with all the other prestigious people working on it.

And these are all factors which contribute to academia, metaphorically speaking, looking for its keys under the lamppost where the light is better, rather than near the car where it lost them. Because on a sheer gut level, the really important problems are often scary. There’s a sense of confusion and despair, and if you affiliate yourself with the field, that scent will rub off on you.

But if you try to bring order to an absence of chaos—to some field where things are already in nice, neat order and there is no sense of confusion and despair—well, the results are often well described in a little document you may have heard of called the Crackpot Index. Not that this is the only thing crackpot high-scorers are doing wrong, but the point stands, you can’t revolutionize the atomic theory of chemistry because there isn’t anything wrong with it.

We can’t all be doing basic science, but people who see scary, unknown, confusing problems that no one else seems to want to go near and think "I wouldn’t want to work on that!" have got their priorities exactly backward.

JB: The never-ending quest for prestige indeed has unhappy side-effects in academia. Some of my colleagues seem to reason as follows:

If Prof. A can understand Prof. B’s work, but Prof. B can’t understand Prof. A, then Prof. A must be smarter—so Prof. A wins.

But I’ve figured out a way to game the system. If I write in a way that few people can understand, everyone will think I’m smarter than I actually am! Of course I need someone to understand my work, or I’ll be considered a crackpot. But I’ll shroud my work in jargon and avoid giving away my key insights in plain language, so only very smart, prestigious colleagues can understand it.

On the other hand, tenure offers immense opportunities for risky and exciting pursuits if one is brave enough to seize them. And there are plenty of folks who do. After all, lots of academics are self-motivated, strong-willed rebels.

This has been on my mind lately since I’m trying to switch from pure math to something quite different. I’m not sure what, exactly. And indeed that’s why I’m interviewing you!

(Next week: Yudkowsky on The Art of Rationality, and what it means to be rational.)


Whenever there is a simple error that most laymen fall for, there is always a slightly more sophisticated version of the same problem that experts fall for. – Amos Tversky

66 Responses to This Week’s Finds (Week 311)

  1. cwillu says:

    Was this the whole interview? It ends kinda abruptly, where I might otherwise expect a “page 2” link.

    • John Baez says:

      No, the interview is just getting started! There will probably be two more parts, which will appear in “Week 312” and “Week 313”.

      The first sentence in this blog entry is:

      This week I’ll start an interview with Eliezer Yudkowsky…

      but I’ve now added a sentence at the end describing what’s coming next.

  2. DavidTweed says:

    It’s worth noting that it seems like Yudkowsky is primarily from the “logical” branch of work on Artificial Intelligence (a term I personally don’t like, but what can you do…). Here there’s a big focus on constructing ideas and algorithms that fully reflect and are consistent with the situation being addressed.

    There’s also a very strong branch, which arguably has seen greater deployment thus far, which one could call the “pattern” branch of AI, where techniques are based around finding and utilising patterns in the situation being addressed, even if those patterns are incomplete or even, at the extremes, inconsistent. (As a classic example, consider the incredibly primitive AI implemented in web search engines: it’s absolutely ridiculous that very simple things like clusters induced by terms, linking weights, etc., that don’t even model the search domain, should tend to produce a reasonably high percentage of documents relevant to the intent of a typical search. Yet it seems to work reasonably well.)

    It’s certainly conceivable that the first machines that could plausibly be called semi-independent AIs may arise from more pattern-based machines becoming more and more complicated so that they resemble “logical” machines, rather than from building a “logical” machine directly. (Of course it’s conceivable that semi-independent AIs never arise at all.)

    • cata says:

      It’s undeniably true, David, that the “pattern” branch has seen more incremental, clear success in many domains. However, I think that the point EY is trying to make is that AI created through the “pattern” method without a logical foundation is just a really risky proposition. Unless we have very rigorous reasons to believe that an AI will pursue and continue to pursue human values, it probably won’t; and then if the AI was any good (or can self-improve) we’re fucked.

  3. Web Hub Tel says:

    Because on a sheer gut level, the really important problems are often scary. There’s a sense of confusion and despair, and if you affiliate yourself with the field, that scent will rub off on you.

    This is the contrast: People don’t want to study problems that impose stark constraints. Yet people will study problems that have open-ended possibilities. That is why I can get paid to develop design automation software, with whatever AI that entails, but I only work on oil depletion topics as a side-interest. This foible of human nature comes up over and over, as when EY says:

    We can’t all be doing basic science, but people who see scary, unknown, confusing problems that no one else seems to want to go near and think “I wouldn’t want to work on that!” have got their priorities exactly backward.

  4. Neel Krishnaswami says:

    I’m skeptical that any reflective decision theory (assuming my understanding of the phrase is anything like Yudkowsky’s) will require only simple mathematics. If you look for fixed point theorems, you are on a very direct road into topology. From there, it’s a short hop to topological interpretations of constructive mathematics, and now you are in a field which will eat as much mathematical sophistication as you can feed it.

    You can already see hints of this in E.T. Jaynes’ comments on the various decision-theoretic paradoxes, in chapter 15 of his book “Probability Theory: the Logic of Science.” He observed that they typically all rely on a failure of continuity, giving a case in which the limit of a sequence of expectations fails to equal the expectation of the limit. (Vann McGee’s “An Airtight Dutch Book” offers a very nice example of this.) So, said Jaynes, one should only consider quantifiers over infinite sets arrived at as the limit of finite processes. Which is to say, you should only consider quantifiers with a sufficiently continuous interpretation!

    I don’t mean this as a reason to avoid the problem: F.W. Lawvere wrote some of the best mathematical papers I have ever read with reflections in this style. (I particularly recommend “Metric Spaces, Generalized Logic, and Closed Categories”. I wish I understood enough to understand his “Some Thoughts on the Future of Category Theory”, which attempts to formalize(!) the duality between Being and Becoming.)

    But the simplicity of Lawvere’s thinking clearly reflects the fact that he knows a terrifying amount of mathematics. It requires enormous amounts of knowledge to understand multiple disciplines well enough to see significant analogies between them. It’s a worthy goal, but don’t underestimate the work it takes to become the kind of person who can do that.

  5. John F says:

    Important stuff. First about friendliness. I will distinguish between tame and feral. It turns out that, contrary to previous suppositions of thousands of years for domestication, tameness can be bred for in just a few generations. The classic results along this line are the Russian fox experiments

    http://en.wikipedia.org/wiki/Domesticated_silver_fox

    It is nature, not nurture. Feral foxes raised in tame families are still feral, and vice versa. These tame foxes also play well, or at least better, with other species besides humans including their own.

    A single self-improving AI may encounter the error of Colossus, taking over the world,

    http://en.wikipedia.org/wiki/Colossus_(novel)

    but a democracy of such AIs should instantly breed tameness even if tameness is not enforced, as long as the probability of behavior variation including tameness is not absolutely eliminated.

    Regarding sound consistent logic, one type of solution which AIs may enjoy is that solution which requires all statements to explicitly assert themselves (eliminating the Liar statement), or contain their own proofs (eliminating the Gödel statement), however you want to formulate it. The set theory version of course requires sets to contain themselves.

    • John Baez says:

      Having all statements contain their own proofs is fairly similar to the idea behind Gentzen’s proof of the consistency of arithmetic. His argument uses the idea of ‘cut-free proofs’. In a cut-free proof of a list of statements, all the statements appearing in the proof appear in the list of the statements being proved. So, while the list of statements being proved doesn’t exactly ‘contain its own proof’, it tells you what statements appear in the proof.

      Thanks to Gödel’s theorem, Gentzen’s proof must use a principle that goes beyond arithmetic. But it uses one that’s intuitively obvious: induction on the set of finite trees. So, it’s not quite as silly as you might think… though it still doesn’t get around Gödel’s theorem!

      By the way, a lot of people describe the principle that goes beyond arithmetic in Gentzen’s proof as ‘induction up to the ordinal ε0’. I think that conveys the impression that it’s something esoteric and dubious. But when I finally took the trouble to study what’s going on, I quickly discovered that it’s just doing induction on the set of finite trees! Once you understand this, you’ll probably agree that this principle seems “obviously correct”. After I realized that, I decided Gentzen’s theorem was a lot cooler than I’d first thought.

      If you want to see how big the ordinal ε0 is, read the story in “week236”. It’s big! But it’s still just countable… and much more to the point, ordinals up to ε0 correspond to finite trees.

      (Finite rooted planar trees, to be precise.)

      • Eliezer Yudkowsky says:

        Depending on how you order finite trees – determine whether one finite tree is more or less than another – you end up with an ordering represented by a different infinite ordinal. There are orderings of finite trees which represent ordinals very much larger than epsilon-zero, so saying “induction up to epsilon zero” is correct. E.g. Finite trees as ordinals.

        Btw, for a mathematically amusingly fast-growing function based on labeled trees see http://en.wikipedia.org/wiki/Kruskal%27s_theorem

        • Eliezer Yudkowsky says:

          Ah, sorry, didn’t see the specification of “planar trees” until just now.

        • John Baez says:

          Eliezer wrote:

          Ah, sorry, didn’t see the specification of “planar trees” until just now.

          Sorry, I have a nasty habit of putting the technical details near the end so that people don’t quit reading me early on. “Induction up to ε0” is definitely the most precise way to describe the principle needed for Gentzen’s proof of the consistency of arithmetic; my only complaint was that it sounds so technical that people may not notice that this principle is utterly obvious!

          Btw, for a mathematically amusingly fast-growing function based on labeled trees see http://en.wikipedia.org/wiki/Kruskal%27s_theorem

          Cool! What really delights me is that to explain how fast this function grows, the article invokes the ‘Feferman-Schütte ordinal’, which is one of the more monstrously huge — but still countable! — ordinals I discussed in week236. This ordinal makes ε0 look like a pathetically puny pimple on the face of mathematics.

          I don’t understand it as well as I’d like, but you could very roughly say that the Feferman-Schütte ordinal is the first ordinal that’s impossible to describe without mentioning a set that contains it.

          More precisely, it’s the first ordinal that can only be defined impredicatively. I believe that this property even counts as a definition of the Feferman-Schütte ordinal—but an impredicative one, of course.

          Now, here’s a digression for other folks out there reading this, who don’t know as much math as Eliezer:

          Gentzen’s proof uses induction on the set of proofs, and he draws these proofs sort of like trees. So, it’s quite plausible that he’d need some sort of induction on the set of trees to make his argument rigorous. But amusingly, while this kind of induction is completely believable once someone explains it, it’s impossible to justify it using the most common axioms of arithmetic, namely ‘Peano arithmetic’.

          This in turn means that there are lots of completely believable facts about natural numbers that you can’t prove using Peano arithmetic. For example, the fact that every Goodstein sequence eventually reaches zero!

          To write down a Goodstein sequence, you start with any natural number and write it in "recursive base 2", like this:

          2^{2^2 + 1} + 2^1

          Then you replace all the 2’s by 3’s:

          3^{3^3 + 1} + 3^1

          Then you subtract 1 and write the answer in "recursive base 3":

          3^{3^3 + 1} + 1 + 1

          Then you replace all the 3’s by 4’s, subtract 1 and write the answer in recursive base 4. Then you replace all the 4’s by 5’s, subtract 1 and write the answer in recursive base 5. And so on.
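          For the curious, here is a minimal sketch of that procedure in Python (the function names are mine; hereditary builds the “recursive base b” representation, and goodstein bumps the base and subtracts 1 at each step); its output matches the sequence for 4 quoted below:

          ```python
          def hereditary(n, b):
              """Write n in 'recursive base b' as a list of (exponent, coefficient) pairs,
              where each exponent is itself recursively represented."""
              terms, power = [], 0
              while n > 0:
                  n, digit = divmod(n, b)
                  if digit:
                      terms.append((hereditary(power, b), digit))
                  power += 1
              return terms

          def evaluate(terms, b):
              """Evaluate a recursive-base representation using the base b."""
              return sum(c * b ** evaluate(e, b) for e, c in terms)

          def goodstein(n, steps):
              """Yield the first `steps` terms of the Goodstein sequence starting at n."""
              base = 2
              for _ in range(steps):
                  yield n
                  if n == 0:
                      return
                  n = evaluate(hereditary(n, base), base + 1) - 1  # bump the base, subtract 1
                  base += 1

          print(list(goodstein(4, 12)))  # [4, 26, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348]
          ```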

          You can try some examples using the applet on this site:

          • National Curve Bank, Goodstein’s theorem.

          At the end of week236 I talk about what happens if you look at the Goodstein sequence starting with the number 4:

          4, 26, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348, …

          You’ll see a proof that it takes about 3 × 1060605351 steps to reach zero!

          The resemblance of these ‘recursive base n’ expressions to trees is no coincidence.

        • Todd Trimble says:

          (This is in response to John.) Well, geez, having gone that far in describing Goodstein’s sequences, you might as well have mentioned the simple idea which indicates why they all eventually hit zero: replace all the bases 2, 3, etc. by \omega, and you get a decreasing sequence of ordinals!

        • John Baez says:

          Good point, Todd. I explained the idea in more detail in week236, in case anyone’s interested.

        • John F says:

          What about ordinals that can only be defined imprecatively? Proverbs 26:2 may be relevant.

      • John F says:

        I think requiring statements to contain their own proofs actually does get around several things, or maybe instead doesn’t allow some things.

        I think one of the very first things an intelligent exploratory, trusting but verifying, AI would do is see how big infinity is, or in a current implementation see how big a number in memory can be before it has to be paged. Right after that it would discover the concept of the meta-pagefile.

    • John Baez says:

      My previous remark was just an excuse to talk about Gentzen’s proof of the consistency of arithmetic. More to the point:

      I don’t think proof is that big a part of everyday life. If I limited my remarks to things I could prove, I’d be a lot quieter than I am. And I’m a mathematician! Most people would have to shut up entirely.

      So, I don’t actually think mathematical logic (at least as currently constituted) will play more than a limited role in artificial intelligence.

      • Eliezer Yudkowsky says:

        The main thing you want to prove in Friendly AI is that a program is performing a lawful manipulation of its own uncertainty.

      • Neel Krishnaswami says:

        Proof is not a big part of life, but evidence is — and (structural) proof theory tells us how to view proofs as evidence of propositions. Furthermore, in real life people believe things for reasons; if I make a claim, you can ask me why I believe it, and (quite rightly) be skeptical if I fail to supply any evidence for the claim.

        However, note that the theory of Bayesian reasoning allows for reasoners who are completely amnesiac about evidence! That is, the two important things in Bayesian reasoning are (1) your prior, and (2) updating according to Bayes’ law when you encounter new evidence. But the point of decision theory is that your prior encodes everything. So a Bayesian reasoner can forget all of the evidence he or she has seen, just as long as they update the probabilities associated with each hypothesis before forgetting. This means that they don’t have to remember any evidence, and if you ask an amnesiac Bayesian why they believe something, they can answer “I don’t know”!
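        A toy version of that “amnesiac” updating in code (a sketch with made-up hypotheses; the only state the reasoner keeps is the posterior itself):

        ```python
        def bayes_update(posterior, likelihood, observation):
            """Update hypothesis probabilities on one observation, then forget it."""
            unnormalized = {h: p * likelihood(h, observation) for h, p in posterior.items()}
            total = sum(unnormalized.values())
            return {h: p / total for h, p in unnormalized.items()}

        # Two hypotheses about a coin: fair, or biased 90% towards heads.
        def likelihood(h, obs):
            p_heads = 0.5 if h == "fair" else 0.9
            return p_heads if obs == "H" else 1 - p_heads

        belief = {"fair": 0.5, "biased": 0.5}   # the prior
        for obs in "HHHHT":                     # the evidence stream...
            belief = bayes_update(belief, likelihood, obs)
            # ...is discarded after each update; only `belief` (two numbers) survives.

        print(belief)  # the reasoner can report these probabilities, but not the evidence behind them
        ```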

        This is rather useful for writing computer programs, because storing a floating point number uses much less memory than storing all the evidence ever seen. However, it’s not so useful if you want your algorithm to tell you why to believe certain facts. Now, when you look at probability theory, you see that a language of propositions is a sigma-algebra, and a prior is a probability measure on this algebra. So somehow the right idea to handle evidence should involve some kind of categorification of measures.

        • John F says:

          You can only forget if you think that there will be no need for other hypotheses than the ones you are currently updating. But what is the probability that no other hypotheses will ever be needed?

      • DavidTweed says:

        I think that proof feels unusual because of the direction: you “obtain” a conclusion (and admittedly it’s normally a broader conclusion than statements in everyday life) and then proceed to reason why it holds. In contrast as Neel says, a lot of human life is about reasoning about things you observe to reach conclusions, but that seems to me to be just a reordering of steps in a proof. However, the other big difference is that we use a lot of heuristic, and even downright erroneous, steps in reasoning that shouldn’t be used in a proof :-) .

  6. While I find the interview interesting, I generally tend to get annoyed easily. For example, whenever somebody refers to himself as a math prodigy.

    So if you want to make the interview even better, focus more on presenting your ideas precisely, less on talking about yourself or the academic community or how current ideas are inadequate (I guess this is a remark to EY).

    But otherwise very nice interview :-).

    • John Baez says:

      As an interviewer, I am eager to let the personality of the person being interviewed show through. It’s not just about disembodied ideas; it’s about people. If someone is a bit annoying to you, well, I think it’s good for you to know that.

      On the other hand, I always let my subjects look over what they’ve said and okay it before publishing my interviews. Unlike most journalists, I don’t try to get people to say things they’d later regret.

      Of course, when I’m an interviewee I try to be careful about the image I project. I don’t want all the ghastly features of my actual personality to show through.

      Personally I don’t think it’s so annoying to admit one was a math prodigy. It’s not something one has much control over. It can be annoying if someone rests on their laurels and keeps harping on how they once were a prodigy, but Eliezer Yudkowsky doesn’t actually do that.

  7. Giampiero Campa says:

    The internal model principle (I tend to regard it as the control engineer’s version of the no free lunch theorem) states (roughly) that for a system to carry out a nontrivial task in a certain environment, it has to rely (implicitly or explicitly) on an internal model of such environment.

    From this standpoint, an intelligent system is just one that has a very good internal model of its environment. This also suggests that there are limits on how intelligent a system can be, because once you understand the world very well you probably can’t improve that much anymore.

    Also, this suggests that the problem with AI is that computers still lack a decent model of their surrounding environment (that is human beings, society, physical world). Doug Lenat is leading a project to painstakingly and systematically build a database of common sense knowledge (I would call it a sensible model of the world) for computers to use.

    Finally, brainpower alone only gets you so far, because no matter how intelligent you are, without experimentation you cannot understand the world (which is no surprise to physicists of course).

    Now, I am not sure I agree with the tacit assumption that technological change is accelerating; the opposite can in fact be argued.

    However, even without acceleration, a future in which a lot of work can be carried out by machines (perhaps after this century) seems likely. Whether the transition will be gradual enough for society (and for the economic system) to adapt is another issue.

  8. Bruce Smith says:

    I want to suggest a partial solution to the problem of a self-modifying decision system, as discussed in the interview.

    First let me make explicit what I think we’re talking about: there is an intelligent system S which performs actions, trying to further some goal G. As a component S has a theorem-proving system P, which includes as an axiom a theory of the world (including an interpretation of G, and predicted effects of actions S can perform, and beliefs about the state of the world, including sensory data), and some “control process” that (among other things) uses P to try to prove things, and to decide what to try to prove. Sometimes it proves things like “taking action A right now (instead of doing nothing) would make satisfying goal G more likely”, or “under conditions C, taking action A1 would be better than taking action A2 (regarding goal G)”; when it does prove such things, the control process takes action A, or installs a rule which takes action A1 under conditions C.

    (There is a *lot* to criticize about an architecture like that, and I don’t mean to imply that either Eliezer or I think it could literally be that simple or straightforward; but as a toy system I think it can capture the logical issue involving self-modification which is being discussed here. One could also criticize the basic premise that the goals we really care about can be formalized at all, enough for theorem-proving related to them to be useful (let alone to be solely relied upon for safety); I take that criticism seriously, but it’s beyond the scope of this comment, which is specifically about whether goals which *can* be formalized can be correctly pursued by a self-improving system.)

    Anyway, to make that system S self-improving, we want to let its theorem-proving system P vary with time, and to let it take actions like “replace P1 with an improved version, P2” (after proving that would be a good action to take, as usual).

    (We might want to let S improve its other components as well, not only P; but for this discussion let me assume the other components’ operation was all capturable in conclusions reached by P about what they should do or how they should be constructed, so we can reduce the problem of safely improving the other components to safely improving P or even to merely using P.)

    Now I can express Eliezer’s dilemma in terms of this system: how can we expect S to use P1 to prove that P2 is a better theorem-proving system than P1 (for S to use as P), if it can’t even prove that P1 itself (let alone P2) is an *acceptable* system for S to use as P? (Which it can’t, since as discussed, if P1 could prove it was consistent, it’s not, and an inconsistent theorem-prover can prove anything, and is therefore not safe to rely on for vetting actions towards any goal. Of course, a P might be consistent but still wrong — consistency is only a necessary condition for being acceptable, not a sufficient one — but this just makes the dilemma more acute, since we’re stuck at a much more basic level than even worrying about whether P is *correct*.)

    Now for the solution:

    1. We give up on S proving that its actions are correct, taking seriously Gödel’s proof that this is not possible. It’s not just a technical limitation due to a specific architecture for S — it’s a general property of anything that thinks; there is no way it can prove it’s consistent (and also *be* consistent), not even in special cases (as illustrated by Löb’s theorem).

    2. But we also realize that we were asking for more than we needed to, in wanting S to prove P2 consistent (not to mention correct, and better than P1) before replacing P1 with P2.

    For us, the designers of S, to think it would be a good idea for S to replace P1 by P2, we don’t need S to first prove either P2 or P1 is consistent. (To the extent we ourselves rely on P1 being consistent, we have no choice but to take that on faith and/or on inductive evidence, since we ourselves are subject to the limitations identified by Gödel. Once we’ve built S to rely on P1, we’ve already made that decision (to base S’s reliability on the unprovable consistency of P1).)

    What we *do* need S to prove is that P2 is *better* than P1. Most aspects of what “better” means here are beyond the scope of this discussion; the one that isn’t is “not less reliable”, i.e. “can’t prove wrong things any more than P1 can”. (This alone doesn’t prove the replacement is a good thing, but as long as we also know “P2 is at least as powerful as P1 in practice”, it can make it an acceptable thing.)

    Logically, we are not trying to prove “P2 is consistent”, but instead, “if P1 is consistent, then P2 is consistent”. (And also, separately, we’d want to prove “if P1 can prove X, then P2 can prove X”, but I won’t focus on that since it’s easy in practice.)

    This is called “relative consistency” (of P2, relative to P1), and there is no barrier to P1 proving it.

    So the way we make S self-improving is to add a rule to its list of ways of using P (using P1 to name the current version of P at the time):

    if you can prove that P2 is relatively consistent to P1 (i.e. that P1’s consistency implies P2’s consistency), and that P2 is at least as powerful as P1, then replacing P1 with P2 is acceptable, meaning you can do it if and when you decide it would be an improvement (i.e. that it’s “better” in the ways not discussed here).
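    Schematically (my compression of the rule just stated, writing Con(P) for “P is consistent”):

    P1 ⊢ Con(P1) ⇒ Con(P2),   and   every theorem of P1 is a theorem of P2   ⟹   replacing P1 by P2 is acceptable.

    Note that neither Con(P1) nor Con(P2) ever needs to be proved outright.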

    There is one other point too closely related not to mention here — there are two different ways P2 can relate to P1 while being “an acceptable replacement” as defined above:

    – it might prove exactly the same set of theorems as P1 (if it’s better, that’s presumably by its proving them more efficiently);

    – it might prove things that were undecidable in P1 (if it’s better, it’s because the system S wants to take these things as new axioms for some reason; we’re just proving it can do this safely, not that doing so is a good idea).

    In practice, S might make use of both of these cases, but for different reasons: P2 might be more efficient (at proving the same theorems), or might incorporate a new axiom Q — either since it’s a definition of a useful new term, or since S has decided that as long as Q is undecidable, the evidence is in favor of it and it’s worth taking it as correct (for example, it would have to do something like that to believe new measurements of sensory data, or at least, measurements from a newly designed and added sensory subsystem).

    • John Baez says:

      Hi, Bruce! I’m glad you’re commenting on this. Yes, I think a relative consistency proof is all we need to justify a proposed change in our reasoning methods, not an absolute consistency proof.

      Eliezer wrote:

      Right now I don’t know how you could even have a self-modifying AI that didn’t look itself over and say, “I can’t trust anything this system proves to actually be true, I had better delete it”.

      If I understand you correctly, Bruce, you’re also saying that Löb’s theorem should not make our AI ‘lose confidence in itself’ in this way.

      • Bruce Smith says:

        Well, sort of but not quite — it can’t lose what it didn’t have. It was doing things which P1 proved were ok to do, not because it could prove that meant they were actually ok, but because its designers (who believed, unprovably, that that meant those things were ok to do — or at least who were willing to act that way) programmed it to follow that policy. The point is just that, once it proves P2 is no worse than P1 (which doesn’t require proving either one is “ok”), both it and its designers know that replacing P1 with P2 doesn’t make it any more likely than it already was to prove that proposed actions are ok when they aren’t.

        BTW the above informal wording is a bit misleading (compared to what I spelled out in the first comment), since “ok” sounds more like it refers to correctness than consistency. Of course it would be even better if the system could prove relative *correctness* for a replacement of reasoning (or more generally, sensing/acting) rules, as opposed to just relative *consistency* of reasoning rules. I can’t recall now whether I ever analyzed that possibility; certainly it would not be conceivable (except in trivial cases) unless the system believed fully in some formal model of itself and the surrounding physical world, but if it did (or was willing to accept mere “correctness under that assumption”), then my guess off the top of my head is that it ought to be possible and (in principle) straightforward.

  9. streamfortyseven says:

    From Yudkowsky’s paper “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic Draft of August 31, 2006, available at http://singinst.org/upload/artificial-intelligence-risk.pdf, he states “[y]et before we can pass out of that stage of adolescence, we must, as adolescents, confront an adult problem: the challenge of smarter-than-human intelligence. … Artificial Intelligence is one road into that challenge; and I think it is the road we will end up taking. … I do not want to play down the colossal audacity of trying to build, to a precise purpose and design, something smarter than ourselves. But let us pause and recall that intelligence is not the first thing human science has ever encountered which proved difficult to understand.“

    He’s leaving out two fundamental necessities for artificial intelligence to be truly intelligent and not just a cleverly-fashioned automaton which appears to pass the Turing Test: consciousness and volition, although he talks about volition in great detail and at greater length in “Coherent Extrapolated Volition” (see http://singinst.org/upload/CEV.html) without reaching much of any conclusion – and the last revision to that page was in 2005, almost 6 years ago.

    In “Intentional Cognitive Models with Volition”, Ammar Qusaibaty, Newton Howard, and Colette Rolland of the Center for Advanced Defense Studies (see http://www.c4ads.org/files/cads_report_inmodels_060206.pdf), state that “Maurice Merleau-Ponty argues that conscious life—cognition, perception or desire—is subtended by an “intentional arc” that projects an individual’s past, future, human setting and the physical, ideological and moral circumstances of that individual. This intentional arc brings about the unity of sense, intelligence, sensibility and mobility. (Merleau-Ponty 2002: 157) Intentionality may thus be conceived as central to describing human cognition and intelligence” citing Merleau-Ponty, M. (1945; 1962 trans.) 2002. Phenomenology of Perception. London: Routledge, at page 157.

    Yudkowsky talks a lot about rationality, but that can arise from a rule-based AI interacting with a knowledge base – a seemingly cunning automaton may be produced, but there’s no intentionality present intrinsic to the AI outside of the person who writes the original set of rules and provides the original knowledge base.

    Qusaibaty et al. go on to state that “The philosophical problems of “free will” are caused by three gaps in practical human reasoning and action, as discussed by Searle (2001: 14–15):
    (1)“Reasons for the decision are not sufficient to produce the decision,”
    (2)“Decision is not causally sufficient to produce the action,”
    (3)“Initiation of the action is not sufficient for action continuation or completion.”
    According to Zhu (2004), volition bridges these gaps. Unlike rationality and intentionality, volition is neither explicitly nor implicitly included in Newell’s list of functional requirements for cognitive models [cited by Anderson and Lebiere, 2003]. In their requirements for a theory of intention, Cohen and Levesque implicitly included the concept of volition: “agents track the success of their intentions and are inclined to try again if their attempts fail.” (Cohen & Levesque 1990) For Cohen and Levesque, intention is a choice with commitment. Beyond commitment to goals, however, volition is a self-imposed vehicle for ideas to be actions, for the intangible to be real and for the mental to be physical through rationality and choice. If intentionality is the mother of rationality then volition is the mother of intentionality.”

    a. Cohen, P. R. and Levesque, H. J. 1990. “Intention is choice with commitment.” Artificial Intelligence 42: 213-261. (see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.8441&rep=rep1&type=pdf)

    b. Zhu, Jing. June 2004. “Understanding Volition.” Philosophical Psychology 17.2: 247-273. (see http://philpapers.org/rec/ZHUUV [paywall = $47.39])

    c. Anderson, John R. and Christian Lebiere, 2003. “The Newell Test for a theory of cognition.” Behavioral and Brain Sciences 26.5: 587-601. (see http://bit.ly/enMS9o)

    but Yudkowsky may address these questions about volition and intentionality in Part II…

    • John Furey says:

      I’m not sure it is volition per se that is required, but instead diligence. Diligence is moreover externally observable (or at least according to taskmasters would be observable if you were exhibiting it).

      • John Baez says:

        So you’re wanting an AI that does what it’s told rather than what it wants? I think the dream of ‘friendly AI’ is that it does what it wants, but it wants things that are nice.

        • John F says:

          Sorry, I meant required according to the definitions discussed by stream47 – diligence fits them as well as the more evocative word volition.

          Volition to me implies both the ability to choose (whether rationally or irrationally) and the ability to irrationally prefer. Volition is not just the ability to make “free will” choices that are either 1) obviously correct or 2) random; it includes, for example, Koko the gorilla preferring to watch men rather than women.

        • Kevembuangga says:

          So you’re wanting an AI that does what it’s told rather than what it wants?

          Wouldn’t that be a more reasonable approach than introducing unbridled autonomy and then worrying about its ill effects?
          The scare of the “evil AI” sounds silly and gratuitous – see the discussion and counterarguments by Nick Szabo.

  10. Allan E says:

    Baloney: it’s all a load of hooey

    Here’s another possibility — if you combine an intelligence explosion with “there’s plenty of room at the bottom”, one can imagine our technological future effectively disappearing from us and, perhaps for good measure, shutting the door behind itself!

    I call this the “we’re left out to graze” scenario! Similar ideas appear in sci-fi, specifically Iain M. Banks’s “Sublimed” civilizations.

    • John Baez says:

      I love thinking about all these scenarios… and anyone who hasn’t already read every SF novel by Iain M. Banks should do so immediately.

      However in my more practical life I prefer to think about this:

      At what rate will technological progress of various sorts occur, and at what rate will global warming, the depletion of oil reserves, and other civilization-threatening processes occur?

      If for example there were a good chance of developing powerful AI that could help us with our problems before various known processes pose a serious threat, it would make a lot of sense to work on AI.

      But if it’s almost certain to come too late, maybe we should focus more energy on something else, for now.

      I should add that my main fear is that oil shortages and erratic weather conditions will lead to economic problems, famines and wars that become so grave that they threaten ‘civilization as we know it’. I have little sense of how likely this scenario is. I feel that in the next 10-20 years, this scenario is more likely than an ‘intelligence explosion’. But what I really want is to get a better feel for the risks and opportunities we face. So, rather than going around telling people what I feel, I’m more inclined to gather information.

      • streamfortyseven says:

        I think we’ll be running into resource depletion problems a long time before anyone develops any kind of AI that can exhibit intentionality or volition – and resource depletion will prevent money from being spent on anything other than the rule-based automata that we currently refer to as AI.

        I think we need to be finding ways to get away from using oil and natural gas ASAP – go back to what it felt like in 1979, when farmers used methane generators run off of manure to run their tractors and farm trucks. Coming from Kansas, I knew a lot of rural people who did just that. Devising and improving technology and infrastructure planning for immediate use would be the most productive use of time in my opinion. Right now, we’ve got to play catch-up to where we were in the late 1970s.

        • John Baez says:

          It’s fascinating to me that we live in a world where some intelligent people think we need to put more effort into sophisticated artificial intelligence, while others think tractors powered by methane from manure are more important, and each thinks the other is being unrealistic.

        • streamfortyseven says:

          Well, as the power requirements for computers get lower and lower, and processors get faster and faster, perhaps we’ll have a peer-to-peer distributed network AI setup powered by chicken and cow manure, thus getting the best of both worlds and giving new meaning to the old joke about BS, MS, and PhD (pile it higher and deeper…) ;-)

        • John Baez says:

          Not to mention “garbage in, garbage out”!

        • DavidTweed says:

          The thing that’s easy to forget is that there’s an existence proof that human-level intelligence can be achieved with roughly 1.5 kg of matter, namely the human brain. So I don’t think resource depletion is a problem that would stop the deployment of AI. The problem is that we currently don’t really have any breakthrough ideas about how to make AI work and, all things considered, only a minuscule proportion of human effort is spent on it – and maybe rapid resource depletion will prevent that effort from being expended.

      • Giampiero Campa says:

        … my main fear is that oil shortages and erratic weather conditions will lead to economic problems, famines and wars …

        I tend to agree. However, I think/fear that the single most important thing that is slowly but surely driving us to economic problems, famines and wars, and WILL sooner or later threaten ‘civilization as we know it’, is the fact that all the important policy decisions are constantly being taken by, and in the interest of, the richest 0.1% of the population.

        Unfortunately this is a pattern that has brought down many other social systems in the past, from the Roman Empire to the 4,000-year-old Indian caste system, and we can learn from history that we learn nothing from history :)

        The only antidote I can think of is the majority of people understanding the economic inner workings of society. I am talking really basic macroeconomics here: how money circulates, where it flows from and to, and why. But I don’t see steps in this direction.

        And yes, erratic weather conditions and especially oil shortages can definitely accelerate the process. Probably, as usual, poor people everywhere will suffer first and most.

        • Bruce Smith says:

          The only antidote I can think of is the majority of people understanding the economic inner workings of society…. But I don’t see steps in this direction.

          In my more optimistic moments, I see the internet/web/blogosphere as a big step in this direction (maybe not towards “the majority” understanding these and other important things, but towards “enough” understanding).

          I was aware of ideas like that before the blogosphere came into being, but since it did, I’ve been exposed to far more intelligent discussion of them; and I can say the same for many other kinds of ideas and knowledge. I think this is a general phenomenon which can increase “social intelligence” (even though the web also decreases it in some ways, e.g. by being an “echo chamber”).

          I also expect/hope to see much better collaborative idea-improving software platforms in the reasonably near future….

  11. Roger Witte says:

    We haven’t even started to think about what happens when you have several super-intelligent AIs, but this is very relevant. What kind of society/culture would they form between themselves? How would that impact humans, both as individuals and as societies/cultures?

    There is plenty of biological work suggesting that the reason animals develop high intelligence is so that they can construct more complex social systems. The ability to construct better conscious models of the physical environment is a bonus (but if a better understanding of the physical environment is all that’s required, evolution tends to come up with better ‘hard-wired’ solutions rather than increasing intelligence per se).

    Note that an individual bee is fairly bright, as insects go, whereas individual ants do not seem to be as intelligent as individual bees. However, ant and bee colonies seem to have similar levels of intelligence (maybe the ant colonies have the edge).

    It is certainly not clear to me whether human cultures/societies are more or less intelligent than human individuals (my guess is that societies are more intelligent in the long run, but less intelligent in the short term). The maths for dealing with these systems is not well developed (but quantitative economics and quantitative ecology fit in here somewhere, as does Arrow’s theorem, which was discussed over at the n-Category Café, although the discussion there wandered away from, rather than towards, its original purpose, i.e. understanding the relationship between a human society and the individuals of which it is composed).

    I don’t know if the new ‘green maths’ that John is seeking fits in here, but there is certainly a need for new mathematics to usefully address these issues.

  12. […] “Azimuth”, the blog of mathematical physicist John Baez (author of the famous Crackpot Index): This week I’ll start an interview with Eliezer Yudkowsky, […]

  13. […] Azimuth, blog of mathematical physicist John Baez (author of the Crackpot Index): This week I’ll start an interview with Eliezer Yudkowsky, who […]

  14. News Bits says:

    […] new interview with Eliezer […]

  15. […] John Baez: This Week’s Finds (Week 311) […]

  16. Thomas says:

    I find it funny that one speculates on imagined AI, but not on current developments like those.

    • Tim van Beek says:

      That’s a nice exposition of all the important aspects of human intelligence that we do not know anything about.

      The rapid stultification of the human population was cited, by the way, as a proof of the young-Earth hypothesis, the hypothesis that the Earth is only ca. 6000 years old (that’s roughly the number you get when you count backwards from the birth of Jesus of Nazareth through the Old Testament – indeed, if you extrapolate the obvious stultification that has taken place between 1850 and today, then the Earth can hardly be that old, either).

      Since the human brain consumes ca. 20% of the body’s overall energy budget, I’m surprised that no one has offered the obvious “math diet”: Lose weight by thinking harder! Let your brain burn all the calories! (Possible side effects: obsession, erratic soliloquy and inappropriate outbursts of the “now I finally got it” kind.)

    • DavidTweed says:

      One point the article doesn’t mention is that the human head is the largest part that needs to pass through the female pelvis during birth, and apparently heads larger than current ones significantly increase the risk of either mother or child dying. So one possible explanation is that a general decrease in human body size (which I’ve seen claimed for the transition from hunter-gatherer to agriculturalist) may have led to selection pressure for smaller-volume skulls. Equally, the recent rising trend in brain volumes may be “allowed” by increasing human size (due to nutrition) and more sophisticated birthing techniques.

      Of course, that’s just a hypothesis.

  17. Thomas says:

    That BBC video looks to me like the archetype of the power of mind. If some advanced AI came into existence, it would treat us that way. But Frank Wilczek remarks in this talk that he expects supersmart aliens to lose their interest in the physicists’/biologists’ universe.

  18. Max Cypher says:

    It appears to me that a permanent division between ‘natural’ and ‘artificial’ is assumed in this discussion. Given that we are already performing ‘artificial’ computations using DNA in novel ways and that we are already injecting computational (and ‘artificial’) nano-structures into biological cells, perhaps this assumption is a bit naive.

    My basic point is that we are just now gaining instrumentation sophisticated enough to really mine an incredibly deep and rich vein of technical know-how via bio-mimicry. I suspect that it won’t be a matter of us vs. them, or even of us-as-pets-of-them. I suspect that the most likely road to the singularity will be through an ever more intimate connection to our currently-primitive Internet: AI will emerge (perhaps purposely, but not necessarily so) from having biological (or semi-biological) brains coupled ever more tightly together.

    Furthermore, I suspect that as this emerging intelligence begins to integrate all the scientific findings that are far too great in number for any single human to deal with, we will be able to update our current biological hardware into “Meatspace 2.0”, so to speak.

    I imagine that ego-driven experience in the above context would be analogous to how the ancient Greeks believed that any voices in their heads came from the Gods; but in this case the new mythology will be made up of the many facets or ‘faces’ of the growing and worldwide AI.

  19. […] We are trying to create a friendly artificial intelligence implement it and run the AI, at which point, if all goes well, we Win. We believe that rationality is very important to achieve […]

  20. […] Eliezer Yudkowsky in an interview with John Baez Take metaethics, a solved problem: what are the odds that someone who still thought […]

  21. […] The Sequences have been written with the goal in mind of convincing people of the importance of taking risks from AI seriously and therefore donate to SI: “…after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity…” (Reference: An interview with Eliezer Yudkowsky). […]

  22. Thomas Too says:

    “[What I] like most about singularitarians is that they care about the future.” The only evidence for that comes from their own statements about how much they care. It is much more likely that they care about getting federal funding from easily impressed federal administrators with more grant money than brains.

    After a hundred years of making better computers, we are not any closer to making intelligence than before. To make a faster computer, we make faster relays, the building blocks of mathematical computation. Not only do we not know how to make a building block of intelligence, we don’t even know what it would be. All the computers ever made in the world, put together, could not outsmart a gerbil. If the programmers did not think of it beforehand, the computer cannot do it. A gerbil can think. There is not only no step down the path towards a thinking machine, there is no direction to go, and we don’t even know how to make the metaphorical feet to start down such a path. The singularity is a scam. The singularity is a federally funded scam. The singularity is a federally funded scam like the ESP warriors of the 1970s.

    • John Baez says:

      It’s perhaps a side point, but I think most singularitarians who are trying to raise money are seeking private donations, not federal funding. For example, in this video Eliezer Yudkowsky said:

      I would be asking for more people to make as much money as possible if they’re the sorts of people who can make a lot of money and donating a substantial amount fraction, never mind all the minimal living expenses, to the Singularity Institute.

      More to the point, I can’t help but feel that the singularitarians who are donating money to the Singularity Institute and other such organizations really are concerned about the future!

  23. Thomas Too says:

    Oh, I don’t doubt that if a person is naive enough to be persuaded by the singularity arguments, they are naive enough to give money. But a substantial portion of their money is coming from institutions like universities, so I guess you could say it is, in large part, state money – and a major portion of state education funding comes from federal and state grants, or from work at companies that get federal funding, because the most gullible are federal employees.
    Singularity “research” has some similarities to a multi-level marketing scheme. Get in at the beginning, donate money for seminars to get a credential as an expert, then teach a similar course to other people who want to scam the next greedy sucker behind them.
    Singularity has little in common with either science or philosophy. Science only occurs if you can do an experiment with a control, which has never happened in singularity research. Philosophy is a search for basic understanding that is open to all who are prepared to advance and defend their concepts, but for the singularity, unless you have accepted the basic premise that it is coming, you are excluded as “naive.”
    Singularity has more in common with the self-help and actualization movement, homeopathy, and alchemy. If you don’t agree with the premise, then you “just don’t get it.” “You just don’t get it” is an example of the special-pleading logical fallacy, as described by Carl Sagan.
    As I have written before, the singularity is the new philosopher’s stone.
    So, while you have only responded to half my post, I respond in four parts to yours:
    It is not all private money.
    Singularity donations are like a Ponzi scheme.
    Singularity is not science or philosophy.
    The singularity is the new philosopher’s stone.
    I was expecting a dismissive “just don’t get it” response, so I am a little surprised you made a factual argument. But as you dismiss my four parts in your response, please also address (to dismiss) the second part of my prior post: that there is no progress, real or expected, in intelligent machines. I will be counting how many times I “just don’t get it.”

    • John Baez says:

      I never made any claims of rapid progress toward intelligent machines, so I have no position to defend and don’t feel like discussing this question now. But Alexander Kruel has been interviewing researchers about this, and you might be interested in what they say:

      • Alexander Kruel, Interview series on risks from AI.

      By the way, you write:

      Oh, I don’t doubt that if a person is naive enough to be persuaded by the singularity arguments, they are naive enough to give money. But a substantial portion of their money is coming from institutions like universities…

      What’s the evidence for this claim, please?

      • Thomas Too says:

        Look at the lists of attendees of singularity conferences. There are many people from universities and defense contractors. I have read the self-deluded ramblings about the risks of AI.

        Talk of the risks of AI supposes that it might exist. There is no evidence that it will. A symposium on the risks of discovering the philosopher’s stone would have been just as useful. The emperor has no clothes, but people who want to get money from the emperor stand in line to say how splendid is his raiment.

        • John Baez says:

          Thomas Too wrote:

          Look at the lists of attendees of singularity conferences. There are many people from universities and defense contractors.

          Okay, those people are attending the conferences—but now you’ve got me curious to know which kinds of people are donating money to the Singularity Institute. It’s not too hard to find out: just look at the list of top donors and use Google to find out what they do.

          I only had time now to look at the top five donors, and they are not people who work at universities. They’re a foundation, an “Estonian programmer who participated in the development of Skype and FastTrack/Kazaa”, a person who works on “Proximiant building technology to beam receipts straight to your phone”, an investment group, and a millionaire who “retired seven years ago after a successful career in Silicon Valley”. This matches what I expected. Perhaps you could go through the list and see what the typical contributor is like.

  24. Thomas Too says:

    Hey, you’re right. I assumed attendees were donors. Well, I’ve got to go now; I just found a list of people to whom I wish to sell my absolutely effective perpetual motion machines.

  25. “[…] and run the AI, at which point, if all goes well, we Win.)” — Eliezer Yudkowsky in an interview with John Baez

  26. Bryan Thomas says:

    For those interested in the 1983 Omni piece by Vinge, it is now available through the Internet Archive. https://archive.org/details/omni-magazine-1983-01

  27. […] Last week I attended the Machine Intelligence Research Institute’s sixth Workshop on Logic, Probability, and Reflection. You may know this institute under their previous name: the Singularity Institute. It seems to be the brainchild of Eliezer Yudkowsky, a well-known advocate of ‘friendly artificial intelligence’, whom I interviewed in week311, week312 and week313 of This Week’s Finds. […]
