In a comment on my last interview with Yudkowsky, Eric Jordan wrote:
John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.
I’ve been thinking about this a lot.
Indeed, the reason I quit work on my previous area of interest—categorification and higher gauge theory—was the feeling that more and more people were moving into it. When I started, it seemed like a lonely but exciting quest. By now there are plenty of conferences on it, attended by plenty of people. It would be a full-time job just keeping up, much less doing something truly new. That made me feel inadequate—and worse, unnecessary. Helping start a snowball roll downhill is fun… but what’s the point in chasing one that’s already rolling?
The people working in this field include former grad students of mine and other youngsters I helped turn on to the subject. At first this made me a bit frustrated. It’s as if I engineered my own obsolescence. If only I’d spent less time explaining things, and more time proving theorems, maybe I could have stayed at the forefront!
But by now I’ve learned to see the bright side: it means I’m free to do other things. As I get older, I’m becoming ever more conscious of my limited lifespan and the vast number of things I’d like to try.
But what to do?
This is a big question. It’s a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I’ll talk about me, because I’m not up to tackling this question in its universal abstract form. But it could be you asking this, too.
For me this question was brought into sharp focus when I got a research position where I was allowed—nay, downright encouraged!—to follow my heart and work on what I consider truly important. In the ordinary course of life we often feel too caught up in the flow of things to do more than make small course corrections. Suddenly I was given a burst of freedom. What to do with it?
In my earlier work, I’d always taken the attitude that I should tackle whatever questions seemed most beautiful and profound… subject to the constraint that I had a good chance of making some progress on them. I realized that this attitude assumes other people will do most of the ‘dirty work’, whatever that may be. But I figured I could get away with it. I figured that if I were ever called to account—by my own conscience, say—I could point to the fact that I’d worked hard to understand the universe and also spent a lot of time teaching people, both in my job and in my spare time. Surely that counts for something?
I had, however, for decades been observing the slow-motion train wreck that our civilization seems to be engaged in. Global warming, ocean acidification and habitat loss may be combining to cause a mass extinction event, and perhaps—in conjunction with resource depletion—a serious setback to human civilization. Now is not the time to go over all the evidence: suffice it to say that I think we may be heading for serious trouble.
It’s hard to know just how much trouble. If it were just routine ‘misery as usual’, I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.
I’m not sure this complex of problems rises to the level of an ‘existential risk’—which Nick Bostrom defines as one where an adverse outcome would either annihilate intelligent life originating on Earth or permanently and drastically curtail its potential. But I see scenarios where we clobber ourselves quite seriously. They don’t even seem unlikely, and they don’t seem very far-off, and I don’t see people effectively rising to the occasion. So, just as I’d move to put out a fire if I saw smoke coming out of the kitchen and everyone else was too busy watching TV to notice, I feel I have to do something.
But the question remains: what to do?
Eliezer Yudkowsky had some unabashed advice:
I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.
So how do you go about protecting the future of intelligent life? Environmentalism? After all, there are environmental catastrophes that could knock over our civilization… but then if you want to put the whole universe at stake, it’s not enough for one civilization to topple, you have to argue that our civilization is above average in its chances of building a positive galactic future compared to whatever civilization would rise again a century or two later. Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.
Similarly with other short-term problems. Unless they’re little-known and unpopular problems, the marginal impact is not going to make sense, because millions of other people will already be working on them. And even if you argue that some short-term problem leverages existential risk, it’s not going to be perfect leverage and some quantitative discount will apply, probably a large one. I would be suspicious that the decision to work on a short-term problem was driven by warm glow, status drives, or simple conventionalism.
With that said, there’s also such a thing as comparative advantage—the old puzzle of the lawyer who works an hour in the soup clinic instead of working an extra hour as a lawyer and donating the money. Personally I’d say you can work an hour in the soup clinic to keep yourself going if you like, but you should also be working extra lawyer-hours and donating the money to the soup clinic, or better yet, to something with more scope. (See “Purchase Fuzzies and Utilons Separately” on Less Wrong.) Most people can’t work effectively on Artificial Intelligence (some would question if anyone can, but at the very least it’s not an easy problem). But there’s a variety of existential risks to choose from, plus a general background job of spreading sufficiently high-grade rationality and existential risk awareness. One really should look over those before going into something short-term and conventional. Unless your master plan is just to work the extra hours and donate them to the cause with the highest marginal expected utility per dollar, which is perfectly respectable.
Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point.
When I heard this, one of my first reactions was: “Of course I don’t want to do anything ‘conventional’, something that ‘millions of people’ are already doing”. After all, my sense of being just another guy in the crowd was a big factor in leaving work on categorification and higher gauge theory—and most people have never even heard of those subjects!
I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new. Though I always want more people to join in, we’ve already got some good projects going that take advantage of my ‘comparative advantage’: the ability to do math and explain stuff.
The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems. I’ve been putting most of my energy into that lately, and coming up with ideas faster than I can explain them. On top of that, Eric Forgy, Tim van Beek, Staffan Liljegren, Matt Reece, David Tweed and others have other interesting projects cooking behind the scenes on the Azimuth Forum. I’ll be talking about those soon, too.
I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just having fun, it’s very tricky to determine the best way to proceed.
One can, of course, easily fool oneself into thinking one knows.
“Each time a man stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current that can sweep down the mightiest walls of oppression and resistance.” Robert F. Kennedy, 1966 speech to South African students, Cape Town, South Africa.
I’m not saying it doesn’t matter what one does, I’m saying it matters much more how one does it.
One piece of advice that I would like to share, which I have learned from dealing with my chronic illness:
For example, I have come to the conclusion that donations are useless. Sure, they send a very nice message about your intentions, and they make you feel better; but what you should care about is the outcome, and only the outcome. What if donating to starving children in Africa actually perpetuates poverty there? It is much more useful to help your friend instead, who is right next to you and for whom you can actually make a difference; maybe he needs help you didn’t know about.
One general comment is that Yudkowsky and others who seem to share his views aren’t thinking about feedback effects (at least before the point where an AGI suddenly takes off). In contrast, that’s one of the major planks people like Kurzweil put forward for their view of future progress (indeed, the fact that there doesn’t seem to be as much feedback as “predicted” is one of the reasons I’m on the fence about Kurzweil-style theories).
To explain feedback, consider the task, in the late 1950s, of designing computer chips at the level of the current latest generation of high-performance chips (for specificity, let’s call it an i7-class chip). One approach would be to focus on what kind of switching times, circuit complexity and memory densities you’d need, and start a research project to develop those technologies. Other than building stuff to test the theories, you’re not going to produce a full chip until you’ve figured out enough to produce an i7-class chip. On the plus side, you’ve got a clear goal that you’re always focussing on. However, …
Let’s look at what actually happened. Computer chips were continually being designed for near-term uses, which means the technology used in the first couple of generations in the early 60s had no chance whatsoever of being usable in an i7-class chip, so on the “end goal” view it was clearly a waste of time. As chip generations went on, the technology moved closer, but each generation still clearly needed big innovations before it would be suitable. Except that this near-term approach provided organisational support to the developers, with the current computers being used both directly in the design of the next generation of chips (simulation, layout, manufacture) and indirectly (email between developers, “powerpoint” for meetings, etc), and it even had the feedback effect of bringing in money to finance the next generation. Obviously it’s difficult to know without a counter-factual experiment, but I suspect that the feedback-enabled design process produced i7-class chips in at most half the time of the “focus on the end goal only” approach.
That’s why I think that, however energetic they are, the Yudkowsky-style development teams aren’t going to be the first to produce an “artificial general intelligence” (assuming this is possible at all). In a more general sense, that’s why I’m sceptical of just going for a big goal: certainly a viable human colony on Mars would protect humanity from the existential threat of a pandemic virus, but it has no effect until it actually exists. In contrast, work on, say, decreasing the time for designing and deploying vaccines is more mundane and workaday, but in addition to potentially stopping a pandemic virus, it could improve human productivity by reducing the toll of less dramatic diseases on everyone, including the scientists working on decreasing vaccine development time.
At least in my personal choices, one thing I look for is the potential for the things I spend time on to feed back into greater productivity/capacity for further work on the task.
Regarding feedback effects, some people also call this approach “bootstrapping”. For most current language compilers, each new version is bootstrapped by its previous version. In the case of a missing recent binary, you might have to face the absurd prospect of winding back to a very old set of source code and painstakingly bootstrapping your way up the chain. So I agree with you David that this is probably the way to go, while at the same time it has these interesting implications.
So I’d acknowledge that network theory, if the networks are occupied by learning agents, might create some nature-like systems and behaviors in a computing environment. So that creates a test bed for things imaginary. But isn’t the basic question how to identify the working parts of the *naturally occurring systems* in *our* environment??
Shifting attention back and forth between those two kinds of environments might give us hints about what to play with in our “nature test-bed system”, of course, but going back and forth between theory and subject is certainly a precondition for somehow finding a way to either work nature’s parts or avoid them. That’s the actual problem, isn’t it?
I mean, if we don’t get some idea of how the steering of naturally occurring systems works, how are we going to know when it’s better to just get out of the way?
That’s certainly a basic question. I think it’s a great question. I can rely on you to ask versions of this question in almost every comment you post here—and sometimes you even start answering it, which is even more interesting.
However, since I’m a mathematician who is good at quantum field theory and category theory, my skills lie elsewhere. And my work on networks is designed to take advantage of those skills. I can take existing formalisms for describing networks in many different fields, work out the general principles that lie behind them, and develop a unified framework that encompasses them all. This will make it easier for people to take tricks from one area (say quantum field theory) and apply them to another (say biochemistry or population biology).
You will be disappointed, because this isn’t what you think is most important. Indeed, it might not be the most important thing to do. Luckily there’s someone uniquely placed to do what you think is important: namely, you.
John, the difference in our approaches results in our not responding to each other’s leads. My perception is that equations represent natural systems with theoretical ones which, on comparison, are so very different in design and behavior from their subjects that there is clearly more than a statistical difference.
Basically, on my first day out with my post-graduate research project, studying how energy moved in buildings, I was completely floored by what I found: numerous completely undocumented phenomena. The main transport mechanism for indoor micro-climates is convection. One surprise was that many of the air currents you find behave much more like schools of fish than like one thing pushing on another. Even more intriguing, I couldn’t find other scientists who were interested in that, evidently because it did not immediately appear to help them find equations for these currents.
I didn’t find scientists interested in exploring these undocumented behaviors. I DID find lots who would quickly draw conclusions about them, though, and discredit the competence of anyone who said they had found a good way to study the self-animated forms of natural systems. So I developed a range of new methods on my own. Basically, if you want to study self-animated swarm behaviors, look for emerging swarm behaviors; it starts as simply as that.
What draws attention to them in the data you gather is inflection points in growth curves, generally (indicating a change of system). It helps to have statistical methods for extracting differentiable curves from noisy data, which is one thing I think I contributed new math to do. Because natural systems are not defined in equations, to understand their behaviors you don’t study your equations for them.
You study the differences between the assumed systems your equations represent and the real systems the natural systems are displaying. It’s that back and forth process between theoretical and natural subject that is the “chicken and egg” of natural systems science, i.e. the “life cycle” from which both chickens and eggs develop.
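To make the inflection-point idea concrete, here is a minimal sketch in Python; the logistic-shaped data, the noise level and the smoothing parameters are all made up for illustration, not taken from my building measurements:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical noisy growth curve: a logistic trend plus measurement noise.
t = np.linspace(0, 10, 200)
y = 1.0 / (1.0 + np.exp(-(t - 5.0))) + np.random.normal(0, 0.01, t.size)

# Smooth the data and estimate first and second derivatives
# (Savitzky-Golay filtering gives a differentiable curve from noisy data).
dt = t[1] - t[0]
y_smooth = savgol_filter(y, window_length=31, polyorder=3)
d1 = savgol_filter(y, window_length=31, polyorder=3, deriv=1, delta=dt)
d2 = savgol_filter(y, window_length=31, polyorder=3, deriv=2, delta=dt)

# Candidate inflection points: sign changes of the second derivative,
# i.e. places where the growth switches from accelerating to decelerating,
# suggesting a change in the underlying system.
sign_change = np.where(np.diff(np.sign(d2)) != 0)[0]
print("candidate inflection times:", t[sign_change])
```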
Does that help?
Yes, it sounds like a great methodology.
Personally what I’m really good at is scribbling on paper, connecting mathematical formalisms, and making sparks fly that way. So, if I were to switch to your style of doing things, I would need to team up with some people who enjoy doing experiments. I have no snobbery against experiments, I’m just no good at them. The last time I tried one, part of my winter coat dissolved in battery acid.
I certainly enjoy looking at data that comes out of experiments, though! The Milankovitch cycles and glacial cycles are full of unexplained patterns, and I plan to think about those a bit.
I agree that working on near-future problems iteratively is often the fastest way to solve far-future problems. The present non-existence of unfriendly strong AIs who need to be made friendly is not the only reason that working on friendly strong AI research is currently a waste of time, however.
I’m involved a little bit with the DARPA Physical Intelligence project. From the beginning of the project, some general principles of learning in physical systems have been apparent. One is that if you teach a system as slowly as possible, it will learn in a predictable way. As you increase the power to the system, it will learn more quickly, but also less predictably. This is because as the power flowing through a system increases, nonlinear effects become more important in its dynamics. The fastest learning happens at the power level just below the level where the system becomes unstable and unable to learn at all. This is another manifestation of “adaptation to the edge of chaos”, which has been known since 1985. In the lab, we often remark that it’s no coincidence that the brightest people often seem to be on or just over the edge of sanity.
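For readers who want a picture of that predictable-to-unpredictable transition, the textbook toy example is the logistic map, sketched below in Python; the drive parameter r is just a stand-in for “power”, and none of this comes from the actual DARPA project:

```python
# Logistic map: as the drive parameter r increases, the long-run behavior
# goes from a predictable fixed point, through period doubling, to chaos.
def attractor(r, n_settle=500, n_keep=8):
    x = 0.5
    for _ in range(n_settle):          # let transients die out
        x = r * x * (1 - x)
    out = []
    for _ in range(n_keep):            # record the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {attractor(r)}")  # fixed point, 2-cycle, 4-cycle, chaos
```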
The friendly AI project is simply advocating teaching AIs much more slowly than their maximum learning rate, such that their behavior can be predicted, understood and controlled. However, these kinds of AIs will always be drastically outpaced by unpredictable AIs whose learning rate is limited only by physics. The best we can do is to try to teach these independent AIs to be good and hope that they grow up to be wise and kind. This is no different from how human children are raised, nor do I think it can be different.
Will these unconstrained AIs be, on balance, good or bad for humanity? The parameter space of superhuman ethics is not well known. However, one can assume that even if the ethics of superhuman AIs are distributed randomly in this space so that some are benign and some are malign, those who benefit humanity will be more successful than the ones who are harmful to humanity because we will reward the former and punish the latter. The net effect will be positive.
In human societies or in bacterial colonies, viewed in terms of the Prisoner’s Dilemma, there are proportions of cooperators and defectors which vary with time. When there are too many defectors, such as at present in the USA, societies tend to falter. However, there is a cost to punishing defectors such that utility is maximized with a nonzero number of defectors. I believe this will also be the case with humanity plus strong AIs.
Besides all of these arguments, won’t the future become a much more interesting place when it is inhabited by a variety of demigods and demons, rather than only by faster and larger but otherwise very human-like intelligences?
David Lyon,
The Singularity Institute is trying to mathematically define binding and stable rules that, if implemented into the goal-system, would make an artificial general intelligence care about humans and not destroy them due to indifference while pursuing some instrumental goal.
It is critical to solve this problem before we know how to create artificial general intelligences.
To approach this problem they first wrote hundreds of posts on rationality (they are currently writing a book on rationality) and are now developing a reflective decision theory of self-modifying decision systems.
For more on this check out some of their publications.
For what it’s worth, I have enough trouble managing my own life, which is actually benefiting from all the rationality-sharpening that LW claims to offer. And the training in combat philosophy turns out to be a whole lot of fun. And combat philosophy is, I suspect, what FAI needs. So: try your hand at applied philosophy. See if it’s fun. If so, win.
John,
Perhaps you are feeling dissatisfied because you haven’t yet figured out how Azimuth fits into the big picture in concrete terms. Why Azimuth is so very important.
Your post made me think of Flemming Funch’s post:
http://ming.tv/flemming2.php/__show_article/_a000010-001974.htm
Getting to the bottom of what we really know and the best potential avenues of action is supremely important. Establishing a credible and trusted source is the first step. The second one is feeding it back into the rest of the system.
You’re missing the connection to the rest of the puzzle. What questions need answering? What should We do? Concrete and specific questions that need answering. That kind of thing. A more urgent purpose to the research.
Azimuth might not have all the answers right now, but it certainly can become the best place to get answers to complicated questions. Honest, plain, truthful answers we can trust about the state of our science.
There are a lot of really really smart people working on pieces to the various problems which Azimuth covers. We need to get things connected and enhance collaboration. While permitting each of us to work on that small little piece which best suits us.
Azimuth wants to be part of that bigger picture.
And you are uniquely suited to making this happen.
The current tools suck for this, though. I’m looking at ways to improve this, so we can all figure out how to help each other more effectively. Lots of little projects that few know about won’t be as effective as one mesh of projects that everyone knows to use.
– Curtis
P.S. It might make sense to create an azimuth user on Quora then collaborate via the Wiki and Forum to answer really important relevant questions that get asked on Quora
That would be one way to greatly increase the visibility of the project while focusing the work at Azimuth more tightly for anyone that wants to join in.
Of course, we would add the material back into the Azimuth Library when it is ready.
Each question would also be great material for posts here.
Hi, Curtis!
Actually I know quite well why something like Azimuth could be important. What I don’t know is how to get that to happen. I know I want more people to get involved, so I can concentrate on the things I’m good at and let someone else do the things I’m bad at. Unfortunately one of the things I’m bad at is getting more people involved. I’m not completely, absolutely, utterly pathetic. But organizational work doesn’t thrill me in quite the way that, say, discovering connections between stochastic Petri nets and quantum field theory does.
So, it would be great if some people with the right skills got really excited about the organizational and publicity aspects, and put a lot of energy into that. Like maybe you?
That sounds good. I tend to think of the problem as being: not enough people besides me wake up in the morning and think “what can I do today to make the Azimuth Project succeed?” I tend to think of it as a problem of getting people deeply involved. In part it’s a marketing problem, and in part it’s a problem of convincing people that this project can actually achieve something useful. But maybe it’s also a problem of tools.
I don’t even know what Quora is.
Maybe I’ll find it and ask “what’s Quora?”
Quora is a big question and answer site. It gets some very smart (some famous) people answering questions. It also lets you create custom feeds so you only see questions from your friends and topics you care about.
Check out:
http://www.quora.com/The-Environment
http://www.quora.com/Climate
And you’ll get some idea what the questions look like. The key would be to pick good questions to work on. Or write some that people might want to know the answer to and then answer them too.
Thanks for the info on Quora. But I wonder: should I really be spending time answering questions on some website, or should I be spending that time trying to round up academics and businesspeople—’movers and shakers’—who are interested in getting stuff done?
I suppose if Quora really has a lot of visibility (does it?) we could create an ‘Azimuth’ user and take turns fielding questions there.
I think you’d probably like to do a little of both really.
Answering questions is really what Azimuth is for, right? So we don’t answer dumb questions.
As a group, we should ask and then answer the most important questions. That’s the potential of the collaboration. All the information in the Azimuth Library is there so someone can answer a question they have. Who is…? What is…? What’s the best way to…? What are the real risks of…? The list goes on.
But YOU wouldn’t be answering the questions. We would. The whole group. Your blog readers, the forum members, the people who work on the Azimuth Library. And we would improve the answers on the Wiki if anyone brings up points we missed on Quora.
As far as rounding people up, it’s a good idea. I’m engaging in the rounding up side of things mostly these days. What I’m finding is that there are a lot of really great people working on parts of the problem. Tech people, sociologists, futurists, even Alex Bogusky, former king of the ad world, has joined in the fight.
My belief is that there is enormous waste due to duplication and competition for attention. We won’t achieve as much as long as this continues.
We need to get everyone working together, synchronized, and coherent.
I really like the idea of creating an Azimuth user on Quora. We could build the answers here on the Wiki as a group, and then submit it when it is finished.
Quora has a lot of visibility in the groups that need to connect into Azimuth. The web world, software developers, net savvy experts of various kinds young and old. So does Twitter.
So please try Twitter for a while. It is a great tool for finding your peers in other domains who might be kindred spirits.
Read this insightful post about what Twitter is before you write it off as a bad idea:
http://emergentbydesign.com/2009/12/21/how-to-use-twitter-to-build-intelligence/
Twitter may not be for you in the end, but I suspect you will like it.
Curtis wrote:
Oh, man, you probably don’t know just how good that made me feel.
I have no sense of how big and important Quora is. I’d never heard of it before. That doesn’t mean much, but I’d like to hear other opinions about Quora.
How many people here have heard of Quora, and what do you think about it?
I’ll nose around, too.
Suppose it’s the best thing of its kind and it’s actually influential. Can we think of a nice system where we (meaning ‘mainly not me’) take selected questions from Quora, dump ’em on this blog here, or something like that, and compose really great answers?
I would really love a feedback loop like that, especially one where I’m not constantly manning the pump.
Okay, I’ll consider it. But did you know I keep my cell phone turned off most of the time, and have only sent 2 text messages in my life? (Both in reply to Walter Blackstock, who texted me about meeting at some bar in Singapore.) Did you know that I let the answering machine in my office back at U.C. Riverside fill up with unanswered messages, so that people would get a message saying my inbox was full? Did you know that I’ve never learned the access code for the voicemail here at work in Singapore? (Nobody ever told me it, which is great.) So you see, I’m rather unusual when it comes to communication. But I like media where I get complete control over when I choose to pay attention, so maybe Twitter will be okay.
I looked at one question on Quora, “Can one get a consensus date for Peak Oil?”, and it only had 220 page views in a year, according to the page itself. And six people added a reply, including me. I think it may be more of a navel-gazing site at the moment. That is not to say it can’t get more popular.
Now I see that my “answer” that I posted to Quora yesterday has been collapsed. Collapsed means that some admin or someone else didn’t think it met the guidelines for an answer. Alas, Quora appears to be another echo chamber for those who want to keep the crayons inside the lines.
I wonder what I am doing here, but here is my opinion.
Basically, morality has to do with sentience, not intelligence. We sentient beings who are able to want, what do we want to be the fate of consciousness in cosmic history? I think there may be various kinds of life, and species, and AGI, but what we should do remains basically the same: preventing excessive suffering and, if possible, furthering valuable things such as life, happiness, truth, art, freedom, humor, morality, justice, intelligence, etc.
I agree with DavidTweed’s policy, here above. The feedback argument seems valuable, but I came to that policy by another road. The future is not ours to see; our moral behavior should be decided first and foremost on what is actual now, to the best of our knowledge. Here and now, the most urgent thing to do, I suggest, is to reduce occurrences of excessive suffering without incurring costs that would be unreasonable in our opinion. I guess this is the policy that has the best chance of preparing a good future and avoiding potential existential risks of all kinds.
After reading other comments here, I understand better now what this blogsite is about. If your concern is the application of mathematics and science to world problems, I warmly recommend that you have a look at Anthony Judge’s work, especially his page “Unexplored Potential of Mathematics and Geometry in reframing psycho-social challenges” http://www.laetusinpraesens.org/docs00s/maths.php
Judge has masterminded the indispensable “Encyclopedia of World Problems and Human Potential”.
John, FWIW I think you’re doing exactly the right thing. You’re using your skills to address a real problem, in a manner that is very likely to do some good. What more can anyone ask for?
The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceeds the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable, and it’s a shame you’ve given it so much air time.
I’m glad you think I’m on the right track, Greg. Having spent the first 48 years of my life trying to answer questions just because they were irresistibly fascinating and profound, I don’t feel I’ve optimized my ability to tackle big practical problems. I feel I’m still blundering around.
I agree that multiplying a very large cost or benefit by a very small probability to calculate the expected utility of some action is a highly unstable way to make decisions. This instability makes it very easy for unrecognized factors to creep in, e.g. the vanity factor of wanting to “save the universe”, but also the complacency factor of “nah, that can’t happen”. You can easily get whatever answer your heart secretly desires.
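To see the instability with deliberately made-up numbers: hold the cost of the disaster fixed and vary the probability estimate over a range that honest people could easily disagree about, and the expected loss swings by orders of magnitude:

```python
# Made-up numbers purely to illustrate the instability: a fixed disaster
# "cost" and a range of probability estimates that reasonable people
# might defend. The conclusion is driven entirely by the choice of p.
cost = 1e15
for p in (1e-9, 1e-7, 1e-5):
    print(f"p = {p:.0e}  ->  expected loss = {p * cost:.2e}")
```

Nothing in the arithmetic tells you which line to believe; that’s exactly where the unrecognized factors creep in.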
For some reason this business of multiplying an enormous number and a tiny number suddenly reminds me of the similar problem of subtracting two enormous numbers, which reminds me of Erdős’ attempt to compute his age:
I do wish I knew a truly sensible way to think about improbable enormous disasters.
Here’s an obvious problem with simply multiplying a cost by a probability: once you decide the cost of some disaster is effectively infinite it doesn’t matter how improbable it is, as long as the probability is nonzero. So, if destroying the universe has infinite cost, and the Large Hadron Collider has a nonzero probability of destabilizing the vacuum and destroying the universe, we shouldn’t start it up. This seems stupid, and I can make up examples that seem even stupider.
Personally, in ordinary life I tend to just discount events whose probability drops below a certain threshold, which of course isn’t defined in any mathematically precise way. And one good reason is that there are so many such events that there’s not enough time to consider all of them.
However, I think Yudkowsky considers the probability of self-amplifying artificial intelligence to be a lot higher than you think it is. I don’t think he thinks it’s “immensely unlikely”. So to settle that particular quarrel, perhaps we don’t need a general theory of what to do about immensely unlikely events.
This sort of issue has been discussed by the leading climate economists in some detail, if you’re interested. Basically no approach leads to this result, either because of bounds on the value of risk-reduction, or because one could use the effort on something that would more effectively reduce risk.
Martin Weitzman’s “On Modeling and Interpreting the Economics of Catastrophic Climate Change” and Bill Nordhaus’ reply, “An Analysis of the Dismal Theorem,” (page 12 onward for the most general discussion) have a lot to say, along with Posner’s book “Catastrophe.”
Weitzman pushes the importance of the right tail of climate change risk, claiming that most of the expected damage comes from that tail, depending on a “value-of-statistical-life-like parameter,” i.e. how much current consumption is a given probability of our destruction worth?
He talks about the difficulty in assigning probabilities for events that depend on uncertain scientific facts as well as social phenomena, but argues we should try nonetheless, and use expected-utility methods. He claims there are less than a dozen major existential risks (he has mentioned in various places right-tail climate change, nukes with right-tail nuclear winter, engineered pandemics, natural pandemics to a lesser degree, biotech mishaps, robot/AI disaster, and a few others) and that we should be addressing all of those more than we are.
Nordhaus retraces steps trod by a number of economists like Richard Posner (in his book “Catastrophe”), arguing that observed behavior shows that current people don’t value future generations very much, so we shouldn’t value reductions in existential risk very much more than disasters that kill most current people.
He goes on to emphasize that since there are a number of existential threats we should not be willing to spend all our social resources on sufficiently low probability interventions addressing any one of them, since we could buy better risk-reduction for the dollar (or hour of volunteering, career change, etc). Fighting the LHC is not going to reduce risk as much as many other uncontroversial options, e.g. funding continued tracking of planet-killer comet and asteroid orbits.
Further, to some extent we should expect things like wealth, peace, good civil society, and better scientific and political institutions to contribute to our ability to reduce existential risks. So if the best intervention directly aimed at reducing existential risks were (implausibly) unlikely enough to succeed, then the reasoning would favor indirect methods like the above.
In practice disasters with ludicrously small but nonzero probabilities, e.g. 10^-30, are going to be utterly negligible even on a utilitarian reckoning.
Considering AI in particular, if the risk of an AI-driven existential catastrophe in the 21st century were under 1 in 1 million (cf. 1 dinosaur killer impact per 100 million years) then even utilitarians probably wouldn’t focus on AI risk relative to other threats.
Thanks, Carl! I’ve taken the liberty of adding links to those two papers you mentioned. Now I have some bed-time reading material. I’m glad some people are thinking about this stuff a lot harder than I have.
Among others, economists have grappled with this issue, even specifically in the context of climate change. (I’ve been meaning for some time to carefully read the paper by Martin Weitzman linked there, but haven’t yet.)
First off, John, thanks for writing this post. Regardless of where your thoughts on what you should be doing ultimately end up, the question of where to direct our personal efforts is something that we all have to confront, and it’s helpful to see the thought process in action.
John Baez wrote:
This comes up in discussions of existential risk of every sort, but the real disagreement tends to come not from the precise balancing of the small probabilities/large utilities, but from one side believing the probabilities are “effectively zero” and the other side believing they’re real (well above one percent, usually). One side tends to consider the EV difference between doing nothing and doing something to be disputable, whereas the other considers it overwhelming.
AI is definitely one of these situations: I’d bet that most of the people arguing against Eliezer’s points think the chance that we’ll see real (strong) AI at all (friendly or not) in the next 20, 30, or 50 years is “effectively zero”, and that it’s even a roll of the dice whether we’ll see it in the next few hundred. Whereas Eliezer figures he’s established a quite measurable lower bound on that probability, and that it’s disturbingly high.
Personally, I’d come closer to agreeing with Eliezer’s estimates, because I’m still convinced that whatever evolutionary discovery is responsible for the power of the human brain must be very simple, otherwise it would not have been discovered by evolution. That feeling derives from a lot of different observations (many of which would be controversial), but a large part of it is seeing how inept evolution is as a strategy for developing complex algorithms.
Of course, others will disagree with my reasoning, and probably have some convincing reasons that we’d really have to dig into in order to resolve. I know we’re not “supposed to” agree to disagree (http://wiki.lesswrong.com/wiki/Aumann's_agreement_theorem), but to some extent Aumann’s Agreement Theorem overlooks a very important fact: the amount of time I should rationally be willing to put into updating my beliefs on a topic should be related to my prior estimate about its importance. I haven’t personally studied the end-of-the-world in 2011 predictions because I assign them such a low prior probability that I feel no need to refine my estimate further; similarly, if someone thinks a priori that general AI is far too difficult, and that we won’t see it in our lifetimes with four nines of certainty, then I can’t fault their rationality when they decide that they don’t want to spend a few days carefully examining arguments that suggest it’s more likely.
There’s at least one thing that both Friendly AI and climate change folks have in common: to get more people involved, they need to be fighting PR campaigns to substantially raise everyone’s prior estimates of the probability of disaster. The extent to which this is necessary differs: in climate change, there are a lot more people that already have high priors, but the goal there is (at least in part) to force behavioral changes on the rest of the population, so it’ll take a lot more pressure; further, there are interests actively (and arguably more successfully) working to reduce people’s priors. In FAI, almost nobody has a high prior and it’s very difficult to affect people because of the sci-fi seeming ridiculousness of it all, but the work that needs to be done can happen in private and won’t require any massive political action.
I’ve got more to say on this, but I’ve run on way too long already, I’ll leave it to a future blog post, perhaps…
Out of curiosity, have you spent any time looking at the literature in any particular area of human cognition? If so, how does the huge variety of experimental results on human cognition — which suggest that human thinking is akin to a Heath Robinson machine that just happens to have so many and varied bits that their interaction gives what is mostly “correct” intelligent reasoning — square with “large part of it is seeing how inept evolution is as a strategy for developing complex algorithms”? (E.g., how the human visual system works, or the structure of language, or human failings in behavioural economics, etc.)
@DavidTweed,
No, I have not done as much research on human cognition as I would like, at least not since I was in school several years ago (in my defense, I don’t work on any of this stuff for real, I’m just a boring old web-dev with absolutely no AI or cog-sci cred at all). I gather that what you’re getting at is that many bits of cognition are highly complex and are known to have evolved successfully?
I need to be more careful with what I’m actually saying, I was a bit vague before, so I’ll try to be more explicit here.
Evolution is fantastic at solving “1000 needles in a haystack” type problems, and can work up very complex solutions to them, especially when there are also thousands of pins in the haystack that can partially solve some of the problems that a needle is useful for.
It really sucks when there’s just one needle to pick out, though, and typically it’s going to perform no better than a random walk unless there’s a very clear fitness gradient pointing towards the needle. So when it does pick out a good unique solution to a problem, it’s usually the case that the problem is pretty simple.
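Here’s a toy sketch of what I mean, with a deliberately artificial 20-bit search space (the fitness functions and parameters are invented for illustration): the same greedy mutate-and-keep loop races up a smooth gradient, but with a lone needle it has nothing to follow and degenerates into a blind walk.

```python
import random

def hill_climb(fitness, bits=20, steps=2000):
    """Greedy search: flip one random bit, keep the change
    whenever it does not lower the fitness."""
    x = [random.randint(0, 1) for _ in range(bits)]
    for _ in range(steps):
        i = random.randrange(bits)
        y = x[:]
        y[i] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

# Smooth gradient: every correct bit adds fitness ("onemax").
gradient = lambda x: sum(x)

# Needle in a haystack: no credit unless every bit is correct.
needle = lambda x: 1 if sum(x) == len(x) else 0

random.seed(0)
print("smooth gradient:", hill_climb(gradient))  # reliably reaches 20
print("needle only    :", hill_climb(needle))    # almost always stays at 0
```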
In the case of the brain, my argument is that precisely because of how complex the brain’s structure is, it’s unlikely that a random walk stumbled upon a unique solution to the problem that it solves (or even an attractor that leads to that unique solution) in the relatively short amount of evolutionary time since it started building brains. Vastly more likely: there are a ton of different solutions, and evolution hit upon one of them.
If there are a ton of different solutions to the problem, it’s also very likely that the one evolution picked out is not at (or even near) the lower bound of complexity. Obviously we don’t know how that precise distribution looks, but chances are evolution missed out on quite a few algorithms that are simpler and would have worked, not to mention all of the algorithms that we could implement on a microprocessor but it could not in neurons, some of which may reduce the necessary complexity by a staggering factor.
Put this all together, and I don’t think it’s too far off-base to say that by the time we can run programs with the complexity of the human brain, we stand a real shot at being able to construct an algorithm that does (roughly) what the “general intelligence” part of the brain does, even if we “just” use evolutionary methods with the richer set of programming primitives we have access to.
Is that a bit more clear?
John wrote:
All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments. So I wouldn’t hold your breath waiting for that to be settled. If he wants to live his own life based on his own hunches, that’s fine, but I see no reason for anyone else to take his land-grabs on terms like “rationality” and “altruism” at all seriously, merely because it’s not currently possible to provide mathematically rigorous proofs that his assignments of probabilities to various scenarios are incorrect. There’s an almost limitless supply of people who believe that their ideas are of Earth-shattering importance, and that it’s incumbent on the rest of the world to either follow them or spend their life proving them wrong.
But clearly you’re showing no signs of throwing in productive work to devote your life to “Friendly AI” — or of selling a kidney in order to fund other people’s research in that area — so I should probably just breathe a sigh of relief, shut up and go back to my day job, until I have enough free time myself to contribute something useful to the Azimuth Project, get involved in refugee support again, or do any of the other “Rare Disease for Cute Kitten” activities on which the fate of all sentient life in the universe conspicuously does not hinge.
Greg wrote:
Fear not: I’m keeping both kidneys.
In fact when asked “what you would do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?”, I scratched my head, hemmed and hawed, and eventually came up with the idea of creating an organization, registering it as a charity, and having it hire me. That way I could do whatever I want with the money.
Of course this idea is not original.
“All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments.”
Somehow I doubt you’ve read them.
No, I don’t go around asking other people to prove me mistaken. I do go around writing web pages explaining the detailed probability theory of why you’re not allowed to demand proofs like that.
I’m generally reluctant to assign exact probabilities to topics like these; I consider it a sin, like giving five significant digits on something you cannot calculate to 1 part in 10,000 precision. Did you see any imaginary probabilities in my interview with Baez? No, you did not.
If you don’t know what my arguments are, please don’t make them up.
@Eliezer Yudkowsky
I think everyone here does agree with you there, the question is to what extent.
At what point do you demand empirical evidence? If never, why do you say that you would sooner question your grasp of “rationality” than give five dollars to a Pascal’s Mugger?
I think those are reasonable questions and I also think that you can not demand that everyone who asks those questions should first read the hundreds of posts you wrote over at Less Wrong.
Of course, you don’t have to engage with those people who are unwilling to read all you ever wrote. But it sure would be helpful if you could point them to an answer or answer those questions directly.
Eliezer Yudkowsky wrote:
You don’t spell out explicit probabilities, but if there are no probabilities (or lower bounds thereon) underlying claims like this:
then on what do they rely? The claim here is that one very small number P, that you decline to specify, multiplied by some very large number U, yields a product that’s greater than some other number X.
Without a reason to believe in any specific lower bound on these “tiny probabilities of jumping the interval between interesting universe-histories”, there is no reason to believe that P*U > X. The whole argument is an appeal to people’s intuition that U is so large that P couldn’t possibly be small enough to make P*U insignificant:
Greg, this sounds to me exactly like what I’ve termed the Pascal’s Wager Fallacy Fallacy wherein any time large utility intervals are discussed, people pattern-match against Pascal’s Wager and conclude the person must be saying the probabilities are tiny, by the following logic:
“If the utility intervals are large, then the probability intervals could be small and still carry this argument, therefore the person must be arguing that they are small.”
Which does not follow. And I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. And if you can carry a qualitative argument that the probability is under, say, 1%, then that means AI is probably the wrong use of marginal resources – not because global warming is more important, of course, but because other ignored existential risks like nanotech would be more important. I am not trying to play burden-of-proof tennis. If the chances are under 1%, that’s low enough, we’ll drop the AI business from consideration until everything more realistic has been handled. We could try to carry the argument otherwise, but I do quite agree that it would be a nitwit thing to do in real life, like trying to shut down the Large Hadron Collider.
What I was trying to convey there is that the utility interval for fate of the galaxy is overwhelmingly more important than the fate of 15% of the Earth’s biological species, and that realistically we just shouldn’t be talking about the environmental stuff, there’s no possible way we should be talking about the environmental stuff, there’s enough people talking about it already and we’ve got much bigger fish going unfried. If talking about the smallness of the needed probability for A to outweigh B is making you pattern-match against Pascal’s Wager, then let’s discard all talk of small probabilities and just say that the ratio of the utility intervals is goddamned large. (Although there’s nothing you can do with utility intervals *except* multiply them by probabilities, so you can see why I used that illustration originally.)
And the point of making a point like that – that the ratio is goddamned large – is that even though we can’t calculate the probabilities exactly, we can still see, qualitatively and in broad strokes, that this is a time for the next marginal sane person to worry about existential risks and not a time for them to worry about global warming.
By the way, Greg Egan, can I politely ask how and why you decided that I’m a bad guy?
Before someone questions how Eliezer Yudkowsky could possibly believe that the probability of us being wiped out by badly done AI is easily larger than 10%, here are some links:
1. Why an Intelligence Explosion is Probable
2. What should a reasonable person believe about the Singularity?
3. Singularity FAQ
Much more can be found here.
See the AI Foom Debate for a detailed discussion of the arguments underlying high-probability estimates for risks from AI.
Eliezer Yudkowsky wrote:
While I personally realize you’re not playing “burden-of-proof tennis” and you’ve spent considerable time elsewhere arguing for a high probability, I think part of the PR problem is that people see you using many words here to argue the importance of the outcome and very few arguing the likelihood. You’re right, that leads them to the Pascal’s Wager Fallacy Fallacy, but I’m not sure they’re entirely crazy for going there based on what they know.
Those of us that have either read a lot of your work before or already believe it’s >1% are willing to do the research, and for the most part have probably refined our personal estimates in light of your writings as a result. But why should someone that has their prior at the .01% level do the digging that’s required to figure out whether your estimate is crazy or not? Based on their prior beliefs, in all likelihood you’re nuts, living in some Kurzweil-nerdtopia-fantasyland, and it would be a waste of time to even look into it. :)
Think you can boil your thoughts down to a quick argument, outline, or sound bite? It would help a lot if people could see an immediate glimpse of why you think the estimate should be as high as it is, and that you’re not just pulling it out of a hat.
Eric Jordan wrote:
This is pretty close to what I think – except the “nuts” part, because:
– We don’t know how to define intelligence (at least not in a way such that a majority of, say, psychologists agree with it), and we don’t know how to measure intelligence.
– We don’t know how animals think, including humans, and we don’t really know how brains work – not enough to rebuild one, or redesign one, or repair one, or even tell precisely what is wrong with an injured one, or even what happens when a certain part of the brain stops working.
– We don’t have any technology that would enable us to create anything that comes close to intelligent behaviour, modulo the uncertainty in the definition of “intelligent behaviour”.
So from my viewpoint what Eliezer does qualifies as philosophy, which is interesting in its own right, but does not qualify as a discipline that will contribute much to survival on this planet in the coming decades.
But it would be unfair to demand that Eliezer starts the whole discussion anew here, so I think I’ll follow the links that Alexander Kruel posted and read the material over there before I post another comment.
Eliezer Yudkowsky wrote:
I’m not pattern-matching against anything, and I didn’t assume that you believed the probabilities were small. But in a very long interview, you neither presented, sketched, alluded to nor linked to a single compelling reason why anyone else should share your lower bounds on this probability.
If your personal lower bound is the result of the totality of your personal experience of pondering the issue, and what you would need to communicate in order to persuade anyone else to agree with you is completely incompressible, well, fine. You’re as entitled to your educated guesses on this matter as anyone else, but no more so.
I don’t think you’re a “bad guy”. I do think it’s a shame that you’re burying an important and interesting subject — the kind of goals and capabilities that it would be appropriate to encode in AI — under a mountain of hyperbole.
Okay, so Eliezer thinks there’s an “easily larger than 10%” chance that humanity could be wiped out by badly done AI, and thinks a likely effect of global warming is the extinction of 20% of species.
If I thought that, with these possible events happening on roughly similar time scales, I’d probably focus my energies on either 1) getting AI done right or 2) slowing down its development.
I know Eliezer has other arguments against 2) being a workable approach.
(The issue of time scales matters to me, since if I thought there were a 10% chance of us getting wiped out by AI, but not in the next few centuries, I might focus my attention on more urgent problems.)
I don’t have the chutzpah to write down my own guess as to the probability of humanity getting wiped out by badly done AI sometime in this century—at least, not until I have a few more cups of coffee.
I also don’t feel like discussing the possible effects of global warming on people, which many people find more worrisome than extinction of other species. It’s too big of a subject.
But concerning extinction, here are some estimates. From Wikipedia:
Note that ‘higher lifeforms’, however they’re defined, make up a tiny fraction of all species, but we probably do care about them more.
Of course one can also find people who pooh-pooh the risk of mass extinction and give much lower numbers. On my extinction webpage, I wrote:
However, the really big disagreement concerns not the current extinction rate (where we seem to have a rough order-of-magnitude agreement) but the even harder to estimate future total extinction rate (where you’ve just seen estimates ranging from 0.7% to about 60%).
I should say, just in case it’s not clear, that I don’t trust Lomborg any further than I can throw him: his book is specifically devoted to minimizing the effects of all environmental problems. But I think he sets a lower bound.
Finally, the Stern Review says:
Of course one needs to dig into the ‘studies’ that are cited here. I deleted the footnote numbers since they’re distracting, but one big footnote starts by mentioning Chris Thomas et al.’s 2004 Nature paper:
Anyway, all this merely goes to show that one can easily find good biologists who’d say Eliezer’s estimate of 20% extinction is on the low side. I don’t think we can settle this question here.
@Greg: I did comment briefly in the first part of the interview on reasons to believe that human intelligence is nowhere near the limits of the possible. Aside from that, if you want to know why I didn’t talk about probable rates of recursive self-improvement once the ball gets rolling, i.e. the soft takeoff vs. hard takeoff question… well, blame John, he was the one asking questions he found interesting, I just answered them. :) John did ask about timescales and my answer was that I had no logical way of knowing the answer to that question and was reluctant to just make one up.
The basic argument for recursive self-improvement going FOOM is roughly as follows: if you look at hominid evolution, it looks like roughly constant optimization pressure from natural selection didn’t run into any bottlenecks or anything that looks like diminishing returns on either the size of brains or their software optimization, and natural selection is a relatively stupid process compared to human engineers who can use abstractions and coordinated changes to jump gaps in the fitness landscape. (As Cynthia Kenyon once said at a dinner party I was attending, one grad student can do things in an hour that evolution could not do in a million years.) The history of humans applying roughly constant brains to optimizing their culture does not give us cause to suspect diminishing returns on technology, either. So any reasonable curve of “cumulative optimization power in, versus intelligence out” that fits the admittedly sparse historical data we do have, is not going to have diminishing returns that fall off at the very rapid rate needed to prevent an intelligence explosion – if, for the first time in the history of the universe, you have an optimization process whose growth rate is dominated by its redesign of itself.
As for guessing the timescales, that actually seems to me much harder than guessing the qualitative answer to the question “Will an intelligence explosion occur?” You could read through “The MIT Encyclopedia of the Cognitive Sciences” and “Artificial Intelligence: A Modern Approach” and try to get a feel for (1) how many different tasks the human brain is performing, (2) how many of those tasks we already understand and (3) the rate at which we’re understanding new cognitive tasks. I think that a lot of people have ugh-reactions to the AI field because it sounds weird or they were disappointed because the SF they read as kids didn’t come true. But trying to estimate development times using a negative emotional reaction to people who made inflated promises and thereby got their names in the newspaper, matches the irrationality-schema “Reversed stupidity is not intelligence”. If people have previously been over-optimistic in this area, that just tells you there were some silly people out there, it doesn’t actually give you solid positive knowledge that the timescales will instead be very long. I think if you look at just the actual progress that has been made in AI, never mind embarrassments and hyperbole, just look at the actual progress, then it’s very hard to support the estimate that it’s probably going to take another couple of centuries. Another couple of centuries is a really ridiculously incredibly long amount of time in science. Where was AI in 1811? It seems to me that AI has very much gone from zero to forty in the last half-century, and that we’re probably more than halfway to sixty – if that sounds odd, you probably don’t realize what an incredible amount has been done in AI.
And for a rather huge amount of additional discussion of this subject, see the above-mentioned AI Foom Debate.
@John: I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.
Eliezer wrote:
Species contain a lot of information painstakingly evolved over millions or even billions of years; this information is a treasure whose value we’re just barely beginning to grasp. I think its value will keep going up for quite a while. Unfortunately we don’t have a good futures market for it, so we’ll wind up killing off lots of species and irretrievably losing a lot of this information. Perhaps we’re doomed to do this, but to me it makes a difference whether we lose 20% or 60% of the species on Earth. I’m curious what percent we could lose before you might become perturbed.
Since I don’t think that’s a choice we face—colonizing the Hercules supercluster or saving 80% of the species on Earth—this particular imagined scenario doesn’t move me at all.
In fact, I think that taking global warming seriously will increase the chances of our current technological civilization surviving to the point where we can leave this planet.
On a more general note, I’m also unsure of the value of analyzing my decisions based on what I imagine galaxy-civilizations 200 million years from now might think about them. I say “unsure” not out of snarkiness but because while I can see some possible value of taking this viewpoint, I also can imagine lots of ways it could be dangerous.
Okay, but then let’s talk about the marginal impact of the next added effort to combat global warming upon the probability of Earth-originating intelligent life surviving to create future galactic civilizations, instead of being distracted by the Rare Diseases of Cute Puppies part.
What do you think is the probability of global warming knocking down our planet to the point where we lose the ability to invent nanotechnology or Artificial Intelligence, and never get it back, or something else bad happens to us before we get it back?
Do you really think it’s easier to make a case for that problem than for paying attention to the intelligence explosion? Even neglecting the marginal effect of the next added effort…
It really seems like if you’re thinking quantitatively like a good utilitarian consequentialist (note: this is a codeword for “being sane”) then this shouldn’t be a difficult call. The class of direct existential risks, nanotech and AI and synthetic bioweapons, just beats the global warming stuff cold. Once you start thinking quantitatively and attending to scope, probability, and marginal impacts instead of raw warm fuzzies, there just isn’t any case left for directing the next marginal effort to global warming.
Besides, if you actually do want to take over the world and are interested in using your talents as a math professor to this end, I do know who you ought to be talking to.
Eliezer wrote:
It’s very hard for me to estimate this! To me the most likely ‘really really bad’ scenario is: global warming leading to droughts, floods, crop failures and all-out war using all the splendid weapons we have, knocking back our civilization to something much more ragged than it is today. I can easily imagine this as having a more than 5% probability, though I have no idea where I’m getting that estimate from. How likely it is that the resulting bleak world would bounce back to the point where I’d say “oh well, it was really no big deal after all!” – that’s even harder to estimate.
No, I don’t really want to take over the world. If I did, I would have started much sooner.
John, that’s a very long-winded way of saying that the issue isn’t the accuracy of the extinction rate estimate! ;-) If we took a “problem finding” approach with our solutions, I think we’d get a lot further.
As Alpha Omega says: “Remember that the leading source of problems is solutions…” That’s not an idle observation, but a highly useful one. What would be the problem with solving global warming, for example? If you carefully read the interpretations of the “high sounding” plans, what’s actually in them is the very same strategy economists have had all along, unaltered in any significant way: to remove barrier after barrier to continually multiplying our rate of resource consumption and our environmental impacts of all uncontrollable kinds.
That this gets disguised as a “solution” and receives unquestioning promotion shows the kind of mad-science social movement our desperation has brought us to. Sure, burning the furniture would also warm a cold house, but why work yourself up into such a frenzy that you can no longer see what you’re doing?
The rule feels better once you’ve practiced it a few hundred times: try looking for the problems in your solutions, and it will save simply enormous amounts of frustration and treasure.
There’s nothing here I haven’t heard argued before, and nothing that persuades me of any stronger position than that which I’ve always accepted: it’s possible to construct existential risk scenarios without resorting to anything currently known to be logically or physically impossible.
@Greg Egan
@John Baez
I am sure you both agree with Eliezer Yudkowsky on most topics.
To make future exchanges more productive, I would like to ask you to pinpoint where exactly you disagree with him.
Currently I perceive the debate to be unnecessarily vague and we would all benefit from discussing where we disagree rather than merely stating that we disagree.
Thank you.
In my opinion, bringing people from different disciplines around a (virtual) table to discuss mathematical modeling of environmental and economic systems is a HUGE contribution and one that not many people can pull off.
Yes there are already many people studying these systems, but we need many many more. Ideally these mathematical models should become so commonplace that they could be studied even in high schools. Then we will have progress.
Also, I don’t think that our problems can be solved by “more intelligence” alone, of any kind, in the same way they couldn’t be solved by much faster computers, instant translation, brain-computer interfaces, genetic engineering, or an army of super-geniuses.
Maybe my approach is too reductionist, but if you want to develop, say, lithium-air batteries (or a cure for cancer, or do pretty much anything really important), what you need is a lot of funding, experiments, simulations, and basically a lot of trial and error. Yes, you’ll get there (or anywhere else) faster if you have smart people involved, but ultimately the bottleneck is not a lack of intelligence, I think, but the time and money needed to build things and test nature.
Giampiero wrote:
I could probably pull that off. It would be lots of fun, too! But then I read something like this and think that maybe such a discussion is just a glorified version of thumb-twiddling unless we bring to it some sense of urgency.
(Of the 10 news stories I just linked to, suppose 8 or 9 turn out to be grossly exaggerated. Then we’re still in serious trouble.)
So, okay: maybe I should do something a bit like what you’re suggesting, but with some discussion of possible problems thrown in. I’m sort of doing this already… but I’d like to get more people involved, and I’d like to start seeing some concrete ‘results’ of some sort.
Maybe it is worthwhile to spend some time making your concept of ‘green mathematics’ more precise and opening a discussion. Quite a few people have argued that we need such a new kind of mathematics to deal with our current problems; however, virtually nobody is actually working on it.
I would like to make my concept of ‘green mathematics’ more precise, but I don’t know how yet. Maybe more discussions would help.
I am trying to develop a new kind of mathematics, but incrementally, in small steps, because that’s all I see how to do. I’m starting by developing a theory of networks that will clarify the relation between:
• Petri nets
• chemical reaction networks
• belief networks (or ‘Bayesian networks’)
• neural nets
• electrical circuits
• bond graphs
• Feynman diagrams
• diagrams in Odum’s ‘systems ecology’
and a few other things. I know I can do this. It’s probably just a tiny part of ‘green mathematics’. But one advantage of a bite-sized project that actually succeeds is that it will help get more mathematicians interested. As you seem to hint, mathematicians are reluctant to work on big projects that haven’t been precisely defined and might turn out to be just fluff.
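Just to give a concrete feel for the kind of common structure I have in mind, here is a tiny sketch in Python – purely illustrative, with made-up names – of how a Petri net and a chemical reaction network can be encoded as the very same data, and how the usual rate equation falls out of that data:

# A minimal sketch of the structure shared by Petri nets and chemical
# reaction networks: species (places), transitions (reactions), and
# input/output multiplicities.  All names here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Transition:
    name: str
    inputs: dict    # species -> number consumed (tokens or molecules)
    outputs: dict   # species -> number produced
    rate: float     # rate constant / firing rate

@dataclass
class Network:
    species: list
    transitions: list = field(default_factory=list)

    def rate_equation(self, conc):
        """Deterministic rate equation d[X]/dt for each species,
        using the law of mass action."""
        ddt = {s: 0.0 for s in self.species}
        for t in self.transitions:
            flux = t.rate
            for s, m in t.inputs.items():
                flux *= conc[s] ** m          # mass-action flux
            for s, m in t.inputs.items():
                ddt[s] -= m * flux            # inputs get used up
            for s, m in t.outputs.items():
                ddt[s] += m * flux            # outputs get produced
        return ddt

# Read as chemistry: the reaction H2 + O -> H2O with rate constant 1.
# Read as a Petri net: a transition eating one token from each of two
# places and putting one token in a third.
net = Network(['H2', 'O', 'H2O'],
              [Transition('combine', {'H2': 1, 'O': 1}, {'H2O': 1}, 1.0)])
print(net.rate_equation({'H2': 2.0, 'O': 0.5, 'H2O': 0.0}))

The same bookkeeping of ‘things’ and ‘processes’ shows up under different names in each of the subjects listed above; the project is to describe it once, precisely, and then see what the special features of each subject add.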
With you 100% on this, John. What may also help get people engaged with this topic is that it not only explains physical phenomena but is also at the root of some sophisticated inferencing schemes, and thus of decision-making algorithms. Excuse my hyperventilation, but this is really breathtaking in its potential scope.
You very probably know this already, but network theory is also often subsumed under operations research – not quite in the sense of operations on assembly lines (though it would be interesting to know how those would operate on reactors), but in the traditional sense.
I just mention this because the terminology sometimes slips between fields.
Does it make sense to speak of a ‘green mathematics’? We cannot dictate the uses to which a mathematical theory may be put; we can only trust it will be used wisely. Riemann couldn’t foresee that his work in geometry would be taken up within a physical theory, later required to maintain the timing of a system of satellites used for environmental and military purposes.
Imagine you make advances in network theory by extracting an idea from a treatment of a specific kind of network to reveal its generality. If you then use the idea to advance, say, Bayesian networks with it, you’re as likely as not to have marketing divisions paying the closest attention as they seek to enhance algorithms to learn from customers’ previous purchases what they’re likely to buy next. It may also allow what you would take to be positive outcomes.
But, regardless of the name, I think you’ve located a very interesting direction in which to push forward. As I dimly saw from my time with learning theorists, there’s a great deal of structural similarity being noticed in a range of applied maths, and yet there are insufficient tools and expertise to get right to the bottom of this similarity. A nice case of a similarity is mentioned on p. 65 of Seeger’s notes.
Your skills are of just the kind to get to the bottom of such similarities, and then to draw others along in your trail. Von Neumann would be smiling down on the network project, given well-known views expressed in The Mathematician.
To be fair, the other thing is that, being an applied field, machine learning isn’t only interested in the “find bigger and bigger pictures” game but also in details of specific issues that don’t apply more generally.
David wrote:
Maybe, maybe not. This is what I wrote when introducing that phrase:
I wish there were a branch of mathematics—in my dreams I call it green mathematics—that would interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. If the 20th century was the century of physics, while the 21st is the century of biology, shouldn’t mathematics change too? As we struggle to understand and improve humanity’s interaction with the biosphere, shouldn’t mathematicians have some role to play?
Or maybe ‘green mathematics’ can only be born after we realize it needs to be fundamentally different than traditional mathematics. For starters, it may require massive use of computers, instead of the paper-and-pencil methods that work so well in traditional math. Simulations might become more important than proofs. That’s okay with me. Mathematicians like things to be elegant—but one can still have elegant definitions and elegant models, even if one needs computer simulations to see how the models behave.
Perhaps ‘green mathematics’ will require a radical shift of viewpoint that we can barely begin to imagine.
It’s also possible that ‘green mathematics’ already exists in preliminary form, scattered throughout many different fields: mathematical biology, quantitative ecology, bioinformatics, artificial life studies, and so on. Maybe we just need more mathematicians to learn these fields and seek to synthesize them.
I’m not sure what I think about this ‘green mathematics’ idea. But I think I’m getting a vague feel for it. This may sound corny, but I feel it should be about structures that are more like this:
than this:
There’s a different kind of order here.
Right. I certainly wasn’t using ‘green mathematics’ to mean something like ‘mathematics all of whose uses the Green party would approve of’.
Thanks! Now I think you’re talking about the project I call ‘network theory’. That’s a much more specific and real thing than the nebulous dream of ‘green mathematics’. But I like to hope that network theory will play some role in understanding biochemistry, biology, and ecosystems. There’s a quote by B. C. Patten and M. Witkamp that intrigues me:
I don’t think it’ll be that simple, but it’s an interesting notion. I urge you to find a copy of this book and stare at the diagrams:
• Howard T. Odum, Systems Ecology: an Introduction, Wiley-Interscience, New York, 1983.
You’ll see what’s on my mind.
Looking at the second and third answers to an MO question asking about applications, green mathematics would seem to include algebraic topology and differential geometry.
Plus, the computer simulation itself can be elegant if “done right”.
What is it that we need “green mathematics” for, exactly? I mean we’re basically in the following epistemic state right now: We know the weather is going to go to hell in the next ninety years, and we know there’s a chance it goes to double hell in the next thirty years if the clathrates melt. We know that we ought to be building 10,000 LFTR reactors… and we know that our declining civilization now lacks the political will and engineering know-how to design new nuclear reactors of a type that was thrown up in a few years as an offhand research project back in the 1960s. We know that we’ll probably end up doing geoengineering because it’s easy, that solar cells will get cheaper but possibly not cheap enough, and that lots of other creative solutions won’t get used because our civilization is too stupid.
What’s a more advanced network theory going to actually change?
@Eliezer Yudkowsky
All you wrote there applies even more to friendly AI than to environmentalism.
I also know that you don’t buy into that line of reasoning so I am puzzled why you employ it against environmentalism now.
That something seems hopeless is no argument in favor of giving up; it doesn’t mean we shouldn’t try anyway.
The critical difference between friendly AI research and environmentalism is that the problem the latter addresses is more certain to occur than the one the former addresses (at least that is what some people like Greg Egan seem to believe).
The difference with Friendly AI is that there’s something very specific that we need the math for; we need the math to build a Friendly AI in the first place. There are people who want to do this, and could find funding to do this, but can’t do this strictly because nobody knows enough math.
What I’m suggesting here is that global warming, in contrast, is not a math problem.
How about advancedly networked AI?
Eliezer wrote:
I’m not saying we “need” green mathematics. I’m certainly not proposing it as a silver bullet against global warming.
Green mathematics is just my catchword to summarize this puzzle: if the 20th century was the century of physics and the 21st is the century of biology, what will this do to mathematics?
It’s quite amazing to look at math departments and see how much of the work there is connected to string theory. Maybe U. C. Riverside is atypical, but it looks like 11 out of 23 professors have done work connected to string theory, including myself. Most of these people aren’t mathematical physicists, either: it’s just that math has coevolved with physics for centuries, and string theory is a kind of culmination of this relationship: it uses a spectacularly wide variety of different kinds of math, and sheds new light on it, and raises all sorts of fascinating questions. There’s enough there to work on for another century.
On the other hand, string theory hasn’t made any predictions testable with current-day technology. And even if it did, it doesn’t seem likely to lead to technological spinoffs the way quantum mechanics or even quantum field theory did. At least not for a long time. Our understanding of the fundamental laws of physics is already way ahead of our ability to exploit it. We’ve got more than enough fundamental physics to do nanotech, for example.
Meanwhile, biologists are doing very practical things… but biologists I talk to all say they need more math to find useful patterns in the huge piles of data they’re accumulating. In fact my friend Christopher Lee at the Center for Computational Biology at UCLA is trying to mechanize the experimental method, including the generation of hypotheses! You might be interested in his paper Empirical information metrics for prediction power and experiment planning. I wouldn’t be shocked if this line of work eventually led to new ideas in AI, especially since all the big money is in biology and medicine.
But I digress. My point is that it seems like a bit of a shame that so many mathematicians are working on math inspired by string theory when biologists are crying out for help. On the other hand, as a mathematician I know exactly why this is the case. Fundamental physics is neat and elegant, full of puzzles that mathematicians can pick up off the shelf and try to solve. Biology seems like a huge mess. Maybe it is a huge mess, or maybe we just haven’t found the right ways to think about it. Mathematicians like beautiful structures that have been polished for centuries—structures that we know incredibly well, but still hold deep mysteries. Most mathematicians don’t like exploring a wild frontier where the street signs haven’t been set up because the roads haven’t even been built yet.
So, even if some new kind of ‘green mathematics’ is possible, it’ll take time. But I can’t resist trying to speed it up a little.
Eliezer wrote:
I’m really glad that ‘we’ are in this epistemic state. I hadn’t known your views on these matters.
I wish that ‘they’ were also in this epistemic state. The sooner ‘they’ get out of the state of denial they’re in, the better off we’ll be.
However, I believe that people’s thinking will only catch up with reality when things get worse. At that point I’m expecting governments to get rather desperate. So, I’d like to have a lot of scientists ready with some good ideas and some plans. So, I want to get scientists and engineers working on that. That’s one thing the Azimuth Project is about: getting scientists and engineers educated about these issues, and giving them a place to think about them.
This conversation right here is part of that.
One point of disagreement here: no matter what course of action we take, I don’t think ‘easy’ will be an apt description of it.
For example, suppose we go with geoengineering. While it seems relatively easy to change the Earth’s climate through geoengineering (see Benford), it doesn’t seem so easy to be confident we’re doing a good job of it. The climate is a complex system, and we’d be pushing it in new ways. People will get very angry if the weather in country X cools down nicely while country Y gets thrown into a deep freeze one winter. There’d be no way to avoid some weather disasters somewhere, but whoever is in charge is going to be under huge pressure to do a good job.
Of course, if people get desperate enough they may do anything. But doing geoengineering well would require a quite elaborate combination of simulation, experiment, and constant feedback.
Planetary control theory is a subject still in its infancy: we’ve got big groups of people running general circulation models trying to predict the climate given various economic scenarios, but nobody (as far as I know) running models with feedback loops where, say, the amount of sulfur dioxide put into the atmosphere each week is adjusted according to last week’s weather. In some ways control is easier than prediction: compare building a thermostat that keeps your living room at a comfortable temperature to the task of predicting its temperature next week if you don’t have a thermostat! But it’s still a big challenge when it comes to something like the Earth’s climate.
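Here’s the crudest possible illustration of that last point – a toy with made-up numbers, emphatically not a climate model of any sort:

import random

# Toy illustration of "control can be easier than prediction" -- NOT a
# climate model.  The forcing each step is unpredictable, but a crude
# proportional controller that only looks at the current anomaly keeps
# the anomaly small and bounded.
random.seed(0)

anomaly = 0.0    # 'temperature anomaly', arbitrary units
gain = 0.5       # feedback gain (made-up number)

for week in range(52):
    forcing = 0.1 + random.gauss(0, 0.05)   # unknown, fluctuating warming forcing
    injection = gain * anomaly              # cooling effort set purely by feedback
    anomaly += forcing - injection          # system update

print(f"anomaly after a year: {anomaly:+.3f}")   # hovers near forcing/gain

Even though we can’t predict the forcing, the dumb feedback rule keeps the anomaly bounded near a small offset. The hard part for a real planet is that the ‘system update’ line is actually a coupled nonlinear system with long delays and regional structure – which is why doing this well would take the elaborate combination of simulation, experiment, and constant feedback I mentioned above.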
So, there’s a lot of new science and engineering required here. And I think there’s a lot required for almost any approach to dealing with climate change.
Umm, well, that’s not quite true. Here’s another approach:
But if this doesn’t work, we may need to resort to science and engineering.
Network theory is pretty abstract math, so as usual it’s hard to know exactly how it will help anything. I’m working on network theory not because this should be the focus of the Azimuth Project, and not because it’ll end global warming, but because 1) it’s something I see how to do, and 2) it’s a way to lure mathematicians away from the delights of math inspired by fundamental physics, toward math that has more contact with practical problems.
Network theory is already secretly used in biochemistry, control theory, population biology and system ecology, queueing theory, circuit design, machine learning, parallel computing, quantum information processing, quantum field theory, and probably lots more subjects I’m forgetting or don’t know about yet.
Experts in these various subjects keep reinventing the mathematical tools—or at least, tools that look the same from a sufficiently abstract perspective. But the general theory of networks remains obscure, because little pieces show up scattered here and there, usually cloaked in technical vocabulary that repels people from other disciplines.
So, getting the subject cleared up should make all sorts of interesting things happen, which couldn’t happen before. It’ll take a few years to get this rolling and then with luck it’ll take care of itself—the math involved has a lot of the ‘classical beauty’ that mathematicians can easily recognize and respond to.
In some ways it’s dumb to attempt this in parallel with trying to start the Azimuth Project, but I seem to need to do some math each day to stay happy.
The logical solution would be to convince these mathematicians that “green” applications of string theory exist. I can hear the sales pitch now:
“Hey, it’s great that you have all these results on Kerr black holes and integrability of super-Yang–Mills theory. Really, that’s shiny! Say, while we’re on the subject of ‘t Hooft expansions, did you know you can define one for the Kardar–Parisi–Zhang model of dynamical surface growth? I wonder if any of those fancy duality tools would be useful for figuring out, you know, critical exponents and stuff like that.
“And on the topic of field theories for nonequilibrium statistical physics, I’ve got this problem which involves a Lagrangian that’s basically the same as the one for Reggeonic quasiparticles in the scattering problem which originally gave rise to string theory, way back when. The idea that we could steal something from one problem and apply it to the other, well, it’s not much crazier than other stuff I’ve seen proposed.”
OK, maybe it’s science fiction, but it could at least be well-crafted science fiction. :-)
I predict someday physicists will apply string theory to biology and declare that success in this area confirms the theory, just as some do now with quark-gluon plasma. But I just collected a case of scotch after winning a bet with Dave Ring, who claimed the LHC would see strong evidence of supersymmetry after one year of operation. So it’s me, not the string theorists, who is making really practical predictions.
While I have your ear, Blake: is the procedure for getting Hamiltonians described here the “cookbook recipe” you mentioned? I figure it must be. Has someone discussed that recipe in abstract generality before, or only in dozens of special cases?
Any applications are better than none; applications involving scotch are better still. (The only bet I’d have been willing to make over the LHC was that it’d find something nobody was expecting. I suppose that when the machine broke, we could have started a pool on how long it would take to fix…)
The “cookbook” procedure I had in mind was this description by Täuber, Howard and Vollmayr-Lee (2005):
(N.B.: They use the opposite sign convention for the Hamiltonian.)
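If I’m stating it correctly, the single-species version of that recipe boils down to this: for each reaction $\tau$ that consumes $m(\tau)$ particles and produces $n(\tau)$ of them with rate constant $r(\tau)$, the Hamiltonian picks up a term $r(\tau)\bigl((a^\dagger)^{n(\tau)} - (a^\dagger)^{m(\tau)}\bigr)a^{m(\tau)}$, so that altogether

H \;=\; \sum_{\tau} r(\tau)\,\Bigl( (a^{\dagger})^{\,n(\tau)} \;-\; (a^{\dagger})^{\,m(\tau)} \Bigr)\, a^{\,m(\tau)}, \qquad [a, a^{\dagger}] = 1,

and the master equation reads $\frac{d}{dt}\Psi = H\Psi$, where $\Psi(z) = \sum_k \psi_k z^k$ is the generating function of the probabilities, $a = \frac{d}{dz}$ and $a^{\dagger} = z$ (hence the sign-convention caveat above).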
Blake wrote:
Does nothing at all count as something nobody was expecting? Wouldn’t it be freaky if they found nothing, not even a Higgs?
Thanks a million for that reference to Täuber, Howard and Vollmayr-Lee. Yes, that’s exactly the recipe described in part 5. Of course there’s no other way it could work. It was fun rediscovering it, though. I was in Cambodia with nothing to do except climb around beautiful ancient temples, and I had the delicious delusion that I was dreaming up this stuff for the first time.
Unfortunately I think you’re right that things are roughly that bad. The “declining civilization” you mention is one candidate for the most fundamental problem — that is, current wealth and wisdom could solve these problems if we were rationally governed, but we seem to be getting farther from that rather than closer. If you could fix that, you’d be addressing Friendly AI and climate risks simultaneously, since a rational society would increase funding and improve policy for both of them.
But of course, making society or government rational is probably harder than either of those problems themselves. More to the point, your (Eliezer’s) comparative advantage leads you to work on Friendly AI directly (rather than on things that might indirectly support work on it), and to write about rationality in the hope of making an important subset of society a bit more rational; John’s comparative advantage (and his much lower estimate of the near-term risk of unfriendly AI) leads him to (among other things) look for new ways of integrating some mathematical approaches to complex systems (which if successful could have all kinds of indirect effects, including even on understanding of intelligence or of society). (Neither of you feels your comparative advantage favors a direct attempt to improve the overall working of society or government.)
“But of course, making society or government rational is probably harder than either of those problems themselves.”
Actually, I think making a friendly government is easier than making a friendly AI: a government has human-scale intelligence rather than superhuman intelligence, and we know a lot more about human psychology than about AI psychology.
[…] In response to my post asking What To Do?, Lee Smolin pointed out this conference on energy […]
Remember that the leading source of problems is solutions, and the last thing the world needs is “help”. “Green mathematics” sounds like some kind of midlife crisis, an aging mathematician’s attempt to make himself “relevant” and be one of the “good guys” in the fight to “save civilization”. But this is all absurd! As Von Neumann said to Feynman, “you don’t have to be responsible for the world you’re in.” What’s more, this kind of do-gooderism has a rather unimpressive record in reality, and has a nasty tendency to create vast new problems that can’t be solved by doing more good!
If you really want to accelerate progress, forget selfless science and think about seeking power. Was there ever a period of more rapid technological change on this planet than the epic power struggle of 1939-1945? I think not. What does this tell you?
It tells me that this planet is not a stage for morality plays, that power struggles drive progress and that “doing good” has little to do with it. We are clearly approaching another period of crisis on this planet, and no amount of do-gooderism is likely to stop it. However, there is a chance that a bold and brilliant group of scientists and technologists can seize unprecedented power with technologies like artificial super-intelligence. So I agree with Yudkowsky that this is what ambitious people who really want to make an impact on our world should devote their brainpower to, but they should just dispense with all pretense of being “good guys” and simply seek world domination. Even if they fail, I’m willing to bet that they’ll do more to advance civilization than a hundred groups of impotent do-gooders!
Good point. Eliezer? Let’s stop messing around and just take over the world.
Even if you go AI Galt, you should ensure AI is friendly to Galt.
Alpha Omega: I’m impressed with your clarity, tempting “do-gooders” with the “perfect good” of taking their solutions to the end of all problems. Given that the role of technology is to control nature ever more completely, and that it only fails when that control breaks down for some natural cause, the clear solution is to eliminate that problem, drop all pretenses, and just go ahead and finally make technology a complete success!
[…] John Baez explains his motivation behind his Azimuth Project — a great example. […]
[…] Eliezer Yudkowsky replies to John Baez Find whatever you’re best at; if that thing that you’re best at is inventing new […]
[…] am also highly skeptical about using the expected value of a galactic civilization to claim otherwise. Because that reasoning will ultimately make you privilege unlikely […]
[…] Last but not least one should always be wary of a group of people with strong beliefs about the possibility of doom as the result of the actions of another group of people (in this case the culprits being AI and computer scientists). Even more so if it is a group who believes that the fate of an intergalactic civilization depends on their actions. […]
Greg Egan wrote:
In some ways this post is a followup to What To Do (Part 1), so if you haven’t read that, you might want to now.
I am quite new here, having made only a cursory review of your blog; I stumbled upon it by way of a link to the “Voynich Manuscript”. Nevertheless, I was quite intrigued by your curious conditional invitation: “If you want to help save the planet, please send me an email or say hi on my link…” Gosh, you make it sound so extraordinarily simple! :-) Okay, by all means: “Hi, indeed!”
Regarding the Daddy Warbucks Dear John letter, the phrase “If all the barriers were removed…” jumped off the page at me. This is because I have seen this magic done before. There is an entire pseudo-science that manages the miracle by reassigning what might otherwise be deemed “constraints” to something almost benign and inconsequential. What were once “constraints” are now “externalities”, and presto: a frontier without barriers! Some others, however, maintain that such magic is something else entirely – something like chicanery. A wise man once remarked quite sternly, “Capital cannot abide a limit”, and there are a number of us who believe him.
That said, I would suggest that it might be wise to approach the challenge using a philosophy quite different, in general nature, from that implicit in the language of the letter. That is to say, how about solving a cluster of smaller problems, where there are, hopefully, more successes than failures? In other words, you could, with your insight and keen observation, become a sort of technical clearinghouse for the ideas of others. Indeed, your post here is sort of just that. But how about making it, officially, the vehicle you use to solve the problem(s)? I get the idea, generally, from “The Cathedral and the Bazaar” by Eric S. Raymond. Have you read it, by chance? What do you think?
I answered your question over here.