In a comment on my last interview with Yudkowsky, Eric Jordan wrote:
John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.
I’ve been thinking about this a lot.
Indeed, the reason I quit work on my previous area of interest—categorification and higher gauge theory—was the feeling that more and more people were moving into it. When I started, it seemed like a lonely but exciting quest. By now there are plenty of conferences on it, attended by plenty of people. It would be a full-time job just keeping up, much less doing something truly new. That made me feel inadequate—and worse, unnecessary. Helping start a snowball roll downhill is fun… but what’s the point in chasing one that’s already rolling?
The people working in this field include former grad students of mine and other youngsters I helped turn on to the subject. At first this made me a bit frustrated. It’s as if I engineered my own obsolescence. If only I’d spent less time explaining things, and more time proving theorems, maybe I could have stayed at the forefront!
But by now I’ve learned to see the bright side: it means I’m free to do other things. As I get older, I’m becoming ever more conscious of my limited lifespan and the vast number of things I’d like to try.
But what to do?
This is a big question. It’s a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I’ll talk about me, because I’m not up to tackling this question in its universal abstract form. But it could be you asking this, too.
For me this question was brought into sharp focus when I got a research position where I was allowed—nay, downright encouraged!—to follow my heart and work on what I consider truly important. In the ordinary course of life we often feel too caught up in the flow of things to do more than make small course corrections. Suddenly I was given a burst of freedom. What to do with it?
In my earlier work, I’d always taken the attitude that I should tackle whatever questions seemed most beautiful and profound… subject to the constraint that I had a good chance of making some progress on them. I realized that this attitude assumes other people will do most of the ‘dirty work’, whatever that may be. But I figured I could get away with it. I figured that if I were ever called to account—by my own conscience, say—I could point to the fact that I’d worked hard to understand the universe and also spent a lot of time teaching people, both in my job and in my spare time. Surely that counts for something?
I had, however, for decades been observing the slow-motion train wreck our civilization seems to be caught up in. Global warming, ocean acidification and habitat loss may be combining to cause a mass extinction event, and perhaps—in conjunction with resource depletion—a serious setback to human civilization. Now is not the time to go over all the evidence: suffice it to say that I think we may be heading for serious trouble.
It’s hard to know just how much trouble. If it were just routine ‘misery as usual’, I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.
I’m not sure this complex of problems rises to the level of an ‘existential risk’—which Nick Bostrom defines as one where an adverse outcome would either annihilate intelligent life originating on Earth or permanently and drastically curtail its potential. But I see scenarios where we clobber ourselves quite seriously. They don’t even seem unlikely, and they don’t seem very far-off, and I don’t see people effectively rising to the occasion. So, just as I’d move to put out a fire if I saw smoke coming out of the kitchen and everyone else was too busy watching TV to notice, I feel I have to do something.
But the question remains: what to do?
Eliezer Yudkowsky had some unabashed advice:
I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.
So how do you go about protecting the future of intelligent life? Environmentalism? After all, there are environmental catastrophes that could knock over our civilization… but then if you want to put the whole universe at stake, it’s not enough for one civilization to topple, you have to argue that our civilization is above average in its chances of building a positive galactic future compared to whatever civilization would rise again a century or two later. Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.
Similarly with other short-term problems. Unless they’re little-known and unpopular problems, the marginal impact is not going to make sense, because millions of other people will already be working on them. And even if you argue that some short-term problem leverages existential risk, it’s not going to be perfect leverage and some quantitative discount will apply, probably a large one. I would be suspicious that the decision to work on a short-term problem was driven by warm glow, status drives, or simple conventionalism.
With that said, there’s also such a thing as comparative advantage—the old puzzle of the lawyer who works an hour in the soup kitchen instead of working an extra hour as a lawyer and donating the money. Personally I’d say you can work an hour in the soup kitchen to keep yourself going if you like, but you should also be working extra lawyer-hours and donating the money to the soup kitchen, or better yet, to something with more scope. (See “Purchase Fuzzies and Utilons Separately” on Less Wrong.) Most people can’t work effectively on Artificial Intelligence (some would question if anyone can, but at the very least it’s not an easy problem). But there’s a variety of existential risks to choose from, plus a general background job of spreading sufficiently high-grade rationality and existential risk awareness. One really should look over those before going into something short-term and conventional. Unless your master plan is just to work the extra hours and donate them to the cause with the highest marginal expected utility per dollar, which is perfectly respectable.
Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point.
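To make the shape of this argument explicit, here is a toy version (the model and the numbers are mine, purely for illustration, not Eliezer’s). Suppose $N$ people are already working on a cause, and suppose, crudely, that one more person’s marginal impact falls off like $1/N$. Then the expected value of joining looks something like

$$ \mathrm{EU}(\text{cause}) \;\approx\; \underbrace{S}_{\text{scope}} \,\times\, \underbrace{\frac{1}{N}}_{\text{marginal impact}} \,\times\, \underbrace{A}_{\text{comparative advantage}} $$

With, say, $N \approx 10^6$ for environmentalism and $N \approx 10^2$ for some neglected existential risk, the crowded cause needs a scope or comparative advantage roughly $10^4$ times greater just to break even under this crude model. That, I take it, is the gut-level content of ‘marginal impact’.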
When I heard this, one of my first reactions was: “Of course I don’t want to do anything ‘conventional’, something that ‘millions of people’ are already doing”. After all, my sense of being just another guy in the crowd was a big factor in leaving work on categorification and higher gauge theory—and most people have never even heard of those subjects!
I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new. Though I always want more people to join in, we’ve already got some good projects going that take advantage of my ‘comparative advantage’: the ability to do math and explain stuff.
The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems. I’ve been putting most of my energy into that lately, and coming up with ideas faster than I can explain them. On top of that, Eric Forgy, Tim van Beek, Staffan Liljegren, Matt Reece, David Tweed and others have other interesting projects cooking behind the scenes on the Azimuth Forum. I’ll be talking about those soon, too.
I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just having fun, it’s very tricky to determine the best way to proceed.
One can, of course, easily fool oneself into thinking one knows.