Reframing Superintelligence

Eric Drexler has written a report arguing that when we think about superintelligent systems, we should focus not on agents or minds but on intelligent services. This is, of course, the approach industrial AI has taken so far. But the idea of a superintelligent agent with its own personality, desires and motivations still has a strong grip on our fantasies of the future.

• Eric Drexler, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Technical Report #2019-1, Future of Humanity Institute.

His abstract begins thus:

Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a basis for a different approach to understanding them.

The desire to build an independent, self-motivated superintelligent agent (“AGI”: artificial general intelligence) still beckons to many. But Drexler suggests treating this as a deviant branch we should avoid. He instead wants us to focus on “CAIS”: comprehensive AI services.

First, we don’t have any practical reason to want AGI:

In practical terms, we value potential AI systems for what they could do, whether driving a car, designing a spacecraft, caring for a patient, disarming an opponent, proving a theorem, or writing a symphony. Scientific curiosity and long-standing aspirations will encourage the development of AGI agents with open-ended, self-directed, human-like capabilities, but the more powerful drives of military competition, economic competition, and improving human welfare do not in themselves call for such agents. What matters in practical terms are the concrete AI services provided (their scope, quality, and reliability) and the ease or difficulty of acquiring them (in terms of time, cost, and human effort).

Second, it’s harder to create agents with their own motives than to create services. And third, such agents are riskier.

But there’s no sharp line between “AI as service” and “AI as agent”, so endless care is required if we want CAIS but not AGI:

There is no bright line between safe CAI services and unsafe AGI agents, and AGI is perhaps best regarded as a potential branch from an R&D-automation/CAIS path. To continue along safe paths from today’s early AI R&D automation to superintelligent-level CAIS calls for an improved understanding of the preconditions for AI risk, while for any given level of safety, a better understanding of risk will widen the scope of known-safe system architectures and capabilities. The analysis presented above suggests that CAIS models of the emergence of superintelligent-level AI capabilities, including AGI, should be given substantial and arguably predominant weight in considering questions of AI safety and strategy.

Although it is important to distinguish between pools of AI services and classic conceptions of integrated, opaque, utility-maximizing agents, we should be alert to the potential for coupled AI services to develop emergent, unintended, and potentially risky agent-like behaviors. Because there is no bright line between agents and non-agents, or between rational utility maximization and reactive behaviors shaped by blind evolution, avoiding risky behaviors calls for at least two complementary perspectives: both (1) design-oriented studies that can guide implementation of systems that will provide requisite degrees of e.g., stability, reliability, and transparency, and (2) agent-oriented studies support design by exploring the characteristics of systems that could display emergent, unintended, and potentially risky agent-like behaviors. The possibility (or likelihood) of humans implementing highly-adaptive agents that pursue open-ended goals in the world (e.g., money-maximizers) presents particularly difficult problems.

Perhaps “slippage” toward agency is a bigger risk than the deliberate creation of a superintelligent agent. I feel extremely unconfident in the ability of humans to successfully manage anything, except for short periods of time. I’m not confident in any superintelligence being better at this, either: they could be better at managing things, but they’d have more to manage. Drexler writes:

Superintelligent-level aid in understanding and implementing solutions to the AGI control problem could greatly improve our strategic position.

and while this is true, this offers even more opportunity for “slippage”.

I suspect that whatever can go wrong eventually does. Luckily a lot goes right, too. We fumble, stumble and tumble forward into the future.

13 Responses to Reframing Superintelligence

  1. Toby Bartels says:

    Jeff Morton once suggested to me that general intelligence is a measure of information-processing capability (or something like that). I was sceptical then, but I’ve warmed to the idea that this is more or less what intelligence tests are measuring, or at least trying to measure. And by this standard, the computer systems that we have now are already superintelligent. Indeed, if you program a computer appropriately (to explain what you want it to do, which should be just as fair as translating the instructions between human languages), it can easily ace abstract intelligence tests like Raven’s Matrices (or at least that seems obvious to me, who has never tried to program this). But it has no agency; left to itself, it just sits there. That doesn’t make such systems entirely safe, because people with agency may use them, but they’re not going to decide to destroy humanity for inscrutable reasons, which is the traditional worry.
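    A minimal sketch of the kind of hand-written solver this claim assumes (purely illustrative: the numeric encoding of cells and the single constant-difference rule are assumptions made for the example, and real Raven’s items are far richer) might look like this:

    ```python
    # Toy Raven's-Matrices-style puzzle: each cell is reduced to one number,
    # and each row is assumed to follow a constant-difference progression.

    def predict_missing(matrix):
        """matrix: 3x3 list of ints with the bottom-right entry set to None."""
        # Infer the common difference from the two complete rows.
        diffs = {row[1] - row[0] for row in matrix[:2]} | \
                {row[2] - row[1] for row in matrix[:2]}
        if len(diffs) != 1:
            raise ValueError("toy solver only handles constant-difference rows")
        d = diffs.pop()
        # Apply the same rule to the incomplete last row.
        return matrix[2][1] + d

    puzzle = [
        [1, 2, 3],
        [2, 3, 4],
        [3, 4, None],
    ]
    candidates = [2, 3, 5, 6]
    answer = min(candidates, key=lambda c: abs(c - predict_missing(puzzle)))
    print(answer)  # picks 5, completing the third row's progression
    ```

    The point of the sketch is that all the “intelligence” lives in the rule the programmer wrote down; the system itself has no goals and, left alone, does nothing.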

    • davidwlocke says:

      A 1996 paper on requirements elicitation found that we fail at eliciting requirements, for many reasons. In Agile, we don’t even try. It is the unsolved problem of computer science. Agile has resulted in amateur-level performance. We are not even trying to explain things to another human. And now AI wants to do what it does without explaining any of it to anyone.

  2. davidwlocke says:

    In a recent paper, the author argued against the need to explain AI outputs, or against any expectation of understanding them. I am lost on the point of that. If it doesn’t have to be understood, how is it intelligent?

  3. Wolfgang says:

    Humans are in one way intrinsically curious, and in another way intrinsically lazy. Suppose we create a learning machine that could potentially improve itself indefinitely: is it naive to ask what the rationale would be for the machine to actually do so? The ‘thought’ of ruling or even destroying mankind might be such a rationale, but a rather high level of self-awareness would be required for this to happen, and we don’t even know if any machine will obtain something like consciousness just by endless cycles of learning and self-improvement. Unless, maybe, there is something like a new thermodynamic law that would inevitably force this infinite improvement? It cannot be entropy, in my opinion, since entropy counteracts any form of learning.

  4. Phillip Helbig says:

    As Max Tegmark (and perhaps others) has pointed out, AI is already better than humans at things like chess and go. While the man-beating chess computer was programmed by chess experts, the man-beating go computer essentially programmed itself. At some point, AI will learn how to program AI, which could lead to Kurzweil’s singularity. At some point agency might arise of itself, so to speak. And while it might be improbable for the robots to turn on their creators and destroy them like in some B movie, perhaps their goals are not aligned with our goals. How many species have become extinct because of humans? Practically none were intentionally killed off; rather, our goals (or, rather, the goals of a large enough group of people to make a difference) and their goals were different.

    Max’s book Life 3.0 is an interesting read.

    Max used to do mainly physics (he was a leading cosmologist) but is now doing mostly AI research.

    Asimov’s science fiction is now the better part of a century old, but in my reading of the modern AI literature, I note that most of the moral and other dilemmas were already explored there.

  5. John Baez says:

    I didn’t know Tegmark switched to AI research. Where is he working now? Let’s see… apparently still the MIT physics department.

    At some point agency might arise of itself, so to speak.

    I guess a lot of worries about AI risk revolve around the probability of agency arising “on its own”. Natural life had “agency” long before intelligence, in the sense that even the simplest bacteria act in ways that help them find food, avoid threats and reproduce: it’s a Darwinian imperative. A human-designed system that reproduces because people like it and build more doesn’t have the same sort of pressure to acquire agency. Unless, of course, we are trying to build systems that have, or act like they have, agency! Some people are.

    • Phillip Helbig says:

      One of Max’s emphases is AI risk.

    • Phillip Helbig says:

      “I didn’t know Tegmark switched to AI research. Where is he working now? Let’s see… apparently still the MIT physics department.”

      Where can you go from there? :-) OK, some have moved somewhere else, for various reasons, but not many.

      I guess that’s the advantage of tenure: work on something else if it takes your fancy. (I’m sure that he still teaches physics.)

    • Toby Bartels says:

      Or if the systems are subject to variation and selection, which produces evolution. The agency wouldn’t be directed towards finding food, of course, but towards getting more people to like it. But there’s still potential danger there, since being liked enough to get reproduced is not the same as doing what people want in the long term. Indeed, researchers are already consciously trying to design systems that make people like them, only these researchers work in marketing rather than in computer science.
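      A toy numerical sketch of that selection dynamic (purely illustrative: the mutation model, in which gains in immediate appeal slightly erode long-term value, is an assumption made for the example) should typically show mean appeal drifting up and mean long-term value drifting down:

      ```python
      import random

      random.seed(0)

      def make_variant(parent=None):
          """Each variant has an 'appeal' and a 'value' trait; the assumed
          mutation model trades a little long-term value for extra appeal."""
          if parent is None:
              return {"appeal": 1.0, "value": 1.0}
          delta = random.gauss(0, 0.05)
          return {"appeal": parent["appeal"] + delta,
                  "value": parent["value"] - 0.5 * delta}

      population = [make_variant() for _ in range(200)]
      for generation in range(50):
          # Reproduction is weighted by appeal alone: the "people like it" filter.
          parents = random.choices(population,
                                   weights=[max(v["appeal"], 0.01) for v in population],
                                   k=len(population))
          population = [make_variant(p) for p in parents]

      mean = lambda key: sum(v[key] for v in population) / len(population)
      print(f"mean appeal: {mean('appeal'):.2f}, "
            f"mean long-term value: {mean('value'):.2f}")
      ```

      Nothing here was designed to neglect long-term value; selection on “being liked” alone pushes the population that way.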

      • John Baez says:

        Yes. China introduced the first female artificial news anchor in 2019, and one can imagine a ‘likeability arms race’ in this field.

        Also in video games, pornography, etc.

        I know someone who used to work for Amazon studying people who direct abuse at Alexa. While Alexa doesn’t have any consciousness yet and can’t be offended by abuse, my acquaintance was concerned that Alexa could serve as a ‘training ground’ for abusers, and was interested in studying this. As artificial systems become ever more person-like this issue will grow, especially when pornographers and the sex trade get involved. Pretty much any hair-raising scenario you can imagine will occur as soon as it becomes technically possible and cheap enough.

    • Phillip Helbig says:

      “Natural life had “agency” long before intelligence, in the sense that even the simplest bacteria act in ways that help them find food, avoid threats and reproduce: it’s a Darwinian imperative. A human-designed system that reproduces because people like it and build more doesn’t have the same sort of pressure to acquire agency. Unless, of course, we are trying to build systems that have, or act like they have, agency! Some people are.”

      There is a novel by Rudy Rucker (who, in an interesting way, gives me an Einstein number of 4—not in terms of papers, but in the sense of 4 degrees of separation) where, in order to get over Penrose-type objections involving Gödel’s incompleteness theorem, AI is allowed to evolve from much more basic stuff, rather than be constructed per se.

      Another interesting take on that is Code of the Lifemaker by James P. Hogan. (Some of his earlier science fiction is good, whatever one thinks of his politics; this book and Voyage from Yesteryear are probably my favourites).

      Rucker is a jack of all trades: publishing mathematics as Rudolf v. B. Rucker, founding the cyberpunk literary genre, teaching computer graphics (which is part of my Einstein intersection, to borrow a title from someone else), painting, blogging, photographing, publishing, and so on. He also wrote one of the very few books I’ve read more than once: Infinity and the Mind, which is a somewhat technical popular-mathematics book. Highly recommended!
