Eric Drexler has a document calling for a view of superintelligent systems in which, instead of focusing on agents or minds, we focus on intelligent services. This is, of course, the approach taken by industrial AI so far. But the idea of a superintelligent agent with its own personality, desires and motivations still has a strong grip on our fantasies of the future.
• Eric Drexler, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Technical Report #2019-1, Future of Humanity Institute.
His abstract begins thus:
Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a basis for a different approach to understanding them.
The desire to build an independent, self-motivated superintelligent agent (“AGI”: artificial general intelligence) still beckons to many. But Drexler suggests treating this as a deviant branch we should avoid. He instead wants us to focus on “CAIS”: comprehensive AI services.
First, we don’t have any practical reason to want AGI:
In practical terms, we value potential AI systems for what they could do, whether driving a car, designing a spacecraft, caring for a patient, disarming an opponent, proving a theorem, or writing a symphony. Scientific curiosity and long-standing aspirations will encourage the development of AGI agents with open-ended, self-directed, human-like capabilities, but the more powerful drives of military competition, economic competition, and improving human welfare do not in themselves call for such agents. What matters in practical terms are the concrete AI services provided (their scope, quality, and reliability) and the ease or difficulty of acquiring them (in terms of time, cost, and human effort).
Second, it’s harder to create agents with their own motives than to create services. And third, such agents are riskier.
But there’s no sharp line between “AI as service” and “AI as agent”, so endless care is required if we want CAIS but not AGI:
There is no bright line between safe CAI services and unsafe AGI agents, and AGI is perhaps best regarded as a potential branch from an R&D-automation/CAIS path. To continue along safe paths from today’s early AI R&D automation to superintelligent-level CAIS calls for an improved understanding of the preconditions for AI risk, while for any given level of safety, a better understanding of risk will widen the scope of known-safe system architectures and capabilities. The analysis presented above suggests that CAIS models of the emergence of superintelligent-level AI capabilities, including AGI, should be given substantial and arguably predominant weight in considering questions of AI safety and strategy.
Although it is important to distinguish between pools of AI services and classic conceptions of integrated, opaque, utility-maximizing agents, we should be alert to the potential for coupled AI services to develop emergent, unintended, and potentially risky agent-like behaviors. Because there is no bright line between agents and non-agents, or between rational utility maximization and reactive behaviors shaped by blind evolution, avoiding risky behaviors calls for at least two complementary perspectives: both (1) design-oriented studies that can guide implementation of systems that will provide requisite degrees of e.g., stability, reliability, and transparency, and (2) agent-oriented studies that support design by exploring the characteristics of systems that could display emergent, unintended, and potentially risky agent-like behaviors. The possibility (or likelihood) of humans implementing highly-adaptive agents that pursue open-ended goals in the world (e.g., money-maximizers) presents particularly difficult problems.
Perhaps “slippage” toward agency is a bigger risk than the deliberate creation of a superintelligent agent. I feel extremely unconfident in the ability of humans to successfully manage anything, except for short periods of time. Nor am I confident that any superintelligence would be better at this: it might be better at managing things, but it would have more to manage. Drexler writes:
Superintelligent-level aid in understanding and implementing solutions to the AGI control problem could greatly improve our strategic position.
and while this is true, such aid offers even more opportunity for “slippage”.
I suspect that whatever can go wrong eventually does. Luckily a lot goes right, too. We fumble, stumble and tumble forward into the future.