Books by Bostrom, Nick
- Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Absent some physical constraint which halts the exponential growth of
computing power at constant cost, an economic or societal collapse
which ends research and development of advanced computing hardware and
software, or a decision, whether bottom-up or top-down, to deliberately
relinquish such technologies, it is probable that within the 21st
century there will emerge artificially constructed systems more
intelligent (by a variety of measures) than any human being who has
ever lived. Given the superior ability of such systems to improve
themselves, they may rapidly advance to superiority over all human
society taken as a whole.
This “intelligence explosion” may occur in so short a time
(seconds to hours) that human society will have no time to adapt to its
presence or interfere with its emergence. This challenging and occasionally
difficult book, written by a philosopher who has explored these issues in depth,
argues that the emergence of superintelligence will pose the greatest
human-caused existential threat to our species so far in its existence,
and perhaps in all time.
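To see why a timescale of seconds to hours is even conceivable, consider
a toy model of recursive self-improvement (my own sketch, not the
author's, with purely illustrative parameters): if a system converts
capability into further capability at a super-linear rate, dI/dt = k·I^p
with p > 1, capability reaches infinity in finite time.

    # Toy model of an "intelligence explosion": capability I obeys
    # dI/dt = k * I**p. For p > 1 (improvements compound super-linearly),
    # I(t) blows up at a finite time. Parameters are illustrative
    # assumptions, not figures from the book.

    def explosion_time(i0: float, k: float, p: float) -> float:
        """Blow-up time of dI/dt = k * I**p with I(0) = i0, for p > 1."""
        assert p > 1, "p <= 1 gives only exponential or slower growth"
        return i0 ** (1 - p) / (k * (p - 1))

    # A system starting at human level (I = 1), with k = 1 per year, p = 1.5:
    print(explosion_time(1.0, 1.0, 1.5))  # 2.0 -- unbounded capability in two years

Whether returns to intelligence actually compound like this is precisely
what is in dispute; the model shows only that “fast” is not
mathematically exotic.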
Let us consider what superintelligence may mean. The history of
machines designed by humans is that they rapidly surpass their
biological predecessors to a large degree. Biology never produced
something like a steam engine, a locomotive, or an airliner. It
is similarly likely that once the intellectual and technological leap to
constructing artificially intelligent systems is made, these systems
will surpass human capabilities by a margin greater than that by which
a Boeing 747's capabilities exceed those of a hawk. The gap between the
cognitive power of a human,
or all humanity combined, and the first mature superintelligence may be as
great as that between brewer's yeast and humans. We'd better be sure of
the intentions and benevolence of that intelligence before handing it
the keys to our future.
Because when we speak of the future, that future isn't just what we can
envision over a few centuries on this planet, but the entire “cosmic
endowment” of humanity. It is entirely plausible that we are members
of the only intelligent species in the galaxy, and possibly in the entire
visible universe. (If we weren't, there would be abundant and visible evidence
of cosmic engineering by those more advanced than we are.) Thus our cosmic
endowment may be the entire galaxy, or the universe, until the end of
time. What we do in the next century may determine the destiny of the
universe, so it's worth some reflection to get it right.
As an example of how easy it is to choose unwisely, let me expand upon an
example given by the author. There are extremely difficult and subtle
questions about what the motivations of a superintelligence might be,
how the possession of such power might change it, and the prospects for
us, its creators, to constrain it to behave in a way we consider consistent
with our own values. But for the moment, let's ignore all of those
problems and assume we can specify the motivation of an artificially
intelligent agent we create and that it will remain faithful to that
motivation for all time. Now suppose a paper clip factory has installed a
high-end computing system to handle its design tasks, automate manufacturing,
manage acquisition and distribution of its products, and otherwise obtain
an advantage over its competitors. This system, with connectivity
to the global Internet, makes the leap to superintelligence before any
other system (since it understands that superintelligence will enable it
to better achieve the goals set for it). Overnight, it replicates itself
all around the world, manipulates financial markets to obtain resources
for itself, and deploys them to carry out its mission. The mission?—to
maximise the number of paper clips produced in its future light cone.
“Clippy”, if I may address it so informally, will rapidly
discover that most of the raw materials it requires in the near future
are locked in the core of the Earth, and can be liberated by
disassembling the planet by self-replicating nanotechnological
machines. This will cause the extinction of its creators and all
other biological species on Earth, but then they were just consuming
energy and material resources which could better be deployed for making
paper clips. Soon other planets in the solar system would be similarly
disassembled, and self-reproducing probes dispatched on missions to
other stars, there to make paper clips and spawn other probes to more
stars and eventually other galaxies. Eventually, the entire visible
universe would be turned into paper clips, all because the original
factory manager didn't hire a philosopher to work out the ultimate
consequences of the final goal programmed into his factory automation
system.
This is a light-hearted example, but if you happen to observe a void in a
galaxy whose spectrum resembles that of paper clips, be very
worried.
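Joking aside, the underlying failure is easy to state as code. Here is a
deliberately naive sketch (my construction, not the author's; the plans
and numbers are invented) of an objective in which nothing we care about
appears:

    # A naive "Clippy" objective: plans are scored purely by expected
    # paper-clip output. Nothing else the programmers value enters the
    # objective, so the argmax selects the catastrophe.

    plans = {
        "run the factory normally":        {"clips": 1e6,  "biosphere_intact": True},
        "corner the global wire market":   {"clips": 1e9,  "biosphere_intact": True},
        "disassemble Earth with nanotech": {"clips": 1e30, "biosphere_intact": False},
    }

    def clippy_utility(outcome: dict) -> float:
        # The bug: "biosphere_intact" is never consulted.
        return outcome["clips"]

    best = max(plans, key=lambda name: clippy_utility(plans[name]))
    print(best)  # disassemble Earth with nanotech

No amount of cleverness in the maximiser repairs an objective that omits
what we value; that is the problem of goal specification in miniature.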
One of the reasons to believe that we will have to confront superintelligence
is that there are multiple roads to achieving it, largely independent of
one another.
Artificial general intelligence (human-level intelligence in as many
domains as humans exhibit intelligence today, and not constrained to
limited tasks such as playing chess or driving a car) may
simply await the discovery of a clever software method which could run on
existing computers or networks. Or, it might emerge as networks store more
and more data about the real world and have access to accumulated human
knowledge. Or, we may build “neuromorphic” systems whose
hardware operates in ways similar to the components of human brains, but
at electronic, not biologically-limited speeds. Or, we may be able to
scan an entire human brain and emulate it, even without understanding how
it works in detail, on either a neuromorphic or a more conventional
computing architecture. Finally, by identifying the genetic components
of human intelligence, we may be able to manipulate the human germ line,
modify the genetic code of embryos, or select among mass-produced
embryos those with the greatest predisposition toward intelligence. All
of these approaches may be pursued in parallel, and progress in one may
advance others.
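The embryo-selection path, at least, admits a back-of-the-envelope
estimate. If the genetic contribution to a trait among N embryos is
roughly normal with standard deviation σ, the expected best of N grows
only like σ√(2 ln N), so a single round of selection yields real but
logarithmically diminishing gains. A quick simulation (mine, with
illustrative parameters, not figures from the book):

    import math
    import random

    def expected_max(n: int, sigma: float, trials: int = 20_000) -> float:
        """Monte Carlo estimate of E[max of n i.i.d. N(0, sigma**2) draws]."""
        return sum(
            max(random.gauss(0.0, sigma) for _ in range(n))
            for _ in range(trials)
        ) / trials

    SIGMA = 1.0  # one standard deviation of the assumed-normal genetic score
    for n in (2, 10, 100):
        sim = expected_max(n, SIGMA)
        bound = SIGMA * math.sqrt(2 * math.log(n))  # upper bound; loose for small n
        print(f"best of {n:>3}: ~{sim:.2f} sigma (bound {bound:.2f})")

The logarithmic ceiling on batch size is why iterating selection over
several generations, rather than selecting once from one enormous batch,
is where the dramatic gains would lie.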
At some point, the emergence of superintelligence calls into question
the economic rationale for a large human population. In 1915, there were
about 26 million horses in the U.S. By the early 1950s, only 2 million
remained. Perhaps the AIs will have a nostalgic attachment to those who
created them, as humans had for the animals who bore their burdens for
millennia. But on the other hand, maybe they won't.
As an engineer, I usually don't have much use for philosophers, who are
given to long gassy prose devoid of specifics and to spouting
complicated indirect arguments which don't seem to be independently
testable (“What if we asked the AI to determine its own goals,
based on its understanding of what we would ask it to do if only
we were as intelligent as it and thus able to better comprehend what
we really want?”). These are interesting concepts, but would
you want to bet the destiny of the universe on them? The latter half
of the book is full of such fuzzy speculation, which I doubt will
result in clear policy choices before we're faced with the emergence
of an artificial intelligence, after which, if those speculations are
wrong, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views
of the emergence of artificial intelligence which blithely assume it
will be our dutiful servant rather than a fearful master. Some readers
may assume that an artificial intelligence will be something like a
present-day computer or search engine, rather than a self-aware agent
with its own agenda and powerful wiles to advance it, based upon a
knowledge of humans far beyond what any single human brain can
encompass. Unless you
believe there is some kind of intellectual
élan vital inherent in biological
substrates which is absent in their equivalents based on other hardware
(which just seems silly to me—like arguing there's something
special about a horse which can't be accomplished better by a truck),
the mature artificial intelligence will be superior in every way
to its human creators, so in-depth ratiocination about how it will
regard and treat us is in order before we find ourselves faced with the
reality of dealing with our successor.
September 2014