Barrat, James. Our Final Invention. New York: Thomas Dunne Books, 2013. ISBN 978-0-312-62237-4.
As a member of that crusty generation who began programming mainframe computers
with punch cards in the 1960s, I find that the phrase “artificial intelligence”
evokes an almost visceral scepticism. Since its origin in the 1950s, the
field has been a hotbed of wildly over-optimistic enthusiasts,
predictions of breakthroughs which never happened, and some
outright confidence men preying on investors and on institutions
making research grants.
John McCarthy, who organised the first international conference on artificial
intelligence (a term he coined), predicted at the time that computers would
achieve human-level general intelligence within six months of concerted
research toward that goal. In 1970 Marvin Minsky said, “In from three to
eight years we will have a machine with the general intelligence of an
average human being.” And these were serious scientists and pioneers of the
field; the charlatans and hucksters were even more absurd in their predictions.
And yet, and yet…. The exponential growth in computing power available
at constant cost has allowed us to “brute force” numerous problems
once considered within the domain of artificial intelligence. Optical
character recognition (machine reading), language translation, voice recognition,
natural language query, facial recognition, chess playing at the grandmaster
level, and self-driving automobiles were all once thought to be things a
computer could never do unless it vaulted to the level of human intelligence,
yet now most have become commonplace or are on the way to becoming so. Might
we, in the foreseeable future, be able to brute force human-level general
intelligence?
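The scale of that “brute force” trajectory is easy to sketch with a back-of-envelope calculation. The 18-month doubling period below is my illustrative assumption (a commonly cited Moore's-law figure), not a number from the book:

```python
# Growth in computing power available at constant cost, assuming a
# doubling every 18 months (illustrative Moore's-law assumption).
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years):
    """Multiplicative increase in compute per dollar over `years` years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Roughly a hundredfold gain per decade, a millionfold over three decades.
print(f"10 years: ~{growth_factor(10):,.0f}x")
print(f"30 years: ~{growth_factor(30):,.0f}x")
```

At that rate, three decades separate a problem that is hopelessly out of reach from one that is routine, which is why tasks once filed under “requires intelligence” keep falling to raw computation.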
Let's step back and define some terms. “Artificial General Intelligence” (AGI)
means a machine with intelligence comparable to that of a human across all of
the domains of human intelligence (and not limited, say, to playing chess
or driving a vehicle), with self-awareness and the ability to learn from
mistakes and improve its performance. It need not be embodied in a robot form
(although some argue it would have to be to achieve human-level performance),
but could certainly pass the
Turing test: a human communicating
with it over whatever channels of communication are available (in the
original formulation of the test, a text-only teleprinter) would not be able to
determine whether he or she were communicating with a machine or another human.
“Artificial Super Intelligence” (ASI) denotes a machine whose
intelligence exceeds that of the most intelligent human. Since a self-aware
intelligent machine will be able to modify its own programming, with immediate
effect, as opposed to biological organisms which must rely upon the achingly
slow mechanism of evolution, an AGI might evolve into an ASI in an eyeblink:
arriving at intelligence a million times or more greater than that of any
human, a process which
I. J. Good called
an “intelligence explosion”.
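The “eyeblink” follows from nothing more exotic than compounding: a machine that can rewrite its own programming gains some fraction of capability per improvement cycle, and each cycle makes the next one easier. A toy model, in which the 10% rate, the million-fold target, and the one-hour cycle time are all illustrative assumptions of mine, not figures from the book:

```python
# Toy model of recursive self-improvement: each cycle, the machine
# improves its own capability by a fixed fraction. All parameters
# are illustrative assumptions.
IMPROVEMENT_PER_CYCLE = 0.10   # 10% smarter after each self-modification

intelligence = 1.0             # 1.0 = human level (AGI)
cycles = 0
while intelligence < 1_000_000:      # a million times human level
    intelligence *= 1 + IMPROVEMENT_PER_CYCLE
    cycles += 1

# 145 cycles suffice; at an hour per cycle, that is under a week.
print(f"{cycles} cycles to exceed a million times human intelligence")
```

The point of the sketch is not the particular numbers but the shape of the curve: any sustained multiplicative self-improvement crosses a million-fold gap in a trivial number of cycles.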
What will it be like when, for the first time in the history of our species,
we share the planet with an intelligence greater than our own? History
is less than encouraging. All members of genus Homo which were less
intelligent than modern humans (inferring from cranial capacity and
artifacts, although one can argue about Neanderthals) are extinct.
Will that be the fate of our species once we create
a super intelligence? This
book presents the case that the construction of an ASI will be not only
the last invention we need to make, since it will be able
to anticipate anything we might invent long before we can ourselves, but also our
final invention, because we won't be around to make any more.
What will be the motivations of a machine a million times more intelligent than
a human? Could humans understand such motivations any more than brewer's
yeast could understand ours? As
Eliezer Yudkowsky
observed,
“The AI does not hate you, nor does it love you, but you are made
out of atoms which it can use for something else.” Indeed, when humans
plan to construct a building, do they take into account the wishes of
bacteria in soil upon which the structure will be built? The gap between
humans and ASI will be as great. The consequences of creating ASI may extend
far beyond the Earth. A super intelligence may decide to propagate itself throughout
the galaxy and even beyond: with immortality and the ability to create
perfect copies of itself, even travelling at a fraction of the speed of light
it could spread itself into all viable habitats in the galaxy in a few hundreds
of millions of years—a small fraction of the billions of years life has
existed on Earth. Perhaps ASI probes from other extinct biological civilisations
foolish enough to build them are already headed our way.
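That timescale is easy to sanity-check with rough numbers. The cruise speed, hop distance, and replication pause below are my illustrative assumptions, not figures from the book:

```python
# Sanity check of the galactic colonisation timescale. All figures
# are illustrative assumptions.
GALAXY_DIAMETER_LY   = 100_000  # Milky Way diameter in light years
YEARS_PER_LIGHT_YEAR = 10       # cruising at 10% of the speed of light
HOP_LY               = 10       # typical distance between systems visited
PAUSE_YEARS          = 5_000    # time spent self-replicating at each stop

travel_years = GALAXY_DIAMETER_LY * YEARS_PER_LIGHT_YEAR  # 1,000,000
hops         = GALAXY_DIAMETER_LY // HOP_LY               # 10,000 stops
pause_total  = hops * PAUSE_YEARS                         # 50,000,000
total_years  = travel_years + pause_total

print(f"pure travel: {travel_years:,} years")
print(f"with replication pauses: {total_years:,} years")
```

Even with generous pauses the crossing takes a few tens of millions of years; more conservative assumptions push it toward the few hundreds of millions cited above, either way a small fraction of the billions of years life has existed on Earth.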
People are presently working toward achieving AGI. Some are in the academic and
commercial spheres, with their work reasonably transparent and reported
in public venues. Others are “stealth companies” or divisions
within companies (does anybody doubt that Google's achieving an AGI level of
understanding of the information it Hoovers up from the Web would be an
overwhelming competitive advantage?). Still others are funded by
government agencies or operate within the black world: certainly players such
as NSA dream of being able to understand all of the information they intercept
and cross-correlate it. There is a powerful “first mover” advantage
in developing AGI and ASI. The first who obtains it will be able to
exploit its capability against those who haven't yet achieved it. Consequently,
notwithstanding the worries about loss of control of the technology, players
will be motivated to support its development for fear their adversaries might
get there first.
This is a well-researched and extensively documented examination of the
state of artificial intelligence and assessment of its risks. There are
extensive end notes including references to documents on the Web which, in
the Kindle edition, are linked directly to their sources.
In the Kindle edition, the index is just a list of “searchable terms”,
not linked to references in the text. There are a few goofs, as you might
expect from a documentary filmmaker writing about technology
(“Newton's second law of thermodynamics”), but nothing which
invalidates the argument made herein.
I find myself oddly ambivalent about the whole thing. When I hear
“artificial intelligence” what flashes through my mind
remains that dielectric material I step in when I'm insufficiently
vigilant crossing pastures in Switzerland. Yet with the sheer increase
in computing power, many things previously considered AI have been
achieved, so it's not implausible that, should this exponential increase
continue, human-level machine intelligence will be achieved either through
massive computing power applied to cognitive algorithms or direct emulation
of the structure of the human brain. If and when that happens, it is
difficult to see why an “intelligence explosion” will not
occur. And once that happens, humans will be faced with an
intelligence that dwarfs that of their entire species: one which will have
already penetrated every last corner of their infrastructure, read every
word available online written by any human, and which will deal with
its human interlocutors after gaming trillions of scenarios on
cloud computing resources it has co-opted.
And still we advance the cause of artificial intelligence every day.
Sleep well.
December 2013