- Awret, Uziel, ed.
The Singularity.
Exeter, UK: Imprint Academic, 2016.
ISBN 978-1-84540-907-4.
For more than half a century, the prospect of a technological
singularity has been part of the intellectual landscape of those
envisioning the future. In 1965, in a paper titled “Speculations
Concerning the First Ultraintelligent Machine”, statistician
I. J. Good
wrote,
Let an ultraintelligent machine be defined as a machine
that can far surpass all of the intellectual activities of
any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine
could design even better machines; there would then
unquestionably be an “intelligence explosion”, and
the intelligence of man would be left far behind. Thus the first
ultraintelligent machine is the
last invention that man need ever make.
(The idea of a runaway increase in intelligence had been discussed
earlier, notably by Robert A. Heinlein in a 1952 essay titled
“Where To?”) Discussion of an intelligence explosion
and/or technological singularity was largely confined to science
fiction and the more speculatively inclined among those trying to
foresee the future, mainly because the prerequisite—building machines
which were more intelligent than humans—seemed such a distant prospect,
especially as the initially optimistic claims of workers in the field of
artificial intelligence gave way to disappointment.
Over all those decades, however, the exponential growth in computing
power available at constant cost continued. The funny thing about
continued exponential growth is that it doesn't matter what fixed
level you're aiming for: the exponential will eventually exceed it, and
probably a lot sooner than most people expect. By the 1990s, it was
clear just how far the growth in computing power and storage had come,
and that there were no technological barriers on the horizon likely
to impede continued growth for decades to come. People started to
draw straight lines on semi-log paper and discovered that, depending
upon how you evaluate the computing capacity of the human brain (a
complicated and controversial question), the computing power of
a machine with a cost comparable to a present-day personal computer
would cross the human brain threshold sometime in the twenty-first century.
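To make the arithmetic of that extrapolation concrete, here is a minimal sketch in Python. Every figure in it is my own illustrative assumption, not something taken from the book: a doubling time of roughly two years for computing at constant cost, about 10^10 operations per second for a circa-2000 personal computer, and 10^16 to 10^18 operations per second as two of the many contested estimates of the brain's capacity.

```python
import math

# Purely illustrative assumptions; none of these figures come from the book.
doubling_time_years = 2.0          # assumed doubling time for compute at constant cost
pc_ops_in_2000 = 1e10              # assumed ops/sec of a circa-2000 personal computer
brain_estimates = [1e16, 1e18]     # two of the many contested estimates of brain ops/sec

def crossing_year(target_ops, start_ops=pc_ops_in_2000, start_year=2000,
                  doubling=doubling_time_years):
    """Year at which constant-cost compute exceeds target_ops,
    assuming exponential growth simply continues."""
    doublings_needed = math.log2(target_ops / start_ops)
    return start_year + doublings_needed * doubling

for est in brain_estimates:
    print(f"Brain at {est:.0e} ops/sec -> crossing around {crossing_year(est):.0f}")
# Brain at 1e+16 ops/sec -> crossing around 2040
# Brain at 1e+18 ops/sec -> crossing around 2053
```

The particular years mean nothing, since every parameter is disputed, but the exercise shows why any finite threshold is reached within decades once continued exponential growth is taken as given.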
There seemed to be a limited number of alternative outcomes.
- Progress in computing comes to a halt before reaching
parity with human brain power, due to technological
limits, economics (inability to afford the new
technologies required, or lack of applications to fund
the intermediate steps), or intervention by authority
(for example, regulation motivated by a desire to
avoid the risks and displacement due to super-human
intelligence).
- Computing continues to advance, but we find that the human
brain is either far more complicated than we believed it to be,
or that something is going on in there which cannot be
modelled or simulated by a deterministic computational process.
The goal of human-level artificial intelligence recedes into
the distant future.
- Blooie! Human-level machine intelligence is achieved,
successive generations of machine intelligences run away
to approach the physical limits of computation, and before
long machine intelligence exceeds that of humans to the degree
humans surpass the intelligence of mice (or maybe insects).
Now, the thing about this is that many people will dismiss such speculation
as science fiction having nothing to do with the “real world”
they inhabit. But there's no more conservative form of forecasting
than observing a trend which has been in existence for a long time
(in the case of growth in computing power, more than a century, spanning
multiple generations of very different hardware and technologies), and
continuing to extrapolate it into the future, and then asking, “What
happens then?” When you go through this exercise and an answer
pops out which seems to indicate that within the lives of many people
now living, an event completely unprecedented in the history
of our species—the emergence of an intelligence which far
surpasses that of humans—might happen, the prospects and
consequences bear some serious consideration.
The present book, based upon two special issues of the Journal
of Consciousness Studies, attempts to examine the probability,
nature, and consequences of a singularity from a variety of
intellectual disciplines and viewpoints. The volume begins with
an essay by philosopher
David Chalmers
originally published in 2010: “The Singularity: A Philosophical
Analysis”, which attempts to trace various paths to a
singularity and evaluate their probability. Chalmers does not
attempt to estimate the time at which a singularity may occur—he
argues that if it happens any time within the next few centuries,
it will be an epochal event in human history which is worth
thinking about today. Chalmers contends that the argument for
artificial intelligence (AI) is robust because there appear to be
multiple paths by which we could get there, and hence AI does not
depend upon a fragile chain of technological assumptions which might
break at any point in the future. We could, for example, continue to
increase the performance and storage capacity of our computers to
the point where the “deep learning” techniques already
used in computing applications, combined with access to the vast amount
of digital data on the Internet, cross the threshold of human
intelligence. Or, we may continue our progress in reverse-engineering
the microstructure of the human brain and apply our ever-growing computing
power to emulating it at a low level (this scenario is discussed in
detail in Robin Hanson's
The Age of Em [September 2016]).
Or, since human intelligence was produced by the process of evolution,
we might set our supercomputers to simulate evolution itself (which
we're already doing to some extent with
genetic algorithms)
in order to evolve super-human artificial intelligence (not only would
computer-simulated evolution run much faster than biological evolution,
it would not be random, but rather directed toward desired
results, much like selective breeding of plants or livestock).
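As a toy illustration of that directed-evolution idea, the skeleton of a genetic algorithm fits in a few lines. Everything in this sketch, from the genome size to the fitness function standing in for the trait being selected, is an arbitrary placeholder of my own, not anything proposed in the book.

```python
import random

# Toy genetic algorithm: genome size, fitness function, and rates are
# arbitrary placeholders, not a recipe for evolving intelligence.
GENOME_BITS = 32
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    """Placeholder score standing in for whatever trait we select for."""
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection is directed toward the desired result, like selective
    # breeding: keep the fitter half and breed the next generation from it.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("Best fitness after", GENERATIONS, "generations:",
      max(map(fitness, population)))
```

The point of the sketch is only that the selection pressure is supplied by the experimenter rather than by nature, which is what makes simulated evolution directed rather than random.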
Regardless of the path or paths taken, the outcome will be
one of the three discussed above: either a singularity or no
singularity. Assume, arguendo,
that the singularity occurs, whether before 2050 as some optimists
project or many decades later. What will it be like? Will it be
good or bad? Chalmers writes,
I take it for granted that there are potential good and bad
aspects to an intelligence explosion. For example, ending
disease and poverty would be good. Destroying all sentient
life would be bad. The subjugation of humans by machines
would be at least subjectively bad.
…well, at least in the eyes of the humans. If there is a
singularity in our future, how might we act to maximise the
good consequences and avoid the bad outcomes? Can we design
our intellectual successors (and bear in mind that we will design
only the first generation: each subsequent generation will be
designed by the machines which preceded it) to share human
values and morality? Can we ensure they are “friendly”
to humans and not malevolent (or, perhaps, indifferent, just as
humans do not take into account the consequences for ant
colonies and bacteria living in the soil upon which buildings are
constructed)? And just what are “human values and
morality” and “friendly behaviour” anyway, given
that we have been slaughtering one another for
millennia in disputes over such issues? Can we impose safeguards
to prevent the artificial intelligence from “escaping”
into the world? What is the likelihood we could prevent such a
super-being from persuading us to let it loose, given that it thinks
thousands or millions of times faster than we do, has access to all of
human written knowledge, and can model and simulate the
effects of its arguments? Is turning off an AI murder, or terminating
the simulation of an AI society genocide? Is it moral to confine an
AI to what amounts to a sensory deprivation chamber or solitary
confinement, or to deceive it about the nature of the
world outside its computing environment?
What will become of humans in a post-singularity world? Given that
our species is the only survivor of genus
Homo,
history is not encouraging, and the gap between human intelligence
and that of post-singularity AIs is likely to be orders of
magnitude greater than that between modern humans and the
great apes. Will these super-intelligent AIs have consciousness
and self-awareness, or will they be
philosophical
zombies: able to mimic the behaviour of a conscious being but
devoid of any internal sentience? What does that even mean, and how
can you be sure other humans you encounter aren't zombies? Are you
really all that sure about yourself? And might machines have
qualia
of their own?
Perhaps the human destiny is to merge with our mind children, either
by enhancing human cognition, senses, and memory through implants in our
brain, or by uploading our biological brains into a different computing
substrate entirely, whether by emulation at a low level (for example,
simulating neuron by neuron at the level of synapses and neurotransmitters),
or at a higher, functional level based upon an understanding of the
operation of the brain gleaned from analysis by AIs. If you upload your
brain into a computer, is the upload conscious? Is it you? Consider
the following thought experiment: replace each biological neuron of
your brain, one by one, with a machine replacement which interacts with
its neighbours precisely as the original meat neuron did. Do you cease
to be you when one neuron is replaced? When a hundred are replaced?
A billion? Half of your brain? The whole thing? Does your consciousness
slowly fade into zombie existence as the biological fraction of your
brain declines toward zero? If so, what is magic about biology, anyway?
Isn't arguing that there's something about the biological substrate
which uniquely endows it with consciousness as improbable as the discredited
theory of
vitalism, which
contended that living things had properties which could not be
explained by physics and chemistry?
Now let's consider another kind of uploading. Instead of incremental
replacement of the brain, suppose an anæsthetised human's brain
is destructively scanned, perhaps by molecular-scale robots, and its
structure transferred to a computer, which will then emulate it
precisely as the incrementally replaced brain was emulated in the previous
example. When the process is done, the original brain is a puddle of
goo and the human is dead, but the computer emulation now has all of the
memories and life experience of its progenitor, and the same ability to interact with the world. But
is it the same person? Did the consciousness and perception of identity
somehow transfer from the brain to the computer? Or will the computer
emulation mourn its now departed biological precursor, as it contemplates
its own immortality? What if the scanning process isn't
destructive? When it's done, BioDave wakes up and makes the acquaintance
of DigiDave, who shares his entire life up to the point of uploading.
Certainly the two must be considered distinct individuals, as are
identical twins whose histories diverged in the womb, right? Does
DigiDave have rights in the property of BioDave?
“Dave's not
here”? Wait—we're both here!
Now what?
Or, what about somebody today who, in the
sure
and certain hope of the Resurrection to eternal life,
opts to have their brain
cryonically preserved
moments after clinical death is pronounced? After the singularity,
the decedent's brain is scanned (in this case it's irrelevant whether or
not the scan is destructive), and uploaded to a computer, which starts
to run an emulation of it. Will the person's identity and consciousness
be preserved, or will it be a new person with the same memories and
life experiences? Will it matter?
Deep questions, these. The book presents Chalmers' paper as a
“target essay”, and then invites contributors in
twenty-six chapters to discuss the issues raised. A concluding essay
by Chalmers replies to the essays and defends his arguments against
the objections raised by their authors. The essays, and their authors,
are all over the map. One author strikes this reader as a confidence
man and another a crackpot—and these are two of the more
interesting contributions to the volume. Nine chapters are by
academic philosophers, and are mostly what you might expect: word
games masquerading as profound thought, with an admixture of ad hominem argument, including one
chapter which descends into Freudian pseudo-scientific analysis of
Chalmers' motives and says that he “never leaps to
conclusions; he oozes to conclusions”.
Perhaps these are questions philosophers are ill-suited to ponder.
Unlike questions of the nature of knowledge, how to live a good life,
the origins of morality, and all of the other diffuse gruel about
which philosophers have been arguing since societies became
sufficiently wealthy to indulge in them, without any notable resolution
in more than two millennia, the issues posed by a
singularity have answers. Either the singularity will occur
or it won't. If it does, it will either result in the extinction of
the human species (or its reduction to irrelevance), or it won't.
AIs, if and when they come into existence, will either be conscious, self-aware,
and endowed with free will, or they won't. They will either share the
values and morality of their progenitors or they won't. It will either be
possible for humans to upload their brains to a digital substrate, or
it won't. These uploads will either be conscious, or they'll be
zombies. If they're conscious, they'll either continue the identity
and life experience of the pre-upload humans, or they won't. These
are objective questions which can be settled by experiment. You get the
sense that philosophers dislike experiments—they're a risk to
the job security of those who dispute questions their ancestors have been puzzling over
at least since Athens.
Some authors dispute the probability of a singularity and argue that
the complexity of the human brain has been vastly underestimated.
Others contend there is a distinction between computational power
and the ability to design, and consequently exponential growth
in computing may not produce the ability to design super-intelligence.
Still another chapter dismisses the evolutionary argument with
evidence that simulating the scope and time scale of terrestrial evolution will remain
computationally intractable into the distant future, even if computing
power continues to grow at the rate of the last century. There is
even a case made that, if a singularity is feasible, it is overwhelmingly
probable that we're living not in a top-level physical universe,
but in a simulation run by post-singularity super-intelligences,
who may be motivated to turn off our
simulation before we reach our own singularity, which might threaten
them.
This is all very much a mixed bag. There are a multitude of Big Questions,
but very few Big Answers among the 438 pages of philosopher
word salad. I find my reaction similar to that of
David Hume,
who wrote in 1748:
If we take in our hand any volume of divinity or school
metaphysics, for instance, let us ask, Does it contain
any abstract reasoning concerning quantity or number?
No. Does it contain any experimental reasoning concerning
matter of fact and existence? No. Commit it then to
the flames, for it can contain nothing but sophistry and illusion.
I don't burn books (it's
uncultured
and expensive when you read them on an iPad), but you'll
probably learn as much pondering the
questions posed here on your own and in discussions with friends
as from the scholarly contributions in these essays. The copy
editing is mediocre, with some eminent authors stumbling over the
humble apostrophe. The
Kindle edition cites cross-references
by page number, which are useless since the electronic edition
does not include page numbers. There is no index.
March 2017