- Awret, Uziel, ed.
The Singularity.
Exeter, UK: Imprint Academic, 2016.
ISBN 978-1-84540-907-4.
-
For more than half a century, the prospect of a technological
singularity has been part of the intellectual landscape of those
envisioning the future. In 1965, in a paper titled “Speculations
Concerning the First Ultraintelligent Machine”, statistician
I. J. Good
wrote,
Let an ultraintelligent machine be defined as a machine
that can far surpass all of the intellectual activities of
any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine
could design even better machines; there would then
unquestionably be an “intelligence explosion”, and
the intelligence of man would be left far behind. Thus the first
ultraintelligent machine is the
last invention that man need ever make.
(The idea of a runaway increase in intelligence had been discussed
earlier, notably by Robert A. Heinlein in a 1952 essay titled
“Where To?”) Discussion of an intelligence explosion
and/or technological singularity was largely confined to science
fiction and the more speculatively inclined among those trying to
foresee the future, chiefly because the prerequisite—building machines
which were more intelligent than humans—seemed such a distant prospect,
especially as the initially optimistic claims of workers in the field of
artificial intelligence gave way to disappointment.
Over all those decades, however, the exponential growth in computing
power available at constant cost continued. The funny thing about
continued exponential growth is that it doesn't matter what fixed
level you're aiming for: the exponential will eventually exceed it, and
probably a lot sooner than most people expect. By the 1990s, it was
clear just how far the growth in computing power and storage had come,
and that there were no technological barriers on the horizon likely
to impede continued growth for decades to come. People started to
draw straight lines on semi-log paper and discovered that, depending
upon how you evaluate the computing capacity of the human brain (a
complicated and controversial question), the computing power of
a machine with a cost comparable to a present-day personal computer
would cross the human brain threshold sometime in the twenty-first century.
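The arithmetic behind such extrapolations is trivial, which is part of what makes them so seductive. The sketch below (in Python, purely for illustration) shows the calculation; every number in it is an assumption chosen for the example (estimates of the brain's equivalent computing capacity span several orders of magnitude), not a figure taken from any of the books reviewed here.

```python
# Back-of-the-envelope extrapolation: when does constant-cost computing
# cross an assumed "human brain equivalent" threshold?  All values are
# illustrative assumptions, not claims from the book or this review.
import math

pc_flops_today = 1e12   # assumed throughput of a consumer machine today (FLOPS)
brain_flops    = 1e16   # one commonly cited, and hotly disputed, brain estimate
doubling_years = 2.0    # assumed doubling time of computing power at constant cost
start_year     = 2016   # vantage point of the review

# Solve pc_flops_today * 2**(t / doubling_years) >= brain_flops for t.
years_to_cross = doubling_years * math.log2(brain_flops / pc_flops_today)
print(f"Crossover around {start_year + years_to_cross:.0f}")  # ~2043 with these inputs
```

With these made-up inputs the crossover lands in the 2040s; pick a different brain estimate or doubling time and the date shifts by decades, which is precisely why the timing, though not the logic, of such forecasts is so contentious.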
There seemed to be a limited number of alternative outcomes.
- Progress in computing comes to a halt before reaching
parity with human brain power, due to technological
limits, economics (inability to afford the new
technologies required, or lack of applications to fund
the intermediate steps), or intervention by authority
(for example, regulation motivated by a desire to
avoid the risks and displacement due to super-human
intelligence).
- Computing continues to advance, but we find that the human
brain is either far more complicated than we believed it to be,
or that something is going on in there which cannot be
modelled or simulated by a deterministic computational process.
The goal of human-level artificial intelligence recedes into
the distant future.
- Blooie! Human level machine intelligence is achieved,
successive generations of machine intelligences run away
to approach the physical limits of computation, and before
long machine intelligence exceeds that of humans to the degree
humans surpass the intelligence of mice (or maybe insects).
Now, the thing about this is that many people will dismiss such speculation
as science fiction having nothing to do with the “real world”
they inhabit. But there's no more conservative form of forecasting
than observing a trend which has been in existence for a long time
(in the case of growth in computing power, more than a century, spanning
multiple generations of very different hardware and technologies), and
continuing to extrapolate it into the future, and then asking, “What
happens then?” When you go through this exercise and an answer
pops out which seems to indicate that within the lives of many people
now living, an event completely unprecedented in the history
of our species—the emergence of an intelligence which far
surpasses that of humans—might happen, the prospects and
consequences bear some serious consideration.
The present book, based upon two special issues of the Journal
of Consciousness Studies, attempts to examine the probability,
nature, and consequences of a singularity from a variety of
intellectual disciplines and viewpoints. The volume begins with
an essay by philosopher
David Chalmers
originally published in 2010: “The Singularity: A Philosophical
Analysis”, which attempts to trace various paths to a
singularity and evaluate their probability. Chalmers does not
attempt to estimate the time at which a singularity may occur—he
argues that if it happens any time within the next few centuries,
it will be an epochal event in human history which is worth
thinking about today. Chalmers contends that the argument for
artificial intelligence (AI) is robust because there appear to be
multiple paths by which we could get there, and hence AI does not
depend upon a fragile chain of technological assumptions which might
break at any point in the future. We could, for example, continue to
increase the performance and storage capacity of our computers, to
such an extent that the “deep learning” techniques already
used in computing applications, combined with access to a vast amount
of digital data on the Internet, may cross the line of human
intelligence. Or, we may continue our progress in reverse-engineering
the microstructure of the human brain and apply our ever-growing computing
power to emulating it at a low level (this scenario is discussed in
detail in Robin Hanson's
The Age of Em [September 2016]).
Or, since human intelligence was produced by the process of evolution,
we might set our supercomputers to simulate evolution itself (which
we're already doing to some extent with
genetic algorithms)
in order to evolve super-human artificial intelligence (not only would
computer-simulated evolution run much faster than biological evolution,
it would not be random, but rather directed toward desired
results, much like selective breeding of plants or livestock).
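To make the notion of directed, simulated evolution a bit more concrete, here is a minimal genetic-algorithm sketch in Python. It is a toy, not anything Chalmers or the contributors propose: the bit-string “genome”, the fitness function, and all the parameters are arbitrary assumptions, but the selection step shows how an evolutionary search can be steered toward a chosen goal rather than left to random drift.

```python
# Toy genetic algorithm: evolve bit strings toward a chosen target.
# Everything here (genome, fitness, parameters) is an arbitrary illustration.
import random

TARGET = [1] * 32                   # the "desired result" selection steers toward
POP_SIZE, MUTATION_RATE, GENERATIONS = 100, 0.01, 200

def fitness(genome):
    """Number of positions matching the target; higher is better."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # selection pressure: fittest first
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]        # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best fitness after {generation + 1} generations:",
      f"{fitness(max(population, key=fitness))}/{len(TARGET)}")
```

Replace the fixed target with any scoring function you care to define (the “selective breeding” of the paragraph above) and the same loop optimises for it; the argument is over whether anything remotely like intelligence could be reached this way with feasible computing resources, not over the mechanics.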
Regardless of the path or paths taken, the outcome will be
one of the three discussed above: either a singularity or no
singularity. Assume, arguendo,
that the singularity occurs, whether before 2050 as some optimists
project or many decades later. What will it be like? Will it be
good or bad? Chalmers writes,
I take it for granted that there are potential good and bad
aspects to an intelligence explosion. For example, ending
disease and poverty would be good. Destroying all sentient
life would be bad. The subjugation of humans by machines
would be at least subjectively bad.
…well, at least in the eyes of the humans. If there is a
singularity in our future, how might we act to maximise the
good consequences and avoid the bad outcomes? Can we design
our intellectual successors (and bear in mind that we will design
only the first generation: each subsequent generation will be
designed by the machines which preceded it) to share human
values and morality? Can we ensure they are “friendly”
to humans and not malevolent (or, perhaps, indifferent, just as
humans do not take into account the consequences for ant
colonies and bacteria living in the soil upon which buildings are
constructed)? And just what are “human values and
morality” and “friendly behaviour” anyway, given
that we have been slaughtering one another for
millennia in disputes over such issues? Can we impose safeguards
to prevent the artificial intelligence from “escaping”
into the world? What is the likelihood we could prevent such a
super-being from persuading us to let it loose, given that it thinks
thousands or millions of times faster than we do, has access to all of
human written knowledge, and can model and simulate the
effects of its arguments? Is turning off an AI murder, or terminating
the simulation of an AI society genocide? Is it moral to confine an
AI to what amounts to a sensory deprivation chamber or solitary
confinement, or to deceive it about the nature of the
world outside its computing environment?
What will become of humans in a post-singularity world? Given that
our species is the only survivor of genus
Homo,
history is not encouraging, and the gap between human intelligence
and that of post-singularity AIs is likely to be orders of
magnitude greater than that between modern humans and the
great apes. Will these super-intelligent AIs have consciousness
and self-awareness, or will they be
philosophical
zombies: able to mimic the behaviour of a conscious being but
devoid of any internal sentience? What does that even mean, and how
can you be sure other humans you encounter aren't zombies? Are you
really all that sure about yourself? And might the
qualia
of machines, if they have any, be entirely unlike our own?
Perhaps the human destiny is to merge with our mind children, either
by enhancing human cognition, senses, and memory through implants in our
brain, or by uploading our biological brains into a different computing
substrate entirely, whether by emulation at a low level (for example,
simulating neuron by neuron at the level of synapses and neurotransmitters),
or at a higher, functional level based upon an understanding of the
operation of the brain gleaned from analysis by AIs. If you upload your
brain into a computer, is the upload conscious? Is it you? Consider
the following thought experiment: replace each biological neuron of
your brain, one by one, with a machine replacement which interacts with
its neighbours precisely as the original meat neuron did. Do you cease
to be you when one neuron is replaced? When a hundred are replaced?
A billion? Half of your brain? The whole thing? Does your consciousness
slowly fade into zombie existence as the biological fraction of your
brain declines toward zero? If so, what is magic about biology, anyway?
Isn't arguing that there's something about the biological substrate
which uniquely endows it with consciousness as improbable as the discredited
theory of
vitalism, which
contended that living things had properties which could not be
explained by physics and chemistry?
Now let's consider another kind of uploading. Instead of incremental
replacement of the brain, suppose an anæsthetised human's brain
is destructively scanned, perhaps by molecular-scale robots, and its
structure transferred to a computer, which will then emulate it
just as faithfully as the incrementally replaced brain in the previous
example. When the process is done, the original brain is a puddle of
goo and the human is dead, but the computer emulation now has all of the
memories, life experience, and ability to interact of its progenitor. But
is it the same person? Did the consciousness and perception of identity
somehow transfer from the brain to the computer? Or will the computer
emulation mourn its now departed biological precursor, as it contemplates
its own immortality? What if the scanning process isn't
destructive? When it's done, BioDave wakes up and makes the acquaintance
of DigiDave, who shares his entire life up to the point of uploading.
Certainly the two must be considered distinct individuals, as are
identical twins whose histories diverged in the womb, right? Does
DigiDave have rights in the property of BioDave?
“Dave's not
here”? Wait—we're both here!
Now what?
Or, what about somebody today who, in the
sure
and certain hope of the Resurrection to eternal life,
opts to have their brain
cryonically preserved
moments after clinical death is pronounced? After the singularity,
the decedent's brain is scanned (in this case it's irrelevant whether or
not the scan is destructive), and uploaded to a computer, which starts
to run an emulation of it. Will the person's identity and consciousness
be preserved, or will it be a new person with the same memories and
life experiences? Will it matter?
Deep questions, these. The book presents Chalmers' paper as a
“target essay”, and then invites contributors in
twenty-six chapters to discuss the issues raised. A concluding essay
by Chalmers replies to the essays and defends his arguments against
objections to them by their authors. The essays, and their authors,
are all over the map. One author strikes this reader as a confidence
man and another a crackpot—and these are two of the more
interesting contributions to the volume. Nine chapters are by
academic philosophers, and are mostly what you might expect: word
games masquerading as profound thought, with an admixture of ad hominem argument, including one
chapter which descends into Freudian pseudo-scientific analysis of
Chalmers' motives and says that he “never leaps to
conclusions; he oozes to conclusions”.
Perhaps these are questions philosophers are ill-suited to ponder.
Unlike questions of the nature of knowledge, how to live a good life,
the origins of morality, and all of the other diffuse gruel about
which philosophers have been arguing since societies became
sufficiently wealthy to indulge in such speculation, without any notable resolution
in more than two millennia, the issues posed by a
singularity have answers. Either the singularity will occur
or it won't. If it does, it will either result in the extinction of
the human species (or its reduction to irrelevance), or it won't.
AIs, if and when they come into existence, will either be conscious, self-aware,
and endowed with free will, or they won't. They will either share the
values and morality of their progenitors or they won't. It will either be
possible for humans to upload their brains to a digital substrate, or
it won't. These uploads will either be conscious, or they'll be
zombies. If they're conscious, they'll either continue the identity
and life experience of the pre-upload humans, or they won't. These
are objective questions which can be settled by experiment. You get the
sense that philosophers dislike experiments—they're a risk to the
job security of those who have been disputing questions their
predecessors puzzled over at least since Athens.
Some authors dispute the probability of a singularity and argue that
the complexity of the human brain has been vastly underestimated.
Others contend there is a distinction between computational power
and the ability to design, and consequently exponential growth
in computing may not produce the ability to design super-intelligence.
Still another chapter dismisses the evolutionary argument, citing
evidence that simulating the scope and time scale of terrestrial
evolution will remain computationally intractable into the distant
future even if computing power continues to grow at the rate of the
last century. There is even a case made that if a singularity is
feasible, then it is overwhelmingly probable that we are living, not
in a top-level physical universe, but in a simulation run by
post-singularity super-intelligences, who may be motivated to turn off
our simulation before we reach our own singularity, which may threaten
them.
This is all very much a mixed bag. There are a multitude of Big Questions,
but very few Big Answers among the 438 pages of philosopher
word salad. I find my reaction similar to that of
David Hume,
who wrote in 1748:
If we take in our hand any volume of divinity or school
metaphysics, for instance, let us ask, Does it contain
any abstract reasoning concerning quantity or number?
No. Does it contain any experimental reasoning concerning
matter of fact and existence? No. Commit it then to
the flames, for it can contain nothing but sophistry and illusion.
I don't burn books (it's
некультурный [uncultured]
and expensive when you read them on an iPad), but you'll
probably learn as much pondering the
questions posed here on your own and in discussions with friends
as from the scholarly contributions in these essays. The copy
editing is mediocre, with some eminent authors stumbling over the
humble apostrophe. The
Kindle edition cites cross-references
by page number, which are useless since the electronic edition
does not include page numbers. There is no index.
- Hannan, Daniel.
What Next.
London: Head of Zeus, 2016.
ISBN 978-1-78669-193-4.
-
On June 23rd, 2016, the people of the United Kingdom, against
the advice of most politicians, big business, organised
labour, corporate media, academia, and their self-styled
“betters”, narrowly voted to re-assert their
sovereignty and reclaim the independence of their proud nation,
slowly being dissolved in an “ever closer union”
with the anti-democratic, protectionist, corrupt,
bankrupt, and increasingly authoritarian European Union (EU).
On the day of the referendum, bookmakers gave odds which implied
less than a 20% chance of a Leave vote, and yet by the morning
after, the common sense and perception of right and wrong
of the British people, which had carried them through
wars, economic and social crises, and a
changing international environment, had re-asserted itself and
caused them to say, “No more, thank you. We prefer our
thousand year tradition of self-rule to being dictated to
by unelected foreign oligarchic technocrats.”
The author, Conservative Member of the European Parliament for
South East England since 1999, has been one of the most
vociferous and eloquent advocates of Britain's reclaiming its
independence and a leading campaigner for a Leave vote in the referendum;
the vote was a personal triumph for him. In the introduction,
he writes, “After forty-three years, we
have pushed the door ajar. A rectangle of light dazzles us and,
as our eyes adjust, we see a summer meadow. Swallows swoop against
the blue sky. We hear the gurgling of a little brook. Now to
stride into the sunlight.” What next, indeed?
Before presenting his vision of an independent, prosperous, and
more free Britain, he recounts Britain's history in the European
Union, the sordid state of the institutions of that would-be
socialist superstate, and the details of the Leave campaign,
including a candid and sometimes acerbic view not just of his
opponents but also of his nominal allies. Hannan argues that Leave
ultimately won because those advocating it were able to present
a positive future for an independent Britain. He says that
every time the Leave message veered toward negatives of the existing
relationship with the EU, in particular immigration, polling in
favour of Leave declined, and that whenever the campaign stressed the
positive benefits of independence—for example, free trade with
Commonwealth nations and the rest of the world, local control of Britain's
fisheries and agriculture, and living under laws made in Britain by a
parliament elected by the British people—Leave's polling improved.
Fundamentally, you can only get so far asking people to vote against
something, especially when the establishment is marching in
lockstep to create fear of the unknown among the electorate.
Presenting a positive vision was, Hannan believes, essential to
prevailing.
Central to understanding a post-EU Britain is the distinction
between a free-trade area and a customs union. The EU has done its
best to confuse people about this issue, presenting its single
market as a kind of free trade utopia. Nothing could be farther
from the truth. A free trade area is just what the name implies:
a group of states which have eliminated tariffs and other barriers
such as quotas, and allow goods and services to cross borders
unimpeded. A customs union such as the EU establishes standards
for goods sold within its internal market which, through regulation,
members are required to enforce (hence, the absurdity of unelected
bureaucrats in Brussels telling the French how to make cheese).
Further, while goods conforming to the regulations can be sold
within the union, there are major trade barriers with parties
outside, often imposed to protect industries with political
pull inside the union. For example, wine produced in California
or Chile is subject to a 32% tariff imposed by the EU to protect its
own winemakers. British apparel manufacturers cannot import
textiles from India, a country with long historical and close
commercial ties, without paying EU tariffs intended to protect
uncompetitive manufacturers on the Continent. Pointy-headed
and economically ignorant “green” policies compound
the problem: a medium-sized company in the EU pays 20% more for
energy than a competitor in China and twice as much as one in
the United States. In international trade disputes, Britain in
the EU is represented by one twenty-eighth of a European Commissioner,
while an independent Britain will have its own seat, like New
Zealand, Switzerland, and the US.
Hannan believes that after leaving the EU, the UK should join the
European
Free Trade Association (EFTA), and demonstrates how EFTA
states such as Norway and Switzerland are more prosperous than
EU members and have better trade with countries outside it. (He
argues against joining the
European
Economic Area [EEA], from which Switzerland has wisely
opted out. The EEA provides too much leverage to the Brussels imperium
to meddle in the policies of member states.) More important for
Britain's future than its relationship to the EU is its ability,
once outside, to conclude bilateral trade agreements with important
trading partners such as the US (even, perhaps, joining NAFTA),
Anglosphere countries such as Australia, South Africa, and New Zealand,
and India, China, Russia, Brazil, and other nations: none of which it
can do while a member of the EU.
What of Britain's domestic policy? Free of diktats from Brussels,
it will be whatever Britons wish, expressed through their
representatives at Westminster. Hannan quotes the
psychologist Kurt Lewin, who in the 1940s described change as
a three-stage process. First, old assumptions about the
way things are and the way they have to be become
“unfrozen”. This ushers in a period of rapid
transformation, where institutions become fluid and can
adapt to changed circumstances and perceptions. Then the new
situation congeals into a status quo which endures until
the next moment of unfreezing. For four decades, Britain has
been frozen into an inertia where parliamentarians and
governments respond to popular demands all too often by saying,
“We'd like to do that, but the EU doesn't permit it.”
Leaving the EU will remove this comfortable excuse, and possibly
catalyse a great unfreezing of Britain's institutions. Where
will this ultimately go? Wherever the people wish it to. Hannan
has some suggestions for potential happy outcomes in this bright
new day.
Britain has devolved substantial governance to Scotland, and yet
Scottish MPs still vote in Westminster for policies which affect
England but to which their constituents are not subject. Perhaps
federalisation might progress to the point where the House of Commons
becomes the English Parliament, with either a reformed House of Lords
or a new body empowered to vote only on matters affecting the
entire Union such as national defence and foreign policy. Free of
the EU, the UK can adopt competitive corporate taxation and
governance policies, and attract companies from around the world
to build not just headquarters but also research and development and
manufacturing facilities. The national VAT could be abolished
entirely and replaced with a local sales tax, paid at point of
retail, set by counties or metropolitan areas in competition with
one another (current payments to these authorities by the Treasury are
almost exactly equal to revenue from the VAT); with competition,
authorities will be forced to economise lest their residents vote
with their feet. With their own source of revenue, decision
making for a host of policies, from housing to welfare, could be
pushed down from Whitehall to City Hall. Immigration can be
re-focused upon the country's need for skills and labour,
rather than thrown open to anybody who arrives.
The British vote for independence has been decried by the elitists,
oligarchs, and would-be commissars as a “populist revolt”.
(Do you think those words too strong? Did you know that all of those
EU politicians and bureaucrats are exempt from taxation
in their own countries, and pay a flat tax of around 21%, far less
than the despised citizens they rule?) What is happening, first
in Britain, and before long elsewhere as the corrupt foundations of
the EU crumble, is that the working classes are standing up to
the smirking classes and saying, “Enough.” Britain's
success, which (unless the people are betrayed and their wishes
subverted) is assured, since freedom and democracy always work
better than slavery and bureaucratic dictatorship, will serve to
demonstrate to citizens of other railroad-era continental-scale
empires that smaller, agile, responsive, and free governance
is essential for success in the information age.
- Pratchett, Terry and Stephen Baxter.
The Long War.
New York: HarperCollins, 2013.
ISBN 978-0-06-206869-9.
-
This is the second novel in the authors' series which began with
The Long Earth (November 2012). That
book, which I enjoyed immensely, created a vast new arena for
storytelling: a large, perhaps infinite, number of parallel Earths,
all synchronised in time, among which people can “step”
with the aid of a simple electronic gizmo (incorporating a
potato) whose inventor posted the plans on the Internet on
what has since been called Step Day. Some small fraction of
the population has always been “natural steppers”, able
to move among universes without mechanical assistance, but other
than that tiny minority, all of the worlds of the Long Earth
beyond our own (called the Datum) are devoid of humans. There
are natural stepping humanoids, dubbed “elves” and
“trolls”, but none with human-level intelligence.
As this book opens, a generation has passed since Step Day, and
the human presence has begun to expand into the vast expanses of
the Long Earth. Most worlds are pristine wilderness, with all
the dangers to pioneers venturing into places where
large predators have never been controlled. Joshua Valienté,
whose epic voyage of exploration with Lobsang (who from moment to
moment may be a motorcycle repairman, computer network, Tibetan
monk, or airship) discovered the wonders of these innumerable worlds
in the first book, has settled down to raise a family on a world
in the Far West.
Humans being humans, this gift of what amounts to an infinitely
larger scope for their history has not been without its drawbacks
and conflicts. With the opening of an endless frontier, the
restless and creative have decamped from the Datum to seek adventure
and fortune free of the crowds and control of their increasingly
regimented home world. This has resulted in a drop in innovation and
an economic hit to the Datum, and has led Datum politicians (particularly
in the United States, the grabbiest of all jurisdictions) to seek
to expand their control (and particularly their ability to loot) to
all residents of the so-called “Aegis”—the
geographical footprint of its territory across the multitude of
worlds. The trolls, who mostly get along with humans and work for
them, hear, through their “long call”, news from across the
worlds of scandalous mistreatment of their kind by humans
in some places, and now appear to have vanished from many human
settlements to parts unknown. A group of worlds in the American
Aegis in the distant West have adopted the Valhalla Declaration,
asserting their independence from the greedy and intrusive government
of the Datum and, in response, the Datum is sending a fleet of
stepping airships (or “twains”, named for the Mark
Twain of the first novel) to assert its authority over these
recalcitrant emigrants. Joshua and Sally Linsay, pioneer explorers,
return to the Datum to make their case for the rights of trolls. China
mounts an ambitious expedition to the unseen worlds of its footprint
in the Far East.
And so it goes, for more than four hundred pages. This really
isn't a novel at all, but rather four or five novellas
interleaved with one another, where the individual stories
barely interact before most of the characters
meet at a barbecue in the next-to-last chapter. When I put down
The Long Earth, I concluded that the authors had
created a stage in which all kinds of fiction could play
out and looked forward to seeing what they'd do with it. What a
disappointment! There are a few interesting concepts, such as
evolutionary consequences of travel between parallel Earths and
technologies which oppressive regimes use to keep their subjects
from just stepping away to freedom, but they are few and far
between. There is no war! If you're going to title
your book The Long War, many readers are going to
expect one, and it doesn't happen. I can recall only two
laugh-out-loud lines in the entire book, which is hardly what
you expect when picking up a book with Terry Pratchett's name on
the cover. I shall not be reading the remaining books in the
series, which, if Amazon reviews are to be believed, go downhill from
here.