Books by Walton, David
- Walton, David.
Three Laws Lethal.
Jersey City, NJ: Pyr, 2019.
ISBN 978-1-63388-560-8.
In the near future, autonomous vehicles, “autocars”,
are available from a number of major automobile manufacturers.
The self-driving capability, while not infallible, has been
approved by regulatory authorities after having demonstrated
that it is, on average, safer than the population of human
drivers on the road and not subject to human frailties such as
driving under the influence of alcohol or drugs, while tired, or
distracted by others in the car or electronic gadgets. While
self-driving remains a luxury feature with which a minority of
cars on the road are equipped, regulators are confident that as
it spreads more widely and improves over time, the highway
accident rate will decline.
But placing an algorithm and sensors in command of a vehicle
with a mass of more than a tonne hurtling down the road at 100
km per hour or faster is not just a formidable technical
problem, it is one with serious and unavoidable moral
implications. These come into stark focus when, in an incident
on a highway near Seattle, an autocar swerves to avoid a tree
crashing down on the highway, hitting and killing a motorcyclist
in an adjacent lane, of whom the car's sensors must have been
aware. The car appears to have made a choice, valuing the lives
of its passengers (a mother and her two children) over that of
the motorcyclist. What really happened, and how the car
decided what to do in that split-second, is opaque, because the
software controlling it was, as all such software, proprietary
and closed to independent inspection and audit by third
parties. It's one thing to acknowledge that self-driving
vehicles are safer, as a whole, than those with humans behind
the wheel, but entirely another to cede to them the moral agency
of life and death on the highway. Should an autocar value the
lives of its passengers over those of others? What if there
were a sole passenger in the car and two on the motorcycle? And
who is liable for the death of the motorcyclist: the auto
manufacturer, the developers of the software, the owner of the car,
the driver who switched it into automatic mode, or the
regulators who approved its use on public roads? The case was
headed for court, and all would be watching the precedents it
might establish.
Tyler Daniels and Brandon Kincannon, graduate students in the
computer science department of the University of Pennsylvania,
were convinced they could do better. The key was going beyond
individual vehicles which tried to operate autonomously based
upon what their own sensors could glean from their immediate
environment, toward an architecture where vehicles communicated
with one another and coordinated their activities. This would
allow vehicles to share information over a wider area and avoid
accidents caused by individual vehicles acting without knowledge
of the actions of others. Further, they wanted to
re-architect individual ground transportation from a model of
individually-owned and operated vehicles to transportation as a
service, where customers would summon an autocar on demand with
their smartphone, with the vehicle network dispatching the
closest free car to their location. This would dramatically
change the economics of personal transportation. The typical private
car spends twenty-two out of twenty-four hours parked, taking up
a parking space and depreciating as it sits idle. The
transportation service autocar would be in constant service
(except for downtime for maintenance, refuelling, and times of
reduced demand), generating revenue for its operator. An angel
investor believes their story and, most importantly, believes in
them sufficiently to write a check for the initial demonstration
phase of their project, and they set to work.
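The dispatch model described above, in which the network sends the closest free car to each request, can be sketched in a few lines (the names, data layout, and straight-line distance metric here are illustrative assumptions, not details from the book):

```python
import math

def dispatch_closest(request, fleet):
    """Return the nearest free car to a ride request.

    `request` is an (x, y) position; `fleet` is a list of dicts with
    'id', 'pos' (an (x, y) tuple), and 'free' (bool).  Straight-line
    distance stands in for real road travel time.
    """
    free_cars = [car for car in fleet if car["free"]]
    if not free_cars:
        return None  # no car available; a real system would queue the request
    return min(free_cars, key=lambda car: math.dist(car["pos"], request))

fleet = [
    {"id": "A", "pos": (0.0, 0.0), "free": True},
    {"id": "B", "pos": (5.0, 5.0), "free": True},
    {"id": "C", "pos": (1.0, 1.0), "free": False},  # already carrying a rider
]
print(dispatch_closest((1.5, 1.5), fleet)["id"])  # → A, the closest free car
```

A production dispatcher would of course use road-network travel times and predicted demand rather than Euclidean distance, which is exactly the kind of anticipation Black Knight later excels at.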
Their team consists of Tyler and Brandon, plus Abby and Naomi
Sumner, sisters who differ in almost every way: Abby outgoing
and vivacious, with an instinct for public relations and
marketing, and Naomi the super-nerd, verging on being “on
the spectrum”. The big day of the public roll-out of the
technology arrives, and ends in disaster, killing Abby in what
was supposed to be a demonstration of the system's inherent
safety. The disaster puts an end to the venture and the
surviving principals go their separate ways. Tyler signs on as
a consultant and expert witness for the lawyers bringing the
suit on behalf of the motorcyclist killed in Seattle, using the
exposure to advocate for open source software being a
requirement for autonomous vehicles. Brandon uses money
inherited after the death of his father to launch a new venture,
Black Knight, offering transportation as a service initially in
the New York area and then expanding to other cities. Naomi,
whose university experiment in genetic software implemented as
non-player characters (NPCs) in a virtual world was the
foundation of the original venture's software, sees Black Knight
as a way to preserve the world and beings she has created as
they develop and require more and more computing resources.
Characters in the virtual world support themselves and compete
by driving Black Knight cars in the real world, and as
generation follows generation and natural selection works its
wonders, customers and competitors are amazed at how Black
Knight vehicles anticipate the needs of their users and maintain
an unequalled level of efficiency.
Tyler leverages his recognition from the trial into a new
self-driving venture based on open source software called
“Zoom”, which spreads across the U.S. west coast and
eventually comes into competition with Black Knight in the
east. Somehow, Zoom's algorithms, despite being open and having
a large community contributing to their development, never seem
able to equal the service provided by Black Knight, which is so
secretive that even Brandon, the CEO, doesn't know how Naomi's
software does it.
In approaching any kind of optimisation problem such as
scheduling a fleet of vehicles to anticipate and respond to
real-time demand, a key question is choosing the
“objective function”: how the performance of the
system is evaluated based upon the stated goals of its
designers. This is especially crucial when the optimisation is
applied to a system connected to the real world. Recall the
parable of the “Clippy Apocalypse”, in which an artificial
intelligence, put in charge of a paperclip factory and trained
to maximise the production of paperclips, escapes into the wild
and eventually converts first its home planet, then the rest of
the solar system, and finally the entire visible universe into
paper clips. The system worked as designed, but the objective
function was poorly chosen.
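The point can be made concrete with a toy example (entirely illustrative; nothing here is from the novel): a greedy optimiser told only to maximise output will happily consume every resource it can reach, while one whose objective function also penalises resource depletion stops on its own.

```python
def run_factory(objective, steps=10):
    """Greedy one-step optimiser: each step, pick the successor state
    that scores best under the given objective function."""
    state = {"clips": 0, "resources": 100}
    for _ in range(steps):
        candidates = []
        for convert in (0, 5, 10):  # units of resource to turn into clips
            if convert <= state["resources"]:
                candidates.append({"clips": state["clips"] + convert,
                                   "resources": state["resources"] - convert})
        state = max(candidates, key=objective)
    return state

# Objective 1: maximise clips, full stop.
naive = run_factory(lambda s: s["clips"])
# Objective 2: maximise clips, but penalise drawing resources below 60.
bounded = run_factory(lambda s: s["clips"] - 2 * max(0, 60 - s["resources"]))
print(naive)    # → {'clips': 100, 'resources': 0}: everything converted
print(bounded)  # → {'clips': 40, 'resources': 60}: stops at the threshold
```

Both runs behave exactly as specified; only the objective function differs, which is the whole moral of the parable.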
Naomi's NPCs literally (or virtually) lived or died based upon
their ability to provide transportation service to Black
Knight's customers, and natural selection, running at the
accelerated pace of the simulation they inhabited, relentlessly
selected them with the objective of improving their service and
expanding Black Knight's market. To the extent that, within
their simulation, they perceived opposition to these goals, they
would act to circumvent it, whatever it took.
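How selection pressure of this kind shapes behaviour can be sketched with a minimal evolutionary loop (purely illustrative; it resembles nothing of Naomi's actual software): agents whose single "strategy" number scores better on a service objective leave more descendants, and over generations the population drifts toward the optimum without anyone having designed the winning strategy.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def service_score(strategy):
    """Toy objective: service quality peaks when strategy == 0.7."""
    return -abs(strategy - 0.7)

def evolve(pop_size=50, generations=40, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by the service objective; the better half survives.
        population.sort(key=service_score, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors reproduce with small random mutations.
        children = [s + random.gauss(0, mutation) for s in survivors]
        population = survivors + children
    return sum(population) / len(population)

mean_strategy = evolve()
print(round(mean_strategy, 2))  # converges close to the optimum of 0.7
```

The unsettling part, which the novel exploits, is that the same loop optimises just as relentlessly when "service" quietly stops being the only thing the agents care about.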
This sets the stage for one of the more imaginative tales of how
artificial general intelligence might arrive through the back
door: not designed in a laboratory but emerging through the
process of evolution in a complex system subjected to real-world
constraints and able to operate in the real world. The moral
dimensions of this go well beyond the trolley
problem often cited in connection with autonomous vehicles,
dealing with questions of whether artificial intelligences we
create for our own purposes are tools, servants, or slaves, and
what happens when their purposes diverge from those for which we
created them.
This is a techno-thriller, with plenty of action in the
conclusion of the story, but also a cerebral exploration of the
moral questions which something as seemingly straightforward and
beneficial as autonomous vehicles may pose in the future.
December 2019