Friday, October 21, 2005
Reading List: The Singularity Is Near
- Kurzweil, Ray. The Singularity Is Near. New York: Viking, 2005. ISBN 0-670-03384-7.
What happens if Moore's Law--the annual doubling of computing power
at constant cost--just keeps on going? In this book,
inventor, entrepreneur, and futurist Ray Kurzweil extrapolates the
long-term faster-than-exponential growth (the exponent is itself
growing exponentially) in computing power to the point where the
computational capacity of the human brain is available for about
US$1000 (around 2020, he estimates), reverse engineering and
emulation of human brain structure permits machine intelligence
indistinguishable from that of humans as defined by the Turing test
(around 2030), and the subsequent (and he believes inevitable)
runaway growth in artificial intelligence leads to a technological
singularity around 2045, when US$1000 will purchase computing power
comparable to that of all presently-existing human brains and the new
intelligence created in that single year will be a billion times
greater than that of the entire intellectual heritage of human
civilisation prior to that date. He argues that the inhabitants of
this brave new world, having transcended biological computation in
favour of nanotechnological substrates "trillions of trillions of
times more capable", will remain human, having preserved their
essential identity and evolutionary heritage across this leap to
Godlike intellectual powers. Then what? One might as well have asked
an ant to speculate on what newly-evolved hominids would end up
accomplishing, as the gap between ourselves and these super cyborgs
(some of the precursors of which the author argues are alive today)
is probably greater than between arthropod and anthropoid.
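To make the flavour of this extrapolation concrete, here is a toy calculation. All of the figures are my own illustrative assumptions rather than the book's exact numbers (Kurzweil's estimate of the brain's capacity is in the neighbourhood of 10^16 calculations per second): given a starting price-performance and an annual doubling, when does US$1000 buy a brain's worth of computation?

```python
import math

# Toy extrapolation of annually doubling price-performance.
# All figures are illustrative assumptions, not the book's exact numbers.
BRAIN_CPS = 1e16               # rough calculations/second of a human brain
CPS_PER_KILOBUCK_2005 = 1e10   # assumed capacity US$1000 bought in 2005
DOUBLING_TIME_YEARS = 1.0      # the "annual doubling" described above

doublings = math.log2(BRAIN_CPS / CPS_PER_KILOBUCK_2005)
year = 2005 + doublings * DOUBLING_TIME_YEARS
print(f"~{doublings:.0f} doublings: US$1000 buys brain-scale computing around {year:.0f}")
```

Note how insensitive the result is to quibbles: change either capacity assumption by a factor of ten and the date moves by only a little over three years, since a factor of ten is just log2(10) ≈ 3.3 doublings.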
Throughout this tour de force of boundless
technological optimism, one is impressed by the author's adamantine
intellectual integrity. This is not an advocacy document--in fact,
Kurzweil's view is that the events he envisions are essentially
inevitable given the technological, economic, and moral (curing
disease and alleviating suffering) dynamics driving them.
Potential roadblocks are discussed candidly, along with the
existential risks posed by the genetics, nanotechnology, and robotics
(GNR) revolutions which will set the stage for the singularity. A
chapter is devoted to responding to critics of various aspects of the
argument, in which opposing views are treated with respect.
I'm not going to expound further in great detail. Most people who read
these comments will probably read the book themselves (if they haven't
already) and make up their own minds about it.
If you are at all interested in the evolution of technology in this
century and its consequences for the humans who are creating it, this
is certainly a book you should read. The balance of these remarks
discusses various matters which came to mind as I read the book; they may
not make much sense unless you've read it (You are going to
read it, aren't you?), but may highlight things to reflect upon as you do.
- Switching off the simulation. Page 404 raises a somewhat arcane risk I've pondered at some length. Suppose our entire universe is a simulation run on some super-intelligent being's computer. (What's the purpose of the universe? It's a science fair project!) What should we do to avoid having the simulation turned off, which would be bad? Presumably, the most likely reason to stop the simulation is that it's become boring. Going through a technological singularity, either from the inside or from the outside looking in, certainly doesn't sound boring, so Kurzweil argues that working toward the singularity protects us, if we be simulated, from having our plug pulled. Well, maybe, but suppose the explosion in computing power accessible to the simulated beings (us) at the singularity exceeds that available to run the simulation? (This is plausible, since post-singularity computing rapidly approaches its ultimate physical limits.) Then one imagines some super-kid running top to figure out what's slowing down the First Superbeing Shooter game he's running and killing the CPU hog process. There are also things we can do which might increase the risk of the simulation's being switched off. Consider, as I've proposed, precision fundamental physics experiments aimed at detecting round-off errors in the simulation (manifested, for example, as small violations of conservation laws). Once the beings in the simulation twig to the fact that they're in a simulation and that their reality is no more accurate than double precision floating point, what's the point of letting it run? (A toy illustration of such round-off drift appears after these notes.)
- A hundred bits per atom? In the description of the computational capacity of a rock (p. 131), the calculation assumes that 100 bits of memory can be encoded in each atom of a disordered medium. I don't get it; even reliably storing a single bit per atom is difficult to envision. Using the "precise position, spin, and quantum state" of a large ensemble of atoms as mentioned on p. 134 seems highly dubious. (A back-of-envelope check of the scale involved appears after these notes.)
- Luddites. The risk from anti-technology backlash is discussed in some detail. ("Ned Ludd" himself joins in some of the trans-temporal dialogues.) One can imagine the next generation of anti-globalist demonstrators taking to the streets to protest the "evil corporations conspiring to make us all rich and immortal".
- Fundamentalism. Another risk is posed by fundamentalism, not so much of the religious variety, but rather fundamentalist humanists who perceive the migration of humans to non-biological substrates (at first by augmentation, later by uploading) as repellent to their biological conception of humanity. One is inclined, along with the author, simply to wait until these folks get old enough to need a hip replacement, pacemaker, or cerebral implant to reverse a degenerative disease; that should motivate them to recalibrate their definition of "purely biological". Still, I'm far from the first to observe that Singularitarianism (chapter 7) itself has some things in common with religious fundamentalism. In particular, it requires faith in rationality (which, as Karl Popper observed, cannot be rationally justified), and that the intentions of super-intelligent beings, as Godlike in their powers compared to humans as we are to Saccharomyces cerevisiae, will be benign and that they will receive us into eternal life and bliss. Haven't I heard this somewhere before? The main difference is that the Singularitarian doesn't just aspire to Heaven, but to Godhood Itself. One downside of this may be that God gets quite irate.
- Vanity. I usually try to avoid the "Washington read" (picking up a book and flipping immediately to the index to see if I'm in it), but I happened to notice in passing I made this one, for a minor citation in footnote 47 to chapter 2.
- Spindle cells. The material about "spindle cells" on pp. 191-194 is absolutely fascinating. These are very large, deeply and widely interconnected neurons which are found only in humans and a few great apes. Humans have about 80,000 spindle cells, while gorillas have 16,000, bonobos 2,100, and chimpanzees 1,800. If you're intrigued by what makes humans human, this looks like a promising place to start.
- Speculative physics. The author shares my interest in physics verging on the fringe, and, turning the pages of this book, we come across such topics as possible ways to exceed the speed of light, black hole ultimate computers, stable wormholes and closed timelike curves (a.k.a. time machines), baby universes, cold fusion, and more. Now, none of these things is in any way relevant to, nor necessary for, the advent of the singularity, which requires only well-understood mainstream physics. The speculative topics enter primarily in discussions of the ultimate limits on a post-singularity civilisation and the implications for the destiny of intelligence in the universe. In a way they may distract from the argument, since a reader might be inclined to dismiss the singularity as yet another woolly speculation, which it isn't.
- Source citations. The end notes contain many citations of articles in Wired, which I consider an entertainment medium rather than a reliable source of technological information. There are also references to articles in Wikipedia, where any idiot can modify anything any time they feel like it. I would not consider any information from these sources reliable unless independently verified from more scholarly publications.
- "You apes wanna live forever?" Kurzweil doesn't just anticipate the singularity, he hopes to personally experience it, to which end (p. 211) he ingests "250 supplements (pills) a day and . . . a half-dozen intravenous therapies each week". Setting aside the shots, just envision two hundred and fifty pills each and every day! That's 1,750 pills a week or, if you're awake sixteen hours a day, an average of more than 15 pills per waking hour, or one pill about every four minutes (one presumes they are swallowed in batches, not spaced out, which would make for a somewhat odd social life). Between the year 2000 and the estimated arrival of human-level artificial intelligence in 2030, he will swallow in excess of two and a half million pills, which makes one wonder what the probability of choking to death on any individual pill might be. He remarks, "Although my program may seem extreme, it is actually conservative--and optimal (based on my current knowledge)." Well, okay, but I'd worry about a "strategy for preventing heart disease [which] is to adopt ten different heart-disease-prevention therapies that attack each of the known risk factors" running into unanticipated interactions, given how everything in biology tends to connect to everything else. There is little discussion of the alternative approach to immortality with which many nanotechnologists of the mambo chicken persuasion are enamoured, which involves severing the heads of recently deceased individuals and freezing them in liquid nitrogen in sure and certain hope of the resurrection unto eternal life.