Hawkins, Jeff, with Sandra Blakeslee.
On Intelligence.
New York: Times Books, 2004.
ISBN 0-8050-7456-2.
Ever since the early days of research into the sub-topic
of computer science which styles itself “artificial
intelligence”, such work has been criticised by philosophers,
biologists, and neuroscientists who argue that while
symbolic manipulation, database retrieval, and logical
computation may be able to mimic, to some limited extent,
the behaviour of an intelligent being, in no case does
the computer understand the problem it is solving
in the sense a human does. John R. Searle's
“Chinese
Room” thought experiment is one of the best known
and most extensively debated of these criticisms, but there are many
others just as cogent and difficult to refute.
These days, criticising artificial intelligence verges on
hunting cows with a bazooka. In the early days of the 1950s,
everybody expected the world chess championship to be held by a
computer within five or ten years, and mathematicians fretted over
what they'd do with their lives once computers learnt to discover
and prove theorems thousands of times faster than they could. Since
then, decades of hype, fads, disappointment, and broken promises
have instilled some sense of reality into the expectations
most technical people have for “AI”, if not into those
working in the field and those they bamboozle with the sixth
(or is it the sixteenth?) generation of AI bafflegab.
AI researchers sometimes defend their field by saying “If it
works, it isn't AI”, by which they mean that as soon as a
difficult problem once considered within the domain of
artificial intelligence—optical character recognition,
playing chess at the grandmaster level, recognising faces in
a crowd—is solved, it's no longer considered AI but simply
another computer application, leaving AI with the remaining
unsolved problems. There is certainly some truth in this, but
a closer look gives the lie to the claim that these solutions,
achieved with enormous effort on the part of numerous researchers, and
with the application, in most cases, of computing power undreamed
of in the early days of AI, actually represent “intelligence”,
or at least what one regards as intelligent behaviour on the part of
a living brain.
First of all, in no case did a computer “learn” how to
solve these problems in the way a human or other organism does; in
every case experts analysed the specific problem domain in great detail,
developed special-purpose solutions tailored to the problem, and then
implemented them on computing hardware which in no way resembles the
human brain. Further, each of these “successes” of AI
is useless outside its narrow scope of application: a chess-playing computer
cannot read handwriting, a speech recognition program cannot identify
faces, and a natural language query program cannot solve mathematical
“word problems” which pose no difficulty to fourth graders.
And while many of these programs are said to be “trained” by
presenting them with collections of stimuli and desired responses,
no amount of such training will permit, say, an optical character
recognition program to learn to write limericks. Such programs
can certainly be useful, but nothing other than the fact that they
solve problems which were once considered difficult in an age when
computers were much slower and had limited memory resources justifies
calling them “intelligent”, and outside the marketing
department, few people would remotely consider them so.
The subject of this ambitious book is not “artificial intelligence”
but intelligence: the real thing, as manifested in the higher
cognitive processes of the mammalian brain, embodied, by all
the evidence, in the neocortex. One of the most fascinating things
about the neocortex is how much a creature can do without one,
for only mammals have one. Reptiles, birds, amphibians,
fish, and even insects (which barely have a brain at all) exhibit
complex behaviour, perception of and interaction with their
environment, and adaptation to an extent which puts to shame the
much-vaunted products of “artificial intelligence”, and
yet they all do so without a neocortex at all. In this book, the author
hypothesises that the neocortex evolved in mammals as an add-on
to the old brain (essentially, what computer architects would call a
“bag hanging on the side of the old machine”). It implements
a multi-level hierarchical associative memory for patterns and a
complementary decoder from patterns back to detailed low-level
behaviour. Wired through the old brain to the sensory inputs and
motor controls, this hierarchy dynamically learns spatial and temporal
patterns and uses them to make predictions, which are fed back to the
lower levels of the hierarchy; these in turn signal whether further
inputs confirm or contradict them. The ability of the high-level cortex to
correctly predict inputs is what we call “understanding”
and it is something which no computer program is presently capable of
doing in the general case.
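To make the flavour of this scheme concrete, here is a toy sketch in
Python of a memory-prediction loop: each level memorises the transitions
between patterns it has seen, predicts the next input from the last one,
and passes upward only what it failed to predict. This is my own
illustration of the general idea, not the author's model; the class
names and the “escalate the surprise” encoding are invented for the
example.

    # Toy sketch of a memory-prediction loop. My own illustration of the
    # general idea, not the author's model: each level memorises transitions
    # between the patterns it has seen, checks whether the current input was
    # predicted from the last one, and passes upward only what it failed
    # to predict.

    from collections import defaultdict

    class Level:
        def __init__(self, name):
            self.name = name
            self.successors = defaultdict(set)  # pattern -> observed successors
            self.previous = None

        def observe(self, pattern):
            """Return True if this pattern was predicted from the previous one."""
            predicted = (self.previous is not None
                         and pattern in self.successors[self.previous])
            if self.previous is not None:
                self.successors[self.previous].add(pattern)  # learn the transition
            self.previous = pattern
            return predicted

    class Hierarchy:
        def __init__(self, depth):
            self.levels = [Level("level%d" % i) for i in range(depth)]

        def sense(self, pattern):
            """Feed a pattern in at the bottom; unpredicted input climbs upward."""
            for level in self.levels:
                if level.observe(pattern):
                    return level.name             # prediction confirmed at this level
                pattern = ("novel", level.name, pattern)  # escalate the surprise
            return None                           # nothing predicted it: pure novelty

    h = Hierarchy(depth=3)
    for symbol in "abcabcabcab":
        print(symbol, "->", h.sense(symbol))
    # After a few repetitions the bottom level predicts the sequence by
    # itself, and ever less "surprise" propagates up the hierarchy.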
Much of the recent and present-day work in neuroscience has been
devoted to imaging where the brain processes various kinds of
information. While fascinating and useful, these investigations may
overlook one of the most striking things about the neocortex: that
almost every part of it, whether devoted to vision, hearing,
touch, speech, or motion, appears to have more or less the same
structure. This observation, by Vernon B. Mountcastle in 1978,
suggests there may be a common cortical algorithm by
which all of these seemingly disparate forms of processing
are done. Consider: by the time sensory inputs reach the brain,
they are all in the form of spikes transmitted by neurons, and all
outputs are sent in the same form, regardless of their ultimate
effect. Further, evidence of plasticity in the cortex is abundant:
in cases of damage, the brain seems to be able to re-wire itself to
transfer a function to a different region of the cortex. In a long
(70-page) chapter, the author presents a sketchy model of what
such a common cortical algorithm might be, and how it may be implemented
within the known physiological structure of the cortex.
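As a gloss on why Mountcastle's observation is so suggestive, consider
that once everything is encoded as the same kind of pattern, a single
learning algorithm needs no notion of modality at all. The sketch below
(my own, with invented encodings standing in for spike trains) applies
one identical sequence learner to a “visual” and an “auditory” stream:

    # One algorithm, many senses: the learner below knows nothing about
    # vision or hearing. Whatever the modality, input arrives as the same
    # kind of pattern (tuples here stand in for trains of neural spikes,
    # an encoding invented for this illustration).

    from collections import defaultdict

    class SequenceMemory:
        """Modality-agnostic: learns which pattern tends to follow which."""
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.previous = None

        def step(self, pattern):
            if self.previous is not None:
                self.counts[self.previous][pattern] += 1
            self.previous = pattern

        def predict(self):
            """Most frequently observed successor of the last pattern seen."""
            followers = self.counts.get(self.previous)
            return max(followers, key=followers.get) if followers else None

    # The identical algorithm serves a "visual" and an "auditory" stream.
    vision, audition = SequenceMemory(), SequenceMemory()
    for frame in [("edge", 0), ("edge", 45), ("edge", 90)] * 3:
        vision.step(frame)
    for tone in [("hz", 440), ("hz", 660), ("hz", 880)] * 3:
        audition.step(tone)

    print(vision.predict())    # ("edge", 0): the cycle of orientations continues
    print(audition.predict())  # ("hz", 440): the cycle of tones continues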
The author is a founder of
Palm Computing and
Handspring (which was subsequently acquired by Palm).
He later founded the Redwood Neuroscience Institute, which
has now become part of the
Helen Wills Neuroscience
Institute at the University of California, Berkeley,
and in March of 2005 founded
Numenta, Inc. with the
goal of developing computer memory systems based on the model
of the neocortex presented in this book.
Some academic scientists may sniff at the pretensions of a (very
successful) entrepreneur diving into their speciality and trying to
figure out how the brain works at a high level. But, hey, nobody
else seems to be doing it—the computer scientists are
hacking away at their monster programs and parallel machines, the
brain community seems stuck on functional imaging (like trying to
reverse-engineer a
microprocessor in the nineteenth century by looking at its gross
chemical and electrical properties), and the neuron experts are off
dissecting squid: none of these seem likely to lead to an
understanding (there's that word again!) of what's actually going on
inside their own tenured, taxpayer-funded skulls. There is
undoubtedly much that is wrong in the author's speculations, but then
he admits that from the outset and, admirably, presents an appendix
containing eleven testable predictions, the failure of any of which
would falsify all or part of his theory. I've long suspected that intelligence has
more to do with memory
than computation, so I'll confess to being predisposed toward the
arguments presented here, but I'd be surprised if any reader didn't
find themselves thinking
about their own thought processes in a different way after reading this
book. You won't find the answers to the mysteries of the brain here,
but at least you'll discover many of the questions worth pondering,
and perhaps an idea or two worth exploring with the vast computing
power at the disposal of individuals today and the boundless resources
of data in all forms available on the Internet.
December 2006