Biology

Barlow, Connie. The Ghosts of Evolution. New York: Basic Books, 2000. ISBN 0-465-00552-7.
Ponder the pit of the avocado; no, actually ponder it—hold it in your hand and get a sense of how big and heavy it is. Now consider that, thanks to its toughness, slick surface, and load of toxins, it was meant to be swallowed whole and deposited far from the tree in the dung of the animal which gulped down the entire fruit, pit and all. Just imagine the size of the gullet (and internal tubing) that requires—what on Earth, or more precisely, given the avocado's range, what in the Americas served to disperse these seeds prior to the arrival of humans some 13,000 years ago?

The Western Hemisphere was, in fact, prior to the great extinction at the end of the Pleistocene (coincident with the arrival of humans across the land bridge from Asia, and probably the result of their intensive hunting), home to a rich collection of megafauna: mammoths and mastodons, enormous ground sloths, camels, the original horses, and an armadillo as large as a bear, now all gone. Plants with fruit which doesn't seem to make any sense—which rots beneath the tree and isn't dispersed by any extant creature—may be the orphaned ecological partners of extinct species with which they co-evolved. Plants, particularly perennials and those which can reproduce clonally, evolve much more slowly than mammal and bird species, and may survive, albeit in a limited or spotty range, through secondary dispersers of their seeds (seed hoarders and predators, water, and gravity) long after the animal vectors their seeds evolved to employ have departed the scene. That is the fascinating premise of this book, which examines how enigmatic, apparently nonsensical fruit such as the osage orange, Kentucky coffee tree, honey locust, ginkgo, desert gourd, and others may be, figuratively, ripening their fruit every year waiting for the passing mastodon or megatherium which never arrives, some surviving because they are attractive, useful, and/or tasty to the talking apes who killed off the megafauna.

All of this is very interesting, and along the way one learns a great deal about the co-evolution of plants and their seed dispersal partners and predators—an endless arms race involving armour, chemical warfare (selective toxins and deterrents in pulp and seeds), stealth, and co-optation (burrs which hitch a ride on the fur of animals). However, this 250-page volume is basically an 85-page essay struggling to get out of the rambling, repetitious, self-indulgent, pretentious prose and unbridled speculations of the author, which results in a literary bolus as difficult to masticate as the seed pods of some of the plants described therein. This book desperately needed the attention of an editor ready to wield the red pencil, and Basic Books, generally a quality publisher of popularisations of science, dropped the ball (or, perhaps I should say, spit out the seed) here. The organisation of the text is atrocious—we encounter the same material over and over, frequently see technical terms such as indehiscent used four or five times before they are first defined, only to then endure a half-dozen subsequent definitions of the same word (a brief glossary of botanical terms would be a great improvement), and on occasion botanical jargon is used apparently because it rolls so majestically off the tongue or lends authority to the account—which authority is sorely lacking. While there is serious science and well-documented, peer-reviewed evidence for anachronism in certain fruits, Barlow uses the concept as a launching pad for wild speculation in which any apparent lack of perfect adaptation between a plant and its present-day environment is taken as evidence for an extinct ecological partner.

One of many examples is the suggestion on p. 164 that the fact that the American holly tree produces spiny leaves well above the level of any current browser (deer here, not Internet Exploder or Netscrape!) is evidence it evolved to defend itself against much larger herbivores. Well, maybe, but it may just be that a tree lacks the means to precisely measure the distance from the ground, and those which err on the side of safety are more likely to survive. The discussion of evolution throughout is laced with teleological and anthropomorphic metaphors which will induce teeth-grinding among Darwinists audible across a large lecture hall.

At the start of chapter 8, vertebrate paleontologist Richard Tedford is quoted as saying, “Frankly, this is not really science. You haven't got a way of testing any of this. It's more metaphysics.”—amen. The author tests the toxicity of ginkgo seeds by feeding them to squirrels in a park in New York City (“All the world seems in tune, on a spring afternoon…”), and the attractiveness of maggot-ridden overripe pawpaw fruit by leaving it outside her New Mexico trailer for frequent visitor Mrs. Foxie (you can't make up stuff like this) and, in the morning, it was gone! I recall a similar experiment from childhood involving milk, cookies, and flying reindeer; she does, admittedly, acknowledge that skunks or raccoons might have been responsible. There's an extended discourse on the possible merits of eating dirt, especially for pregnant women, then in the very next chapter the suggestion that the honey locust has “devolved” into the swamp locust, accompanied by an end note observing that a professional botanist expert in the genus considers this nonsense.

Don't get me wrong, there's plenty of interesting material here, and much to think about in the complex intertwined evolution of animals and plants, but this is a topic which deserves a more disciplined author and a better book.

June 2005 Permalink

Behe, Michael J., William A. Dembski, and Stephen C. Meyer. Science and Evidence for Design in the Universe. San Francisco: Ignatius Press, 2000. ISBN 0-89870-809-5.

March 2002 Permalink

Brink, Anthony. Debating AZT: Mbeki and the AIDS Drug Controversy. Pietermaritzburg, South Africa: Open Books, 2000. ISBN 0-620-26177-3.
I bought this volume in a bookshop in South Africa; none of the principal on-line booksellers have ever heard of it. The complete book is now available on the Web.

July 2002 Permalink

Cochran, Gregory and Henry Harpending. The 10,000 Year Explosion. New York: Basic Books, 2009. ISBN 978-0-465-00221-4.
“Only an intellectual could believe something so stupid” most definitely applies to the conventional wisdom among anthropologists and social scientists that human evolution somehow came to an end around 40,000 years ago with the emergence of modern humans and that differences among human population groups today are only “skin deep”: the basic physical, genetic, and cognitive toolkit of humans around the globe is essentially identical, with only historical contingency and cultural inheritance responsible for different outcomes.

To anybody acquainted with evolutionary theory, this should have been dismissed as ideologically motivated nonsensical propaganda on the face of it. Evolution is driven by changes and new challenges faced by a species as it moves into new niches and environments, adapts to environmental change, migrates and encounters new competition, and is afflicted by new diseases which select for those with immunity. Modern humans, in their expansion from Africa to almost every habitable part of the globe, have endured changes and challenges which dwarf those of almost any other metazoan species. It stands to reason, then, that the pace of human evolution, far from coming to a halt, would in fact accelerate dramatically, as natural selection was driven by the coming and going of ice ages, the development of agriculture and domestication of animals, spread of humans into environments inhospitable to their ancestors, trade and conquest resulting in the mixing of genes among populations, and numerous other factors.

Fortunately, we live in an age in which we need no longer speculate upon such matters. The ability to sequence the human genome and compare the lineage of genes in various populations has created the field of genetic anthropology, which is in the process of transforming what was once a “soft science” into a thoroughly quantitative discipline where theories can be readily falsified by evidence in the genome. This book has the potential of creating a phase transition in anthropology: it is a manifesto for the genomic revolution, and a few years from now anthropologists who ignore the kind of evidence presented here will be increasingly forgotten, publishing papers nobody reads because they neglect the irrefutable evidence of human history we carry in our genes.

The authors are very ambitious in their claims, and I'm sure that some years from now they will be seen to have overreached in some of them. But the central message will, I am confident, stand: human evolution has dramatically accelerated since the emergence of modern humans, and is being driven at an ever faster pace by the cultural and environmental changes humans are incessantly confronting. Further, human history cannot be understood without first acknowledging that the human populations which were the actors in it were fundamentally different. The conquest of the Americas by Europeans may well not have happened had not Europeans carried genes which protected them against the infectious diseases they also carried on their voyages of exploration and conquest. (By some estimates, indigenous populations in the Americas fell to 10% of their pre-contact levels, precipitating societal collapse.) Why do about half of all humans on Earth speak languages of the Indo-European group? Well, it may be because the obscure cattle herders from the steppes who spoke the ur-language happened to evolve a gene which made them lactose tolerant throughout adulthood, and hence were able to raise cattle for dairy products, which is five times as productive (measured by calories per unit area) as raising cattle for meat. While Europeans' immunity to disease served them well in their conquest of the Americas, their lack of immunity to diseases endemic in sub-Saharan Africa (in particular, falciparum malaria) rendered initial attempts to colonise that region disastrous.

The authors do not hesitate to speculate on possible genetic influences on events in human history, but their conjectures are based upon published genetic evidence, cited from primary sources in the extensive end notes. A number of these discussions may lead to the sound of skulls exploding among those wedded to the dominant academic dogma. The authors suggest that some of the genes which allowed modern humans emerging from Africa to prosper in northern climes were the result of cross-breeding with Neanderthals; that, just as domestication of animals results in neoteny, domestication of humans in agricultural societies and the consequent states has induced neotenous changes in “domesticated humans” which result in populations with a long history of living in agricultural societies adapting better to modern civilisation than those without that selection in their genetic heritage; and that the unique experience of selection for success in intellectually demanding professions and lack of interbreeding resulted in the emergence of the Ashkenazi Jews as a population whose mean intelligence exceeds that of all other human populations (as well as a prevalence of genetic diseases which appear linked to biochemical factors related to brain function).

There's an odd kind of doublethink present among many champions of evolutionary theory. While invoking evolution to explain even those aspects of the history of life on Earth where doing so involves what can only be called a “leap of faith”, they dismiss the self-evident consequences of natural selection on populations of their own species. Certainly, all humans constitute a single species: we can interbreed, and that's the definition. But all dogs and wolves can interbreed, yet nobody would say that there is no difference between a Great Dane and a Dachshund. Largely isolated human populations have been subjected to unique selective pressures from their environment, diet, diseases, conflict, culture, and competition, and it's nonsense to argue that these challenges did not drive selection of adaptive alleles among the population.

This book is a welcome shot across the bow of the “we're all the same” anthropological dogma, and provides a guide to the discoveries to be made as comparative genetics lays a firm scientific foundation for anthropology.

May 2009 Permalink

Darling, David J. Life Everywhere: The Maverick Science of Astrobiology. New York: Basic Books, 2001. ISBN 0-465-01563-8.

October 2001 Permalink

Dembski, William A. No Free Lunch. Lanham, MD: Rowman & Littlefield, 2002. ISBN 0-7425-1297-5.
It seems to be the rule that the softer the science, the more rigid and vociferously enforced the dogma. Physicists, confident of what they do know and cognisant of how much they still don't, have no problems with speculative theories of parallel universes, wormholes and time machines, and inconstant physical constants. But express the slightest scepticism about Darwinian evolution being the one, completely correct, absolutely established beyond a shadow of a doubt, comprehensive and exclusive explanation for the emergence of complexity and diversity in life on Earth, and outraged biologists run to the courts, the legislature, and the media to suppress the heresy, accusing those who dare to doubt their dogma of being benighted opponents of science seeking to impose a “theocracy”. Funny, I thought science progressed by putting theories to the test, and that all theories were provisional, subject to falsification by experimental evidence or replacement by a more comprehensive theory which explains additional phenomena and/or requires fewer arbitrary assumptions.

In this book, mathematician and philosopher William A. Dembski attempts to lay the mathematical and logical foundation for inferring the presence of intelligent design in biology. Note that “intelligent design” needn't imply divine or supernatural intervention—the “directed panspermia” theory of the origin of life proposed by co-discoverer of the structure of DNA and Nobel Prize winner Francis Crick is a theory of intelligent design which invokes no deity, and my perpetually unfinished work The Rube Goldberg Variations and the science fiction story upon which it is based involve searches for evidence of design in scientific data, not in scripture.

You certainly won't find any theology here. What you will find is logical and mathematical arguments which sometimes ascend (or descend, if you wish) into prose like (p. 153), “Thus, if P characterizes the probability of E₀ occurring and f characterizes the physical process that led from E₀ to E₁, then Pf⁻¹ characterizes the probability of E₁ occurring and P(E₀) ≤ Pf⁻¹(E₁) since f(E₀) = E₁ and thus E₀ ⊂ f⁻¹(E₁).” OK, I did cherry-pick that sentence from a particularly technical section which the author advises readers to skip if they're willing to accept the less formal argument already presented. Technical arguments are well-supplemented by analogies and examples throughout the text.

Dembski argues that what he terms “complex specified information” is conclusive evidence for the presence of design. Complexity (the Shannon information measure) is insufficient—all possible outcomes of flipping a coin 100 times in a row are equally probable—but presented with a sequence of all heads, all tails, alternating heads and tails, or a pattern in which heads occurred only for prime numbered flips, the evidence for design (in this case, cheating or an unfair coin) would be considered overwhelming. Complex information is considered specified if it is compressible in the sense of Chaitin-Kolmogorov-Solomonoff algorithmic information theory, which measures the randomness of a bit string by the length of the shortest computer program which could produce it. The overwhelming majority of 100 bit strings cannot be expressed more compactly than simply by listing the bits; the examples given above, however, are all highly compressible. This is the kind of measure, albeit not rigorously computed, which SETI researchers would use to identify a signal as of intelligent origin, which courts apply in intellectual property cases to decide whether similarity is accidental or deliberate copying, and archaeologists use to determine whether an artefact is of natural or human origin. Only when one starts asking these kinds of questions about biology and the origin of life does controversy erupt!
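The connection between compressibility and apparent design is easy to demonstrate informally. The Python sketch below (my own illustration, not Dembski's formal apparatus) uses zlib as a crude stand-in for algorithmic compressibility—a general-purpose compressor only gives an upper bound on Kolmogorov complexity, and poorly so for short strings, but the contrast between random and patterned sequences is stark enough to make the point:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude upper
    bound on its algorithmic (Kolmogorov) complexity."""
    return len(zlib.compress(bits.encode(), 9))

random.seed(42)
random_bits = "".join(random.choice("01") for _ in range(1000))
all_heads   = "1" * 1000
alternating = "10" * 500

for name, s in [("random", random_bits),
                ("all heads", all_heads),
                ("alternating", alternating)]:
    print(f"{name:12s} compresses to {compressed_size(s):4d} bytes")
```

The random string resists compression (its shortest description is essentially itself), while the patterned strings—the ones an observer would flag as "specified"—shrink to a handful of bytes.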

Chapter 3 proposes a “Law of Conservation of Information” which, if you accept it, would appear to rule out the generation of additional complex specified information by the process of Darwinian evolution. This would mean that while evolution can and does account for the development of resistance to antibiotics in bacteria and pesticides in insects, modification of colouration and pattern due to changes in environment, and all the other well-confirmed cases of the Darwinian mechanism, the innovation of entirely novel and irreducibly complex (see chapter 5) mechanisms such as the bacterial flagellum requires some external input of the complex specified information they embody. Well, maybe…but one should remember that conservation laws in science, unlike invariants in mathematics, are empirical observations which can be falsified by a single counter-example. Niels Bohr, for example, prior to its explanation due to the neutrino, theorised that the energy spectrum of nuclear beta decay could be due to a violation of conservation of energy, and his theory was taken seriously until ruled out by experiment.

Let's suppose, for the sake of argument, that Darwinian evolution does explain the emergence of all the complexity of the Earth's biosphere, starting with a single primordial replicating lifeform. Then one still must explain how that replicator came to be in the first place (since Darwinian evolution cannot work on non-replicating organisms), and where the information embodied in its molecular structure came from. The smallest present-day bacterial genomes belong to symbiotic or parasitic species, and are in the neighbourhood of 500,000 base pairs, or roughly 1 megabit of information. Even granting that the ancestral organism might have been much smaller and simpler, it is difficult to imagine a replicator capable of Darwinian evolution with an information content 1000 times smaller than these bacteria. Yet randomly assembling even 500 bits of precisely specified information seems to be beyond the capacity of the universe we inhabit. If you imagine every one of the approximately 10⁸⁰ elementary particles in the universe trying combinations every Planck interval, 10⁴⁵ times every second, it would still take about a billion times the present age of the universe to randomly discover a 500 bit pattern. Of course, there are doubtless many patterns which would work, but when you consider how conservative all the assumptions are which go into this estimate, and reflect upon the evidence that life seemed to appear on Earth just about as early as environmental conditions permitted it to exist, it's pretty clear that glib claims that evolution explains everything and there are just a few details to be sorted out are arm-waving at best and propaganda at worst, and that it's far too early to exclude any plausible theory which could explain the mystery of the origin of life.
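The orders of magnitude in that estimate are easy to check. The figures below are the round assumptions in question (particle count, trial rate, and universe age); the exact multiple of the universe's age you get depends on which trial rate you assume, but the conclusion survives any reasonable choice:

```python
# Back-of-the-envelope check of the 500-bit search argument.
# All inputs are rough, assumed round figures, not measured values.
particles      = 1e80     # elementary particles in the observable universe
trials_per_sec = 1e45     # assumed trials per particle per second
age_universe_s = 4.35e17  # ~13.8 billion years, in seconds

search_space = 2.0 ** 500                                 # distinct 500-bit patterns
trials_so_far = particles * trials_per_sec * age_universe_s

print(f"patterns       : {search_space:.2e}")             # ~3.3e150
print(f"trials so far  : {trials_so_far:.2e}")            # ~4.4e142
print(f"ages of the universe needed: {search_space / trials_so_far:.1e}")
```

With these inputs the exhaustive search takes on the order of 10⁷–10⁹ times the universe's present age; either way, the haystack dwarfs the search.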
Although there are many points in this book with which you may take issue, and it does not claim in any way to provide answers, it is valuable in understanding just how difficult the problem is and how many holes exist in other, more accepted, explanations. A clear challenge posed to purely naturalistic explanations of the origin of terrestrial life is to suggest a prebiotic mechanism which can assemble adequate specified information (say, 500 bits as the absolute minimum) to serve as a primordial replicator from the materials available on the early Earth in the time between the final catastrophic bombardment and the first evidence for early life.

May 2005 Permalink

Duesberg, Peter H. Inventing the AIDS Virus. Washington: Regnery, 1996. ISBN 0-89526-470-6.

June 2001 Permalink

Dyson, Freeman J. The Sun, the Genome, and the Internet. Oxford: Oxford University Press, 1999. ISBN 0-19-513922-4.
The text in this book is set in a hideous flavour of the Adobe Caslon font in which little curlicue ligatures connect the letter pairs “ct” and “st” and, in addition, the “ligatures” for “ff”, “fi”, “fl”, and “ft” lop off most of the bar of the “f”, leaving it looking like a droopy “l”. This might have been elegant for chapter titles, but it's way over the top for body copy. Dyson's writing, of course, more than redeems the bad typography, but you gotta wonder why we couldn't have had the former without the latter.

September 2003 Permalink

Dyson, Freeman J. Origins of Life. 2nd. ed. Cambridge: Cambridge University Press, 1999. ISBN 0-521-62668-4.
The years which followed Freeman Dyson's 1985 Tarner lectures, published in the first edition of Origins of Life that year, saw tremendous progress in molecular biology, including the determination of the complete nucleotide sequences of organisms ranging from E. coli to H. sapiens, and a variety of evidence indicating the importance of Archaea and the deep, hot biosphere to theories of the origin of life. In this extensively revised second edition, Dyson incorporates subsequent work relevant to his double-origin (metabolism first, replication later) hypothesis. It's perhaps indicative of how difficult the problem of the origin of life is that none of the multitude of experiments done in the almost 20 years since Dyson's original lectures has substantially confirmed or denied his theory nor answered any of the explicit questions he posed as challenges to experimenters.

March 2004 Permalink

Entine, Jon. Taboo. New York: PublicAffairs, 2000. ISBN 1-58648-026-X.

A certain segment of the dogma-based community of postmodern academics and their hangers-on seems to have no difficulty whatsoever believing that Darwinian evolution explains every aspect of the origin and diversification of life on Earth while, at the same time, denying that genetics—the mechanism which underlies evolution—plays any part in differentiating groups of humans. Doublethink is easy if you never think at all. Among those to whom evidence matters, here's a pretty astonishing fact to ponder. In the last four Olympic games prior to the publication of this book in the year 2000, there were thirty-two finalists in the men's 100-metre sprint. All thirty-two were of West African descent—a region which accounts for just 8% of the world's population. If finalists in this event were randomly chosen from the entire global population, the probability of this concentration occurring by chance is 0.08³² or about 8×10⁻³⁶, which is significant at the level of more than twelve standard deviations. The hardest of results in the flintiest of sciences—null tests of conservation laws and the like—are rarely significant above 7 to 8 standard deviations.
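Both the probability and its sigma equivalent are easy to reproduce. The conversion from a one-sided p-value to standard deviations via the normal inverse CDF is my own illustration of the claim, not a calculation from the book:

```python
from statistics import NormalDist

p_west_african = 0.08   # share of world population of West African descent
finalists = 32          # men's 100 m finalists in the four Olympiads

# Probability of all 32 finalists coming from that group by chance
p = p_west_african ** finalists
print(f"p = {p:.1e}")   # ~7.9e-36

# Equivalent one-sided significance in standard deviations
z = -NormalDist().inv_cdf(p)
print(f"significance: {z:.1f} sigma")
```

The inverse CDF puts the result at roughly 12.4 standard deviations, matching the "more than twelve" in the text.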

Now one can certainly imagine any number of cultural and other non-genetic factors which predispose those with West African ancestry toward world-class performance in sprinting, but twelve standard deviations? The fact that running is something all humans do without being taught, and that training for running doesn't require any complicated or expensive equipment (as opposed to sports such as swimming, high-diving, rowing, or equestrian events), and that champions of West African ancestry hail from countries around the world, should suggest a genetic component to all but the most blinkered of blank slaters.

Taboo explores the reality of racial differences in performance in various sports, and the long and often sordid entangled histories of race and sports, including the tawdry story of race science and eugenics, over-reaction to which has made most discussion of human biodiversity, as the title of the book says, taboo. The equally forbidden subject of inherent differences in male and female athletic performance is delved into as well, with a look at the hormone dripping “babes from Berlin” manufactured by the cruel and exploitive East German sports machine before the collapse of that dismal and unlamented tyranny.

Those who know some statistics will have no difficulty understanding what's going on here—the graph on page 255 tells the whole story. I wish the book had gone into a little more depth about the phenomenon of a slight shift in the mean performance of a group—much smaller than individual variation—causing a huge difference in the number of group members found in the extreme tail of a normal distribution. Another valuable, albeit speculative, insight is that if one supposes that there are genes which confer advantage to competitors in certain athletic events, then given the intense winnowing process world-class athletes pass through before they reach the starting line at the Olympics, it is plausible all of them at that level possess every favourable gene, and that the winner is determined by training, will to win, strategy, individual differences, and luck, just as one assumed before genetics got mixed up in the matter. It's just that if you don't have the genes (just as if your legs aren't long enough to be a runner), you don't get anywhere near that level of competition.
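The tail phenomenon is simple to demonstrate numerically: shift one of two otherwise identical normal distributions by just half a standard deviation and compare how many members of each clear an extreme cutoff. The 0.5 SD shift and the cutoffs below are arbitrary illustrative values, not figures from the book:

```python
from statistics import NormalDist

# Two populations with identical spread; group B's mean is shifted
# by a mere 0.5 standard deviation relative to group A's.
a = NormalDist(mu=0.0, sigma=1.0)
b = NormalDist(mu=0.5, sigma=1.0)

for cutoff in (2, 3, 4, 5):          # "elite" thresholds, in SDs of group A
    frac_a = 1 - a.cdf(cutoff)       # fraction of A beyond the cutoff
    frac_b = 1 - b.cdf(cutoff)       # fraction of B beyond the cutoff
    print(f"beyond {cutoff} SD: B outnumbers A by {frac_b / frac_a:.1f} to 1")
```

A difference in means far smaller than the variation between individuals produces a modest imbalance among the merely good and an overwhelming one among the world class—which is exactly where Olympic finalists live.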

Unless research in these areas is suppressed due to an ill-considered political agenda, it is likely that the key genetic components of athletic performance will be identified in the next couple of decades. Will this mean that world-class athletic competition can be replaced by DNA tests? Of course not—it's just that one factor in the feedback loop of genetic endowment, cultural reinforcement of activities in which group members excel, and the individual striving for excellence which makes competitors into champions will be better understood.

May 2005 Permalink

Gamow, George. One, Two, Three…Infinity. Mineola, NY: Dover, [1947] 1961. rev. ed. ISBN 0-486-25664-2.
This book, which I first read at around age twelve, rekindled my native interest in mathematics and science which had, by then, been almost entirely extinguished by six years of that intellectual torture called “classroom instruction”. Gamow was an eminent physicist: among other things, he advocated the big bang theory decades before it became fashionable, originated the concept of big bang nucleosynthesis, predicted the cosmic microwave background radiation 16 years before it was discovered, proposed the liquid drop model of the atomic nucleus, worked extensively in the astrophysics of energy production in stars, and even designed a nuclear bomb (“Greenhouse George”), which initiated the first deuterium-tritium fusion reaction here on Earth. But he was also one of the most talented popularisers of science in the twentieth century, with a total of 18 popular science books published between 1939 and 1967, including the Mr Tompkins series, timeless classics which inspired many of the science visualisation projects at this site, in particular C-ship. A talented cartoonist as well, 128 of his delightful pen and ink drawings grace this volume. For a work published in 1947 with relatively minor revisions in the 1961 edition, this book has withstood the test of time remarkably well—Gamow was both wise and lucky in his choice of topics. Certainly, nobody should consider this book a survey of present-day science, but for folks well-grounded in contemporary orthodoxy, it's a delightful period piece providing a glimpse of the scientific world view of almost a half-century ago as explained by a master of the art. This Dover paperback is an unabridged reprint of the 1961 revised edition.

September 2004 Permalink

Gold, Thomas. The Deep Hot Biosphere. New York: Copernicus, 1999. ISBN 0-387-98546-8.

March 2001 Permalink

Gordon, Deborah M. Ants at Work. New York: The Free Press, 1999. ISBN 0-684-85733-2.

January 2003 Permalink

Hawkins, Jeff with Sandra Blakeslee. On Intelligence. New York: Times Books, 2004. ISBN 0-8050-7456-2.
Ever since the early days of research into the sub-topic of computer science which styles itself “artificial intelligence”, such work has been criticised by philosophers, biologists, and neuroscientists who argue that while symbolic manipulation, database retrieval, and logical computation may be able to mimic, to some limited extent, the behaviour of an intelligent being, in no case does the computer understand the problem it is solving in the sense a human does. John R. Searle's “Chinese Room” thought experiment is one of the best known and extensively debated of these criticisms, but there are many others just as cogent and difficult to refute.

These days, criticising artificial intelligence verges on hunting cows with a bazooka—unlike the early days in the 1950s when everybody expected the world chess championship to be held by a computer within five or ten years and mathematicians were fretting over what they'd do with their lives once computers learnt to discover and prove theorems thousands of times faster than they, decades of hype, fads, disappointment, and broken promises have instilled some sense of reality into the expectations most technical people have for “AI”, if not into those working in the field and those they bamboozle with the sixth (or is it the sixteenth) generation of AI bafflegab.

AI researchers sometimes defend their field by saying “If it works, it isn't AI”, by which they mean that as soon as a difficult problem once considered within the domain of artificial intelligence—optical character recognition, playing chess at the grandmaster level, recognising faces in a crowd—is solved, it's no longer considered AI but simply another computer application, leaving AI with the remaining unsolved problems. There is certainly some truth in this, but a closer look gives the lie to the claim that these problems, solved with enormous effort on the part of numerous researchers, and with the application, in most cases, of computing power undreamed of in the early days of AI, actually represent “intelligence”, or at least what one regards as intelligent behaviour on the part of a living brain.

First of all, in no case did a computer “learn” how to solve these problems in the way a human or other organism does; in every case experts analysed the specific problem domain in great detail, developed special-purpose solutions tailored to the problem, and then implemented them on computing hardware which in no way resembles the human brain. Further, each of these “successes” of AI is useless outside its narrow scope of application: a chess-playing computer cannot read handwriting, a speech recognition program cannot identify faces, and a natural language query program cannot solve mathematical “word problems” which pose no difficulty to fourth graders. And while many of these programs are said to be “trained” by presenting them with collections of stimuli and desired responses, no amount of such training will permit, say, an optical character recognition program to learn to write limericks. Such programs can certainly be useful, but nothing other than the fact that they solve problems which were once considered difficult in an age when computers were much slower and had limited memory resources justifies calling them “intelligent”, and outside the marketing department, few people would remotely consider them so.

The subject of this ambitious book is not “artificial intelligence” but intelligence: the real thing, as manifested in the higher cognitive processes of the mammalian brain, embodied, by all the evidence, in the neocortex. One of the most fascinating things about the neocortex is how much a creature can do without one, for only mammals have them. Reptiles, birds, amphibians, fish, and even insects (which barely have a brain at all) exhibit complex behaviour, perception of and interaction with their environment, and adaptation to an extent which puts to shame the much-vaunted products of “artificial intelligence”, and yet they all do so without a neocortex at all. In this book, the author hypothesises that the neocortex evolved in mammals as an add-on to the old brain (essentially, what computer architects would call a “bag hanging on the side of the old machine”) which implements a multi-level hierarchical associative memory for patterns and a complementary decoder from patterns to detailed low-level behaviour which, wired through the old brain to the sensory inputs and motor controls, dynamically learns spatial and temporal patterns and uses them to make predictions which are fed back to the lower levels of the hierarchy, which in turn signal whether further inputs confirm or deny them. The ability of the high-level cortex to correctly predict inputs is what we call “understanding” and it is something which no computer program is presently capable of doing in the general case.
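The prediction-from-memory idea can be caricatured in a few lines of code. The toy model below is my own sketch—nothing like the hierarchical cortical machinery Hawkins actually proposes—but it captures the bare gist: memorise which input followed which, predict the most familiar successor, and register "surprise" when the prediction fails:

```python
from collections import defaultdict

class ToySequenceMemory:
    """Toy predictor: remembers which symbol followed which, predicts
    the most frequent successor, and reports surprise when the
    prediction fails.  A caricature of prediction-from-memory, not a
    model of the neocortex."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol: str) -> bool:
        """Feed in the next input; return True if it was a surprise."""
        surprised = False
        if self.prev is not None:
            followers = self.counts[self.prev]
            if followers:  # we have a prediction for what comes next
                predicted = max(followers, key=followers.get)
                surprised = (predicted != symbol)
            followers[symbol] += 1  # learn the observed transition
        self.prev = symbol
        return surprised

m = ToySequenceMemory()
pattern = list("ABCABCABC") + ["X"]   # a familiar loop, then a novelty
surprises = sum(m.observe(sym) for sym in pattern)
print(f"{surprises} surprise(s)")     # only the novel "X" surprises it
```

While the familiar loop repeats, every prediction is confirmed and nothing propagates upward; only the anomalous input demands attention—a one-symbol, one-level shadow of the confirm-or-deny feedback the book describes.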

Much of the recent and present-day work in neuroscience has been devoted to imaging where the brain processes various kinds of information. While fascinating and useful, these investigations may overlook one of the most striking things about the neocortex: that almost every part of it, whether devoted to vision, hearing, touch, speech, or motion, appears to have more or less the same structure. This observation, by Vernon B. Mountcastle in 1978, suggests there may be a common cortical algorithm by which all of these seemingly disparate forms of processing are done. Consider: by the time sensory inputs reach the brain, they are all in the form of spikes transmitted by neurons, and all outputs are sent in the same form, regardless of their ultimate effect. Further, evidence of plasticity in the cortex is abundant: in cases of damage, the brain seems to be able to re-wire itself to transfer a function to a different region of the cortex. In a long (70-page) chapter, the author presents a sketchy model of what such a common cortical algorithm might be, and how it may be implemented within the known physiological structure of the cortex.

The author is a founder of Palm Computing and Handspring (which was subsequently acquired by Palm). He went on to found the Redwood Neuroscience Institute, which has now become part of the Helen Wills Neuroscience Institute at the University of California, Berkeley, and in March of 2005 founded Numenta, Inc. with the goal of developing computer memory systems based on the model of the neocortex presented in this book.

Some academic scientists may sniff at the pretensions of a (very successful) entrepreneur diving into their speciality and trying to figure out how the brain works at a high level. But, hey, nobody else seems to be doing it—the computer scientists are hacking away at their monster programs and parallel machines, the brain community seems stuck on functional imaging (like trying to reverse-engineer a microprocessor in the nineteenth century by looking at its gross chemical and electrical properties), and the neuron experts are off dissecting squid: none of these seem likely to lead to an understanding (there's that word again!) of what's actually going on inside their own tenured, taxpayer-funded skulls. There is undoubtedly much that is wrong in the author's speculations, but then he admits that from the outset and, admirably, presents an appendix containing eleven testable predictions, each of which can falsify all or part of his theory. I've long suspected that intelligence has more to do with memory than computation, so I'll confess to being predisposed toward the arguments presented here, but I'd be surprised if any reader didn't find themselves thinking about their own thought processes in a different way after reading this book. You won't find the answers to the mysteries of the brain here, but at least you'll discover many of the questions worth pondering, and perhaps an idea or two worth exploring with the vast computing power at the disposal of individuals today and the boundless resources of data in all forms available on the Internet.

December 2006 Permalink

Kauffman, Stuart A. Investigations. New York: Oxford University Press, 2000. ISBN 0-19-512105-8.
Few people have thought as long and as hard about the origin of life and the emergence of complexity in a biosphere as Stuart Kauffman. Medical doctor, geneticist, professor of biochemistry and biophysics, MacArthur Fellow, and member of the faculty of the Santa Fe Institute for a decade, he has sought to discover the principles which might underlie a “general biology”—the laws which would govern any biosphere, whether terrestrial, extraterrestrial, or simulated within a computer, regardless of its physical substrate.

This book, which he describes on occasion as “protoscience”, provides an overview of the principles he suspects, but cannot prove, may underlie all forms of life, and beyond that systems in general which are far from equilibrium such as a modern technological economy and the universe itself. Most of science before the middle of the twentieth century studied complex systems at or near equilibrium; only at such states could the simplifying assumptions of statistical mechanics be applied to render the problem tractable. With computers, however, we can now begin to explore open systems (albeit far smaller than those in nature) which are far from equilibrium, have dynamic flows of energy and material, and do not necessarily evolve toward a state of maximum entropy.

Kauffman believes there may be what amounts to a fourth law of thermodynamics which applies to such systems. Although we don't know enough to state it precisely, he suspects it may be that these open, extremely nonergodic systems evolve as rapidly as possible to expand and fill their state space. Unlike, say, a gas in a closed volume or the stars in a galaxy, where the complete state space can be specified in advance (that is, the dimensionality of the space, not the precise position and momentum values of every object within it), the state space of a non-equilibrium system cannot be prestated, because its very evolution expands the state space. The presence of autonomous agents introduces another level of complexity and creativity, as evolution drives the agents to greater and greater diversity and complexity to better adapt to the ever-shifting fitness landscape.

These are complicated and deep issues, and this is a very difficult book, although appearing, at first glance, to be written for a popular audience. I seriously doubt whether somebody not previously acquainted with these topics, and who has not thought about them at some length, will make it to the end and, even if they do, take much away from the book. Those who are comfortable with the laws of thermodynamics, the genetic code, protein chemistry, catalysis, autocatalytic networks, Carnot cycles, fitness landscapes, hill-climbing strategies, the no-go theorem, error catastrophes, self-organisation, percolation phase transitions in graphs, and other technical issues raised in the arguments must still confront the author's prose style. Kauffman seems to aspire to be a prose stylist conveying a sense of wonder to his readers along the lines of Carl Sagan and Stephen Jay Gould. Unfortunately, he doesn't pull it off as well, and the reader must wade through numerous paragraphs like the following from pp. 97–98:

Does it always take work to construct constraints? No, as we will soon see. Does it often take work to construct constraints? Yes. In those cases, the work done to construct constraints is, in fact, another coupling of spontaneous and nonspontaneous processes. But this is just what we are suggesting must occur in autonomous agents. In the universe as a whole, exploding from the big bang into this vast diversity, are many of the constraints on the release of energy that have formed due to a linking of spontaneous and nonspontaneous processes? Yes. What might this be about? I'll say it again. The universe is full of sources of energy. Nonequilibrium processes and structures of increasing diversity and complexity arise that constitute sources of energy that measure, detect, and capture those sources of energy, build new structures that constitute constraints on the release of energy, and hence drive nonspontaneous processes to create more such diversifying and novel processes, structures, and energy sources.
I have not cherry-picked this passage; there are hundreds of others like it. Given the complexity of the technical material and the difficulty of the concepts being explained, it seems to me that the straightforward, unaffected Point A to Point B style of explanation which Isaac Asimov employed would work much better. Pardon my audacity, but allow me to rewrite the above paragraph.
Autonomous agents require energy, and the universe is full of sources of energy. But in order to do work, they require energy to be released under constraints. Some constraints are natural, but others are constructed by autonomous agents which must do work to build novel constraints. A new constraint, once built, provides access to new sources of energy, which can be exploited by new agents, contributing to an ever growing diversity and complexity of agents, constraints, and sources of energy.
Which is better? I rewrite; you decide. The tone of the prose is all over the place. In one paragraph he's talking about Tomasina the trilobite (p. 129) and Gertrude the ugly squirrel (p. 131), then the next thing you know it's “Here, the hexamer is simplified to 3'CCCGGG5', and the two complementary trimers are 5'GGG3' + 5'CCC3'. Left to its own devices, this reaction is exergonic and, in the presence of excess trimers compared to the equilibrium ratio of hexamer to trimers, will flow exergonically toward equilibrium by synthesizing the hexamer.” (p. 64). This flipping back and forth between colloquial and scholarly voices leads to a kind of comprehensional kinetosis. There are a few typographical errors, none serious, but I have to share this delightful one-sentence paragraph from p. 254 (ellipsis in the original):
By iteration, we can construct a graph connecting the founder spin network with its 1-Pachner move “descendants,” 2-Pachner move descendints…N-Pachner move descendents.
Good grief—is Oxford University Press outsourcing their copy editing to Slashdot?

For the reasons given above, I found this a difficult read. But it is an important book, bristling with ideas which will get you looking at the big questions in a different way, and speculating, along with the author, that there may be some profound scientific insights which science has overlooked to date sitting right before our eyes—in the biosphere, the economy, and this fantastically complicated universe which seems to have emerged somehow from a near-thermalised big bang. While Kauffman is the first to admit that these are hypotheses and speculations, not science, they are eminently testable by straightforward scientific investigation, and there is every reason to believe that if there are, indeed, general laws that govern these phenomena, we will begin to glimpse them in the next few decades. If you're interested in these matters, this is a book you shouldn't miss, but be aware what you're getting into when you undertake to read it.

February 2007 Permalink

Kaufman, Marc. First Contact. New York: Simon & Schuster, 2011. ISBN 978-1-4391-0901-4.
How many fields of science can you think of which study something for which there is no generally accepted experimental evidence whatsoever? Such areas of inquiry certainly exist: string theory and quantum gravity come immediately to mind, but those are research programs motivated by self-evident shortcomings in the theoretical foundations of physics which become apparent when our current understanding is extrapolated to very high energies. Astrobiology, the study of life in the cosmos, has, to date, only one exemplar to investigate: life on Earth. For despite the enormous diversity of terrestrial life, it shares a common genetic code and molecular machinery, and appears to be descended from a common ancestral organism.

And yet in the last few decades astrobiology has been a field which, although having not so far unambiguously identified extraterrestrial life, has learned a great deal about life on Earth, the nature of life, possible paths for the origin of life on Earth and elsewhere, and the habitats in the universe where life might be found. This book, by a veteran Washington Post science reporter, visits the astrobiologists in their native habitats, ranging from deep mines in South Africa, where organisms separated from the surface biosphere for millions of years have been identified; to Antarctica, whose ice hosts microbes the likes of which might flourish on the icy bodies of the outer solar system; to planet hunters patiently observing stars from the ground and space to discover worlds orbiting distant stars.

It is amazing how much we have learned in such a short time. When I was a kid, many imagined that Venus's clouds shrouded a world of steamy jungles, and that Mars had plants which changed colour with the seasons. No planet of another star had been detected, and respectable astronomers argued that the solar system might have been formed by a freak close approach between two stars and that planets might be extremely rare. The genetic code of life had not been decoded, and an entire domain of Earthly life, bearing important clues for life's origin, was unknown and unsuspected. This book describes the discoveries which have filled in the blanks over the last few decades, painting a picture of a galaxy in which planets abound, many in the “habitable zone” of their stars. Life on Earth has been found to have colonised habitats previously considered as inhospitable to life as other worlds: absence of oxygen, no sunlight, temperatures near freezing or above the boiling point of water, extreme acidity or alkalinity: life finds a way.

We may have already discovered extraterrestrial life. The author meets the thoroughly respectable scientists who operated the life detection experiments of the Viking Mars landers in the 1970s, sought microfossils of organisms in a meteorite from Mars found in Antarctica, and searched for evidence of life in carbonaceous meteorites. Each believes the results of their work are evidence of life beyond Earth, but the standard of evidence required for such an extraordinary claim has not been met in the opinion of most investigators.

While most astrobiologists seek evidence of simple life forms (which exclusively inhabited Earth for most of its history), the Search for Extraterrestrial Intelligence (SETI) jumps to the other end of evolution and seeks interstellar communications from other technological civilisations. While initial searches were extremely limited in the assumptions about signals they might detect, progress in computing has drastically increased the scope of these investigations. In addition, other channels of communication, such as very short optical pulses, are now being explored. While no signals have been detected in 50 years of off-and-on searching, only a minuscule fraction of the search space has been explored, and it may be that in retrospect we'll realise that we've had evidence of interstellar signals in our databases for years in the form of transient pulses not recognised because we were looking for narrowband continuous beacons.

Discovery of life beyond the Earth, whether humble microbes on other bodies of the solar system or an extraterrestrial civilisation millions of years older than our own spamming the galaxy with its ETwitter feed, would arguably be the most significant discovery in the history of science. If we have only one example of life in the universe, its origin may have been a forbiddingly improbable fluke which happened only once in our galaxy or in the entire universe. But if there are two independent examples of the origin of life (note that if we find life on Mars, it is crucial to determine whether it shares a common origin with terrestrial life: since meteors exchange material between the planets, it's possible Earth life originated on Mars or vice versa), then there is every reason to believe life is as common in the cosmos as we are now finding planets to be. Perhaps in the next few decades we will discover the universe to be filled with wondrous creatures awaiting our discovery. Or maybe not—we may be alone in the universe, in which case it is our destiny to bring it to life.

November 2013 Permalink

Koman, Victor. Solomon's Knife. Mill Valley, CA: Pulpless.Com, [1989] 1999. ISBN 1-58445-072-X.

November 2002 Permalink

Kurzweil, Ray. The Singularity Is Near. New York: Viking, 2005. ISBN 0-670-03384-7.
What happens if Moore's Law—the annual doubling of computing power at constant cost—just keeps on going? In this book, inventor, entrepreneur, and futurist Ray Kurzweil extrapolates the long-term faster than exponential growth (the exponent is itself growing exponentially) in computing power to the point where the computational capacity of the human brain is available for about US$1000 (around 2020, he estimates), reverse engineering and emulation of human brain structure permits machine intelligence indistinguishable from that of humans as defined by the Turing test (around 2030), and the subsequent (and he believes inevitable) runaway growth in artificial intelligence leading to a technological singularity around 2045 when US$1000 will purchase computing power comparable to that of all presently-existing human brains and the new intelligence created in that single year will be a billion times greater than that of the entire intellectual heritage of human civilisation prior to that date. He argues that the inhabitants of this brave new world, having transcended biological computation in favour of nanotechnological substrates “trillions of trillions of times more capable” will remain human, having preserved their essential identity and evolutionary heritage across this leap to Godlike intellectual powers. Then what? One might as well have asked an ant to speculate on what newly-evolved hominids would end up accomplishing, as the gap between ourselves and these super cyborgs (some of the precursors of which the author argues are alive today) is probably greater than between arthropod and anthropoid.
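The projections rest on simple compound-doubling arithmetic. Here is a rough sketch; the starting capacity, the estimate of the brain's computational capacity, and the annual doubling rate are all assumptions in the spirit of Kurzweil's extrapolations, not established facts.

```python
import math

def years_to_reach(target_ops, start_ops, doubling_time_years=1.0):
    """Years of doubling needed for capacity to grow from start to target."""
    return doubling_time_years * math.log2(target_ops / start_ops)

# Illustrative figures: roughly 10**11 operations/second per $1000 at the
# time of the book's writing, and roughly 10**16 operations/second as one
# common (and contested) estimate of the human brain's capacity.
years = years_to_reach(1e16, 1e11)   # about 16.6 years of annual doubling
```

The point of such a sketch is only that, under sustained exponential growth, even a five-order-of-magnitude gap closes in well under two decades; the conclusion is entirely hostage to the doubling assumption holding.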

Throughout this tour de force of boundless technological optimism, one is impressed by the author's adamantine intellectual integrity. This is not an advocacy document—in fact, Kurzweil's view is that the events he envisions are essentially inevitable given the technological, economic, and moral (curing disease and alleviating suffering) dynamics driving them. Potential roadblocks are discussed candidly, along with the existential risks posed by the genetics, nanotechnology, and robotics (GNR) revolutions which will set the stage for the singularity. A chapter is devoted to responding to critics of various aspects of the argument, in which opposing views are treated with respect.

I'm not going to expound further in great detail. I suspect a majority of people who read these comments will, in all likelihood, read the book themselves (if they haven't already) and make up their own minds about it. If you are at all interested in the evolution of technology in this century and its consequences for the humans who are creating it, this is certainly a book you should read. The balance of these remarks discuss various matters which came to mind as I read the book; they may not make much sense unless you've read it (You are going to read it, aren't you?), but may highlight things to reflect upon as you do.

  • Switching off the simulation. Page 404 raises a somewhat arcane risk I've pondered at some length. Suppose our entire universe is a simulation run on some super-intelligent being's computer. (What's the purpose of the universe? It's a science fair project!) What should we do to avoid having the simulation turned off, which would be bad? Presumably, the most likely reason to stop the simulation is that it's become boring. Going through a technological singularity, either from the inside or from the outside looking in, certainly doesn't sound boring, so Kurzweil argues that working toward the singularity protects us, if we be simulated, from having our plug pulled. Well, maybe, but suppose the explosion in computing power accessible to the simulated beings (us) at the singularity exceeds that available to run the simulation? (This is plausible, since post-singularity computing rapidly approaches its ultimate physical limits.) Then one imagines some super-kid running top to figure out what's slowing down the First Superbeing Shooter game he's running and killing the CPU hog process. There are also things we can do which might increase the risk of the simulation's being switched off. Consider, as I've proposed, precision fundamental physics experiments aimed at detecting round-off errors in the simulation (manifested, for example, as small violations of conservation laws). Once the beings in the simulation twig to the fact that they're in a simulation and that their reality is no more accurate than double precision floating point, what's the point to letting it run?
  • A hundred bits per atom? In the description of the computational capacity of a rock (p. 131), the calculation assumes that 100 bits of memory can be encoded in each atom of a disordered medium. I don't get it; even reliably storing a single bit per atom is difficult to envision. Using the “precise position, spin, and quantum state” of a large ensemble of atoms as mentioned on p. 134 seems highly dubious.
  • Luddites. The risk from anti-technology backlash is discussed in some detail. (“Ned Ludd” himself joins in some of the trans-temporal dialogues.) One can imagine the next generation of anti-globalist demonstrators taking to the streets to protest the “evil corporations conspiring to make us all rich and immortal”.
  • Fundamentalism. Another risk is posed by fundamentalism, not so much of the religious variety, but rather fundamentalist humanists who perceive the migration of humans to non-biological substrates (at first by augmentation, later by uploading) as repellent to their biological conception of humanity. One is inclined, along with the author, simply to wait until these folks get old enough to need a hip replacement, pacemaker, or cerebral implant to reverse a degenerative disease to motivate them to recalibrate their definition of “purely biological”. Still, I'm far from the first to observe that Singularitarianism (chapter 7) itself has some things in common with religious fundamentalism. In particular, it requires faith in rationality (which, as Karl Popper observed, cannot be rationally justified), and that the intentions of super-intelligent beings, as Godlike in their powers compared to humans as we are to Saccharomyces cerevisiae, will be benign and that they will receive us into eternal life and bliss. Haven't I heard this somewhere before? The main difference is that the Singularitarian doesn't just aspire to Heaven, but to Godhood Itself. One downside of this may be that God gets quite irate.
  • Vanity. I usually try to avoid the “Washington read” (picking up a book and flipping immediately to the index to see if I'm in it), but I happened to notice in passing I made this one, for a minor citation in footnote 47 to chapter 2.
  • Spindle cells. The material about “spindle cells” on pp. 191–194 is absolutely fascinating. These are very large, deeply and widely interconnected neurons which are found only in humans and a few great apes. Humans have about 80,000 spindle cells, while gorillas have 16,000, bonobos 2,100 and chimpanzees 1,800. If you're intrigued by what makes humans human, this looks like a promising place to start.
  • Speculative physics. The author shares my interest in physics verging on the fringe, and, turning the pages of this book, we come across such topics as possible ways to exceed the speed of light, black hole ultimate computers, stable wormholes and closed timelike curves (a.k.a. time machines), baby universes, cold fusion, and more. Now, none of these things is in any way relevant to nor necessary for the advent of the singularity, which requires only well-understood mainstream physics. The speculative topics enter primarily in discussions of the ultimate limits on a post-singularity civilisation and the implications for the destiny of intelligence in the universe. In a way they may distract from the argument, since a reader might be inclined to dismiss the singularity as yet another woolly speculation, which it isn't.
  • Source citations. The end notes contain many citations of articles in Wired, which I consider an entertainment medium rather than a reliable source of technological information. There are also references to articles in Wikipedia, where any idiot can modify anything any time they feel like it. I would not consider any information from these sources reliable unless independently verified from more scholarly publications.
  • “You apes wanna live forever?” Kurzweil doesn't just anticipate the singularity, he hopes to personally experience it, to which end (p. 211) he ingests “250 supplements (pills) a day and … a half-dozen intravenous therapies each week”. Setting aside the shots, just envision two hundred and fifty pills each and every day! That's 1,750 pills a week or, if you're awake sixteen hours a day, an average of more than 15 pills per waking hour, or one pill about every four minutes (one presumes they are swallowed in batches, not spaced out, which would make for a somewhat odd social life). Between the year 2000 and the estimated arrival of human-level artificial intelligence in 2030, he will swallow in excess of two and a half million pills, which makes one wonder what the probability of choking to death on any individual pill might be. He remarks, “Although my program may seem extreme, it is actually conservative—and optimal (based on my current knowledge).” Well, okay, but I'd worry about a “strategy for preventing heart disease [which] is to adopt ten different heart-disease-prevention therapies that attack each of the known risk factors” running into unanticipated interactions, given how everything in biology tends to connect to everything else. There is little discussion of the alternative approach to immortality with which many nanotechnologists of the mambo chicken persuasion are enamoured, which involves severing the heads of recently deceased individuals and freezing them in liquid nitrogen in sure and certain hope of the resurrection unto eternal life.
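The pill arithmetic in that last item is easy to check; all the figures below are taken from the review itself.

```python
# Verifying the supplement arithmetic: 250 pills a day, 16 waking hours,
# and the 30 years from 2000 to the estimated arrival of human-level AI.
pills_per_day = 250
waking_hours = 16
years_2000_to_2030 = 30

pills_per_week = pills_per_day * 7                      # 1,750 a week
pills_per_waking_hour = pills_per_day / waking_hours    # ~15.6 an hour
minutes_between_pills = 60 / pills_per_waking_hour      # ~3.8 minutes
total_pills = pills_per_day * 365 * years_2000_to_2030  # ~2.7 million
```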

October 2005 Permalink

Lane, Nick. Power, Sex, Suicide. Oxford: Oxford University Press, 2005. ISBN 978-0-19-920564-6.
When you start to look in detail at the evolution of life on Earth, it appears to be one mystery after another. Why did life appear so quickly after the Earth became hospitable to it? Why did life spend billions of years exclusively in the form of single-celled organisms without a nucleus (bacteria and archaea)? Why are all complex cells (eukaryotes) apparently descended from a single ancestral cell? Why did it take so long for complex multicellular organisms to evolve? (I've taken a crack [perhaps crackpot] shot at that one myself.) Why did evolution favour sexual reproduction, where two parents are required to produce offspring, while clonal reproduction is twice as efficient? Why just two sexes (among the vast majority of species) and not more? What drove the apparent trend toward greater size and complexity in multicellular organisms? Why are the life spans of organisms so accurately predicted by a power law based upon their metabolic rate? Why and how does metabolic rate fall with the size of an organism? Why did evolution favour warm-bloodedness (endothermy) when it increases an organism's requirement for food by more than an order of magnitude? Why do organisms age, and why is the rate of ageing and the appearance of degenerative diseases so closely correlated with metabolic rate? Conversely, why do birds and bats live so long: a pigeon has about the same mass and metabolic rate as a rat, yet lives ten times as long?

I was intensely interested in molecular biology and evolution of complexity in the early 1990s, but midway through that decade I kind of tuned it out—there was this “Internet” thing going on which captured my attention…. While much remains to be discovered, and many of the currently favoured hypotheses remain speculative, there has been enormous progress toward resolving these conundra in recent years, and this book is an excellent way to catch up on this research frontier.

Quite remarkably, a common thread pulling together most of these questions is one of the most humble and ubiquitous components of eukaryotic life: the mitochondria. Long recognised as the power generators of the cell (“Power”), they have been subsequently discovered to play a key rôle in the evolution of sexual reproduction (“Sex”), and in programmed cell death (apoptosis—“Suicide”). Bacteria and archaea are constrained in size by the cube/square law: they power themselves by respiratory mechanisms embedded in their cellular membranes, which grow as the square of their diameter, but consume energy within the bulk of the cell, which grows as the cube. Consequently, evolution selects for small size, as a larger bacterium can generate less energy for its internal needs. Further, bacteria compete for scarce resources purely by replication rate: a bacterium which divides even a small fraction more rapidly will quickly come to predominate in the population versus its more slowly reproducing competitors. In cell division, the most energetically costly and time consuming part is copying the genome's DNA. As a result, evolution ruthlessly selects for the shortest genome, which results in the arcane overlapping genes in bacterial DNA which look like the work of those byte-shaving programmers you knew back when computers had 8 Kb RAM. All of this conspires to keep bacteria small and simple and indeed, they appear to be as small and simple today as they were three billion years and change ago. But that isn't to say they aren't successful—you may think of them as pond scum, but if you read the bacterial blogs, they think of us as an ephemeral epiphenomenon. “It's the age of bacteria, and it always has been.”
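The cube/square constraint is easy to put in numbers. Idealising a bacterium as a sphere (an assumption for illustration only), membrane area grows as the square of the radius while volume grows as the cube, so the energy-generating surface available per unit of energy-consuming cytoplasm falls inversely with size:

```python
import math

def surface_to_volume(radius):
    """Surface-area-to-volume ratio of a sphere; simplifies to 3/radius."""
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

# Doubling a bacterium's linear size halves the membrane area available
# to power each unit of its volume -- which is why selection pushes
# membrane-respiring cells toward smallness.
small, large = surface_to_volume(1.0), surface_to_volume(2.0)
```

The eukaryotes' escape from this constraint, in the book's account, is that mitochondria move the respiratory machinery off the outer membrane and into internal organelles, whose total membrane area can scale with the volume of the cell.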

Most popular science books deliver one central idea you'll take away from reading them. This one has a forehead slapper about every twenty pages. It is not a particularly easy read: nothing in biology is unambiguous, and you find yourself going down a road and nodding in agreement, only to find out a few pages later that a subsequent discovery has falsified the earlier conclusion. While this may be confusing, it gives a sense of how science is done, and encourages the reader toward scepticism of all “breakthroughs” reported in the legacy media.

One of the most significant results of recent research into mitochondrial function is the connection between free radical production in the respiratory pipeline and ageing. While there is a power law relationship between metabolic rate and lifespan, there are outliers (including humans, who live about twice as long as they “should” based upon their size), and a major discrepancy for birds which, while obeying the same power law, are offset toward lifespans from three to ten times as long. Current research offers a plausible explanation for this: avians require aerobic power generation much greater than mammals, and consequently have more mitochondria in their tissues and more respiratory complexes in their mitochondria. This results in lower free radical production, which retards the onset of ageing and the degenerative diseases associated with it. Maybe before long there will be a pill which amplifies the mitochondrial replication factor in humans and, even if it doesn't extend our lifespan, retards the onset of the symptoms of ageing and degenerative diseases until the very end of life (old birds are very much like young adult birds, so there's an existence proof of this). I predict that the ethical questions associated with the creation of this pill will evaporate within about 24 hours of its availability on the market. Oh, it may have side-effects, such as increasing the human lifespan to, say, 160 years. Okay, science fiction authors, over to you!
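The power law mentioned here is the empirical allometric scaling often associated with Kleiber's law: metabolic rate scales roughly as mass to the 3/4 power, so metabolic rate per gram falls as the −1/4 power of mass and lifespan rises roughly as the 1/4 power. A sketch follows; the exponents are empirical approximations and the mass ratio is a round number for illustration, not data from the book.

```python
def lifespan_ratio(mass_ratio, exponent=0.25):
    """Predicted lifespan ratio from the rough mass**(1/4) scaling."""
    return mass_ratio ** exponent

# An organism 100,000 times more massive is predicted to live roughly
# 18 times longer -- with birds, bats, and humans as the notable
# outliers living far longer than the power law alone predicts.
ratio = lifespan_ratio(100_000)
```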

If you are even remotely interested in these questions, this is a book you'll want to read.

April 2009 Permalink

Latour, Bruno and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton: Princeton University Press, 1986. ISBN 0-691-02832-X.

September 2001 Permalink

Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. ISBN 978-0-06-147278-7.
At last we have a book which squarely takes on the central puzzle of the supposedly blind, purposeless universe to which so many scientists presently ascribe the origin of life on Earth. There's hardly any point debating evolution: it can be demonstrated in the laboratory. (Some may argue that Spiegelman's monster is an example of devolution, but recall that evolutionists must obligately eschew teleology, so selection in the direction of simplicity and rapid replication is perfectly valid, and evidenced by any number of examples in bacteria.)

No, the puzzle—indeed, the enigma—is the origin of the first replicator. Once you have a self-replicating organism and a means of variation (of which many are known to exist), natural selection can kick in and, driven by the environment and eventually competition with other organisms, select for more complexity when it confers an adaptive advantage. But how did the first replicator come to be?

In the time of Darwin, the great puzzle of biology was the origin of the apparently designed structures in organisms and the diversity of life, not the origin of the first cell. For much of Darwin's life, spontaneous generation was a respectable scientific theory, and the cell was thought to be an amorphous globule of a substance dubbed “protoplasm”, which one could imagine as originating at random through chemical reactions among naturally occurring precursor molecules.

The molecular biology revolution in the latter half of the twentieth century put the focus squarely upon the origin of life. In particular, the discovery of the extraordinarily complex digital code of the genome in DNA, the supremely complex nanomachinery of gene expression (more than a hundred proteins are involved in the translation of DNA to proteins, even in the simplest of bacteria), and the seemingly intractable chicken and egg problem posed by the fact that DNA cannot replicate its information without the proteins of the transcription mechanism, while those proteins cannot be assembled without the precise sequence information provided in the DNA, decisively excluded all scenarios for the origin of life through random chemical reactions in a “warm pond”.

As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.

So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
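
The “500 bits” figure can be checked with a back-of-the-envelope calculation. The particle count, age of the universe, and Planck time below are standard rough values I have supplied myself, not numbers quoted from the book:

```python
import math

PARTICLES = 1e80        # rough count of particles in the observable universe
AGE_SECONDS = 4.35e17   # about 13.8 billion years
PLANCK_TIME = 5.39e-44  # seconds

# Grant one random "trial" per particle per Planck time over all of
# cosmic history, then ask how many bits such a search could cover:
trials = PARTICLES * (AGE_SECONDS / PLANCK_TIME)
bits = math.log2(trials)
# bits comes out a little under 470, i.e. roughly 500 bits: a functional
# string much longer than that is beyond the reach of blind search.
```

Since a one-megabit genome is some two thousand times longer than that bound, the claimed improbability is not a matter of fine numerical detail.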

What do I mean by “functional information”? Just information which has a meaning expressed in a domain separate from that of its raw components. For example, the information-theoretic entropy of a typical mountainside is as great as (and, in fact, probably greater than) that of Mount Rushmore, but the latter encodes functional (or specified) information from a separate domain: that of representations of U.S. presidents known from other sources. Similarly, a DNA sequence encoding a protein which folds into a form performing a specific enzymatic function is vanishingly improbable to have originated by chance, and this has been demonstrated by experiment. Without the enzymes in the cell, in fact, even if you had a primordial soup containing all of the ingredients of functional proteins, they would just cross-link into non-functional goo, as nothing would prevent their side chains from bonding to one another. Biochemists know this, which is why they're so sceptical of the glib theories of physicists and computer scientists who expound upon the origin of life.
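
The distinction between raw Shannon information and functional information is easy to demonstrate numerically. In this sketch (my own, not the author's), a random string carries more entropy per symbol than meaningful text, yet only the text encodes anything in a separate domain:

```python
import math
import random
import string
from collections import Counter

def shannon_entropy(s):
    """Empirical Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(42)
noise = "".join(random.choice(string.ascii_lowercase) for _ in range(1000))
text = ("the quick brown fox jumps over the lazy dog " * 23)[:1000]

# The random string has the higher measured entropy, but zero functional
# information; entropy alone cannot detect specification.
```

This is the mountainside versus Mount Rushmore point in miniature: the entropy measure is blind to whether the symbols mean anything.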

Ever since Lyell, most scientists have embraced the principle of uniformitarianism, which holds that any phenomenon we observe in nature today must have been produced by causes we observe in action at the present time. Well, at the present time, we observe many instances of complex, structured, functional encoded data with information content in excess of 500 bits: books, music, sculpture, paintings, integrated circuits, machines, and even this book review. And to what cause would the doctrinaire uniformitarian attribute all of this complex, structured information? Well, obviously, the action of an intelligent agent: intelligent design.

Once you learn to recognise them, the signatures are relatively easy to distinguish. When you have a large amount of Shannon information, but no function (for example, the contour of a natural mountainside, or a random bit string generated by radioactive decay), then chance is the probable cause. When you have great regularity (the orbits of planets, or the behaviour of elementary particles), then natural law is likely to govern. As Jacques Monod observed, most processes in nature can be attributed to Chance and Necessity, but there remain those which cannot, and it is with these that archæologists, anthropologists, and forensic scientists, among others, deal every day.

Beyond the dichotomy of chance and necessity (or a linear combination of the two), there's the trichotomy which admits intelligent design as a cause. An Egyptologist who argued that plate tectonics was responsible for the Great Sphinx of Giza would be laughed out of the profession. And yet, when those who observe information content in the minimal self-replicating organism hundreds of orders of magnitude less likely than the Sphinx having been extruded from a volcanic vent infer evidence of intelligent design of that first replicator, they are derided and excluded from scientific discourse.

What is going on here? I would suggest there is a dogma being enforced with the same kind of rigour as the Darwinists impute to their fundamentalist opponents. In every single instance in the known universe, with the sole exception of the genome of the minimal self-replicating cell and the protein machinery which allows it to replicate, when we see 500 bits or more of functional complexity, we attribute it to the action of an intelligent agent. You aren't likely to see a CSI episode where one of the taxpayer-funded sleuths attributes the murder to a gun spontaneously assembling due to quantum fluctuations and shooting “the vic” through the heart. And yet such a Boltzmann gun is thousands of orders of magnitude more probable than a minimal genetic code and transcription apparatus assembling by chance in proximity to one another in order to reproduce.

The hearts of opponents of intelligent design go all pitty-pat because they consider it (gasp) religion. Nothing could be more absurd. Francis Crick (co-discoverer of the structure of DNA) concluded that the origin of life on Earth was sufficiently improbable that the best hypothesis was that it had been seeded here deliberately by intelligent alien lifeforms. These creatures, whatever their own origins, would have engineered their life spores to best take root in promising environments, and hence we shouldn't be surprised to discover our ancestors to have been optimised for our own environment. One possibility (of which I am fond) is that our form of life is the present link in a “chain of life” which began much closer to the Big Bang. One can imagine lifeforms originating in the quark-gluon plasma phase or in the radiation-dominated universe and, seeing the end of their dominion approaching, planting the seeds of the next form of life among their embers. Dyson, Tipler, and others have envisioned the distant descendants of humanity passing on the baton of life to other lifeforms adapted to the universe of the far future. Apply the Copernican principle: what about our predecessors?

Or consider my own favourite hypothesis of origin, that we're living in a simulation. I like to think of our Creator as a 13 year old superbeing who designed our universe as a science fair project. I have written before about the clear signs accessible to experiment which might falsify this hypothesis but which, so far, are entirely consistent with it. In addition, I've written about how the multiverse model is less parsimonious than the design hypothesis.

In addition to the arguments in that paper, I would suggest that evidence we're living in a simulation is that we find, living within it, complex structured information which we cannot explain as having originated by the physical processes we discover within the simulation. In other words, we find there has been input of information by the intelligent designer of the simulation, either explicitly as genetic information, or implicitly in terms of fine-tuning of free parameters of the simulated universe so as to favour the evolution of complexity. If you were creating such a simulation (or designing a video game), wouldn't you fine tune such parameters and pre-specify such information in order to make it “interesting”?

Look at it this way. Imagine you were a sentient character in a video game. You would observe that the “game physics” of your universe was finely tuned both in the interest of computability and to maximise the complexity of the interactions of the simulated objects. You would discover that your own complexity and that of the agents with which you interact could not be explained by the regularities of the simulation and the laws you'd deduced from them, and hence appeared to have been put in from the outside by an intelligent designer bent on winning the science fair by making the most interesting simulation. Being intensely rationalistic, you'd dismiss the anecdotal evidence for the occasional miracle as the pimple-faced Creator tweaked this or that detail to make things more interesting and thus justify an A in Miss O'Neill's Creative Cosmology class. And you'd be wrong.

Once we have discovered we're living in a simulation and inferred, from design arguments, that we're far from the top level, all of this will be obvious, but hey, if you're reading it here for the first time, welcome to the revelation of what's going on. Opponents of intelligent design claim it's “not science” or “not testable”. Poppycock—here's a science fiction story about how conclusive evidence for design might be discovered. Heck, you can go looking for it yourself!

This is an essential book for anybody interested in the origin of life on Earth. The author is a supporter of the hypothesis of intelligent design (as am I, although I doubt we would agree on any of the details). Regardless of what you think about the issue of origins, if you're interested in the question, you really need to know the biochemical details discussed here, and the combinatorial impossibility of chance assembly of even a single functionally folded protein in our universe in the time since the Big Bang.

I challenge you to read this and reject the hypothesis of intelligent design. If you reject it, then show how your alternative is more probable. I fully accept the hypothesis of intelligent design and have since I concluded more than a decade ago it's more probable than not that we're living in a simulation. We owe our existence to the Intelligent Designer who made us to be amusing. Let's hope she wins the Science Fair and doesn't turn it off!

November 2009 Permalink

Morgan, Elaine. The Scars of Evolution. Oxford: Oxford University Press, [1990] 1994. ISBN 0-19-509431-X.

December 2001 Permalink

Pickover, Clifford A. The Science of Aliens. New York: Basic Books, 1998. ISBN 0-465-07315-8.

August 2003 Permalink

Robinson, Andrew. The Last Man Who Knew Everything. New York: Pi Press, 2006. ISBN 0-13-134304-1.
The seemingly inexorable process of specialisation in the sciences and other intellectual endeavours—the breaking down of knowledge into categories so narrow and yet so deep that their mastery at the professional level seems to demand forsaking anything beyond a layman's competence in other, even related fields—is discouraging to those who believe that some of the greatest insights come from the cross-pollination of concepts from subjects previously considered unrelated. The twentieth century was inhospitable to polymaths—even within a single field such as physics, ever narrower specialities proliferated, with researchers interacting little with those working in other areas. The divide between theorists and experimentalists has become almost impassable; it is difficult to think of a single individual who achieved greatness in both since Fermi, and he was born in 1901.

As more and more becomes known, it is inevitable that it is increasingly difficult to cram it all into one human skull, and the investment in time to master a variety of topics becomes disproportionate to the length of a human life, especially since breakthrough science is generally the province of the young. And yet, one wonders whether the conventional wisdom that hyper-specialisation is the only way to go and that anybody who aspires to broad and deep understanding of numerous subjects must necessarily be a dilettante worthy of dismissal, might underestimate the human potential and discourage those looking for insights available only by synthesising the knowledge of apparently unrelated disciplines. After all, mathematicians have repeatedly discovered deep connections between topics thought completely unrelated to one another; why shouldn't this be the case in the sciences, arts, and humanities as well?

The life of Thomas Young (1773–1829) is an inspiration to anybody who seeks to understand as much as possible about the world in which they live. The eldest of ten children of a middle class Quaker family in southwest England (his father was a cloth merchant and later a banker), from childhood he immersed himself in every book he could lay his hands upon, and in his seventeenth year alone, he read Newton's Principia and Opticks, Blackstone's Commentaries, Linnaeus, Euclid's Elements, Homer, Virgil, Sophocles, Cicero, Horace, and many other classics in the original Greek or Latin. At age 19 he presented a paper on the mechanism by which the human eye focuses on objects at different distances, and on its merit was elected a Fellow of the Royal Society a week after his 21st birthday.

Young decided upon a career in medicine and studied in Edinburgh, Göttingen, and Cambridge, continuing his voracious reading and wide-ranging experimentation in whatever caught his interest, then embarked upon a medical practice in London and the resort town of Worthing, while pursuing his scientific investigations and publications, and popularising science in public lectures at the newly founded Royal Institution.

The breadth of Young's interests and contributions has caused some biographers, both contemporary and especially more recent, to dismiss him as a dilettante and dabbler, but his achievements give the lie to this. Had the Nobel Prize existed in his era, he would almost certainly have won two (Physics for the wave theory of light, explanation of the phenomena of diffraction and interference [including the double slit experiment], and birefringence and polarisation; plus Physiology or Medicine for the explanation of the focusing of the eye [based, in part, upon some cringe-inducing experiments he performed upon himself], the trireceptor theory of colour vision, and the discovery of astigmatism), and possibly three (Physics again, for the theory of elasticity of materials: “Young's modulus” is a standard part of the engineering curriculum to this day).

But he didn't leave it at that. He was fascinated by languages since childhood, and in addition to the customary Latin and Greek, by age thirteen had taught himself Hebrew and read thirty chapters of the Hebrew Bible all by himself. In adulthood he undertook an analysis of four hundred different languages (pp. 184–186) ranging from Chinese to Cherokee, with the goal of classifying them into distinct families. He coined the name “Indo-European” for the group to which most Western languages belong. He became fascinated with the enigma of Egyptian hieroglyphics, and his work on the Rosetta Stone provided the first breakthrough and the crucial insight that hieroglyphic writing was a phonetic alphabet, not a pictographic language like Chinese. Champollion built upon Young's work in his eventual deciphering of hieroglyphics. Young continued to work on the fiendishly difficult demotic script, and was the first person since the fall of the Roman Empire to be able to read some texts written in it.

He was appointed secretary of the Board of Longitude and superintendent of the Nautical Almanac, and was instrumental in the establishment of a Southern Hemisphere observatory at the Cape of Good Hope. He consulted with the admiralty on naval architecture, with the House of Commons on the design for a replacement to the original London Bridge, and served as chief actuary for a London life insurance company and did original research on mortality in different parts of Britain.

Stereotypical characters from fiction might lead you to expect such an intellect to be a recluse, misanthrope, obsessive, or seeker of self-aggrandisement. But no… “He was a lively, occasionally caustic letter writer, a fair conversationalist, a knowledgeable musician, a respectable dancer, a tolerable versifier, an accomplished horseman and gymnast, and throughout his life, a participant in the leading society of London and, later, Paris, the intellectual capitals of his day” (p. 12). Most of the numerous authoritative articles he contributed to the Encyclopedia Britannica, including “Bridge”, “Carpentry”, “Egypt”, “Languages”, “Tides”, and “Weights and measures”, as well as 23 biographies, were published anonymously. And he was happily married from age 31 until the end of his life.

Young was an extraordinary person, but he never seems to have thought of himself as exceptional in any way other than his desire to understand how things worked and his willingness to invest as much time and effort as it took to arrive at the goals he set for himself. Reading this book reminded me of a remark by Admiral Hyman G. Rickover, “The only way to make a difference in the world is to put ten times as much effort into everything as anyone else thinks is reasonable. It doesn't leave any time for golf or cocktails, but it gets things done.” Young's life is a testament to just how many things one person can get done in a lifetime, enjoying every minute of it and never losing balance, by judicious application of this principle.

March 2007 Permalink

Rogers, Lesley. Sexing the Brain. New York: Columbia University Press, 2001. ISBN 0-231-12010-9.

November 2001 Permalink

Sarich, Vincent and Frank Miele. Race: The Reality of Human Differences. Boulder, CO: Westview Press, 2004. ISBN 0-8133-4086-1.
This book tackles the puzzle posed by the apparent contradiction between the remarkable genetic homogeneity of humans compared to other species and the fact that physical differences between human races (non-controversial measures such as cranial morphology, height, and body build) actually exceed those between other primate species and subspecies. Vincent Sarich, emeritus Professor of Anthropology at UC Berkeley and pioneer in the development of the “molecular clock”, recounts this scientific adventure and the resulting revolution in the human evolutionary tree and timescale. Miele (editor of Skeptic magazine) and Sarich then argue that the present-day dogma among physical anthropologists and social scientists that “race does not exist”, if taken to its logical conclusion, amounts to rejecting Darwinian evolution, which occurs through variation and selection; variation among groups is therefore inevitable, and is recognised as a matter of course in other species. Throughout, the authors stress that variation of characteristics among individual humans greatly exceeds mean racial variation, which makes racial prejudice and discrimination not only morally abhorrent but stupid from the scientific standpoint. At the same time, small differences in the means of a set of overlapping normal distributions cause large changes in their relative representation in the tails, where the extremes of performance are found. This is why one should be neither surprised nor dismayed to find a “disproportionate” number of Kenyans among cross-country running champions, Polynesians in American professional football, or east Asians in mathematical research. A person who comprehends this basic statistical fact should be able to treat individuals on their own merit without denying the reality of differences among sub-populations of the human species.
Due to the broad overlap among groups, members of every group, if given the opportunity, will be represented at the highest levels of performance in each field, and no individual should feel deterred nor be excluded from aspiring to such achievement due to group membership. For the argument against the biological reality of race, see the Web site for the United States Public Broadcasting Service documentary, Race: The Power of an Illusion. This book attempts to rebut each of the assertions in that documentary.
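
The tail effect described above is easy to quantify. This sketch uses numbers of my own choosing (two normal distributions whose means differ by half a standard deviation), not data from the book:

```python
import math

def upper_tail(threshold, mean=0.0, sd=1.0):
    """P(X > threshold) for a normal distribution, computed via the
    complementary error function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Fraction of each group beyond +3 standard deviations of group A's mean:
p_a = upper_tail(3.0, mean=0.0)  # about 0.13%
p_b = upper_tail(3.0, mean=0.5)  # about 0.62%
ratio = p_b / p_a                # group B roughly 4.6x over-represented
```

A half standard deviation shift in the mean, nearly invisible when comparing typical individuals, multiplies representation at the extreme by more than four.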

June 2004 Permalink

Spufford, Francis. Backroom Boys: The Secret Return of the British Boffin. London: Faber and Faber, 2003. ISBN 0-571-21496-7.
It is rare to encounter a book about technology and technologists which even attempts to delve into the messy real-world arena where science, engineering, entrepreneurship, finance, marketing, and government policy intersect, yet it is there, not solely in the technological domain, that the roots of both great successes and calamitous failures lie. Backroom Boys does just this and pulls it off splendidly, covering projects as disparate as the Black Arrow rocket, Concorde, mid 1980s computer games, mobile telephony, and sequencing the human genome. The discussion on pages 99 and 100 of the dynamics of new product development in the software business is as clear and concise a statement as I've seen of the philosophy that's guided my own activities for the past 25 years. While celebrating the technological renaissance of post-industrial Britain, the author retains the characteristic British intellectual's disdain for private enterprise and economic liberty. In chapter 4, he describes Vodafone's development of the mobile phone market: “It produced a blind, unplanned, self-interested search strategy, capitalism's classic method for exploring a new space in the market where profit may be found.” Well…yes…indeed, but that isn't just “capitalism's” classic method, but the very one employed with great success by life on Earth lo these four and a half billion years (see The Genius Within, April 2003). The wheels fall off in chapter 5. Whatever your position may have been in the battle between Celera and the public Human Genome Project, Spufford's collectivist bias and ignorance of economics (simply correcting the noncontroversial errors in basic economics in this chapter would require more pages than it fills) get in the way of telling the story of how the human genome came to be sequenced five years before the original estimated date. A truly repugnant passage on page 173 describes “how science should be done”.
Taxpayer-funded researchers, a fine summer evening, “floated back downstream carousing, with stubs of candle stuck to the prows, … and the voices calling to and fro across the water as the punts drifted home under the overhanging trees in the green, green, night.” Back to the taxpayer-funded lab early next morning, to be sure, collecting their taxpayer-funded salaries doing the work they love to advance their careers. Nary a word here of the cab drivers, sales clerks, construction workers and, yes, managers of biotech start-ups, all taxed to fund this scientific utopia, who lack the money and free time to pass their own summer evenings so sublimely. And on the previous page, the number of cells in the adult body of C. elegans is twice given as 550. Gimme a break—everybody knows there are 959 somatic cells in the adult hermaphrodite, 1031 in the male; he's confusing adults with 558-cell newly-hatched L1 larvæ.

May 2004 Permalink

Sullivan, Robert. Rats. New York: Bloomsbury, [2004] 2005. ISBN 1-58234-477-9.
Here we have one of the rarest phenomena in publishing: a thoroughly delightful best-seller about a totally disgusting topic: rats. (Before legions of rat fanciers write to berate me for bad-mouthing their pets, let me state at the outset that this book is about wild rats, not pet and laboratory rats which have been bred for docility for a century and a half. The new afterword to this paperback edition relates the story of a Brooklyn couple who caught a juvenile Bedford-Stuyvesant street rat to fill the empty cage of their recently deceased pet and, as it matured, came to regard it with such fear that they were afraid even to release it in a park lest it turn and attack them when the cage was opened—the author suggested they might consider the strategy of “open the cage and run like hell” [p. 225–226]. One of the pioneers in the use of rats in medical research in the early years of the 20th century tried to use wild rats and concluded “they proved too savage to maintain in the laboratory” [p. 231].)

In these pages are more than enough gritty rat facts to get yourself ejected from any polite company should you introduce them into a conversation. Many misconceptions about rats are debunked, including the oft-cited estimate that the rat and human population is about the same, which would lead to an estimate of about eight million rats in New York City—in fact, the most authoritative estimate (p. 20) puts the number at about 250,000, which is still a lot of rats, especially once you begin to appreciate what a single rat can do. (But rat exaggeration gets folks' attention: here is a politician claiming there are fifty-six million rats in New York!) “Rat stories are war stories” (p. 34), and this book teems with them, including The Rat that Came Up the Toilet, which is not an urban legend but a well-documented urban nightmare. (I'd be willing to bet that the incidence of people keeping the toilet lid closed with a brick on the top is significantly greater among readers of this book.)

It's common for naturalists who study an animal to develop sympathy for it and defend it against popular aversion: snakes and spiders, for example, have many apologists. But not rats: the author sums up by stating that he finds them “disgusting”, and he isn't alone. The great naturalist and wildlife artist John James Audubon, one of the rare painters ever to depict rats, amused himself during the last years of his life in New York City by prowling the waterfront hunting rats, having received permission from the mayor “to shoot Rats in the Battery” (p. 4).

If you want to really get to know an animal species, you have to immerse yourself in its natural habitat, and for the Brooklyn-based author, this involved no more than a subway ride to Edens Alley in downtown Manhattan, just a few blocks from the site of the World Trade Center, which was destroyed during the year he spent observing rats there. Along with rat stories and observations, he sketches the history of New York City from a ratty perspective, with tales of the arrival of the brown rat (possibly on ships carrying Hessian mercenaries to fight for the British during the War of American Independence), the rise and fall of rat fighting as popular entertainment in the city, the great garbage strike of 1968 which transformed the city into something close to heaven if you happened to be a rat, and the 1964 Harlem rent strike in which rats were presented to politicians by the strikers to acquaint them with the living conditions in their tenements.

People involved with rats tend to be outliers on the scale of human oddness, and the reader meets a variety of memorable characters, present-day and historical: rat fight impresarios, celebrity exterminators, Queen Victoria's rat-catcher, and many more. Among numerous fascinating items in this rat fact packed narrative is just how recent the arrival of the mis-named brown rat, Rattus norvegicus, is. (The species was named in England in 1769, in the belief that it had stowed away on ships carrying lumber from Norway. In fact, it appears to have arrived in Britain before it reached Norway.) There were no brown rats in Europe at all until the 18th century (the rats which caused the Black Death were Rattus rattus, the black rat, which followed Crusaders returning from the Holy Land). First arriving in America around the time of the Revolution, the brown rat took until 1926 to spread to every state in the United States, displacing the black rat except for some remaining in the South and West. The Canadian province of Alberta remains essentially rat-free to this day, thanks to a vigorous and vigilant rat control programme.

The number of rats in an area depends almost entirely upon the food supply available to them. A single breeding pair of rats, with an unlimited food supply and no predation or other causes of mortality, can produce on the order of fifteen thousand descendants in a single year. That makes it pretty clear that a rat population will grow until all available food is being consumed by rats (and that natural selection will favour the most aggressive individuals in a food-constrained environment). Poison or trapping can knock down the rat population in the case of a severe infestation, but without limiting the availability of food, will produce only a temporary reduction in their numbers (while driving evolution to select for rats which are immune to the poison and/or more wary of the bait stations and traps).
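
The arithmetic behind that figure can be checked with a toy simulation. The parameters below (a litter of eight every six weeks, maturity at ten weeks, half of each litter female, no mortality) are deliberately optimistic assumptions of my own; they yield descendants in the tens of thousands, the same order of magnitude as the book's fifteen thousand:

```python
def descendants_in_a_year(weeks=52, maturity=10, interval=6, litter=8):
    """Offspring of one founding pair under idealised assumptions:
    no mortality, a litter of `litter` pups every `interval` weeks
    per mature female, maturity at `maturity` weeks of age."""
    mothers = [-maturity]  # birth weeks of females; founder already mature
    total = 0
    for week in range(weeks):
        new_females = 0
        for born in mothers:
            age = week - born
            if age >= maturity and (age - maturity) % interval == 0:
                total += litter
                new_females += litter // 2  # half of each litter is female
        mothers.extend([week] * new_females)
    return total
```

Tighten any of the assumed parameters (smaller litters, later maturity, any mortality at all) and the total falls fast, which is exactly why food supply, not fecundity, sets the equilibrium population.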

Given this fact, which is completely noncontroversial among pest control professionals, it is startling that in New York City, which frets over and regulates public health threats like second-hand tobacco smoke while its denizens suffer more than 150 rat bites a year, many to children, smoke-free restaurants dump their offal into rat-infested alleys in thin plastic garbage bags, which are instantly penetrated by rats. How much could it cost to mandate, or even provide, rat-proof steel containers for organic waste, compared to the budget for rodent control and the damages and health hazards of a large rat population? Rats will always be around—in 1936, the president of the professional society for exterminators persuaded the organisation to change the name of the occupation from “exterminator” to “pest control operator”, not because the word “exterminator” was distasteful, but because he felt it over-promised what could actually be achieved for the client (p. 98). But why not take some simple, obvious steps to constrain the rat population?

The book contains more than twenty pages of notes in narrative form, which contain a great deal of additional information you don't want to miss, including the origin of giant inflatable rats for labour rallies, and even a poem by exterminator guru Bobby Corrigan. There is no index.

August 2006 Permalink

Trefil, James. Are We Unique?. New York: John Wiley & Sons, 1997. ISBN 0-471-24946-7.

December 2002 Permalink

Vertosick, Frank T., Jr. The Genius Within. New York: Harcourt, 2002. ISBN 0-15-100551-6.

April 2003 Permalink

Wade, Nicholas. Before The Dawn. New York: Penguin Press, 2006. ISBN 1-59420-079-3.
Modern human beings, physically very similar to people alive today, with spoken language and social institutions including religion, trade, and warfare, had evolved by 50,000 years ago, yet written historical records go back only about 5,000 years. Ninety percent of human history, then, is “prehistory” which paleoanthropologists have attempted to decipher from meagre artefacts and rare discoveries of human remains. The degree of inference and the latitude for interpretation of this material has rendered conclusions drawn from it highly speculative and tentative. But in the last decade this has begun to change.

While humans only began to write the history of their species in the last 10% of their presence on the planet, the DNA that makes them human has been patiently recording that history in a robust molecular medium which humans have only recently learnt to read, with the ability to determine the sequence of the genome. This has provided a new, largely objective window on human history and origins, one which has both confirmed results teased out of the archæological record over the centuries and yielded a series of stunning surprises which are probably only the first of many to come.

Each individual's genome is a mix of genes inherited from their father and mother, plus a few random changes (mutations) due to errors in the process of transcription. The separate genome of the mitochondria (energy producing organelles) in their cells is inherited exclusively from the mother, and in males, the Y chromosome (except for the very tips) is inherited directly from the father, unmodified except for mutations. In an isolated population whose members breed mostly with one another, members of the group will come to share a genetic signature which reflects natural selection for reproductive success in the environment they inhabit (climate, sources of food, endemic diseases, competition with other populations, etc.) and the effects of random “genetic drift” which acts to reduce genetic diversity, particularly in small, isolated populations. Random mutations appear in certain parts of the genome at a reasonably constant rate, which allows them to be used as a “molecular clock” to estimate the time elapsed since two related populations diverged from their last common ancestor. (This is biology, so naturally the details are fantastically complicated, messy, subtle, and difficult to apply in practice, but the general outline is as described above.)
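The molecular-clock arithmetic described above is simple at its core: since each of two diverged lineages accumulates its own mutations, the number of differences between them scales with twice the time since their common ancestor. A toy calculation (the region size and substitution rate below are invented for illustration, not taken from the book):

```python
# Toy molecular-clock estimate: divergence time from the fraction of
# differing sites, assuming a constant per-site substitution rate.
def divergence_years(differences, sites, subs_per_site_per_year):
    per_site_divergence = differences / sites
    # factor of 2: mutations accumulate independently along BOTH lineages
    return per_site_divergence / (2 * subs_per_site_per_year)

# e.g. 20 differences across a 1,000-site region at an assumed rate of
# 5e-7 substitutions per site per year:
print(divergence_years(20, 1000, 5e-7))   # ~20,000 years
```

As the review notes, real practice is far messier: rates vary across the genome, multiple hits at one site hide earlier changes, and calibration against dated events is required. But the general outline is as simple as this.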

Even without access to the genomes of long-dead ancestors (which are difficult in the extreme to obtain and fraught with potential sources of error), the genomes of current populations provide a record of their ancestry, geographical origin, migrations, conquests and subjugations, isolation or intermarriage, diseases and disasters, population booms and busts, sources of food, and, by inference, language, social structure, and technologies. This book provides a look at the current state of research in the rapidly expanding field of genetic anthropology, and it makes for an absolutely compelling narrative of the human adventure. Obviously, in a work where the overwhelming majority of source citations are to work published in the last decade, this is a description of work in progress and most of the deductions made should be considered tentative pending further results.

Genomic investigation has shed light on puzzles as varied as the size of the initial population of modern humans who left Africa (almost certainly less than 1000, and possibly a single hunter-gatherer band of about 150), the date when wolves were domesticated into dogs and where it happened, the origin of wheat and rice farming, the domestication of cattle, the origin of surnames in England, and the genetic heritage of the randiest conqueror in human history, Genghis Khan, who, based on Y chromosome analysis, appears to have about 16 million living male descendants today.

Some of the results from molecular anthropology run the risk of being so at variance with the politically correct ideology of academic soft science that the author, a New York Times reporter, tiptoes around them with the mastery of prose which on other topics he deploys toward their elucidation. Chief among these is the discussion of the microcephalin and ASPM genes on pp. 97–99. (Note that genes are often named based on syndromes which result from deleterious mutations within them, and hence bear names opposite to their function in the normal organism. For example, the gene which triggers the cascade of eye formation in Drosophila is named eyeless.) Both of these genes appear to regulate brain size and, in particular, the development of the cerebral cortex, which is the site of higher intelligence in mammals. Specific alleles of these genes are of recent origin, and are unequally distributed geographically among the human population. Haplogroup D of microcephalin appeared in the human population around 37,000 years ago (all of these estimates have a large margin of error), which is just about the time when quintessentially modern human behaviour such as cave painting appeared in Europe. Today, about 70% of the population of Europe and East Asia carry this allele, but its incidence in populations in sub-Saharan Africa ranges from 0 to 25%. The ASPM gene exists in two forms: a “new” allele which arose only about 5800 years ago (coincidentally[?] just about the time when cities, agriculture, and written language appeared), and an “old” form which predates this period. Today, the new allele occurs in about 50% of the population of the Middle East and Europe, but hardly at all in sub-Saharan Africa. Draw your own conclusions from this about the potential impact on human history when germline gene therapy becomes possible, and why opposition to it may not be the obvious ethical choice.

January 2007 Permalink

Wade, Nicholas. A Troublesome Inheritance. New York: Penguin Press, 2014. ISBN 978-1-59420-446-3.
Geographically isolated populations of a species (unable to interbreed with others of their kind) will be subject to natural selection based upon their environment. If that environment differs from that of other members of the species, the isolated population will begin to diverge genetically, as genetic endowments which favour survival and more offspring are selected for. If the isolated population is sufficiently small, the mechanism of genetic drift may cause a specific genetic variant to become almost universal or absent in that population. If this process is repeated for a sufficiently long time, isolated populations may diverge to such a degree they can no longer interbreed, and therefore become distinct species.
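The genetic-drift mechanism in small populations is easy to see in a minimal Wright-Fisher-style sketch (the population size and starting frequency are arbitrary illustrative choices): a neutral allele, starting at 50% frequency, wanders by pure sampling chance until it is either universal or lost.

```python
import random

# Wright-Fisher-style drift sketch: each generation, every gene copy in a
# small fixed-size population is drawn at random from the previous one.
# A neutral allele inevitably drifts to fixation (all copies) or loss (none).
def drift(pop_size, freq=0.5, seed=1):
    random.seed(seed)
    count = int(pop_size * freq)    # copies of the allele
    generations = 0
    while 0 < count < pop_size:
        count = sum(random.random() < count / pop_size
                    for _ in range(pop_size))
        generations += 1
    return count, generations

count, gens = drift(20)
print(count, gens)   # count is either 0 or 20: the allele is lost or fixed
```

The smaller the population, the faster this happens, which is why drift matters most in exactly the small, isolated bands the author describes.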

None of this is controversial when discussing other species, but in some circles to suggest that these mechanisms apply to humans is the deepest heresy. This well-researched book examines the evidence, much from molecular biology which has become available only in recent years, for the diversification of the human species into distinct populations, or “races” if you like, after its emergence from its birthplace in Africa. In this book the author argues that human evolution has been “recent, copious, and regional” and presents the genetic evidence to support this view.

A few basic facts should be noted at the outset. All humans are members of a single species, and all can interbreed. Humans, as a species, have an extremely low genetic diversity compared to most other animal species: this suggests that our ancestors went through a genetic “bottleneck” where the population was reduced to a very small number, causing the variation observed in other species to be lost through genetic drift. You might expect different human populations to carry different genes, but this is not the case—all humans have essentially the same set of genes. Variation among humans is mostly a result of individuals carrying different alleles (variants) of a gene. For example, eye colour in humans is entirely inherited: a baby's eye colour is determined completely by the alleles of various genes inherited from the mother and father. You might think that variation among human populations is then a question of their carrying different alleles of genes, but that too is an oversimplification. Human genetic variation is, in most cases, a matter of the frequency of alleles among the population.

This means that almost any generalisation about the characteristics of individual members of human populations with different evolutionary histories is ungrounded in fact. The variation among individuals within populations is generally much greater than that of populations as a whole. Discrimination based upon an individual's genetic heritage is not just abhorrent morally but scientifically unjustified.

Based upon these now well-established facts, some have argued that “race does not exist” or is a “social construct”. While this view may be motivated by a well-intentioned desire to eliminate discrimination, it is increasingly at variance with genetic evidence documenting the history of human populations.

Around 200,000 years ago, modern humans emerged in Africa. They spent more than three quarters of their history in that continent, spreading to different niches within it and developing a genetic diversity which today is greater than that of all humans in the rest of the world. Around 50,000 years before the present, by the genetic evidence, a small band of hunter-gatherers left Africa for the lands to the north. Then, some 30,000 years ago the descendants of these bands who migrated to the east and west largely ceased to interbreed and separated into what we now call the Caucasian and East Asian populations. These have remained the three main groups within the human species. Subsequent migrations and isolations have created other populations such as Australian and American aborigines, but their differentiation from the three main races is less distinct. Subsequent migrations, conquest, and intermarriage have blurred the distinctions between these groups, but the fact is that almost any child, shown a picture of a person of European, African, or East Asian ancestry, can effortlessly and correctly identify their area of origin. University professors, not so much: it takes an intellectual to deny the evidence of one's own eyes.

As these largely separated populations adapted to their new homes, selection operated upon their genomes. In the ancestral human population children lost the ability to digest lactose, the sugar in milk, after being weaned from their mothers' milk. But in populations which domesticated cattle and developed dairy farming, parents who passed on an allele which would allow their children to drink cow's milk their entire life would have more surviving offspring and, in a remarkably short time on the evolutionary scale, lifetime lactose tolerance became the norm in these areas. Among populations which never raised cattle or used them only for meat, lifetime lactose tolerance remains rare today.

Humans in Africa originally lived close to the equator and had dark skin to protect them from the ultraviolet radiation of the Sun. As human bands occupied northern latitudes in Europe and Asia, dark skin would prevent them from being able to synthesise sufficient Vitamin D from the wan, oblique sunlight of northern winters. These populations were under selection pressure for alleles of genes which gave them lighter skin, but interestingly Europeans and East Asians developed completely different genetic means to lighten their skin. The selection pressure was the same, but evolution blundered into two distinct pathways to meet the need.

Can genetic heritage affect behaviour? There's evidence it can. Humans carry a gene called MAO-A, which breaks down neurotransmitters that affect the transmission of signals within the brain. Experiments in animals have provided evidence that under-production of MAO-A increases aggression, and humans with lower levels of MAO-A are found to be more likely to commit violent crime. MAO-A production is regulated by a short sequence of DNA adjacent to the gene: humans may have anywhere from two to five copies of the promoter; the more you have, the more MAO-A is produced, and hence the mellower you're likely to be. Well, actually, people with three to five copies are indistinguishable, but those with only two (2R) show higher rates of delinquency. Among men of African ancestry, 5.5% carry the 2R variant, while 0.1% of Caucasian males and 0.00067% of East Asian men do. Make of this what you will.

The author argues that just as the introduction of dairy farming tilted the evolutionary landscape in favour of those bearing the allele which allowed them to digest milk into adulthood, the transition of tribal societies to cities, states, and empires in Asia and Europe exerted a selection pressure upon the population which favoured behavioural traits suited to living in such societies. While a tribal society might benefit from producing a substantial population of aggressive warriors, an empire has little need of them: its armies are composed of soldiers, courageous to be sure, who follow orders rather than charging independently into battle. In such a society, the genetic traits which are advantageous in a hunter-gatherer or tribal society will be selected out, as those carrying them will, if not expelled or put to death for misbehaviour, be unable to raise as large a family in these settled societies.

Perhaps, what has been happening over the last five millennia or so is a domestication of the human species. Precisely as humans have bred animals to live with them in close proximity, human societies have selected for humans who are adapted to prosper within them. Those who conform to the social hierarchy, work hard, come up with new ideas but don't disrupt the social structure will have more children and, over time, whatever genetic predispositions there may be for these characteristics (which we don't know today) will become increasingly common in the population. It is intriguing that as humans settled into fixed communities, their skeletons became less robust. This same process of gracilisation is seen in domesticated animals compared to their wild congeners. Certainly there have been as many human generations since the emergence of these complex societies as have sufficed to produce major adaptation in animal species under selective breeding.

Far more speculative and controversial is whether this selection process has been influenced by the nature of the cultures and societies which create the selection pressure. East Asian societies tend to be hierarchical, obedient to authority, and organised on a large scale. European societies, by contrast, are fractious, fissiparous, and prone to bottom-up insurgencies. Is this in part the result of genetic predispositions which have been selected for over millennia in societies which work that way?

It is assumed by many right-thinking people that all that is needed to bring liberty and prosperity to those regions of the world which haven't yet benefited from them is to create the proper institutions, educate the people, and bootstrap the infrastructure, then stand back and watch them take off. Well, maybe—but the history of colonialism, the mission civilisatrice, and various democracy projects and attempts at nation building over the last two centuries may suggest it isn't that simple. The population of the colonial, conquering, or development-aid-giving power has the benefit of millennia of domestication and adaptation to living in a settled society with division of labour. Its adaptations for tribalism have been largely bred out. Not so in many cases for the people they're there to “help”. Withdraw the colonial administration or occupation troops and before long tribalism will re-assert itself because that's the society for which the people are adapted.

Suggesting things like this is anathema in academia or political discourse. But look at the plain evidence of post-colonial Africa and more recent attempts of nation-building, and couple that with the emerging genetic evidence of variation in human populations and connections to behaviour and you may find yourself thinking forbidden thoughts. This book is an excellent starting point to explore these difficult issues, with numerous citations of recent scientific publications.

December 2014 Permalink

Ward, Peter D. and Donald Brownlee. Rare Earth: Why Complex Life Is Uncommon in the Universe. New York: Copernicus, 2000. ISBN 0-387-98701-0.

February 2001 Permalink

Ward, Peter D. and Donald Brownlee. The Life and Death of Planet Earth. New York: Times Books, 2003. ISBN 0-8050-6781-7.

February 2003 Permalink

Wheen, Francis. How Mumbo-Jumbo Conquered the World. London: Fourth Estate, 2004. ISBN 0-00-714096-7.
I picked up this book in an airport bookshop, expecting a survey of contemporary lunacy along the lines of Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds or Martin Gardner's Fads and Fallacies in the Name of Science. Instead, what we have is 312 pages of hateful, sneering political rant indiscriminately sprayed at more or less every target in sight. Mr Wheen doesn't think very much of Ronald Reagan or Margaret Thatcher (whom he likens repeatedly to the Ayatollah Khomeini). Well, that's to be expected, I suppose, in a columnist for the Guardian, but there's no reason they need to be clobbered over and over, for the same things and in almost the same words, every three pages or so throughout this tedious, ill-organised, and repetitive book. Neither does the author particularly fancy Tony Blair, who comes in for the same whack-a-mole treatment. A glance at the index (which is not exhaustive) shows that between them, Blair, Thatcher, and Reagan appear on 85 pages sprinkled more or less evenly throughout the text. In fact, Mr Wheen isn't very keen on almost anybody or anything dating from about 1980 to the present; one senses an all-consuming nostalgia for that resplendent utopia which was Britain in the 1970s. Now, the crusty curmudgeon is a traditional British literary figure, but masters of the genre leaven their scorn with humour and good will which are completely absent here. What comes through instead is simply hate: the world leaders who dismantled failed socialist experiments are not, as a man of the left might argue, misguided but rather Mrs Thatcher's “drooling epigones” (p. 263). For some months, I've been pondering a phenomenon in today's twenty-something generation which I call “hate kiddies.” These are people, indoctrinated in academia by ideologues of the Sixties generation to hate their country, culture, and all of its achievements—supplanting the pride which previous generations felt with an all-consuming guilt. 
This seems, in many otherwise gifted and productive people, to metastasise in adulthood into an all-consuming disdain and hate for everything, as if the end point of cultural relativism were the belief that everything is evil. I asked an exemplar of this generation once whether he could name any association of five or more people anywhere on Earth which was not evil: nope. Detesting his “evil” country and government, I asked whether he could name any other country which was less evil or even somewhat good: none came to mind. (If you want to get a taste of this foul and poisonous weltanschauung, visit the Slashdot site and read the comments posted for almost any article. This site is not a parody—this is how the young technological elite really think, or rather, can't think.) In Francis Wheen, the hate kiddies have found their elder statesman.

July 2004 Permalink

Williamson, Donald I. The Origins of Larvae. Dordrecht, The Netherlands: Kluwer Academic, 2003. ISBN 1-4020-1514-3.
I am increasingly beginning to suspect that we are living through an era which, in retrospect, will be seen, like the early years of the twentieth century, as the final days preceding revolutions in a variety of scientific fields. Precision experiments and the opening of new channels of information about the universe as diverse as the sequencing of genomes, the imminent detection of gravitational waves, and detailed measurement of the cosmic background radiation are amassing more and more discrepant data which causes scientific journeymen to further complicate their already messy “standard models”, and the more imaginative among them to think that maybe there are simple, fundamental things which we're totally missing. Certainly, when the scientific consensus is that everything we see and know about comprises less than 5% of the universe, and a majority of the last generation of theorists in high energy physics have been working on a theory which only makes sense in a universe with ten, or maybe eleven, or maybe twenty-six dimensions, there would seem to be a lot of room for an Einstein-like conceptual leap which would make everybody slap their foreheads and exclaim, “How could we have missed that!”

But still we have Darwin, don't we? If the stargazers and particle smashers are puzzled by what they see, certainly the more down-to-earth folk who look at creatures that inhabit our planet still stand on a firm foundation, don't they? Well…maybe not. Perhaps, as this book argues, not only is the conventional view of the “tree of life” deeply flawed, the very concept of a tree, where progenitor species always fork into descendants, but there is never any interaction between the ramified branches, is incorrect. (Just to clarify in advance: the author does not question the fundamental mechanism of Darwinian evolution by natural selection of inherited random variations, nor argue for some other explanation for the origin of the diversity in species on Earth. His argument is that this mechanism may not be the sole explanation for the characteristics of the many species with larval forms or discordant embryonic morphology, and that the assumption made by Darwin and his successors that evolution is a pure process of diversification [or forking of species from a common ancestor, as if companies only developed by spin-offs, and never did mergers and acquisitions] may be a simplification that, while it makes the taxonomist's job easier, is not warranted by the evidence.)

Many forms of life on Earth are not born from the egg as small versions of their adult form. Instead, they are born as larvae, which are often radically different in form from the adult. The best known example is moths and butterflies, which hatch as caterpillars, and subsequently reassemble themselves into the winged insects which mate and produce eggs that hatch into the next generation of caterpillars. Larvae are not restricted to the Arthropoda and other icky phyla: frogs and toads are born as tadpoles and live in one body form, then transform into quite different adults. Even species, humans included, which are born as little adults, go through intermediate stages as developing embryos which have the characteristics of other, quite different species.

Now, when you look closely at this (and many will be deterred, because a great many of the larvae and the species they mature into are rather dreadful), you'll find a long list of curious things which have puzzled naturalists all the way back to Darwin and before. There are numerous examples of species which closely resemble one another and are classified by taxonomists in the same genus which have larvae which are entirely different from one another—so much so that if the larvae were classified by themselves, they would probably be put into different classes or phyla. There are almost identical larvae which develop into species only distantly related. Closely related species include those with one or more larval forms, and others which develop directly: hatching as small individuals already with the adult form. And there are animals which, in their adult form, closely resemble the larvae of other species.

What a mess—but then biology is usually messy! The author, an expert on marine invertebrates (from which the vast majority of examples in this book are drawn), argues that there is a simple explanation for all of these discrepancies and anomalies, one which, if you aren't a biologist yourself, may have already occurred to you—that larvae (and embryonic forms) are the result of a hybridisation or merger of two unrelated species, with the result being a composite which hatches in one form and then subsequently transforms into the other. The principle of natural selection would continue to operate on these inter-specific mergers, of course: complicating or extending the development process of an animal before it could reproduce would probably be selected out, but, on the other hand, adding a free-floating or swimming larval form to an animal whose adult crawls on the ocean bottom or remains fixed to a given location like a clam or barnacle could confer a huge selective advantage on the hybrid, and equip it to ride out mass extinction events because the larval form permitted the species to spread to marginal habitats where it could survive the extinction event.

The acquisition of a larva by successful hybridisation could spread among the original species with no larval form not purely by differential selection but like a sexually transmitted disease—in other words, like wildfire. Note that many marine invertebrates reproduce simply by releasing their eggs and sperm into the sea and letting nature sort it out; consequently, the entire ocean is a kind of promiscuous pan-specific singles bar where every pelagic and benthic creature is trying to mate, utterly indiscriminately, with every other at the whim of the wave and current. Most times, as in singles bars, it doesn't work out, but suppose sometimes it does?

You have to assume a lot of improbable things for this to make sense, the most difficult of which is that you can combine the sperm and egg of vastly different creatures and (on extremely rare occasions) end up with a hybrid which is born in the form of one and then, at some point, spontaneously transforms into the other. But ruling this out (or deciding it's plausible) requires understanding the “meta-program” of embryonic development—until we do, there's always the possibility we'll slap our foreheads when we realise how straightforward the mechanism is which makes this work.

One thing is clear: this is real science; the author makes unambiguous predictions about biology which can be tested in a variety of ways: laboratory experiments in hybridisation (on p. 213–214 he advises those interested in how to persuade various species to release their eggs and sperm), analysis of genomes (which ought to show evidence of hybridisation in the past), and detailed comparison of adult species which are possible progenitors of larval forms with larvae of those with which they may have hybridised.

If you're insufficiently immersed in the utter weirdness of life forms on this little sphere we inhabit, there is plenty here to astound you. Did you know, for example, about Owenia fusiformis (p. 72), whose “cataclysmic metamorphosis” puts the chest-burster of Alien to shame: the larva develops an emerging juvenile worm which, in less than thirty seconds, turns itself inside-out and swallows the larva, devouring it in fifteen minutes. The larva does not “develop into” the juvenile, as is often said; it is like the first stage of a rocket which is discarded after it has done its job. How could this have evolved smoothly by small, continuous changes? For sheer brrrr factor, it's hard to beat the nemertean worms, which develop from tiny larvae into adults some of which exceed thirty metres in length (p. 87).

The author is an expert, and writes for his peers. There are many paragraphs like the following (p. 189), which will send you to the glossary at the end of the text (don't overlook it—otherwise you'll spend lots of time looking up things on the Web).

Adult mantis shrimp (Stomatapoda) live in burrows. The five anterior thoracic appendages are subchelate maxillipeds, and the abdomen bears pleopods and uropods. Some hatch as antizoeas: planktonic larvae that swim with five pairs of biramous thoracic appendages. These larvae gradually change into pseudozoeas, with subchelate maxillipeds and with four or five pairs of natatory pleopods. Other stomatopods hatch as pseudozoeas. There are no uropods in the larval stages. The lack of uropods and the form of the other appendages contrasts with the condition in decapod larvae. It seems improbable that stomatopod larvae could have evolved from ancestral forms corresponding to zoeas and megalopas, and I suggest that the Decapoda and the Stomatopoda acquired their larvae from different foreign sources.
In addition to the zoö-jargon, another deterrent to reading this book is the cost: a list price of USD 109, quoted at Amazon.com at this writing at USD 85, which is a lot of money for a 260 page monograph, however superbly produced and notwithstanding its small potential audience; so fascinating and potentially significant is the content that one would happily part with USD 15 to read a PDF, but at prices like this one's curiosity becomes constrained by the countervailing virtue of parsimony. Still, if Williamson is right, some of the fundamental assumptions underlying our understanding of life on Earth for the last century and a half may be dead wrong, and if his conjecture stands the test of experiment, we may have at hand an understanding of mysteries such as the Cambrian explosion of animal body forms and the apparent “punctuated equilibria” in the fossil record. There is a Nobel Prize here for somebody who confirms that this supposition is correct. Lynn Margulis, whose own theory of the origin of eukaryotic cells by the incorporation of previously free-living organisms as endosymbionts is now becoming the consensus view, co-authors a foreword which endorses Williamson's somewhat similar view of larvae.

July 2006 Permalink

Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. ISBN 1-57955-008-8.
The full text of this book may now be read online.

August 2002 Permalink