January 2007

Wade, Nicholas. Before The Dawn. New York: Penguin Press, 2006. ISBN 1-59420-079-3.
Modern human beings, physically very similar to people alive today, with spoken language and social institutions including religion, trade, and warfare, had evolved by 50,000 years ago, yet written historical records go back only about 5,000 years. Ninety percent of human history, then, is “prehistory” which paleoanthropologists have attempted to decipher from meagre artefacts and rare discoveries of human remains. The degree of inference and the latitude for interpretation of this material have rendered conclusions drawn from it highly speculative and tentative. But in the last decade this has begun to change.

While humans only began to write the history of their species in the last 10% of their presence on the planet, the DNA that makes them human has been patiently recording their history in a robust molecular medium which humans have only recently, with the ability to determine the sequence of the genome, learnt to read. This has provided a new, largely objective window on human history and origins, and has both confirmed results teased out of the archæological record over the centuries and yielded a series of stunning surprises which are probably only the first of many to come.

Each individual's genome is a mix of genes inherited from their father and mother, plus a few random changes (mutations) due to errors in the process of transcription. The separate genome of the mitochondria (energy producing organelles) in their cells is inherited exclusively from the mother, and in males, the Y chromosome (except for the very tips) is inherited directly from the father, unmodified except for mutations. In an isolated population whose members breed mostly with one another, members of the group will come to share a genetic signature which reflects natural selection for reproductive success in the environment they inhabit (climate, sources of food, endemic diseases, competition with other populations, etc.) and the effects of random “genetic drift” which acts to reduce genetic diversity, particularly in small, isolated populations. Random mutations appear in certain parts of the genome at a reasonably constant rate, which allows them to be used as a “molecular clock” to estimate the time elapsed since two related populations diverged from their last common ancestor. (This is biology, so naturally the details are fantastically complicated, messy, subtle, and difficult to apply in practice, but the general outline is as described above.)
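The molecular clock reasoning can be reduced to simple arithmetic: if neutral mutations accumulate at a roughly constant per-site rate, the fraction of sites that differ between two lineages estimates the number of generations since they shared a common ancestor. Here is a toy sketch of that calculation; the mutation rate and generation time below are illustrative round numbers chosen for the example, not real calibrations, and real analyses must contend with all the biological messiness the paragraph above alludes to.

```python
# Toy "molecular clock": estimate divergence time from sequence differences.
# Both the mutation rate and the generation time are illustrative assumptions.

MUTATION_RATE = 2.0e-8   # hypothetical mutations per site per generation
GENERATION_YEARS = 25    # assumed average human generation time

def divergence_years(observed_differences, sites_compared):
    """Estimate years since two lineages shared a common ancestor.

    Mutations accumulate independently along *both* branches since the
    split, hence the factor of 2 in the denominator.
    """
    differences_per_site = observed_differences / sites_compared
    generations = differences_per_site / (2 * MUTATION_RATE)
    return generations * GENERATION_YEARS

# For example, 40 differences across a 500,000-site region of the genome:
print(round(divergence_years(40, 500_000)), "years since divergence")
```

With these made-up parameters, 40 differences over half a million sites works out to about 50,000 years, which is the order of magnitude of the dates quoted throughout the book.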

Even without access to the genomes of long-dead ancestors (which are difficult in the extreme to obtain and fraught with potential sources of error), the genomes of current populations provide a record of their ancestry, geographical origin, migrations, conquests and subjugations, isolation or intermarriage, diseases and disasters, population booms and busts, sources of food, and, by inference, language, social structure, and technologies. This book provides a look at the current state of research in the rapidly expanding field of genetic anthropology, and it makes for an absolutely compelling narrative of the human adventure. Obviously, in a work where the overwhelming majority of source citations are to work published in the last decade, this is a description of work in progress and most of the deductions made should be considered tentative pending further results.

Genomic investigation has shed light on puzzles as varied as the size of the initial population of modern humans who left Africa (almost certainly less than 1000, and possibly a single hunter-gatherer band of about 150), the date when wolves were domesticated into dogs and where it happened, the origin of wheat and rice farming, the domestication of cattle, the origin of surnames in England, and the genetic heritage of the randiest conqueror in human history, Genghis Khan, who, based on Y chromosome analysis, appears to have about 16 million living male descendants today.

Some of the results from molecular anthropology run the risk of being so at variance with the politically correct ideology of academic soft science that the author, a New York Times reporter, tiptoes around them with the mastery of prose which on other topics he deploys toward their elucidation. Chief among these is the discussion of the microcephalin and ASPM genes on pp. 97–99. (Note that genes are often named based on syndromes which result from deleterious mutations within them, and hence bear names opposite to their function in the normal organism. For example, the gene which triggers the cascade of eye formation in Drosophila is named eyeless.) Both of these genes appear to regulate brain size and, in particular, the development of the cerebral cortex, which is the site of higher intelligence in mammals. Specific alleles of these genes are of recent origin, and are unequally distributed geographically among the human population. Haplogroup D of microcephalin appeared in the human population around 37,000 years ago (all of these estimates have a large margin of error), which is just about the time when quintessentially modern human behaviour such as cave painting appeared in Europe. Today, about 70% of the population of Europe and East Asia carry this allele, but its incidence in populations in sub-Saharan Africa ranges from 0 to 25%. The ASPM gene exists in two forms: a “new” allele which arose only about 5800 years ago (coincidentally[?] just about the time when cities, agriculture, and written language appeared), and an “old” form which predates this period. Today, the new allele occurs in about 50% of the population of the Middle East and Europe, but hardly at all in sub-Saharan Africa. Draw your own conclusions from this about the potential impact on human history when germline gene therapy becomes possible, and why opposition to it may not be the obvious ethical choice.


Derbyshire, John. Unknown Quantity. Washington: Joseph Henry Press, 2006. ISBN 0-309-09657-X.
After exploring a renowned mathematical conundrum (the Riemann Hypothesis) in all its profundity in Prime Obsession (June 2003), in this book the author recounts the history of algebra—an intellectual quest sprawling over most of recorded human history and occupying some of the greatest minds our species has produced. Babylonian cuneiform tablets dating from the time of Hammurabi, about 3800 years ago, demonstrate solving quadratic equations, extracting square roots, and finding Pythagorean triples. (The methods in the Babylonian texts are recognisably algebraic but are expressed as “word problems” instead of algebraic notation.) Diophantus, about 2000 years later, was the first to write equations in a symbolic form, but this was promptly forgotten. In fact, twenty-six centuries after the Babylonians were solving quadratic equations expressed in word problems, al-Khwārizmī (the word “algebra” is derived from the title of his book,
الكتاب المختصر في حساب الجبر والمقابلة
al-Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa-l-muqābala,
and “algorithm” from his name) was solving quadratic equations in word problems. It wasn't until around 1600 that anything resembling the literal symbolism of modern algebra came into use, and it took an intellect of the calibre of René Descartes to perfect it. Finally, equipped with an expressive notation, rules for symbolic manipulation, and the slowly dawning realisation that this, not numbers or geometric figures, is ultimately what mathematics is about, mathematicians embarked on a spiral of abstraction, discovery, and generalisation which has never ceased to accelerate in the centuries since. As more and more mathematics was discovered (or, if you're an anti-Platonist, invented), deep and unexpected connections were found among topics once considered unrelated, and this is a large part of the story told here, as algebra has “infiltrated” geometry, topology, number theory, and a host of other mathematical fields while, in the form of algebraic geometry and group theory, providing the foundation upon which the most fundamental theories of modern physics are built.

With all of these connections, there's a strong temptation for an author to wander off into fields not generally considered part of algebra (for example, analysis or set theory); Derbyshire is admirable in his ability to stay on topic, while not shortchanging the reader where important cross-overs occur. In a book of this kind, especially one covering such a long span of history and a topic so broad, it is difficult to strike the right balance between explaining the mathematics and sketching the lives of the people who did it, and between a historical narrative and one which follows the evolution of specific ideas over time. In the opinion of this reader, Derbyshire's judgement on these matters is impeccable. As implausible as it may seem to some that a book about algebra could aspire to such a distinction, I found this one of the more compelling page-turners I've read in recent months.

Six “math primers” interspersed in the text provide the fundamentals the reader needs to understand the chapters which follow. While excellent refreshers, readers who have never encountered these concepts before may find the primers difficult to comprehend (but then, they probably won't be reading a history of algebra in the first place). Thirty pages of end notes not only cite sources but expand, sometimes at substantial length, upon the main text; readers should not deprive themselves of this valuable lagniappe.


Florey, Kitty Burns. Sister Bernadette's Barking Dog. Hoboken, NJ: Melville House, 2006. ISBN 1-933633-10-7.
In 1877, Alonzo Reed and Brainerd Kellogg published Higher Lessons in English, which introduced their system for the grammatical diagramming of English sentences. For example, the sentence “When my father and my mother forsake me, then the Lord will take me up” (an example from Lesson 63 of their book) would be diagrammed as:
[Diagram of the sentence in the Reed and Kellogg system, by Bruce D. Despain.]

The idea was to make the grammatical structure of the sentence immediately evident, sharpening students' skills in parsing sentences and rendering grammatical errors apparent. This seems to have been one of those cases when an idea springs upon a world which has, without knowing it, been waiting for just such a thing. Sentence diagramming spread through U.S. schools like wildfire—within a few years Higher Lessons and the five other books on which Reed and Kellogg collaborated were selling at the astonishing rate of half a million copies a year, and diagramming was firmly established in the English classes of children across the country and remained so until the 1960s, when it evaporated almost as rapidly as it had appeared.

The author and I are both members of the last generation who were taught sentence diagramming at school. She remembers it as having been “fun” (p. 15), something which was not otherwise much in evidence in Sister Bernadette's sixth grade classroom. I learnt diagramming in the seventh grade, and it's the only part of English class that I recall having enjoyed. (Gertrude Stein once said [p. 73], “I really do not know anything that has ever been more exciting than diagramming sentences.” I don't think I'd go quite that far myself.) In retrospect, it seems an odd part of the curriculum: we spent about a month furiously parsing and diagramming, then dropped the whole thing and never took it up again that year or afterwards; I can't recall ever diagramming a sentence since.

This book, written by an author and professional copy editor, charmingly recounts the origin, practice, curiosities, and decline of sentence diagramming, and introduces the reader to stalwarts who are keeping it alive today. There is a wealth of examples from literature, including the 93-word concluding sentence of Proust's Time Regained, which appears as a two-page spread (pp. 94–95). (The author describes seeing a poster from the 1970s which diagrams a 958-word Proust sentence without an explicit subject.)

Does diagramming make one a better writer? The general consensus, which the author shares, is that it doesn't, which may explain why it is rarely taught today. While a diagram shows the grammatical structure of a sentence, you already have to understand the rules of grammar in order to diagram it, and you can make perfectly fine looking diagrams of barbarisms such as “Me and him gone out.” Also, as a programmer, it disturbs me that one cannot always unambiguously recover the word order of the original sentence from a diagram; this is not a problem with the tree diagrams used by linguists today. But something doesn't have to be useful to be fun (even if not, as it was to Gertrude Stein, exciting), and the structure of a complex sentence graphically elucidated on a page is marvellous to behold and rewarding to create. I'm sure some may disdain those of us who find entertainment in such arcane intellectual endeavours; after all, the first name of the co-inventor of diagramming, Brainerd Kellogg, includes both the words “brain” and “nerd”!

The author's remark on p. 120, “…I must confess that I like editing my own work more than I do writing it. I find first drafts painful; what I love is to revise and polish. Sometimes I think I write simply to have the fun of editing what I've written.” is one I share, as Gertrude Stein put it (p. 76), “completely entirely completely”—and it's a sentiment I don't ever recall seeing in print before. I think the fact that students aren't taught that a first draft is simply the raw material of a cogent, comprehensible document is why we encounter so many hideously poorly written documents on the Web.

The complete text of the 1896 Revised Edition of Reed and Kellogg's Higher Lessons in English is available from Project Gutenberg; the diagrams are rendered as ASCII art and a little difficult to read until you get used to them. Eugene R. Moutoux, who constructed the diagrams for the complicated sentences in Florey's book, has a wealth of information about sentence diagramming on his Web site, including diagrams of famous first-page sentences from literature such as this beauty from Nathaniel Hawthorne's The Scarlet Letter.


Ponting, Clive. Gunpowder. London: Pimlico, 2005. ISBN 1-8441-3543-8.
When I was a kid, we learnt in history class that gunpowder had been discovered in the thirteenth century by the English Franciscan monk Roger Bacon, who is considered one of the founders of Western science. The Chinese were also said to have known of gunpowder, but used it only for fireworks, as opposed to the applications in the fields of murder and mayhem the more clever Europeans quickly devised. In The Happy Turning (July 2003), H. G. Wells remarked that “truth has a way of heaving up through the cracks of history”, and so it has been with the origin of gunpowder, as recounted here.

It is one of those splendid ironies that gunpowder, which, along with its more recent successors, has contributed to the slaughter of more human beings than any other invention with the exception of government, was discovered in the 9th century A.D. by Taoist alchemists in China who were searching for an elixir of immortality (and, in fact, gunpowder continued to be used as a medicine in China for centuries thereafter). But almost as soon as the explosive potential of gunpowder was discovered, the Chinese began to apply it to weapons and, over the next couple of centuries, had invented essentially every kind of firearm and explosive weapon which exists today.

Gunpowder is not a high explosive; it does not detonate in a supersonic shock wave as do substances such as nitroglycerine and TNT, but rather deflagrates, or burns rapidly, as the heat of combustion causes the release of the oxygen in the nitrate compound in the mix. If confined, of course, the rapid release of gases and heat can cause a container to explode, but the rapid combustion of gunpowder also makes it suitable as a propellant in guns and rockets. The early Chinese formulations used a relatively small amount of saltpetre (potassium nitrate), and were used in incendiary weapons such as fire arrows, fire lances (a kind of flamethrower), and incendiary bombs launched by catapults and trebuchets. Eventually the Chinese developed high-nitrate mixes which could be used in explosive bombs, rockets, guns, and cannon (which were perfected in China long before the West, where the technology of casting iron did not appear until two thousand years after it was known in China).

From China, gunpowder technology spread to the Islamic world, where bombardment by a giant cannon contributed to the fall of Constantinople to the Ottoman Empire. Knowledge of gunpowder almost certainly reached Europe via contact with the Islamic invaders of Spain. The first known European document giving its formula, whose disarmingly candid Latin title Liber Ignium ad Comburendos Hostes translates to “Book of Fires for the Burning of Enemies”, dates from about 1300 and contains a number of untranslated Arabic words.

Gunpowder weapons soon became a fixture of European warfare, but crude gun fabrication and weak powder formulations initially limited their use mostly to huge siege cannons which launched large stone projectiles against fortifications at low velocity. But as weapon designs and the strength of powder improved, the balance in siege warfare shifted from the defender to the attacker, and the consolidation of power in Europe began to accelerate.

The author argues persuasively that gunpowder played an essential part in the emergence of the modern European state, because the infrastructure needed to produce saltpetre, manufacture gunpowder weapons in quantity, equip, train, and pay ever-larger standing armies required a centralised administration with intrusive taxation and regulation which did not exist before. Once these institutions were in place, they conferred such a strategic advantage that the ruler was able to consolidate and expand the area of control at the expense of previously autonomous regions, until coming up against another such “gunpowder state”.

Certainly it was gunpowder weapons which enabled Europeans to conquer colonies around the globe and eventually impose their will on China, where centuries of political stability had caused weapons technology to stagnate by comparison with that of conflict-ridden Europe.

It was not until the nineteenth century that other explosives and propellants discovered by European chemists brought the millennium-long era of gunpowder to a close. Gunpowder shaped human history as have few other inventions. This excellent book recounts that story from gunpowder's accidental invention as an elixir to its replacement by even more destructive substances, and provides a perspective on a thousand years of world history in terms of the weapons with which so much of it was created.


Walden, George. Time to Emigrate? London: Gibson Square, 2006. ISBN 1-90393393-5.
Readers of Theodore Dalrymple's Life at the Bottom and Our Culture, What's Left of It may have thought his dire view of the state of civilisation in Britain to have been unduly influenced by his perspective as a prison and public hospital physician in one of the toughest areas of Birmingham, England. Here we have, if not the “view from the top”, a brutally candid evaluation written by a former Minister of Higher Education in the Thatcher government and Conservative member of the House of Commons from 1983 until his retirement in 1997, and it is, if anything, more disturbing.

The author says of himself (p. 219), “My life began unpromisingly, but everything's always got better. … In other words, in personal terms I've absolutely no complaints.” But he is deeply worried about whether his grown children and their children can have the same expectations in the Britain of today and tomorrow. The book is written in the form of a long (224 page) and somewhat rambling letter to a fictional son and his wife who are pondering emigrating from Britain after their young son was beaten into unconsciousness by immigrants within sight of their house in London. He describes his estimation of the culture, politics, and economy of Britain as much like the work of a house surveyor: trying to anticipate the problems which may befall those who choose to live there. Wherever he looks: immigration, multiculturalism, education, transportation, the increasingly debt-supported consumer economy, public health services, mass media, and the state of political discourse, he finds much to fret about. But this does not come across as the sputtering of an ageing Tory, but rather a thoroughly documented account of how most of the things which the British have traditionally valued (and have attracted immigrants to their shores) have eroded during his lifetime, to such an extent that he can no longer believe that his children and grandchildren will have the same opportunities he had as a lower middle class boy born twelve days after Britain declared war on Germany in 1939.

The curious thing about emigration from the British Isles today is that it's the middle class that is bailing out. Over most of history, it was the lower classes seeking opportunity (or in the case of my Irish ancestors, simply survival) on foreign shores, and the surplus sons of the privileged classes hoping to found their own dynasties in the colonies. But now, it's the middle that's being squeezed out, and it's because the collectivist state is squeezing them for all they're worth. The inexorably growing native underclass and immigrants benefit from government services and either don't have the option to leave or else consider their lot in life in Britain far better than whence they came. The upper classes can opt out of the sordid shoddiness and endless grey queues of socialism; on p. 153 the author works out the cost: for a notional family of two parents and two children, “going private” for health care, education for the kids, transportation, and moving to a “safe neighbourhood” would roughly require doubling income from what such a typical family brings home.

Is it any wonder we have so many billionaire collectivists (Buffett, Gates, Soros, etc.)? They don't have to experience the sordid consequences of their policies, but by advocating them, they can recruit the underclass (who benefit from them and are eventually made dependent and unable to escape from helotry) to vote them into power and keep them there. And they can exult in virtue as their noble policies crush those who might aspire to their own exalted station. The middle class, who pay for all of this, forced into minority, retains only the franchise which is exercised through shoe leather on pavement, and begins to get out while the property market remains booming and the doors are still open.

The author is anything but a doctrinaire Tory; he has, in fact, quit the party, and savages its present “100% Feck-Free” (my term) leader, David Cameron, as, among other things, a “transexualised [Princess] Diana” (p. 218). As an emigrant myself, albeit from a different country, I think his conclusion and final recommendation couldn't be wiser (and I'm sorry if this is a spoiler, but if you're considering such a course you should read this book cover to cover anyway): go live somewhere else (I'd say, anywhere else) and see how you like it. You may discover that you're obsessed with what you miss and join the “International Club” (which usually means the place they speak the language of the Old Country), or you may find that after struggling with language, customs, and how things are done, you fit in rather well and, after a while, find most of your nightmares are about things in the place you left instead of the one you worried about moving to. There's no way to know—it could go either way. I think the author, as many people, may have put somewhat more weight on the question of emigration than it deserves. I've always looked at countries like any other product. I've never accepted that because I happened to be born within the borders of some state to whose creation and legitimacy I never personally consented, that I owe it any obligation whatsoever apart from those in compensation for services provided directly to me with my assent. Quitting Tyrania to live in Freedonia is something anybody should be able to do, assuming the residents of Freedonia welcome you, and it shouldn't occasion any more soul-searching on the part of the emigrant than somebody choosing to trade in their VW bus for a Nissan econobox because the 1972 bus was a shoddy crapwagon. Yes, you should worry and even lose sleep over all the changes you'll have to make, but there's no reason to gum up an already difficult decision process by cranking all kinds of guilt into it.
Nobody (well, nobody remotely sane) gets all consumed by questions of allegiance, loyalty, or heritage when deciding whether their next computer will run Windows, MacOS, Linux, or FreeBSD. It seems to me that once you step back from the flags and anthems and monuments and kings and presidents and prime ministers and all of the other atavistic baggage of the coercive state, it's wisest to look at your polity like an operating system; it's something that you have to deal with (increasingly, as the incessant collectivist ratchet tightens the garrote around individuality and productivity), but you still have a choice among them, and given how short is our tenure on this planet, we shouldn't waste a moment of it living somewhere that callously exploits our labours in the interest of others. And, the more productive people exercise that choice, the greater the incentive is for the self-styled rulers of the various states to create an environment which will attract people like ourselves.

Many of the same issues are discussed, from a broader European perspective, in Claire Berlinski's Menace in Europe and Mark Steyn's America Alone. To fend off queries, I emigrated from what many consider the immigration magnet of the world in 1991 and have never looked back and rarely even visited the old country except for business and family obligations. But then I suspect, as the author notes on p. 197, I am one of those D4-7 allele people (look it up!) who thrive on risk and novelty; I'm not remotely claiming that this is better—Heaven knows we DRD4 7-repeat folk have caused more than our cohort's proportion of chaos and mayhem, but we just can't give it up—this is who we are.


Card, Orson Scott. Empire. New York: Tor, 2006. ISBN 0-765-31611-0.
I first heard of this novel in an Instapundit podcast interview with the author, with whom I was familiar, having read and admired Ender's Game when it first appeared in 1977 as a novelette in Analog (it was later expanded and published as a novel in 1985) and several of his books since then. I'd always pigeonholed him as a science fictioneer, so I was somewhat surprised to learn that his latest effort was a techno-thriller in the Tom Clancy vein, with the flabbergasting premise of a near future American civil war pitting the conservative “red states” against the liberal “blue states”. The interview, which largely stayed away from the book, was interesting and since I'd never felt let down by any of Card's previous work (although none of it that I'd read seemed to come up to the level of Ender's Game, but then I've read only a fraction of his prolific output), I decided to give it a try.
Spoiler warning: Plot and/or ending details follow.  
The story is set in the very near future: a Republican president detested by the left and reviled in the media is in the White House, the Republican nomination for his successor is a toss-up, and a ruthless woman is the Democratic front-runner. In fact, unless this is an alternative universe with a different calendar, we can identify the year as 2008, since that's the only presidential election year on which June 13th falls on a Friday until 2036, by which date it's unlikely Bill O'Reilly will still be on the air.

The book starts out with a bang and proceeds as a tautly-plotted, edge of the seat thriller which I found more compelling than any of Clancy's recent doorstop specials. Then, halfway through chapter 11, things go all weird. It's like the author was holding his breath and chanting the mantra “no science fiction—no science fiction” and then just couldn't take it any more, explosively exhaled, drew a deep breath, and furiously started pounding the keys. (This is not, in fact, what happened, but we don't find that out until the end material, which I'll describe below.) Anyway, everything is developing as a near-future thriller combined with a “who do you trust” story of intrigue, and then suddenly, on p. 157, our heroes run into two-legged robotic Star Wars-like imperial walkers on the streets of Manhattan and, before long, storm troopers in space helmets and body armour, death rays that shoot down fighter jets, and later, “hovercycles”—yikes.

We eventually end up at a Bond villain redoubt in Washington State built by a mad collectivist billionaire clearly patterned on George Soros, for a final battle in which a small band of former Special Ops heroes take on all of the villains and their futuristic weaponry by grit and guile. If you like this kind of stuff, you'll probably like this. The author lost me with the imperial walkers, and it has nothing to do with my last name, or my anarchist proclivities.

May we do a little physics here? Let's take a closer look at the lair of the evil genius, hidden under a reservoir formed by a boondoggle hydroelectric dam “near Highway 12 between Mount St. Helens and Mount Rainier” (p. 350). We're told (p. 282) that the entry to the facility is hidden beneath the surface of the lake formed in a canyon behind a dam, and access to it is provided by pumping water from the lake to another, smaller lake in an adjacent canyon. The smaller lake is said to be two miles long, and exposing the entrance to the rebels' headquarters causes the water to rise fifteen feet in that lake. The width of the smaller lake is never given, but most of the natural lakes in that region seem to be long and skinny, so let's guess it's only a tenth as wide as it is long, or about 300 yards wide. The smaller lake is said to be above the lake which conceals the entrance, so to expose the door would require pumping a chunk of water we can roughly estimate (assuming the canyon is rectangular) at 2 miles by 300 yards by fifteen feet. Transforming all of these imperial (there's that word again!) measures into something comprehensible, we can compute the volume of water as about 4 million cubic metres or, as the boots on the ground would probably put it, about a billion gallons. This is a lot of water.

A cubic metre of water weighs 1000 kg, or a metric ton, so in order to expose the door, the villains would have to pump 4 billion kilograms of water uphill at least 15 feet (because the smaller lake is sufficiently above the facility to allow it to be flooded [p. 308] it would almost certainly be much more, but let's be conservative)—call it 5 metres. Now the energy required to raise this quantity of water 5 metres against the Earth's gravitation is just the product of the mass (4 billion kilograms), the distance (5 metres), and gravitational acceleration of 9.8 m/s², which works out to about 200 billion joules, or 54 megawatt-hours. If the height difference were double our estimate, double these numbers. Now to pump all of that water uphill in, say, half an hour (which seems longer than the interval in which it happens on pp. 288–308) will require about 100 megawatts of power, and that's assuming the pumps are 100% efficient and there are no frictional losses in the pipes. Where does the power come from? It can't come from the hydroelectric dam, since in order to generate the power to pump the water, you'd need to run a comparable amount of water through the dam's turbines (less because the drop would be greater, but then you have to figure in the efficiency of the turbines and generators, which is about 80%), and we've already been told that dumping the water over the dam would flood communities in the valley floor. If they could store the energy from letting the water back into the lower lake, then they could re-use it (less losses) to pump it back uphill again, but there's no way to store anything like that kind of energy—in fact, pumping water uphill and releasing it through turbines is the only practical way to store large quantities of electricity, and once the water is in the lower lake, there's no place to put the power. 
We've already established that there are no heavy duty power lines running to the area, as that would be immediately suspicious (of course, it's also suspicious that there aren't high tension lines running from what's supposed to be a hydroelectric dam, but that's another matter). And if the evil genius had invented a way to efficiently store and release power on that scale, he wouldn't need to start a civil war—he could just about buy the government with the proceeds from such an invention.
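For readers who want to replay the arithmetic, here is a short Python sketch using the same rough estimates as above (the dimensions are my guesses from the text, not figures stated in the novel):

```python
# Replaying the review's estimates: a slug of water 2 miles long,
# 300 yards wide, and 15 feet deep, lifted a conservative 5 metres.
G = 9.8                       # gravitational acceleration, m/s^2

length = 2 * 1609.344         # 2 miles in metres
width  = 300 * 0.9144         # 300 yards in metres
depth  = 15 * 0.3048          # 15 feet in metres

volume = length * width * depth   # cubic metres
mass   = volume * 1000.0          # kg: 1 m^3 of water is 1000 kg
lift   = 5.0                      # metres of head (conservative)

energy = mass * G * lift          # joules
power  = energy / 1800.0          # watts, if pumped in half an hour

print(f"volume ~ {volume / 1e6:.1f} million m^3")
print(f"energy ~ {energy / 1e9:.0f} billion J = {energy / 3.6e9:.1f} MWh")
print(f"power  ~ {power / 1e6:.0f} MW (before pump and pipe losses)")
```

The printed figures round to the ones quoted above: about 4 million cubic metres, about 200 billion joules, and on the order of 100 megawatts for a half-hour pump-down.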

Spoilers end here.  
Call me picky—“You're picky!”—feel better now?—but I just cannot let this go unremarked. On p. 248, one character likens another to Hari Seldon in Isaac Asimov's Foundation novels. But it's spelt “Hari Selden”, and it's not a typo, because the name is given the same wrong way three times on the same page! Now I'd excuse such a goof by a thriller scribbler recalling science fiction he'd read as a kid, but this guy is a distinguished science fiction writer who has won the Hugo Award four times, and this book is published by Tor Books, the pre-eminent specialist science fiction press; don't they have an editor on staff who's familiar with one of the universally acknowledged classics of the genre and winner of the unique Hugo for Best All-Time Series?

One becomes accustomed to low expectations for science fiction novel cover art, but expects a slightly higher standard for techno-thrillers. The image on the dust jacket has absolutely nothing whatsoever to do with any scene in the book. It looks like a re-mix of several thriller covers chosen at random.

It is only on p. 341, in the afterword, that we learn this novel was commissioned as part of a project to create an “entertainment franchise”, and on p. 349, in the acknowledgements, that this is, in fact, the scenario of a video game already under development when the author joined the team. Frankly, it shows. As befits the founding document of an “entertainment franchise”, the story ends by setting the stage for the sequel, although at least to this reader the plot for the first third of that work seems transparently obvious; but then Card is a master of the gob-smacking switcheroo, as the present work demonstrates. In any case, what we have here appears to be Volume One of a series of alternative future political/military novels like Allen Drury's Advise and Consent series. While that novel won a Pulitzer Prize, the sequels rapidly degenerated into shrill right-wing screeds. In Empire Card is reasonably even-handed, although his heterodox personal views are apparent. I hope the inevitable sequels come up to that standard, but I doubt I'll be reading them.


February 2007

Lukacs, John. Five Days in London. New Haven, CT: Yale University Press, 1999. ISBN 0-300-08466-8.
Winston Churchill titled the fourth volume of his memoirs of The Second World War, describing the events of 1942, The Hinge of Fate. Certainly, in the military sense, it was in that year that the tide turned in favour of the allies—the entry of the United States into the war and the Japanese defeat in the Battle of Midway, Germany's failure at Stalingrad and the beginning of the disastrous consequences for the German army, and British defeat of Rommel's army at El Alamein together marked what Churchill described as “…not the end, nor is it even the beginning of the end, but it is, perhaps, the end of the beginning.”

But in this book, distinguished historian John Lukacs argues that the true “hinge of fate” not only of World War II, but for Western civilisation against Nazi tyranny, occurred in the five days of 24–28 May of 1940, not on the battlefields in France, but in London, around conference tables, in lunch and dinner meetings, and walks in the garden. This was a period of unmitigated, accelerating disaster for the French army and the British Expeditionary Force in France: the channel ports of Boulogne and Calais fell to the Germans, the King of Belgium capitulated to the Nazis, and more than three hundred thousand British and French troops were surrounded at Dunkirk, the last channel port still in Allied hands. Despite plans for an evacuation, as late as May 28, Churchill estimated that at most about 50,000 could be evacuated, with all the rest taken prisoner and all the military equipment lost. In his statement in the House of Commons that day, he said, “Meanwhile, the House should prepare itself for hard and heavy tidings.” It was only in the subsequent days that the near-miraculous evacuation was accomplished, with a total of 338,226 soldiers rescued by June 3rd.

And yet it was in these darkest of days that Churchill vowed that Britain would fight on, alone if necessary (which seemed increasingly probable), to the very end, whatever the cost or consequences. On May 31st, he told French premier Paul Reynaud, “It would be better far that the civilisation of Western Europe with all of its achievements should come to a tragic but splendid end than that the two great democracies should linger on, stripped of all that made life worth living.” (p. 217).

From Churchill's memoirs and those of other senior British officials, contemporary newspapers, and most historical accounts of the period, one gains the impression of a Britain unified in grim resolve behind Churchill to fight on until ultimate victory or annihilation. But what actually happened in those crucial War Cabinet meetings as the disaster in France was unfolding? Oddly, the memoirs and collected papers of the participants are nearly silent on the period, with the author describing the latter as having been “weeded” after the fact. It was not until the minutes of the crucial cabinet meetings were declassified in 1970 (thanks to a decision by the British government to reduce the “closed period” of such records from fifty to thirty years), that it became possible to reconstruct what transpired there. This book recounts a dramatic and fateful struggle of which the public and earlier historians of the period were completely unaware—a moment when Hitler may have come closer to winning the war than at any other.

The War Cabinet was, in fact, deeply divided. Churchill, who had been Prime Minister for only two weeks, was in a precarious position, with his predecessor Neville Chamberlain and the Foreign Secretary Lord Halifax, whom King George VI had preferred to Churchill for Prime Minister, as members, along with Labour leaders Clement Attlee and Arthur Greenwood. Halifax did not believe that Britain could resist alone, and thought that fighting on would surely result in the loss of the Empire and perhaps of independence and liberty in Britain as well. He argued vehemently for an approach, either by Britain and France together or by Britain alone, to Mussolini, with the goal of keeping Italy out of the war and making some kind of deal with Hitler which would preserve independence and the Empire, and he met on several occasions with the Italian ambassador in London to explore such possibilities.

Churchill opposed any effort to seek mediation, either by Mussolini or Roosevelt, both because he thought the chances of obtaining acceptable terms from Hitler were “a thousand to one against” (May 28, p. 183) and because any approach would put Britain on a “slippery slope” (Churchill's words in the same meeting) from which it would be impossible to restore the resolution to fight rather than make catastrophic concessions. But this was a pragmatic decision, not a Churchillian declaration of “never, never, never, never”. In the May 26 War Cabinet meeting (p. 113), Churchill made the rather astonishing statement that he “would be thankful to get out of our present difficulties on such terms, provided we retained the essentials and the elements of our vital strength, even at the cost of some territory”. One can understand why the personal papers of the principals were so carefully weeded.

Speaking of another conflict where the destiny of Europe hung in the balance, the Duke of Wellington said of Waterloo that it was “the nearest run thing you ever saw in your life”. This account makes it clear that this moment in history was much the same. It is, of course, impossible to forecast what the consequences would have been had Halifax prevailed and Britain approached Mussolini to broker a deal with Hitler. The author argues forcefully that nothing less than the fate of Western civilisation was at stake. With so many “what ifs”, one can never know. (For example, it appears that Mussolini had already decided by this date to enter the war and he might have simply rejected a British approach.) But in any case this fascinating, thoroughly documented, and lucidly written account of a little-known but crucial moment in history makes for compelling reading.


Roberts, Siobhan. King of Infinite Space. New York: Walker and Company, 2006. ISBN 0-8027-1499-4.
Mathematics is often said to be a game for the young. The Fields Medal, the most prestigious prize in mathematics, is restricted to candidates 40 years or younger. While many older mathematicians continue to make important contributions in writing books, teaching, administration, and organising and systematising topics, most work on the cutting edge is done by those in their twenties and thirties. The life and career of Donald Coxeter (1907–2003), the subject of this superb biography, is a stunning and inspiring counter-example. Coxeter's publications (all of which are listed in an appendix to this book) span a period of eighty years, with the last, a novel proof of Beecroft's theorem, completed just a few days before his death.

Coxeter was one of the last generation to be trained in classical geometry, and he continued to do original work and make striking discoveries in that field for decades after most other mathematicians had abandoned it as mined out or insufficiently rigorous, and it had disappeared from the curriculum not only at the university level but, to a great extent, in secondary schools as well. Coxeter worked in an intuitive, visual style, frequently making models and kaleidoscopes and enriching his publications with numerous diagrams. Over the many decades his career spanned, mathematical research (at least in the West) seemed to be climbing an endless stairway toward ever greater abstraction and formalism, epitomised in the work of the Bourbaki group. (When the unthinkable happened and a diagram was included in a Bourbaki book, fittingly it was a Coxeter diagram.) Coxeter inspired an increasingly fervent group of followers who preferred to discover new structures and symmetry using the mind's powers of visualisation. Some, including Douglas Hofstadter (who contributed the foreword to this work) and John Horton Conway (who figures prominently in the text), were inspired by Coxeter to carry on his legacy. Coxeter's interactions with M. C. Escher and Buckminster Fuller are explored in two chapters, and illustrate how the purest of mathematics can both inspire and be enriched by art and architecture (or whatever it was that Fuller did, which Coxeter himself wasn't too sure about—on one occasion he walked out of a new-agey Fuller lecture, noting in his diary “Out, disgusted, after ¾ hour” [p. 178]).

When the “new math” craze took hold in the 1960s, Coxeter immediately saw it for the disaster it was to become and involved himself in efforts to preserve the intuitive and visual in mathematics education. Unfortunately, the power of a fad promoted by purists is difficult to counter, and a generation and more paid the price of which Coxeter warned. There is an excellent discussion at the end of chapter 9 of the interplay between the intuitive and formalist approaches to mathematics. Many modern mathematicians seem to have forgotten that one proves theorems in order to demonstrate that the insights obtained by intuition are correct. Intuition without rigour can lead to error, but rigour without intuition can blind one to beautiful discoveries in the mathematical objects which stand behind the austere symbols on paper.

The main text of this 400-page book is only 257 pages. Eight appendices expand upon technical topics ranging from phyllotaxis to the quilting of toilet paper and include a complete bibliography of Coxeter's publications. (If you're intrigued by “Morley's Miracle”, a novel discovery in the plane geometry of triangles made as late as 1899, check out this page and Java applet which lets you play with it interactively. Curiously, a diagram of Morley's theorem appears on the cover of Coxeter and Greitzer's Geometry Revisited, but is misdrawn—the trisectors are inexact and the inner triangle is therefore not equilateral.) Almost 90 pages of endnotes provide both source citations (including Web links to MathWorld for technical terms and the University of St. Andrews biographical archive for mathematicians named in the text) and detailed amplification of numerous details. There are a few typos and factual errors (for example, on p. 101 the planets Uranus and Pluto are said to have been discovered in the nineteenth century when, in fact, neither was: Herschel discovered Uranus in 1781 and Tombaugh Pluto in 1930), but none are central to the topic nor detract from this rewarding biography of an admirable and important mathematician.
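Morley's theorem is easy to verify numerically for oneself. The following sketch is my own illustration (not code from the book): it constructs, for each side of an arbitrary triangle, the two adjacent angle trisectors nearest that side, intersects them, and checks that the three intersection points form an equilateral triangle.

```python
import math

def rot(v, ang):
    """Rotate 2D vector v counterclockwise by ang radians."""
    c, s = math.cos(ang), math.sin(ang)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def unit(p, q):
    """Unit vector pointing from p toward q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def angle_at(p, q, r):
    """Interior angle at vertex p between rays p->q and p->r."""
    u, v = unit(p, q), unit(p, r)
    return math.acos(max(-1.0, min(1.0, u[0] * v[0] + u[1] * v[1])))

def intersect(p, dp, q, dq):
    """Intersection of rays p + t*dp and q + s*dq."""
    t = cross((q[0] - p[0], q[1] - p[1]), dq) / cross(dp, dq)
    return (p[0] + t * dp[0], p[1] + t * dp[1])

def morley_vertex(B, C, A):
    """Morley vertex adjacent to side BC: intersection of the
    trisector of angle B nearest BC with that of angle C nearest BC."""
    b = angle_at(B, C, A) / 3
    c = angle_at(C, B, A) / 3
    # Rotate each side direction toward the triangle's interior.
    sB = 1 if cross(unit(B, C), unit(B, A)) > 0 else -1
    sC = 1 if cross(unit(C, B), unit(C, A)) > 0 else -1
    return intersect(B, rot(unit(B, C), sB * b),
                     C, rot(unit(C, B), sC * c))

# Any scalene triangle will do.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
P = morley_vertex(B, C, A)   # vertex adjacent to side BC
Q = morley_vertex(C, A, B)   # vertex adjacent to side CA
R = morley_vertex(A, B, C)   # vertex adjacent to side AB
sides = [math.dist(P, Q), math.dist(Q, R), math.dist(R, P)]
print(sides)  # all three side lengths should agree to machine precision
```

Running this for any triangle shows the three side lengths equal to within rounding error, which is exactly what the misdrawn cover diagram fails to depict.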


Kauffman, Stuart A. Investigations. New York: Oxford University Press, 2000. ISBN 0-19-512105-8.
Few people have thought as long and as hard about the origin of life and the emergence of complexity in a biosphere as Stuart Kauffman. Medical doctor, geneticist, professor of biochemistry and biophysics, MacArthur Fellow, and member of the faculty of the Santa Fe Institute for a decade, he has sought to discover the principles which might underlie a “general biology”—the laws which would govern any biosphere, whether terrestrial, extraterrestrial, or simulated within a computer, regardless of its physical substrate.

This book, which he describes on occasion as “protoscience”, provides an overview of the principles he suspects, but cannot prove, may underlie all forms of life, and beyond that systems in general which are far from equilibrium such as a modern technological economy and the universe itself. Most of science before the middle of the twentieth century studied complex systems at or near equilibrium; only at such states could the simplifying assumptions of statistical mechanics be applied to render the problem tractable. With computers, however, we can now begin to explore open systems (albeit far smaller than those in nature) which are far from equilibrium, have dynamic flows of energy and material, and do not necessarily evolve toward a state of maximum entropy.

Kauffman believes there may be what amounts to a fourth law of thermodynamics which applies to such systems and, although we don't know enough to state it precisely, he suspects it may be that these open, extremely nonergodic, systems evolve as rapidly as possible to expand and fill their state space and that unlike, say, a gas in a closed volume or the stars in a galaxy, where the complete state space can be specified in advance (that is, the dimensionality of the space, not the precise position and momentum values of every object within it), the state space of a non-equilibrium system cannot be prestated because its very evolution expands the state space. The presence of autonomous agents introduces another level of complexity and creativity, as evolution drives the agents to greater and greater diversity and complexity to better adapt to the ever-shifting fitness landscape.

These are complicated and deep issues, and this is a very difficult book, although it appears, at first glance, to be written for a popular audience. I seriously doubt whether somebody not previously acquainted with these topics, who has not thought about them at some length, will make it to the end and, even if they do, take much away from the book. Those who are comfortable with the laws of thermodynamics, the genetic code, protein chemistry, catalysis, autocatalytic networks, Carnot cycles, fitness landscapes, hill-climbing strategies, the no-go theorem, error catastrophes, self-organisation, percolation phase transitions in graphs, and other technical issues raised in the arguments must still confront the author's prose style. Kauffman seems to aspire to be a prose stylist conveying a sense of wonder to his readers along the lines of Carl Sagan and Stephen Jay Gould. Unfortunately, he doesn't pull it off as well, and the reader must wade through numerous paragraphs like the following from pp. 97–98:

Does it always take work to construct constraints? No, as we will soon see. Does it often take work to construct constraints? Yes. In those cases, the work done to construct constraints is, in fact, another coupling of spontaneous and nonspontaneous processes. But this is just what we are suggesting must occur in autonomous agents. In the universe as a whole, exploding from the big bang into this vast diversity, are many of the constraints on the release of energy that have formed due to a linking of spontaneous and nonspontaneous processes? Yes. What might this be about? I'll say it again. The universe is full of sources of energy. Nonequilibrium processes and structures of increasing diversity and complexity arise that constitute sources of energy that measure, detect, and capture those sources of energy, build new structures that constitute constraints on the release of energy, and hence drive nonspontaneous processes to create more such diversifying and novel processes, structures, and energy sources.
I have not cherry-picked this passage; there are hundreds of others like it. Given the complexity of the technical material and the difficulty of the concepts being explained, it seems to me that the straightforward, unaffected Point A to Point B style of explanation which Isaac Asimov employed would work much better. Pardon my audacity, but allow me to rewrite the above paragraph.
Autonomous agents require energy, and the universe is full of sources of energy. But in order to do work, they require energy to be released under constraints. Some constraints are natural, but others are constructed by autonomous agents which must do work to build novel constraints. A new constraint, once built, provides access to new sources of energy, which can be exploited by new agents, contributing to an ever growing diversity and complexity of agents, constraints, and sources of energy.
Which is better? I rewrite; you decide. The tone of the prose is all over the place. In one paragraph he's talking about Tomasina the trilobite (p. 129) and Gertrude the ugly squirrel (p. 131), then the next thing you know it's “Here, the hexamer is simplified to 3'CCCGGG5', and the two complementary trimers are 5'GGG3' + 5'CCC3'. Left to its own devices, this reaction is exergonic and, in the presence of excess trimers compared to the equilibrium ratio of hexamer to trimers, will flow exergonically toward equilibrium by synthesizing the hexamer.” (p. 64). This flipping back and forth between colloquial and scholarly voices leads to a kind of comprehensional kinetosis. There are a few typographical errors, none serious, but I have to share this delightful one-sentence paragraph from p. 254 (ellipsis in the original):
By iteration, we can construct a graph connecting the founder spin network with its 1-Pachner move “descendants,” 2-Pachner move descendints…N-Pachner move descendents.
Good grief—is Oxford University Press outsourcing their copy editing to Slashdot?

For the reasons given above, I found this a difficult read. But it is an important book, bristling with ideas which will get you looking at the big questions in a different way, and speculating, along with the author, that there may be some profound scientific insights which science has overlooked to date sitting right before our eyes—in the biosphere, the economy, and this fantastically complicated universe which seems to have emerged somehow from a near-thermalised big bang. While Kauffman is the first to admit that these are hypotheses and speculations, not science, they are eminently testable by straightforward scientific investigation, and there is every reason to believe that if there are, indeed, general laws that govern these phenomena, we will begin to glimpse them in the next few decades. If you're interested in these matters, this is a book you shouldn't miss, but be aware what you're getting into when you undertake to read it.


March 2007

Heinlein, Robert A. and Spider Robinson. Variable Star. New York: Tor, 2006. ISBN 0-765-31312-X.
After the death of Virginia Heinlein in 2003, curators of the Heinlein papers she had deeded to the Heinlein Prize Trust discovered notes for a “juvenile” novel which Heinlein had plotted in 1955 but never got around to writing. Through a somewhat serendipitous process, Spider Robinson, whom The New York Times Book Review called “the new Robert Heinlein” in 1982 (when the original Robert Heinlein was still very much on the scene—I met him in 1984, and his last novel was published in 1987, the year before his death), was tapped to “finish” the novel from the notes. To his horror (as described in the afterword in this volume), Robinson discovered the extant notes stopped in mid-sentence, in the middle of the story, with no clue as to the ending Heinlein intended. Taking some comments Heinlein made in a radio interview as the point of departure, Robinson rose to the challenge, cranking in a plot twist worthy of the Grandmaster.

Taking on a task like this is to expose oneself to carping and criticism from purists, but to this Heinlein fan who reads for the pleasure of it, Spider Robinson has acquitted himself superbly here. He deftly blends events in recent decades into the Future History timeline, and even hints at a plausible way current events could lead to the rise of the Prophet. It is a little disconcerting to encounter Simpsons allusions in a timeline in which Leslie LeCroix of Harriman Enterprises was the first to land on the Moon, but recurring Heinlein themes are blended into the story line in such a way that you're tempted to think that this is the way Heinlein would have written such a book, were he still writing today. The language and situations are substantially more racy than the classic Heinlein juveniles, but not out of line with Heinlein's novels of the 1970s and 80s.

Sigh…aren't there any adults on the editorial staff at Tor? First they let three misspellings of Asimov's character Hari Seldon slip through in Orson Scott Card's Empire, and now the very first time the Prophet appears on p. 186, his first name is missing the final “h”, and on p. 310 the title of Heinlein's first juvenile, Rocket Ship Galileo, is given as “Rocketship Galileo”. Readers intrigued by the saxophone references in the novel may wish to check out The Devil's Horn, which discusses, among many other things, the possible connection between “circular breathing” and the mortality rate of saxophonists (and I always just thought it was that “cool kills”).

As you're reading this novel, you may find yourself somewhere around two hundred pages in, looking at the rapidly dwindling hundred-odd pages to go, and wondering is anything ever going to happen? Keep turning those pages—you will not be disappointed. Nor, I think, would Heinlein, wherever he is, regarding this realisation of his vision half a century after he consigned it to a file drawer.


Horowitz, David. Radical Son. New York: Touchstone Books, 1997. ISBN 0-684-84005-7.
One of the mysteries I have never been able to figure out—I remember discussing it with people before I left the U.S., so that makes it at least fifteen years of bewilderment on my part—is why so many obviously highly intelligent people, some of whom have demonstrated initiative and achieved substantial success in productive endeavours, are so frequently attracted to collectivist ideologies which deny individual excellence, suppress individualism, and seek to replace achievement with imposed equality in mediocrity. Even more baffling is why so many people remain attracted to these ideas which are as thoroughly discredited by the events of the twentieth century as any in the entire history of human intellectual endeavour, in a seeming willingness to ignore evidence, even when it takes the form of a death toll in the tens of millions of human beings.

This book does not supply a complete answer, but it provides several important pieces of the puzzle. It is the most enlightening work on this question I've read since Hayek's The Fatal Conceit (March 2005), and complements it superbly. While Hayek's work is one of philosophy and economics, Radical Son is a searching autobiography by a person who was one of the intellectual founders and leaders of the New Left in the 1960s and 70s. The author was part of the group which organised the first demonstration against the Vietnam war in Berkeley in 1962, published the standard New Left history of the Cold War, The Free World Colossus, in 1965, and in 1968, the very apogee of the Sixties, joined Ramparts magazine, where he rapidly rose to a position of effective control, setting its tone through the entire period of radicalisation and revolutionary chaos which ensued. He raised the money for the Black Panther Party's “Learning Center” in Oakland, California, and became an adviser and regular companion of Huey Newton. Throughout all of this his belief in the socialist vision of the future, the necessity of revolution even in a democratic society, and support for the “revolutionary vanguard”, however dubious some of their actions seemed, never wavered.

He came to these convictions almost in the cradle. Like many of the founders of the New Left (Tom Hayden was one of the rare exceptions), Horowitz was a “red diaper baby”. In his case both his mother and father were members of the Communist Party of the United States and met through political activity. Although the New Left rejected the Communist Party as a neo-Stalinist anachronism, so many of its founders had parents who were involved with it directly or knowingly in front organisations that they formed part of a network of acquaintances even before they met as radicals in their own right. It is somewhat ironic that these people who believed themselves to be and were portrayed in the press as rebels and revolutionaries were, perhaps more than their contemporaries, truly their parents' children, carrying on their radical utopian dream without ever questioning anything beyond the means to the end.

It was only in 1974, when Betty Van Patter, a former Ramparts colleague he had recommended for a job helping the Black Panthers sort out their accounts, was abducted and later found brutally murdered, obviously by the Panthers (who expressed no concern when she disappeared, and had complained of her inquisitiveness), that Horowitz was confronted with the true nature of those he had been supporting. Further, when he approached others who were, from the circumstances of their involvement, well aware of the criminality and gang nature of the Panthers well before he was, they continued to either deny the obvious reality or, even worse, deliberately cover it up because they still believed in the Panther mission of revolution. (To this day, nobody has been charged with Van Patter's murder.)

The contemporaneous conquest of Vietnam and Cambodia and the brutal and bloody aftermath, the likelihood of which had also been denied by the New Left (as late as 1974, Tom Hayden and Jane Fonda released a film titled Introduction to the Enemy which forecast a bright future of equality and justice when Saigon fell), reinforced the author's second thoughts, leading eventually to a complete break with the Left in the mid-1980s and his 1989 book with Peter Collier, Destructive Generation, the first sceptical look at the beliefs and consequences of Sixties radicalism by two of its key participants.

Radical Son mixes personal recollection, politics, philosophy, memoirs of encounters with characters ranging from Bertrand Russell to Abbie Hoffman, and a great deal of painful introspection to tell the story of how reality finally shattered second-generation utopian illusions. Even more valuable, the reader comes to understand the power those delusions have over those who share them, and why seemingly no amount of evidence suffices to induce doubt among those in their thrall, and why the reaction to any former believer who declares their “apostasy” is so immediate and vicious.

Horowitz is a serious person, and this is a serious, and often dismaying and tragic narrative. But one cannot help but be amused by the accounts of New Leftists trying to put their ideology into practice in running communal households, publishing enterprises, and political movements. Inevitably, before long everything blows up in the tediously familiar ways of such things, as imperfect human beings fail to meet the standards of a theory which requires them to deny their essential humanity. And yet they never learn; it's always put down to “errors”, blamed on deviant individuals, oppression, subversion, external circumstances, or some other cobbled-up excuse. And still they want to try again, betting the entire society and human future on it.


Robinson, Andrew. The Last Man Who Knew Everything. New York: Pi Press, 2006. ISBN 0-13-134304-1.
The seemingly inexorable process of specialisation in the sciences and other intellectual endeavours—the breaking down of knowledge into categories so narrow and yet so deep that their mastery at the professional level seems to demand forsaking anything beyond a layman's competence in other, even related fields—is discouraging to those who believe that some of the greatest insights come from the cross-pollination of concepts from subjects previously considered unrelated. The twentieth century was inhospitable to polymaths—even within a single field such as physics, ever narrower specialities proliferated, with researchers interacting little with those working in other areas. The divide between theorists and experimentalists has become almost impassable; it is difficult to think of a single individual who achieved greatness in both since Fermi, and he was born in 1901.

As more and more becomes known, it is inevitable that it is increasingly difficult to cram it all into one human skull, and the investment in time to master a variety of topics becomes disproportionate to the length of a human life, especially since breakthrough science is generally the province of the young. And yet, one wonders whether the conventional wisdom that hyper-specialisation is the only way to go and that anybody who aspires to broad and deep understanding of numerous subjects must necessarily be a dilettante worthy of dismissal, might underestimate the human potential and discourage those looking for insights available only by synthesising the knowledge of apparently unrelated disciplines. After all, mathematicians have repeatedly discovered deep connections between topics thought completely unrelated to one another; why shouldn't this be the case in the sciences, arts, and humanities as well?

The life of Thomas Young (1773–1829) is an inspiration to anybody who seeks to understand as much as possible about the world in which they live. The eldest of ten children of a middle class Quaker family in southwest England (his father was a cloth merchant and later a banker), from childhood he immersed himself in every book he could lay his hands upon, and in his seventeenth year alone, he read Newton's Principia and Opticks, Blackstone's Commentaries, Linnaeus, Euclid's Elements, Homer, Virgil, Sophocles, Cicero, Horace, and many other classics in the original Greek or Latin. At age 19 he presented a paper on the mechanism by which the human eye focuses on objects at different distances, and on its merit was elected a Fellow of the Royal Society a week after his 21st birthday.

Young decided upon a career in medicine and studied in Edinburgh, Göttingen, and Cambridge, continuing his voracious reading and wide-ranging experimentation in whatever caught his interest, then embarked upon a medical practice in London and the resort town of Worthing, while pursuing his scientific investigations and publications, and popularising science in public lectures at the newly founded Royal Institution.

The breadth of Young's interests and contributions has caused some biographers, both contemporary and especially more recent, to dismiss him as a dilettante and dabbler, but his achievements give the lie to this. Had the Nobel Prize existed in his era, he would almost certainly have won two (Physics for the wave theory of light, explanation of the phenomena of diffraction and interference [including the double slit experiment], and birefringence and polarisation; plus Physiology or Medicine for the explanation of the focusing of the eye [based, in part, upon some cringe-inducing experiments he performed upon himself], the trireceptor theory of colour vision, and the discovery of astigmatism), and possibly three (Physics again, for the theory of elasticity of materials: “Young's modulus” is a standard part of the engineering curriculum to this day).
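For readers who last met it in school, Young's modulus is simply the constant of proportionality between stress and strain in the linear elastic regime; a minimal statement of the relation that bears his name (the numerical examples are my own, not the book's):

```latex
% Young's modulus E relates stress (force per unit area)
% to strain (relative elongation) in the linear elastic regime:
\sigma = E\,\varepsilon,
\qquad
\sigma = \frac{F}{A},
\qquad
\varepsilon = \frac{\Delta L}{L_0}
```

A material with a large E (steel, around 200 GPa) stretches far less under a given load than one with a small E (rubber, on the order of megapascals), which is why the single number is so useful to engineers.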

But he didn't leave it at that. He had been fascinated by languages since childhood, and in addition to the customary Latin and Greek, by age thirteen had taught himself Hebrew and read thirty chapters of the Hebrew Bible on his own. In adulthood he undertook an analysis of four hundred different languages (pp. 184–186) ranging from Chinese to Cherokee, with the goal of classifying them into distinct families. He coined the name “Indo-European” for the group to which most Western languages belong. He became fascinated with the enigma of Egyptian hieroglyphics, and his work on the Rosetta Stone provided the first breakthrough and the crucial insight that hieroglyphic writing was a phonetic alphabet, not a pictographic language like Chinese. Champollion built upon Young's work in his eventual deciphering of hieroglyphics. Young continued to work on the fiendishly difficult demotic script, and was the first person since the fall of the Roman Empire to be able to read some texts written in it.

He was appointed secretary of the Board of Longitude and superintendent of the Nautical Almanac, and was instrumental in the establishment of a Southern Hemisphere observatory at the Cape of Good Hope. He consulted with the admiralty on naval architecture, with the House of Commons on the design for a replacement for the original London Bridge, and served as chief actuary for a London life insurance company, where he did original research on mortality in different parts of Britain.

Stereotypical characters from fiction might lead you to expect such an intellect to be a recluse, misanthrope, obsessive, or seeker of self-aggrandisement. But no…, “He was a lively, occasionally caustic letter writer, a fair conversationalist, a knowledgeable musician, a respectable dancer, a tolerable versifier, an accomplished horseman and gymnast, and throughout his life, a participant in the leading society of London and, later, Paris, the intellectual capitals of his day” (p. 12). Most of the numerous authoritative articles he contributed to the Encyclopedia Britannica, including “Bridge”, “Carpentry”, “Egypt”, “Languages”, “Tides”, and “Weights and measures”, as well as 23 biographies, were published anonymously. And he was happily married from age 31 until the end of his life.

Young was an extraordinary person, but he never seems to have thought of himself as exceptional in any way other than his desire to understand how things worked and his willingness to invest as much time and effort as it took to arrive at the goals he set for himself. Reading this book reminded me of a remark by Admiral Hyman G. Rickover, “The only way to make a difference in the world is to put ten times as much effort into everything as anyone else thinks is reasonable. It doesn't leave any time for golf or cocktails, but it gets things done.” Young's life is a testament to just how many things one person can get done in a lifetime, enjoying every minute of it and never losing balance, by judicious application of this principle.


Wells, David. The Penguin Dictionary of Curious and Interesting Geometry. London: Penguin Books, 1991. ISBN 0-14-011813-6.
What a treat—two hundred and seventy-five diagram-rich pages covering hundreds of geometrical curiosities ranging from the problem of Apollonius to zonohedra. Items range from classical Euclidean geometry to modern topics such as higher dimensional space, non-Euclidean geometry, and topological transformations; and from classical times until the present—it's amazing how many fundamental properties of objects as simple as triangles were discovered only in the twentieth century!

There are so many wonders here I shall not attempt to list them but simply commend this book to your own exploration and enjoyment. But one example…it's obvious that a non-convex room with black walls cannot be illuminated by a single light placed within it. But what if all the walls are mirrors? Is it possible to design a mirrored room such that a light within it will still leave some part dark (p. 263)? The illustration of the Voderberg tile on p. 268 is unfortunate; the width of the lines makes it appear not to be a proper tile, but rather two tiles joined at a point. This page shows a detailed construction which makes it clear that the tile is indeed well formed and rigid.

I will confess, as a number nerd more than a geometry geek, that this book comes in second in my estimation behind the author's Penguin Book of Curious and Interesting Numbers, one single entry of which motivated me to consume three years of computer time in 1987–1990. But there are any number of wonders here, and the individual items are so short you can open the book at random and find something worth reading you can finish in a minute or so. Almost all items are given without proof, but there are citations to publications for many and you'll be able to find most of the rest on MathWorld.


Phillips, Kevin. American Theocracy. New York: Viking, 2006. ISBN 0-670-03486-X.
In 1969, the author published The Emerging Republican Majority, which Newsweek called “The political bible of the Nixon Era.” The book laid out the “Sun Belt” (a phrase he coined) strategy he developed as a senior strategist for Richard Nixon's successful 1968 presidential campaign, and argued that demographic and economic trends would reinforce the political power of what he termed the “heartland” states, setting the stage for long-term Republican dominance of national politics, just as FDR's New Deal coalition had maintained Democratic power (especially in the Congress) for more than a generation.

In this book he argues that while his 1969 analysis was basically sound and would have played out much as he forecast, had the Republican steamroller not been derailed by Watergate and the consequent losses in the 1974 and 1976 elections, since the Reagan era, and especially during the presidency of George W. Bush, things have gone terribly wrong, and that the Republican party, if it remains in power, is likely to lead the United States in disastrous directions, resulting in the end of its de facto global hegemony.

Now, this is a view with which I am generally sympathetic, but if the author's reason for writing the present volume is to persuade people in that direction, I must judge the result ineffectual if not counterproductive. The book is ill-reasoned, weakly argued, poorly written, strongly biased, scantily documented, grounded in dubious historical analogies, and rhetorically structured in the form of “proof by assertion and endless repetition”.

To start with, the title is misleading if read without the subtitle, “The Peril and Politics of Radical Religion, Oil, and Borrowed Money in the 21st Century”, which appears in 8 point sans-serif type on the cover, below an illustration of a mega-church reinforcing the words “American Theocracy” in 60 and 48 point roman bold. In fact, of 394 pages of main text, only 164—about 40%—are dedicated to the influence of religion on politics. (Yes, there are mentions of religion in the rest, but there is plenty of discussion of the other themes in the “Too Many Preachers” part as well; this book gives the distinct impression of having been shaken, not stirred.) And nothing in that part, or elsewhere in the book, provides any evidence whatsoever, or even seriously advances a claim, that there is a genuine movement toward, threat of, or endorsement by the Republican party of theocracy, which Webster's Unabridged Dictionary defines as:

  1. A form of government in which God or a deity is recognized as the supreme civil ruler, the deity's laws being interpreted by the ecclesiastical authorities.
  2. A system of government by priests claiming a divine commission.
  3. A commonwealth or state under such a form or system of government.

And since Phillips's argument is based upon the Republican party's support among religious groups as diverse as Southern Baptists, northern Midwest Lutherans, Pentecostals, Mormons, Hasidic Jews, and Eastern Rite and traditionalist Catholics, it is difficult to imagine precisely how the feared theocracy would function, given how little these separate religious groups agree upon. It would have to be an “ecumenical theocracy”, a creature for which I can recall no historical precedent.

The greater part of the book discusses the threats to the U.S. posed by a global peak in petroleum production and the temptation of resource wars (of which he claims the U.S. intervention in Iraq is an example), and the explosion of debt, public and private, in the U.S., the consequent housing bubble, and the structural trade deficits which are flooding the world with greenbacks. But these are topics which have been discussed more lucidly and in greater detail by authors who know far more about them than Phillips, who cites secondary and tertiary sources and offers no novel observations.

A theme throughout the work is comparison of the present situation of the U.S. with previous world powers which fell into decline: ancient Rome, Spain in the seventeenth century, the Netherlands in the second half of the eighteenth century, and Britain in the first half of the twentieth. The parallels here, especially as regards fears of “theocracy”, are strained to say the least. Constantine did not turn Rome toward Christianity until the fourth century A.D., by which time, even Gibbon concedes, the empire had been in decline for centuries. (Phillips seems to have realised this part of the way through the manuscript and ceases to draw analogies with Rome fairly early on.) Few, if any, historians would consider Spain, Holland, or Britain in the periods in question theocratic societies; each had a clear separation between civil authority and the church, and in the latter two cases there is plain evidence of a decline in the influence of organised religion on the population as the nation's power approached a peak and began to ebb. Can anybody seriously contend that the Anglican church was responsible for the demise of the British Empire? Hello—what about the two world wars, which were motivated by power politics, not religion?

Distilled to the essence (and I estimate a good editor could cut a third to half of this text just by flensing the mind-numbing repetition), Phillips has come to believe in the world view and policy prescriptions advocated by the left wing of the Democratic party. The Republican party does not agree with these things. Adherents of traditional religion share this disagreement, and consequently they predominantly vote for Republican candidates. Therefore, evangelical and orthodox religious groups form a substantial part of the Republican electorate. But how does that imply any trend toward “theocracy”? People choose to join a particular church because they are comfortable with the beliefs it espouses, and they likewise vote for candidates who advocate policies they endorse. Just because there is a correlation between preferences does not imply, especially in the absence of any evidence, some kind of fundamentalist conspiracy to take over the government and impose a religious dictatorship. Consider another divisive issue which has nothing to do with religion: the right to keep and bear arms. People who value the individual right to own and carry weapons for self-defence are highly likely to be Republican voters as well, because that party is more closely aligned with their views than the alternative. Correlation is not evidence of causality, not to speak of collusion.

Much of the writing is reminiscent of the lower tier of the UFO literature. There are dozens of statements like this one from p. 93 (my italics), “There are no records, but Cheney's reported early 2001 plotting may well have touched upon the related peril to the dollar.” May I deconstruct? So what's really being said here is, “Some conspiracy theorist, with no evidence to support his assertion, claims that Cheney was plotting to seize Iraqi oil fields, and it is possible that this speculated scheme might have been motivated by fears for the dollar.”

There are more than thirty pages of end notes set in small type, but there is less documentation here than meets the eye. Many citations are to news stories in collectivist legacy media and postings on leftist advocacy Web sites. Picking page 428 at random, we find 29 citations, only five of which are to a total of three books, one by the present author.

So blinded is the author by his own ideological bias that he seems completely oblivious to the fact that a right-wing stalwart could produce an almost completely parallel screed about the Democratic party being in thrall to a coalition of atheists, humanists, and secularists eager to use the power of the state to impose their own radical agenda. In fact, one already has. It is dubious that shrill polemics of this variety launched back and forth between the trenches of an increasingly polarised society promote the dialogue and substantive debate which is essential to confront the genuine and daunting challenges all its citizens ultimately share.


April 2007

Harris, Robert. Imperium. New York: Simon & Schuster, 2006. ISBN 0-7432-6603-X.
Marcus Tullius Tiro was a household slave who served as the personal secretary to the Roman orator, lawyer, and politician Cicero. Tiro is credited with the invention of shorthand, and is responsible for the extensive verbatim records of Cicero's court appearances and political speeches. He was freed by Cicero in 53 B.C. and later purchased a farm where he lived to around the age of 100. According to contemporary accounts, Tiro published a biography of Cicero in at least four volumes; this work has been lost.

In this case, history's loss is a novelist's opportunity, which alternative-history wizard Robert Harris (Fatherland [June 2002], Archangel [February 2003], Enigma, Pompeii) seizes, bringing the history of Cicero's rise from ambitious lawyer to Consul of Rome to life, while remaining true to the documented events of Cicero's career. The narrator is Tiro, who discovers both the often-sordid details of how the Roman republic actually functioned and the complexity of Cicero's character as the story progresses.

The sense one gets of Rome is perhaps a little too modern, and terminology creeps in from time to time (for example, “electoral college” [p. 91]) which seems out of place. On pp. 226–227 there is an extended passage which made me fear we were about to veer off into commentary on current events:

‘I do not believe we should negotiate with such people, as it will only encourage them in their criminal acts.’ … Where would they strike next? What Rome was facing was a threat very different from that posed by a conventional enemy. These pirates were a new type of ruthless foe, with no government to represent them and no treaties to bind them. Their bases were not confined to a single state. They had no unified system of command. They were a worldwide pestilence, a parasite which needed to be stamped out, otherwise Rome—despite her overwhelming military superiority—would never again know security or peace. … Any ruler who refuses to cooperate will be regarded as Rome's enemy. Those who are not with us are against us.
Harris resists the temptation of turning Rome into a soapbox for present-day political advocacy on any side, and quickly gets back to the political intrigue in the capital. (Not that the latter days of the Roman republic are devoid of relevance to the present situation; study of them may provide more insight into the news than all the pundits and political blogs on the Web. But the parallels are not exact, and the circumstances are different in many fundamental ways. Harris wisely sticks to the story and leaves the reader to discern the historical lessons.)

The novel comes to a rather abrupt close with Cicero's election to the consulate in 63 B.C. I suspect that what we have here is the first volume of a trilogy. If that be the case, I look forward to future installments.


Wells, H. G. Floor Games. Springfield, VA: Skirmisher, [1911] 2006. ISBN 0-9722511-7-0.
Two years before he penned the classic work on wargaming, Little Wars (September 2006), H. G. Wells drew on his experience and that of his colleagues “F.R.W.” and “G.P.W.” (his sons Frank Richard and George Philip, then aged eight and ten respectively) to describe the proper equipment, starting with a sufficiently large and out-of-the-traffic floor, which imaginative children should have at their disposal to construct the worlds of adventure conjured by their fertile minds. He finds much to deplore in the offerings of contemporary toy shops, and shows how wooden bricks, sturdy paper, plasticine clay, twigs and sprigs from the garden, books from the library, and odds and ends rescued from the trash bin can be assembled into fantasy worlds, “the floor, the boards, the bricks, the soldiers, and the railway system—that pentagram for exorcising the evil spirit of dulness from the lives of little boys and girls” (p. 65).

The entire book is just 71 pages with large type and wide margins filled with delightful line drawings; eight photographs by the author illustrate what can be made of such simple components. The text is, of course, in the public domain, and is available in a free Project Gutenberg edition, but without the illustrations and photos. This edition includes a foreword by legendary wargame designer James F. Dunnigan.

While toys have changed enormously since this book was written, young humans haven't. A parent who provides their kids with these simple stimuli to imagination and ingenuity is probably doing them an invaluable service compared to the present-day default of planting them in front of a television program or video game. Besides, if the collectivist morons in Seattle who banned Lego blocks launch the next educationalism fad, it'll be up to parents to preserve imagination and individuality in their children's play.


Russell, D. A. The Design and Construction of Flying Model Aircraft. Leicester, England: Harborough Publishing, [1937, 1940] 1941. British Library Shelfmark 08771.b.3.
In 1941, Britain stood alone in the West against Nazi Germany, absorbing bombing raids on its cities, while battling back and forth in North Africa. So confident was Hitler that the British threat had been neutralised, that in June he launched the assault against the Soviet Union. And in that dark year, some people in Britain put the war out of their minds by thinking instead about model airplanes, guided by this book, written by the editor of The Aero-Modeller magazine and published in that war year.

Modellers of this era scratch-built their planes—the word “kit” is absent from this book and seemingly from the vocabulary of the hobby at the time. The author addresses an audience who not only build their models from scratch, but also design them from first principles of aerodynamics—in fact, the first few chapters are one of the most lucid expositions of basic practical aerodynamics I have ever read. The text bristles with empirical equations, charts, and diagrams, as well as plenty of practical advice to the designer and builder.

While many modellers of the era built featherweight aircraft powered by rubber bands, others flew petrol-powered beasts which would intimidate many modellers today. Throughout the book the author uses as an example one of his own designs, with a wingspan of 10 feet, all-up weight in excess of 14 pounds, and powered by an 18 cc. petrol engine.

There was no radio control, of course. All of these planes simply flew free until a clockwork mechanism cut the ignition, then glided to a landing on whatever happened to be beneath them at the time. If the time switch should fail, the plane would fly on until the fuel was exhausted. Given the size, weight, and flammability of the fuel, one worried about the possibility of burning down somebody's house or barn in such a mishap, and in fact p. 214 is a full-page advert for liability insurance backed by Lloyds!

This book was found in an antique shop in the British Isles. It is, of course, hopelessly out of print, but used copies are generally available at reasonable prices. Note that the second edition (first published in 1940, reprinted in 1941) contains substantially more material than the 1937 first edition.


Guedj, Denis. Le mètre du monde. Paris: Seuil, 2000. ISBN 2-02-049989-4.
When thinking about management lessons one can learn from the French Revolution, I sometimes wonder if Louis XVI, sometime in the interval between when the Revolution lost its mind and he lost his head, ever thought, “Memo to file: when running a country seething with discontent, it's a really poor idea to invite people to compile lists of things they detest about the current regime.” Yet, that's exactly what he did in 1788, soliciting cahiers de doléances (literally, “notebooks of complaints”) to be presented to the Estates-General when it met in May of 1789. There were many, many things about which to complain in the latter years of the Ancien Régime, but one which appeared on almost every one of the lists was the lack of uniformity in weights and measures. Not only was there a bewildering multitude of different measures in use (around 2000 in France alone), but the value of measures with the same name differed from one region to another, a legacy of feudal days when one of the rights of the lord was to define the weights and measures in his fiefdom. How far is “three leagues down the road?” Well, that depends on what you mean by “league”, which was almost 40% longer in Provence than in Paris. The most common unit of weight, the “livre”, had more than two hundred different definitions across the country. And if that weren't bad enough, unscrupulous merchants and tax collectors would exploit the differences and lack of standards to cheat those bewildered by the complexity.

Revolutions, and the French Revolution in particular, have a way of going far beyond the intentions of those who launch them. The multitudes who pleaded for uniformity in weights and measures almost unanimously intended, and would have been entirely satisfied with, a standardisation of the values of the commonly used measures of length, weight, volume, and area. But perpetuating these relics of tyranny was an affront to the revolutionary spirit of remaking the world, and faced with a series of successive decisions, the revolutionary assembly chose the most ambitious and least grounded in the past on each occasion: to entirely replace all measures in use with entirely new ones, to use identical measures for every purpose (traditional measures used different units depending upon what was being measured), to abandon historical subdivisions of units in favour of a purely decimal system, and to ground all of the units in quantities based in nature and capable of being determined by anybody at any time, given only the definition.

Thus was the metric system born, and seldom have so many eminent figures been involved in what many might consider an arcane sideshow to revolution: Condorcet, Coulomb, Lavoisier, Laplace, Talleyrand, Bailly, Delambre, Cassini, Legendre, Lagrange, and more. The fundamental unit, the metre, was defined in terms of the Earth's meridian, and since earlier measures failed to meet the standard of revolutionary perfection, a project was launched to measure the meridian through the Paris Observatory from Dunkirk to Barcelona. Imagine trying to make a precision measurement over such a distance as revolution, terror, hyper-inflation, counter-revolution, and war between France and Spain raged all around the savants and their surveying instruments. So long and fraught with misadventures was the process of creating the metric system that while the original decree ordering its development was signed by Louis XVI, it was officially adopted only a few months before Napoleon took power in 1799. Yet despite all of these difficulties and misadventures, the final measure of the meridian accepted in 1799 differed from the best modern measurements by only about ten metres over a baseline of more than 1000 kilometres.
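To put that final figure in perspective, a back-of-the-envelope check is worthwhile. The script below is merely illustrative arithmetic using the numbers quoted above, together with the fact that the metre was defined as one ten-millionth of the quarter meridian from pole to equator:

```python
# Back-of-the-envelope check of the 1799 meridian survey's accuracy,
# using the figures quoted in the text above.
baseline_m = 1_000_000   # "more than 1000 kilometres", expressed in metres
error_m = 10             # difference from the best modern measurements

relative_error = error_m / baseline_m
print(f"relative error: {relative_error:.0e}")      # 1e-05: one part in 100,000

# Since the metre was defined as one ten-millionth of the quarter meridian,
# the same relative error expressed as an error in the metre itself is roughly:
print(f"{relative_error * 1000:.2f} mm per metre")  # 0.01 mm per metre
```

A relative error of one part in 100,000, achieved with eighteenth-century instruments amid revolution and war, is a remarkable result by any standard.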

This book tells the story of the metric system and the measurement of the meridian upon which it was based, against the background of revolutionary France. The author pulls no punches in discussing technical detail—again and again, just when you expect he's going to gloss over something, you turn the page or read a footnote and there it is. Writing for a largely French audience, the author may assume the reader better acquainted with the chronology, people, and events of the Revolution than readers hailing from other lands are likely to be; the chronology at the end of the book is an excellent resource when you forget what happened when. There is no index. This seems to be one of those odd cultural things: French books frequently lack this valuable attribute even when their counterparts published in English would almost certainly be indexed—I have no idea why this is the case.

One of the many fascinating factoids I gleaned from this book is that the country with the longest continuous use of the metric system is not France! Napoleon replaced the metric system with the mesures usuelles in 1812, redefining the traditional measures in terms of metric base units. The metric system was not reestablished in France until 1840, by which time Belgium, Holland, and Luxembourg had already adopted it.


Judd, Denis. Someone Has Blundered. London: Phoenix, [1973] 2007. ISBN 0-7538-2181-8.
One of the most amazing things about the British Empire was not how much of the world it ruled, but how small was the army which maintained dominion over so large a portion of the globe. While the Royal Navy enjoyed unchallenged supremacy on the high seas in the 19th century, it was of little use in keeping order in the colonies, and the ground forces available were, not just by modern standards, but by those of contemporary European powers, meagre. In the 1830s, the British regular army numbered only about 100,000, and rose to just 200,000 by the end of the century. When the Indian Mutiny (or “Sepoy Rebellion”) erupted in 1857, there were just 45,522 European troops in the entire subcontinent.

Perhaps the stolid British at home were confident that the military valour and discipline of their meagre legions would prevail, or that superior technology would carry the day:

Whatever happens, we have got
the Maxim gun, and they have not.
            — Joseph Hilaire Pierre René Belloc, “The Modern Traveller”, 1898
but when it came to a fight, as happened surprisingly often in what one thinks of as the Pax Britannica era (the Appendix [pp. 174–176] lists 72 conflicts and military expeditions in the Victorian era), a small, tradition-bound force, accustomed to peace and the parade ground, too often fell victim to (p. xix) “a devil's brew of incompetence, unpreparedness, mistaken and inappropriate tactics, a reckless underestimating of the enemy, a brash overconfidence, a personal or psychological collapse, a difficult terrain, useless maps, raw and panicky recruits, skilful or treacherous opponents, diplomatic hindrance, and bone-headed leadership.”

All of these are much in evidence in the campaigns recounted here: the 1838–1842 invasion of Afghanistan, the 1854–1856 Crimean War, the 1857–1859 Indian Mutiny, the Zulu War of 1879, and the first (1880–1881) and second (1899–1902) Boer Wars. Although this book was originally published more than thirty years ago and its subtitle, “Calamities of the British Army in the Victorian Age”, suggests it is a chronicle of a quaint and long-departed age, there is much to learn in these accounts of how highly-mobile, superbly trained, excellently equipped, and technologically superior military forces were humiliated and sometimes annihilated by indigenous armies with the power of numbers, knowledge of the terrain, and the motivation to defend their own land.


Smith, Edward E. Children of the Lens. Baltimore: Old Earth Books, [1947–1948, 1954] 1998. ISBN 1-882968-14-X.
This is the sixth and final installment of the Lensman series, following Triplanetary (June 2004), First Lensman (February 2005), Galactic Patrol (March 2005), Gray Lensman (August 2005), and Second Stage Lensmen (April 2006). Children of the Lens appeared in serial form in Astounding Science Fiction from November 1947 through February 1948. This book is a facsimile of the illustrated 1954 Fantasy Press edition, which was revised from the magazine edition. (Masters of the Vortex [originally titled The Vortex Blaster] is set in the Lensman universe, but is not part of the Galactic Patrol saga; it's a fine yarn, and I look forward to re-reading it, but the main story ends here.)

Twenty years have passed since the events chronicled in Second Stage Lensmen, and the five children—son Christopher, and the two pairs of fraternal twin daughters Kathryn, Karen, Camilla, and Constance—of Gray Lensman Kimball Kinnison and his wife Clarissa, the sole female Lens… er…person in the universe are growing to maturity. The ultimate products of a selective breeding program masterminded over millennia by the super-sages of planet Arisia, they have, since childhood, had the power to link their minds directly even to the forbidding intelligences of the Second Stage Lensmen.

Despite the cataclysmic events which concluded Second Stage Lensmen, mayhem in the galaxies continues, and as this story progresses it becomes clear to the Children of the Lens that they, and the entire Galactic Patrol, have been forged for the final battle between good and evil which plays out in these pages. But all is not coruscating, actinic detonations and battles of super minds; Doc Smith leavens the story with humour, and even has some fun at his own expense when he has the versatile Kimball Kinnison write a space opera potboiler, “Its terrible xmex-like snout locked on. Its zymolosely polydactile tongue crunched out, crashed down, rasped across. Slurp! Slurp! … Fools! Did they think that the airlessness of absolute space, the heatlessness of absolute zero, the yieldlessness of absolute neutronium could stop QADGOP THE MERCOTAN?” (p. 37).

This concludes my fourth lifetime traverse of this epic, and it never, ever disappoints. Since I first read it more than thirty years ago, I have considered Children of the Lens one of the very best works of science fiction ever, and this latest reading reinforces that conviction. It is, of course, the pinnacle of a story spanning billions of years, hundreds of billions of planets, innumerable species, a multitude of parallel universes, absolute good and unadulterated evil, and more than 1500 pages, so if you jump into the story near the end, you're likely to end up perplexed, not enthralled. It's best either to start at the beginning with Triplanetary or, if you'd rather skip the two slower-paced “prequels”, with Volume 3, Galactic Patrol, which was the first written and can stand alone.


May 2007

Lewis, C. S. The Abolition of Man. New York: HarperCollins, [1944] 1947. ISBN 0-06-065294-2.
This short book (or long essay—the main text is but 83 pages) is subtitled “Reflections on education with special reference to the teaching of English in the upper forms of schools” but, in fact, is much more: one of the pithiest and most eloquent defences of traditional values I recall having read. Writing in the final years of World War II, when moral relativism was just beginning to infiltrate the secondary school curriculum, he uses as the point of departure an English textbook he refers to as “The Green Book” (actually The Control of Language: A critical approach to reading and writing, by Alex King and Martin Ketley), which he dissects as attempting to “debunk” the development of a visceral sense of right and wrong in students in the guise of avoiding emotionalism and sentimentality.

From his description of “The Green Book”, it seems pretty mild compared to the postmodern, multicultural, and politically correct propaganda aimed at present-day students, but then perhaps it takes an observer with the acuity of a C. S. Lewis to detect the poison in such a dilute form. He also identifies the associated perversion of language which accompanies the subversion of values. On p. 28 is this brilliant observation, which I only began to notice myself more than sixty years after Lewis identified it. “To abstain from calling it good and to use, instead, such predicates as ‘necessary’, ‘progressive’, or ‘efficient’ would be a subterfuge. They could be forced by argument to answer the questions ‘necessary for what?’, ‘progressing toward what?’, ‘effecting what?’; in the last resort they would have to admit that some state of affairs was in their opinion good for its own sake.” But of course the “progressives” and champions of “efficiency” don't want you to spend too much time thinking about the end point of where they want to take you.

Although Lewis's Christianity informs much of his work, religion plays little part in this argument. He uses the Chinese word Tao (道) or “The Way” to describe what he believes are a set of values shared, to some extent, by all successful civilisations, which must be transmitted to each successive generation if civilisation is to be preserved. To illustrate the universality of these principles, he includes a 19-page appendix listing the pillars of Natural Law, with illustrations taken from texts and verbal traditions of the Ancient Egyptian, Jewish, Old Norse, Babylonian, Hindu, Confucian, Greek, Roman, Christian, Anglo-Saxon, American Indian, and Australian Aborigine cultures. It seems like those bent on jettisoning these shared values are often motivated by disdain for the frequently-claimed divine origin of such codes of values. But their very universality suggests that, regardless of what myths cultures invent to package them, they represent an encoding of how human beings work and the distillation of millennia of often tragic trial-and-error experimentation in search of rules which allow members of our fractious species to live together and accomplish shared goals.

An on-line edition is available, although I doubt it is authorised, as the copyright for this work was last renewed in 1974.


Haisch, Bernard. The God Theory. San Francisco: Weiser, 2006. ISBN 1-57863-374-5.
This is one curious book. Based on acquaintance with the author and knowledge of his work, including the landmark paper “Inertia as a zero-point-field Lorentz force” (B. Haisch, A. Rueda & H.E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678–694 [1994]), I expected this to be a book about the zero-point field and its potential to provide a limitless source of energy and Doc Smith style inertialess propulsion. The title seemed odd, but there's plenty of evidence that when it comes to popular physics books, “God sells”.

But in this case the title could not be more accurate—this book really is a God Theory—that our universe was created, in the sense of its laws of physics being defined and instantiated, then allowed to run their course, by a being with infinite potential who did so in order to experience, in the sum of the consciousness of its inhabitants, the consequences of the creation. (Defining the laws isn't the same as experiencing their playing out, just as writing down the rules of chess isn't equivalent to playing all possible games.) The reason the constants of nature appear to be fine-tuned for the existence of consciousness is that there's no point in creating a universe in which there will be no observers through which to experience it, and the reason the universe is comprehensible to us is that our consciousness is, in part, one with the being who defined them. While any suggestion of this kind is enough to get what Haisch calls adherents of “fundamentalist scientism” sputtering if not foaming at the mouth, he quite reasonably observes that these self-same dogmatic reductionists seem perfectly willing to admit an infinite number of forever unobservable parallel universes created purely at random, and to inhabit a universe which splits into undetectable multiple histories with every quantum event, rather than contemplate that the universe might have a purpose or that consciousness may play a rôle in physical phenomena.

The argument presented here is reminiscent in content, albeit entirely different in style, of that of Scott Adams's God's Debris (February 2002), a book which is often taken insufficiently seriously because its author is the creator of Dilbert. Of course, there is another possibility about which I have written again, again, again, and again, which is that our universe was not created ex nihilo by an omnipotent being outside of space and time, but is rather a simulation created by somebody with a computer whose power we can already envision, run not to experience the reality within, but just to see what happens. Or, in other words, “it isn't a universe, it's a science fair project!” In The God Theory, your consciousness is immortal because at death your experience rejoins the One which created you. In the simulation view, you live on forever on a backup tape. What's the difference?

Seriously, this is a challenging and thought-provoking argument by a distinguished scientist who has thought deeply on these matters and is willing to take the professional risk of talking about them to the general public. There is much to think about here, and integrating it with other outlooks on these deep questions will take far more time than it takes to read this book.


[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 1. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1776, 1781] 1998. ISBN 9-62634-071-1.
This is the first audiobook to appear in this list, for the excellent reason that it's the first one to which I've ever listened. I've been planning to “get around” to reading Gibbon's Decline and Fall for about twenty-five years, and finally concluded that the likelihood I was going to dive into that million-word-plus opus any time soon was negligible, so why not raise the intellectual content of my regular walks around the village with one of the masterpieces of English prose instead of ratty old podcasts?

The “Volume 1” in the title of this work refers to the two volumes of this audio edition, which is an abridgement of the first three volumes of Gibbon's history, covering the reign of Augustus through the accession of the first barbarian king, Odoacer. Volume 2 abridges the latter three volumes, primarily covering the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453. Both audio programs are almost eight hours in length, and magnificently read by Philip Madoc, whose voice is strongly reminiscent of Richard Burton's. The abridgements are handled well, with a second narrator, Neville Jason, summarising the material which is being skipped over. Brief orchestral music passages separate major divisions in the text. The whole work is artfully done and a joy to listen to, worthy of the majesty of Gibbon's prose, which is everything I've always heard it to be, from solemn praise for courage and wisdom, to thundering condemnation of treason and tyranny, to occasionally laugh-out-loud funny descriptions of foibles and folly.

I don't usually read abridged texts—I figure that if the author thought it was worth writing, it's worth my time to read. But given the length of this work (and the fact that most print editions are abridged), it's understandable that the publisher opted for an abridged edition; after all, sixteen hours is a substantial investment of listening time. An Audio CD edition is available. And yes, I'm currently listening to Volume 2.


Scott, William B., Michael J. Coumatos, and William J. Birnes. Space Wars. New York: Forge, 2007. ISBN 0-765-31379-0.
I believe it was Jerry Pournelle who observed that a Special Forces operative in Afghanistan on horseback is, with his GPS target designator and satellite communications link to an F-16 above, the closest thing in our plane of existence to an angel of death. But, take away the space assets, and he's just a guy on a horse.

The increasing dependence of the U.S. military on space-based reconnaissance, signal intelligence, navigation and precision guidance, missile warning, and communications platforms has caused concern among strategic thinkers about the risk of an “asymmetrical attack” against them by an adversary. The technology needed to disable them is far less sophisticated and easier to acquire than the space assets, and their loss would disproportionately affect the U.S., which has fully integrated them into its operations. This novel, by a former chief wargamer of the U.S. Space Command (Coumatos), the editor-in-chief of Aviation Week and Space Technology (Scott), and co-author Birnes, uses a near-term fictional scenario set in 2010 to explore the vulnerabilities of military space and make the case for both active defence of these platforms and the ability to hold at risk the space-based assets of adversaries even if doing so gets the airheads all atwitter about “weapons in space” (as if a GPS constellation which lets you drop a bomb down somebody's chimney isn't a weapon). The idea, then, was to wrap the cautionary tale and policy advocacy in a Tom Clancy-style thriller which would reach a wider audience than a dull Pentagon briefing presentation.

The reality, however, as embodied in the present book, is simply a mess. I can't help but notice that the publisher, Forge, is an imprint of Tom Doherty Associates, best known for their Tor science fiction books. As I have observed earlier in comments about the recent novels by Orson Scott Card and Heinlein and Robinson, Doherty doesn't seem to pay much attention to copy editing and fact checking, and this book illustrates that the problem is not confined to the Tor brand. In fact, after this slapdash effort, I'm coming to look at Doherty as something like Que computer books in the 1980s—the name on the spine is enough to persuade me to leave it on the shelf.

Some of the following might be considered very mild spoilers, but I'm not going to put them in a spoiler warning since they don't really give away major plot elements or the ending, such as it is. The real spoiler is knowing how sloppy the whole thing is, and once you appreciate that, you won't want to waste your time on it anyway. First of all, the novel is explicitly set in the month of April 2010, and yet the “feel” and the technological details are much further out. Basically, the technologies in place three years from now are the same we have today, especially for military technologies which have long procurement times and glacial Pentagon deployment schedules. Yet we're supposed to believe that in less than thirty-six months from today, the Air Force will be operating a two-storey computer with 75,000 square feet of floor space, containing “an array of deeply stacked parallel nanoprocessing circuits”, with spoken natural language programming and query capability (pp. 80–81). On pp. 212–220 we're told of a super weapon inspired by Star Trek II: The Wrath of Khan which, having started its development as a jammer for police radar, is able to seize control of enemy unmanned aerial vehicles. And so protean is this weapon, its very name changes at random from SPECTRE to SCEPTRE from paragraph to paragraph.

The mythical Blackstar spaceplane figures in the story, described as incoherently as in co-author Scott's original cover story in Aviation Week. On p. 226 we're told the orbiter burns “boron-based gel fuel and atmospheric oxygen”, then on the very next page we hear of the “aerospike rocket engines”. Well, where do we start? A rocket does not burn atmospheric oxygen, but carries its own oxidiser. An aerospike is a kind of rocket engine nozzle, entirely different from the supersonic combustion ramjet one would expect on a spaceplane which used atmospheric oxygen. Further, the advantage of an aerospike is that it is efficient both at low and high altitudes, but there's no reason to use one on an orbiter which is launched at high altitude from a mother ship. And then on p. 334, the “aerospike” restarts in orbit, which you'll probably agree is pretty difficult to do when you're burning “atmospheric oxygen”, which is famously scarce at orbital altitudes.

Techno-gibberish is everywhere, reminiscent in verisimilitude of the dialogue in the television torture fantasy “24”. For example, “Yo' Jaba! Got a match on our parallel port. I am waaay cool!” (p. 247). On p. 174 a Rooskie no-goodnik finds orbital elements for U.S. satellites from “the American ‘space catalog’ she had hacked into through a Texas university's server”. Why not just go to CelesTrak, where this information has been available worldwide since 1985? The laws of orbital mechanics here differ from those of Newton; on p. 381, a satellite in a circular orbit “14,674 miles above sea level” is said to be orbiting at “17,500 MPH”. In fact, at this altitude orbital velocity is about 3.65 km/sec, or roughly 8,160 statute miles per hour. And astronauts in low earth orbit who lose their electrical power quickly freeze solid, “victims of space's hostile, unforgiving cold”. Actually, in intense sunlight for half of every orbit and with the warm Earth filling half the sky, getting rid of heat is the problem in low orbit. On pp. 285–290, an air-launched cruise missile is used to launch a blimp. Why not just let it go and let the helium do the job all by itself? On the political front, we're supposed to think that a spittle-flecked mullah raving that he was the incarnation of the Twelfth Imam, in the presence of the Supreme Leader and President of Iran, would not only escape being thrown in the dungeon, but walk out of the meeting with a go-ahead to launch a nuclear-tipped missile at a target in Europe. And there is much, much more like this.
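Checking the novel's orbital arithmetic takes one line of Newton: for a circular orbit, v = √(μ/r), where μ is Earth's gravitational parameter and r is measured from the planet's centre, not from sea level. Here is a minimal sketch in Python, assuming the standard gravitational parameter and mean Earth radius:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m
MILE = 1609.344             # statute mile, m

def circular_orbit_speed(altitude_m):
    """Speed (m/s) of a circular orbit at a given altitude above sea level."""
    r = R_EARTH + altitude_m        # radius measured from Earth's centre
    return math.sqrt(MU_EARTH / r)  # v = sqrt(mu / r) for a circular orbit

# The altitude quoted in the novel: 14,674 statute miles above sea level
v = circular_orbit_speed(14674 * MILE)
print(f"{v / 1000:.2f} km/s  ({v * 3600 / MILE:.0f} mph)")
```

Whatever Earth model you plug in, the point stands: nothing in a circular orbit at that altitude moves anywhere near 17,500 mph—that figure belongs to low Earth orbit, a few hundred kilometres up.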

I suppose it should have been a tip-off that the foreword was written by George Noory, who hosts the Coast to Coast AM radio program originally founded by Art Bell. Co-author Birnes was also co-author of the hilariously preposterous The Day After Roswell, which claims that key technologies in the second half of the twentieth century, including stealth aircraft and integrated circuits, were based on reverse-engineered alien technologies from a flying saucer which crashed in New Mexico in 1947. As stories go, Roswell, Texas seems more plausible, and a lot more fun, than this book.


Hicks, Stephen R. C. Explaining Postmodernism. Phoenix: Scholargy, 2004. ISBN 1-59247-642-2.
Starting more than ten years ago, with the mass pile-on to the Internet and the advent of sites with open content and comment posting, I have been puzzled by the extent of the anger, hatred, and nihilism which is regularly vented in such fora. Of all the people of my generation with whom I have associated over the decades (excepting, of course, a few genuine nut cases), I barely recall anybody who seemed to express such an intensely negative outlook on life and the world, or who was so instantly ready to impute “evil” (a word used incessantly for the slightest difference of opinion) to those with opposing views, or to inject ad hominem arguments or obscenity into discussions of fact and opinion. Further, this was not at all confined to traditionally polarising topics; in fact, having paid little attention to most of the hot-button issues in the 1990s, I first noticed it in nerdy discussions of topics such as the merits of different microprocessors, operating systems, and programming languages—matters which would seem unlikely, and in my experience had only rarely in the past, inspired partisans on various sides to such passion and vituperation. After a while, I began to notice one fairly consistent pattern: the most inflamed in these discussions, those whose venting seemed entirely disproportionate to the stakes in the argument, were almost entirely those who came of age in the mid-1970s or later; before the year 2000 I had begun to call them “hate kiddies”, but I still didn't understand why they were that way. One can speak of “the passion of youth”, of course, which is a real phenomenon, but this seemed something entirely different and off the scale of what I recall my contemporaries expressing in similar debates when we were of comparable age.

This has been one of those mysteries that's puzzled me for some years, as the phenomenon itself seemed to be getting worse, not better, and with little evidence that age and experience causes the original hate kiddies to grow out of their youthful excess. Then along comes this book which, if it doesn't completely explain it, at least seems to point toward one of the proximate causes: the indoctrination in cultural relativist and “postmodern” ideology which began during the formative years of the hate kiddies and has now almost entirely pervaded academia apart from the physical sciences and engineering (particularly in the United States, whence most of the hate kiddies hail). In just two hundred pages of main text, the author traces the origins and development of what is now called postmodernism to the “counter-enlightenment” launched by Rousseau and Kant, developed by the German philosophers of the 18th and 19th centuries, then transplanted to the U.S. in the 20th. But the philosophical underpinnings of postmodernism, which are essentially an extreme relativism which goes as far as denying the existence of objective truth or the meaning of texts, don't explain the near-monolithic adherence of its champions to the extreme collectivist political Left. You'd expect that philosophical relativism would lead its believers to conclude that all political tendencies were equally right or wrong, and that the correct political policy was as impossible to determine as ultimate scientific truth.

Looking at the philosophy espoused by postmodernists alongside the policy views they advocate and teach their students leads to the following contradictions which are summarised on p. 184:

  • On the one hand, all truth is relative; on the other hand, postmodernism tells it like it really is.
  • On the one hand, all cultures are equally deserving of respect; on the other, Western culture is uniquely destructive and bad.
  • Values are subjective—but sexism and racism are really evil. (There's that word!—JW)
  • Technology is bad and destructive—and it is unfair that some people have more technology than others.
  • Tolerance is good and dominance is bad—but when postmodernists come to power, political correctness follows.

The author concludes that it is impossible to explain these and other apparent paradoxes and the uniformly Left politics of postmodernists without understanding the history and the failures of collectivist political movements dating from Rousseau's time. On p. 173 is an absolutely wonderful chart which traces the mutation and consistent failure of socialism in its various guises from Marx to the present. With each failure, the response has been not to question the premises of collectivism itself, but rather to redefine its justification, means, and end. As failure has followed failure, postmodernism represents an abject retreat from reason and objectivity itself, either using the philosophy in a Machiavellian way to promote collectivist ideology, or urging acceptance of the contradictions themselves in the hope of creating what Nietzsche called ressentiment, which leads directly to the “everybody is evil”, “nothing works”, and “truth is unknowable” irrationalism and nihilism which renders those who believe it pliable in the hands of agenda-driven manipulators.

Based on some of the source citations and the fact that this work was supported in part by The Objectivist Center, the author appears to be a disciple of Ayn Rand, which is confirmed by his Web site. Although the author's commitment to rationalism and individualism, and disdain for their adversaries, permeates the argument, the more peculiar and eccentric aspects of the Objectivist creed are absent. For its size, insight, and crystal clear reasoning and exposition, I know of no better introduction to how postmodernism came to be, and how it is being used to advance a collectivist ideology which has been thoroughly discredited by sordid experience. And I think I'm beginning to comprehend how the hate kiddies got that way.


Scurr, Ruth. Fatal Purity. London: Vintage Books, 2006. ISBN 0-09-945898-5.
In May 1791, Maximilien Robespierre, until not long before an obscure provincial lawyer from Arras in northern France, elected to the Estates General convened by Louis XVI in 1789, spoke before what had by then reconstituted itself as the National Assembly, which was engaged in debating the penal code for the new Constitution of France. Before the Assembly were a number of proposals by a certain Dr. Guillotin, among which the second was, “In all cases of capital punishment (whatever the crime), it shall be of the same kind—i.e. beheading—and it shall be executed by means of a machine.” Robespierre argued passionately against all forms of capital punishment: “A conqueror that butchers his captives is called barbaric. Someone who butchers a perverse child that he could disarm and punish seems monstrous.” (pp. 133–136)

Just two years later, Robespierre had become synonymous not only with the French Revolution but with the Terror it had spawned. Either at his direction, with his sanction, or under the summary arrest and execution without trial or appeal which he advocated, the guillotine claimed more than 2200 lives in Paris alone, 1376 of them between June 10th and July 27th of 1794, when Robespierre's power abruptly ended, along with the Terror, with his own date with the guillotine.

How did a mild-mannered provincial lawyer who defended the indigent and disadvantaged, amused himself by writing poetry, studied philosophy, and was universally deemed, even by his sworn enemies, to merit his sobriquet, “The Incorruptible”, become an archetypal monster of the modern age, a symbol of the darkness beneath the Enlightenment?

This lucidly written, well-argued, and meticulously documented book traces Robespierre's life from birth through downfall and execution at just age 36, and places his life in the context of the upheavals which shook France and to which, in his last few years, he contributed mightily. The author shows the direct link between Rousseau's philosophy, Robespierre's inflexible, whatever-the-cost commitment to implementing it, and its horrific consequences for France. Too many people forget that it was Rousseau who wrote in The Social Contract, “Now, as citizen, no man is judge any longer of the danger to which the law requires him to expose himself, and when the prince says to him: ‘It is expedient for the state that you should die’, then he should die…”. Seen in this light, the madness of Robespierre's reign is not the work of a madman, but of a rigorously rational application of a profoundly anti-human system of beliefs which some people persist in taking seriously even today.

A U.S. edition is available.


Buckley, Christopher. Boomsday. New York: Twelve, 2007. ISBN 0-446-57981-5.
Cassandra Devine is twenty-nine, an Army veteran who served in Bosnia, a PR genius specialising in damage control for corporate malefactors, a high-profile blogger in her spare time, and hopping mad. What's got her Irish up (and she's Irish on both sides of the family) is the imminent retirement of the baby boom generation—boomsday—when seventy-seven million members of the most self-indulgent and -absorbed generation in history will depart the labour pool and begin to laze away their remaining decades in their gated, golf-course retirement communities, sending the extravagant bills to their children and grandchildren, every two of whom can expect to support one retired boomer, adding up to an increase in total taxes on the young of between 30% and 50%.

One night, while furiously blogging, it came to her. A modest proposal which would, at once, render Social Security and Medicare solvent without any tax increases, provide free medical care and prescription drugs to the retired, permit the elderly to pass on their estates to their heirs tax-free, and reduce the burden of care for the elderly on the economy. There is a catch, of course, but the scheme polls like pure electoral gold among the 18–30 “whatever generation”.

Before long, Cassandra finds herself in the middle of a presidential campaign where the incumbent's slogan is “He's doing his best. Really.” and the challenger's is “No Worse Than The Others”, with her ruthless entrepreneur father, a Vatican diplomat, a southern media preacher, Russian hookers, a nursing home serial killer, the North Koreans, and what's left of the legacy media sucked into the vortex. Buckley is a master of the modern political farce, and this is a thoroughly delightful read which makes you wonder just how the under-thirties will react when the bills run up by the boomers start to come due.


June 2007

Segell, Michael. The Devil's Horn. New York: Picador, 2005. ISBN 0-312-42557-0.
When Napoléon III seized power in his coup of 1851 (he proclaimed himself Emperor of France the following year), his very first decree did not have to do with any of the social, economic, or political crises the country faced, but rather reinstating the saxophone in French military bands, reversing a ban on the instrument imposed by the Second Republic (p. 220). There is something about the saxophone—its lascivious curves and seductive sound, perhaps, or its association with avant garde and not entirely respectable music—which has made it the object of attacks by prudes, puritans, and musical elitists almost from the time of its invention in the early 1840s by Belgian Adolphe Sax. Nazi Germany banned the sax as “decadent”; Stalin considered it a “dangerous capitalist instrument” and had saxophonists shot or sent to Siberia; the League of Catholic Decency in the United States objected not to the steamy images on the screen in the 1951 film A Streetcar Named Desire, but rather the sultry saxophone music which accompanied it, and signed off on the scene when it was re-scored for French horn and strings; and in Kansas City, Missouri, it was once against the law to play a saxophone outside a nightclub from ten-thirty at night until six in the morning (which seems awfully early to me to be playing a saxophone unless you've been at it all night).

Despite its detractors, political proscribers, somewhat disreputable image, and failure to find a place in symphony orchestras, this relative newcomer has infiltrated almost all genres of music, sparked the home music and school band crazes in the United States, and become central to the twentieth century evolution of jazz, big band, rhythm and blues, and soul music. A large and rapidly expanding literature of serious and experimental music for the instrument exists, and many conservatories which once derided the “vulgar horn” now teach it.

This fascinating book tells the story of Sax, the saxophone, saxophonists, and the music and culture they have engendered. Even folks like myself, who cannot coax music from anything more complicated than an iPod (I studied saxophone for two years in grade school before concluding, with the enthusiastic concurrence of my aurally assaulted parents, that my talents lay elsewhere), will find many a curious and delightful detail to savour, such as the monstrous contrabass saxophone (which sounds something like a foghorn), and the fact that Adolphe Sax, something of a mad scientist, also invented (but, thankfully, never built) an organ powered by a locomotive engine which could “play the music of Meyerbeer for all of Paris” and the “Saxocannon”, a mortar which would fire a half-kiloton bullet 11 yards wide, which “could level an entire city” (pp. 27–28)—and people complain about the saxophone! This book will make you want to re-listen to a lot of music, which you're likely to understand much better knowing the story of how it, and those who made it, came to be.


[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 2. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1788, 1789] 1998. ISBN 9-62634-122-X.
The “Volume 2” in the title of this work refers to the two volumes of this audiobook edition. This is an abridgement of the final three volumes of Gibbon's history, primarily devoted to the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453, although the fractious kingdoms of the west, the Crusades, the conquests of Genghis Khan and Tamerlane, and the origins of the great schism between the Roman and Eastern Orthodox churches all figure in this history. I understand why many people read only the first three volumes of Gibbon's masterpiece—the doings of the Byzantine court are, well, byzantine, and the steady litany of centuries of backstabbing, betrayal, intrigue, sedition, self-indulgence, and dissipation can become both tedious and depressing. Although there are some sharply-worded passages which may have raised eyebrows in the eighteenth century, I did not find Gibbon anywhere near as negative on the influence of Christianity on the Roman Empire as I expected from descriptions of his work by others. The facile claim that “Gibbon blamed the fall of Rome on the Christians” is simply not borne out by his own words.

Please see my comments on Volume 1 for details of the (superb) production values of this seven hour recording. An Audio CD edition is available.


Tipler, Frank J. The Physics of Christianity. New York: Doubleday, 2007. ISBN 0-385-51424-7.
Oh. My. Goodness. Are you yearning for answers to the Big Questions which philosophers and theologians have puzzled over for centuries? Here you are, using direct quotes from this book in the form of a catechism of this beyond-the-fringe science cataclysm.

What is the purpose of life in the universe?
It is not enough to annihilate some baryons. If the laws of physics are to be consistent over all time, a substantial percentage of all the baryons in the universe must be annihilated, and over a rather short time span. Only if this is done will the acceleration of the universe be halted. This means, in particular, that intelligent life from the terrestrial biosphere must move out into interstellar and intergalactic space, annihilating baryons as they go. (p. 67)
What is the nature of God?
God is the Cosmological Singularity. A singularity is an entity that is outside of time and space—transcendent to space and time—and it is the only thing that exists that is not subject to the laws of physics. (p. 269)
How can the three persons of the Trinity be one God?
The Cosmological Singularity consists of three Hypostases: the Final Singularity, the All-Presents Singularity, and the Initial Singularity. These can be distinguished by using Cauchy sequences of different sorts of person, so in the Cauchy completion, they become three distinct Persons. But still, the three Hypostases of the Singularity are just one Singularity. The Trinity, in other words, consists of three Persons but only one God. (pp. 269–270.)
How did Jesus walk on water?
For example, walking on water could be accomplished by directing a neutrino beam created just below Jesus' feet downward. If we ourselves knew how to do this, we would have the perfect rocket! (p. 200)
What is Original Sin?
If Original Sin actually exists, then it must in some way be coded in our genetic material, that is, in our DNA. … By the time of the Cambrian Explosion, if not earlier, carnivores had appeared on Earth. Evil had appeared in the world. Genes now coded for behavior that guided the use of biological weapons of the carnivores. The desire to do evil was now hereditary. (pp. 188, 190)
How can long-dead saints intercede in the lives of people who pray to them?
According to the Universal Resurrection theory, everyone, in particular the long-dead saints, will be brought back into existence as computer emulations in the far future, near the Final Singularity, also called God the Father. … Future-to-past causation is usual with the Cosmological Singularity. A prayer made today can be transferred by the Singularity to a resurrected saint—the Virgin Mary, say—after the Universal Resurrection. The saint can then reflect on the prayer and, by means of the Son Singularity acting through the multiverse, reply. The reply, via future-to-past causation, is heard before it is made. It is heard billions of years before it is made. (p. 235)
When will the End of Days come?
In summary, by the year 2050 at the latest, we will see:
  1. Intelligent machines more intelligent than humans.
  2. Human downloads, effectively invulnerable and far more capable than normal humans.
  3. Most of humanity Christian.
  4. Effectively unlimited energy.
  5. A rocket capable of interstellar travel.
  6. Bombs that are to atomic bombs as atomic bombs are to spitballs, and these weapons will be possessed by practically anybody who wants one.
(p. 253)

Hey, I said answers, not correct answers! This is only a tiny sampler of the side-splitting “explanations” of Christian mysteries and miracles in this book. Others include the virgin birth, the problem of evil, free will, the resurrection of Jesus, the shroud of Turin and the holy grail, the star of Bethlehem, transubstantiation, quantum gravity, the second coming, and more, more, more. Quoting them all would mean quoting almost the whole book—if you wish to be awed by or guffaw at them all, you're going to have to read the whole thing. And that's not all, since it seems like every other page or so there's a citation of Tipler's 1994 opus, The Physics of Immortality (read my review), so some sections are likely to be baffling unless you suspend disbelief and slog your way through that tome as well.

Basically, Tipler sees your retro-causality and raises you retro-teleology. In order for the laws of physics, in particular the unitarity of quantum mechanics, to be valid, the universe must evolve to a final singularity with no event horizons—the Omega Point. But for this to happen, as it must, since the laws of physics are never violated, intelligent life must halt the accelerating expansion of the universe and turn it around into contraction. Because this must happen, the all-knowing Final Singularity, which Tipler identifies with God the Father, acts as a boundary condition which causes fantastically improbable events such as the simultaneous tunnelling disintegration of every atom of the body of Jesus into neutrinos to become certainties, because otherwise the Final Singularity Omega Point will not be formed. Got that?

I could go on and on, but by now I think you'll have gotten the point, even if it isn't an Omega Point. The funny thing is, I'm actually sympathetic to much of what Tipler says here: his discussion of free will in the multiverse and the power of prayer or affirmation is not that unlike what I suggest in my eternally under construction “General Theory of Paranormal Phenomena”, and I share Tipler's optimism about the human destiny and the prospects, in a universe of which 95% of the mass is made of stuff we know absolutely nothing about, of finding sources of energy as boundless and unimagined as nuclear fission and fusion were a century ago. But folks, this is just silly. One of the most irritating things is Tipler's interpreting scripture to imply a deep knowledge of recently-discovered laws of physics and then turning around, a few pages later, when the argument requires it, to claim that another passage was influenced by contemporary beliefs of the author which have since been disproved. Well, which is it?

If you want to get a taste of this material, see “The Omega Point and Christianity”, which contains much of the physics content of the book in preliminary form. The entire first chapter of the published book can be downloaded in icky Microsoft Word format from the author's Web site, where additional technical and popular articles are available.

For those unacquainted with the author, Frank J. Tipler is a full professor of mathematical physics at Tulane University in New Orleans, pioneer in global methods in general relativity, discoverer of the massive rotating cylinder time machine, one of the first to argue that the resolution of the Fermi Paradox is, as his paper was titled, “Extraterrestrial Intelligent Beings Do Not Exist”, and, with John Barrow, author of The Anthropic Cosmological Principle, the definitive work on that topic. Say what you like, but Tipler is a serious and dedicated scientist with world-class credentials who believes that the experimentally-tested laws of physics as we understand them are not only consistent with, but require, many of the credal tenets which traditional Christians have taken on faith. The research program he proposes (p. 271), “… would make Christianity a branch of physics.” Still, as I wrote almost twelve years ago, were I he, I'd be worried about getting on the wrong side of the Old One.

Finally, and this really bothers me, I can't close these remarks without mentioning the dedication: “To God's Chosen People, the Jews, who for the first time in 2,000 years are advancing Christianity.” There is an entire chapter titled “Anti-Semitism Is Anti-Christian” (pp. 243–256) which purports to explain this remark on its last page. I've read the book; I've read the explanation; and the dedication still seems both puzzling and disturbing to me.


Brozik, Matthew David and Jacob Sager Weinstein. The Government Manual for New Superheroes. Kansas City: Andrews McMeel, 2005. ISBN 0-7407-5462-9.
(Guest review by The Punctuator)
The Government of the Unified Nations has done a tremendous service to all superheroes, whether alien, mutant, or merely righteous human do-gooders, by publishing this essential manual filled with tips for getting your crimefighting career off to the right start and avoiding the many pitfalls of the profession. Short, pithy chapters provide wise counsel on matters such as choosing a name, designing a costume, finding an exotic hideaway, managing a secret identity, and more. The chapter on choosing a sidekick would have allowed me to avoid the whole unpleasant and regrettable business with Octothorpe and proceed directly to my entirely satisfactory present protégé, Apostrophe Squid. The advantages and drawbacks of joining a team of superheroes are discussed candidly, along with the warning signs that you may be about to inadvertently join a cabal of supervillains (for example, their headquarters is named “The whatever of Doom” as opposed to “The whatever of Justice”). An afterword by The Eviliminator: Eliminator of Evil Things but Defender of Good Ones reveals the one sure-fire way to acquire superpowers, at least as long as you aren't a troublemaking, question-asking pinko hippie egghead. The book is small, printed with rounded corners, and ideal for slipping into a cape pocket. I would certainly never leave it behind when setting out in pursuit of the nefarious Captain Comma Splice. Additional information is available on the Government's Bureau of Superheroics Web site.


Kondo, Yoji, Frederick Bruhweiler, John Moore, and Charles Sheffield, eds. Interstellar Travel and Multi-Generation Space Ships. Burlington, Ontario, Canada: Apogee Books, 2003. ISBN 1-896522-99-8.
This book is a collection of papers presented at a symposium organised in 2002 by the American Association for the Advancement of Science. More than half of the content discusses the motivations, technology, and prospects for interstellar flight (both robotic probes and “generation ship” exploration and colonisation missions), while the balance deals with anthropological, genetic, and linguistic issues in crew composition for a notional mission with a crew of 200 with a flight time of two centuries. An essay by Freeman Dyson on “Looking for Life in Unlikely Places” explores the signatures of ubiquitous vacuum-adapted life and how surprisingly easy it might be to detect, even as far as one light-year from Earth.

This volume contains the last published works of Charles Sheffield and Robert L. Forward, both of whom died in 2002. The papers are all accessible to the scientifically literate layman and, with one exception, of high quality. Regrettably, nobody seemed to have informed the linguist contributor that any interstellar mission would certainly receive a steady stream of broadband transmissions from the home planet (who would fund a multi-terabuck mission without the ability to monitor it and receive the results?), but that chapter is only four pages and may be deemed comic relief.


Bawer, Bruce. While Europe Slept. New York: Doubleday, 2006. ISBN 0-385-51472-7.
In 1997, the author visited the Netherlands for the first time and “thought I'd found the closest thing to heaven on earth”. Not long thereafter, he left his native New York for Europe, where he has lived ever since, most recently in Oslo, Norway. As an American in Europe, he has identified and pointed out many of the things which Europeans, whether out of politeness, deference to their ruling elites, or a “what-me-worry?” willingness to defer the apocalypse to their dwindling cohort of descendants, rarely speak of, at least in the public arena.

As the author sees it, Europe is going down, the victim of multiculturalism, disdain and guilt for their own Western civilisation, and “tolerance for [the] intolerance” of a fundamentalist Muslim immigrant population which, by its greater fertility, “fetching marriages”, and family reunification, may result in Muslim majorities in one or more European countries by mid-century.

This is a book which may open the eyes of U.S. readers who haven't spent much time in Europe to just how societally-suicidal many of the mainstream doctrines of Europe's ruling elites are, and how wide the gap is between this establishment (which is a genuine cultural phenomenon in Europe, encompassing academia, media, and the ruling class, far more so than in the U.S.) and the population, who are increasingly disenfranchised by the profoundly anti-democratic commissars of the odious European Union.

This is, however, an unsatisfying book. The author, who has won several awards and been published in prestigious venues, seems more at home with essays than the long form. The book reads like a feature article from The New Yorker which grew to book length without revision or editorial input. The 237 page text is split into just three chapters, putatively chronologically arranged but, in fact, rambling all over the place, each mixing the author's anecdotal observations with stories from secondary sources, none of which are cited, neither in foot- nor end-notes, nor in a bibliography.

If you're interested in these issues (and in the survival of Western civilisation and Enlightenment values), you'll get a better picture of the situation in Europe from Claire Berlinski's Menace in Europe (July 2006). As a narrative of the experience of a contemporary American in Europe, or as an assessment of the cultural gap between Western (and particularly Northern) Europe and the U.S., this book may be useful for those who haven't experienced these cultures for themselves, but readers should not over-generalise the author's largely anecdotal reporting in a limited number of countries to Europe as a whole.


Dyson, Freeman J. The Scientist as Rebel. New York: New York Review Books, 2006. ISBN 1-59017-216-7.
Freeman Dyson is one of the most consistently original thinkers of our time. This book, a collection of his writings between 1964 and 2006, amply demonstrates the breadth and depth of his imagination. Twelve long book reviews from The New York Review of Books allow Dyson, after doing his duty to the book and author, to depart on his own exploration of the subject matter. One of these reviews, of Brian Greene's The Fabric of the Cosmos, is where Dyson first asked whether it is possible, using any apparatus permitted by the laws of physics and the properties of our universe, ever to detect a single graviton and, if not, whether quantum gravity has any physical meaning. It was this remark which led to the Rothman and Boughn paper, “Can Gravitons be Detected?”, which proposes what may be the most outrageous scientific apparatus ever suggested.

Three chapters of Dyson's 1984 book Weapons and Hope (now out of print) appear here, along with other essays, forewords to books, and speeches on topics as varied as history, poetry, great scientists, war and peace, colonising the galaxy comet by comet, nanotechnology, biological engineering, the post-human future, religion, the paranormal, and more. Dyson's views on religion will send the Dawkins crowd around the bend, and his open-minded attitude toward the paranormal (in particular, chapter 27) will similarly derange dogmatic sceptics (he even recommends Rupert Sheldrake's Dogs That Know When Their Owners Are Coming Home). Chapters written some time ago are accompanied by postscripts updating them to 2006.

This is a collection of gems with nary a clinker in the lot. Anybody who rejoices in visionary thinking and superb writing will find much of both. The chapters are almost completely independent of one another and can be read in any order, so you can open the book at random and be sure to delight in what you find.


July 2007

Crichton, Michael. Next. New York: HarperCollins, 2006. ISBN 0-06-087298-5.
Several of the essays in Freeman Dyson's The Scientist as Rebel (June 2007) predict that “the next Big Thing” and a central theme of the present century will be the discovery of the fine-grained details of biology and the emergence of technologies which can achieve essentially anything which is possible with the materials and processes of life. This, Dyson believes, will have an impact on the lives of humans and the destiny of humanity and the biosphere which dwarfs those of any of the technological revolutions of the twentieth century.

In this gripping novel, page-turner past master (and medical doctor) Michael Crichton provides a glimpse of a near-term future in which these technologies are coming up to speed. It's going to be a wild and woolly world once genes start jumping around among metazoan species with all the promiscuity of prokaryotic party time, and Crichton weaves this into a story which is simultaneously entertaining, funny, and cautionary. His trademark short chapters (averaging just a little over four pages) are like potato chips to the reader—just one more, you think, when you know you ought to have gotten to sleep an hour ago.

For much of the book, the story seems like a collection of independent short stories interleaved with one another. As the pages dwindle, you begin to wonder, “How the heck is he going to pull all this together?” But that's what master story tellers do, and he succeeds delightfully. One episode in this book describes what is perhaps the worst backseat passenger on a road trip in all of English fiction; you'll know what I'm talking about when you get to it. The author has a great deal of well-deserved fun at the expense of the legacy media: it's payback time for all of those agenda-driven attack reviews of State of Fear (January 2005).

I came across two amusing typos: at the bottom of p. 184, I'm pretty sure “A transgender higher primate” is supposed to be “A transgenic higher primate”, and on p. 428 in the bibliography, I'm certain that the title of Sheldon Krimsky's book is Science in the Private Interest, not “Science in the Primate Interest”—what a difference a letter can make!

In an Author's Note at the end, Crichton presents one of the most succinct and clearly argued cases I've encountered why the patenting of genes is not just destructive of scientific inquiry and medical progress, but also something which even vehement supporters of intellectual property in inventions and artistic creations can oppose without being inconsistent.


Epstein, Robert. The Case Against Adolescence. Sanger, CA: Quill Driver Books, 2007. ISBN 1-884956-70-X.
What's the matter with kids today? In this exhaustively documented breakthrough book, the author argues that adolescence, as it is presently understood in developed Western countries, is a social construct which was created between 1880 and 1920 by well-intentioned social reformers responding to excesses of the industrial revolution and mass immigration to the United States. Their remedies—compulsory education, child labour laws, the juvenile justice system, and the proliferation of age-specific restrictions on “adult” activities such as driving, drinking alcohol, and smoking—had the unintended consequence of almost completely segregating teenagers from adults, trapping them in a vacuous peer culture and prolonging childhood up to a decade beyond the age at which young people begin to assume the responsibilities of adulthood in traditional societies.

Examining anthropological research on other cultures and historical evidence from past centuries, the author concludes that the “storm and stress” which characterises modern adolescence is the consequence of the infantilisation of teens, and their confinement in a peer culture with little contact with adults. In societies and historical periods where the young became integrated into adult society shortly after puberty and began to shoulder adult responsibilities, there is no evidence whatsoever for anything like the dysfunctional adolescence so often observed in the modern West—in fact, a majority of preindustrial cultures have no word in their language for the concept of adolescence.

Epstein, a psychologist who did his Ph.D. under B. F. Skinner at Harvard, and former editor-in-chief of Psychology Today magazine, presents results of a comprehensive test of adultness he developed along with Diane Dumas which demonstrate that in most cases the competencies of people in the 13 to 17 year range do not differ from those of adults between twenty and seventy-one by a statistically significant margin. (I should note that the groups surveyed, as described on pp. 154–155, differed wildly in ethnic and geographic composition from the U.S. population as a whole; I'd love to see the cross-tabulations.) An abridged version of the test is included in the book; you can take the complete test online. (My score was 98%, with most of the demerits due to placing less trust in figures of authority than the author deems wise.)

So, if there is little difference in the basic competences of teens and adults, why are so many adolescents such vapid, messed-up, apathetic losers? Well, consider this: primates learn by observing (monkey see) and by emulating (monkey do). For millions of years our ancestors have lived in bands in which the young had most of their contact with adults, and began to do the work of adults as soon as they were physically and mentally capable of doing so. This was the near-universal model of human societies until the late 19th century and remains so in many non-Western cultures. But in the West, this pattern has been replaced by warehousing teenagers in government schools which effectively confine them with others of their age. Their only adult contacts apart from (increasingly absent) parents are teachers, who are inevitably seen as jailors. How are young people to be expected to turn their inherent competencies into adult behaviour if they spend almost all of their time only with their peers?

Now, the author doesn't claim that everybody between the ages of 13 and 17 has the ability to function as an adult. Just as with physical development, different individuals mature at different rates, and one may have superb competence in one area and remain childish in another. But, on the other hand, simply turning 18 or 21 or whatever doesn't magically endow someone with those competencies either—many adults (defined by age) perform poorly as well.

In two breathtaking final chapters, the author argues for the replacement of virtually all age-based discrimination in the law with competence testing in the specific area involved. For example, a 13 year old could entirely skip high school by passing the equivalency examination available to those 18 or older. There's already a precedent for this—we don't automatically allow somebody to drive, fly an airplane, or operate an amateur radio station when they reach a certain age: they have to pass a comprehensive examination on theory, practice, and law. Why couldn't this basic concept be extended to most of the rights and responsibilities we currently grant based purely upon age? Think of the incentive such a system would create for teens to acquire adult knowledge and behaviour as early as possible, knowing that it would be rewarded with adult rights and respect, instead of being treated like children for what could be some of the most productive years of their lives.

Boxes throughout the text highlight the real-world achievements of young people in history and the present day. (Did you know that Sergey Karjakin became a chess grandmaster at the age of 12 years and 7 months? He is among seven who achieved grandmaster ranking at an age younger than Bobby Fischer's 15 years and 6 months.) There are more than 75 pages of end notes and bibliography. (I wonder if the author is aware that note 68 to chapter 5 [p. 424] cites a publication of the Lyndon LaRouche organisation.)

It isn't often you pick up a book with laudatory blurbs by a collection of people including Albert Ellis, Deepak Chopra, Joyce Brothers, Alvin Toffler, George Will, John Taylor Gatto, Suzanne Somers, and Buzz Aldrin. I concur with them that the author has put his finger precisely on the cause of a major problem in modern society, and laid out a challenging yet plausible course for remedying it. I discovered this book via an excellent podcast interview with the author on “The Glenn and Helen Show”.

About halfway through this book, I had one of the most chilling visions of the future I've experienced in many years. One of the things I've puzzled over for ages is what, precisely, is the end state of the vision of those who call themselves “progressives”—progress toward what, anyway? What would society look like if they had their way across the board? And then suddenly it hit me like a brick. If you want to see what the “progressive” utopia looks like, just take a glance at the lives of teenagers today, who are deprived of a broad spectrum of rights and denied responsibilities “for their own good”. Do-gooders always justify their do-badding “for the children”, and their paternalistic policies, by eviscerating individualism and autonomous judgement, continually create ever more “children”. The nineteenth century reformers, responding to genuine excesses of the industrial revolution, extended childhood from puberty to years later, inventing what we call adolescence. The agenda of today's “progressives” is inexorably extending adolescence to create a society of eternal adolescents, unworthy of the responsibilities of adults, and hence forever the childlike wards of an all-intrusive state and the elites which govern it. If you want a vision of the “progressive” future, imagine being back in high school—forever.


Frankfurt, Harry G. On Bullshit. Princeton: Princeton University Press, 2005. ISBN 0-691-12294-6.
This tiny book (just 67 pages of 9½×15 cm—I'd estimate about 7300 words) illustrates that there is no topic, however mundane or vulgar, which a first-rate philosopher cannot make so complicated and abstruse that it appears profound. The author, a professor emeritus of philosophy at Princeton University, first published this essay in 1986 in the Raritan Review. In it, he tackles the momentous conundrum of what distinguishes bullshit from lies. Citing authorities including Wittgenstein and Saint Augustine, he concludes that while the liar is ultimately grounded in the truth (being aware that what he is saying is counterfactual and crafting a lie to make the person to whom he tells it believe that), the bullshitter is entirely decoupled (or, perhaps in his own estimation, liberated) from truth and falsehood, and is simply saying whatever it takes to have the desired effect upon the audience.

Throughout, it's obvious that we're in the presence of a phil-oss-o-pher doing phil-oss-o-phy right out in the open. For example, on p. 33 we have:

It is in this sense that Pascal's (Fania Pascal, an acquaintance of Wittgenstein in the 1930s, not Blaise—JW) statement is unconnected to a concern with the truth; she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true.
(The Punctuator applauds the use of colons and semicolons in the passage quoted above!)

All of this is fine, but it seems to me that the author misses an important aspect of bullshit: the fact that in many cases—perhaps the overwhelming majority—the bullshittee is perfectly aware of being bullshitted by the bullshitter, and the bullshitter is conversely aware that the figurative bovid excrement emitted is being dismissed as such by those whose ears it befouls. Now, this isn't always the case: sometimes you find yourself in a tight situation faced with a difficult question and manage to bullshit your way through, but in the context of a “bull session”, only the most naïve would assume that what was said was sincere and indicative of the participants' true beliefs: the author cites bull sessions as a venue in which people can try on beliefs other than their own in a non-threatening environment.


Pyle, Ernie. Brave Men. Lincoln, NE: Bison Books, [1944] 2001. ISBN 0-8032-8768-2.
Ernie Pyle is perhaps the most celebrated war correspondent of all time, and this volume amply illustrates why. A collection of his columns for the Scripps-Howard newspapers edited into book form, it covers World War II from the invasion of Sicily in 1943 through the Normandy landings and the liberation of Paris in 1944. This is the first volume of three collections of his wartime reportage: the second and third, Here is Your War and Ernie Pyle in England, are out of print, but used copies are readily available at a reasonable price.

While most readers today know Pyle only from his battle dispatches, he was, in fact, a renowned columnist even before the United States entered the war—in the 1930s he roamed the nation, filing columns about Americana and Americans which became as beloved as the very similar television reportage decades later by Charles Kuralt who, in fact, won an Ernie Pyle Award for his reporting.

Pyle's first love and enduring sympathy was with the infantry, and few writers have expressed so eloquently the experience of being “in the line” beyond what most would consider the human limits of exhaustion, exertion, and fear. But in this book he also shows the breadth of the Allied effort, profiling Navy troop transport and landing craft, field hospitals, engineering troops, air corps dive and light bombers, artillery, ordnance depots, quartermaster corps, and anti-aircraft guns (describing the “scientific magic” of radar guidance without disclosing how it worked).

Apart from the prose, which is simultaneously unaffected and elegant, the thing that strikes a reader today is that in this entire book, written by a superstar columnist for the mainstream media of his day, there is not a single suggestion that the war effort, whatever the horrible costs he so candidly documents, is misguided, or that there is any alternative or plausible outcome other than victory. How much things have changed…. If you're looking for this kind of with-the-troops-on-the-ground reporting today, you won't find it in the legacy dead tree or narrowband one-to-many media, but rather in reader-supported front-line journalists such as Michael Yon—if you like what he's writing, hit the tip jar and keep him at the front; think of it like buying the paper with Ernie Pyle's column.

Above, I've linked to a contemporary reprint edition of this work. Actually, I read a hardbound sixth printing of the 1944 first edition which I found in a used bookstore in Marquette, Michigan (USA) for less than half the price of the paperback reprint; visit your local bookshop—there are wonderful things there to be discovered.


August 2007

[Audiobook] Caesar, Gaius Julius and Aulus Hirtius. The Commentaries. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [ca. 52–51 B.C., ca. 45 B.C.] 2004. ISBN 1-929718-44-6.
This audiobook is an unabridged reading of English translations of Caesar's commentaries on the Gallic (Commentarii de Bello Gallico) and Civil (Commentarii de Bello Civili) wars between 58 and 48 B.C. (The eighth book of the Gallic wars commentary, covering the minor campaigns of 51 B.C., was written by his friend Aulus Hirtius after Caesar's assassination.) The recording is based upon the rather eccentric Rex Warner translation, which is now out of print. In the original Latin text, Caesar always referred to himself in the third person, as “Caesar”. Warner rephrased the text (with the exception of the book written by Hirtius) as a first person narrative. For example, the first sentence of paragraph I.25 of The Gallic Wars:
Caesar primum suo, deinde omnium ex conspectu remotis equis, ut aequato omnium periculo spem fugae tolleret, cohortatus suos proelium commisit.
in Latin, is conventionally translated into English as something like this (from the rather stilted 1869 translation by W. A. McDevitte and W. S. Bohn):
Caesar, having removed out of sight first his own horse, then those of all, that he might make the danger of all equal, and do away with the hope of flight, after encouraging his men, joined battle.
but the Warner translation used here renders this as:
I first of all had my own horse taken out of the way and then the horses of other officers. I wanted the danger to be the same for everyone, and for no one to have any hope of escape by flight. Then I spoke a few words of encouragement to the men before joining battle.   [1:24:17–30]
Now, whatever violence this colloquial translation does to the authenticity of Caesar's spare and eloquent Latin, from a dramatic standpoint it works wonderfully with the animated reading of award-winning narrator Charlton Griffin; the listener has the sense of being across the table in a tavern from GJC as he regales all present with his exploits.

This is “just the facts” war reporting. Caesar viewed this work not as history, but rather as raw material for future historians. There is little discussion of grand strategy or, even in the commentaries on the civil war, of the political conflict which provoked the military confrontation between Caesar and Pompey. While these despatches doubtless served as propaganda on Caesar's part, he writes candidly of his own errors and the cost of the defeats they occasioned. (Of course, since these are the only extant accounts of most of these events, there's no way to be sure there isn't some Caesarian spin in his presentation, but since these commentaries were published in Rome, which received independent reports from officers and literate legionaries in Caesar's armies, it's unlikely he would have risked embellishing too much.)

Two passages of unknown length in the final book of the Civil war commentaries have been lost—these are handled by the reader stopping in mid-sentence, with another narrator explaining the gap and the historical consensus of the events in the lost text.

This audiobook is distributed in three parts, totalling 16 hours and 40 minutes. That's a big investment of time in the details of battles which took place more than two thousand years ago, but I'll confess I found it fascinating, especially since some of the events described took place within sight of where I take the walks on which I listened to this recording over several weeks. An Audio CD edition is available.


Carr, Bernard, ed. Universe or Multiverse? Cambridge: Cambridge University Press, 2007. ISBN 0-521-84841-5.
Before embarking upon his ultimately successful quest to discover the laws of planetary motion, Johannes Kepler tried to explain the sizes of the orbits of the planets from first principles: developing a mathematical model of the orbits based upon nested Platonic solids. Since, at the time, the solar system was believed by most to be the entire universe (with the fixed stars on a sphere surrounding it), it seemed plausible that the dimensions of the solar system would be fixed by fundamental principles of science and mathematics. Even though he eventually rejected his model as inaccurate, he never completely abandoned it—it was for later generations of astronomers to conclude that there is nothing fundamental whatsoever about the structure of the solar system: it is simply a contingent product of the history of its condensation from the solar nebula, and could have been entirely different. With the discovery of planets around other stars in the late twentieth century, we now know that not only do planetary systems vary widely, but many are substantially weirder than most astronomers or even science fiction writers would have guessed.

Since the completion of the Standard Model of particle physics in the 1970s, a major goal of theoretical physicists has been to derive, from first principles, the values of the more than twenty-five “free parameters” of the Standard Model (such as the masses of particles, relative strengths of forces, and mixing angles). At present, these values have to be measured experimentally and put into the theory “by hand”, and there is no accepted physical explanation for why they have the values they do. Further, many of these values appear to be “fine-tuned” to allow the existence of life in the universe (or at least, life which resembles ourselves)—a tiny change, for example, in the mass ratio of the up and down quarks and the electron would result in a universe with no heavy elements or chemistry; it's hard to imagine any form of life which could be built out of just protons or neutrons. The emergence of a Standard Model of cosmology has only deepened the mystery, adding additional apparently fine-tunings to the list. Most stunning is the cosmological constant, which appears to have a nonzero value which is 124 orders of magnitude smaller than predicted from a straightforward calculation from quantum physics.
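To get a feel for the size of that discrepancy, here is a back-of-the-envelope sketch. The naïve quantum calculation cuts the vacuum's zero-point modes off at the Planck scale, giving roughly the Planck density, while the observed dark energy density is a fraction of the cosmological critical density. (The values for the Hubble constant and dark energy fraction below are round illustrative numbers, so the exact exponent depends on the conventions chosen.)

```python
import math

# Physical constants (SI units)
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J*s
G    = 6.674e-11    # Newton's constant, m^3/(kg*s^2)

# Naive quantum estimate: zero-point energy with a Planck-scale cutoff
# is of order the Planck density, c^5 / (hbar * G^2).
rho_planck = c**5 / (hbar * G**2)                   # ~5e96 kg/m^3

# Observed: dark energy is roughly 70% of the critical density 3H^2/(8*pi*G).
H0 = 70e3 / 3.086e22                                # ~70 km/s/Mpc, in 1/s
rho_lambda = 0.7 * 3 * H0**2 / (8 * math.pi * G)    # ~6e-27 kg/m^3

discrepancy = math.log10(rho_planck / rho_lambda)
print(f"Discrepancy: about {discrepancy:.0f} orders of magnitude")
```

With these round inputs the mismatch comes out at roughly 123 orders of magnitude, in the same ballpark as the figure quoted above.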

One might take these fine-tunings as evidence of a benevolent Creator (which is, indeed, discussed in chapters 25 and 26 of this book), or of our living in a simulation crafted by a clever programmer intent on optimising its complexity and degree of interestingness (chapter 27). But most physicists shy away from such deus ex machina and “we is in machina” explanations and seek purely physical reasons for the values of the parameters we measure.

Now let's return for a moment to Kepler's attempt to derive the orbits of the planets from pure geometry. The orbit of the Earth appears, in fact, fine-tuned to permit the existence of life. Were it more elliptical, or substantially closer to or farther from the Sun, persistent liquid water on the surface would not exist, as seems necessary for terrestrial life. The apparent fine-tuning can be explained, however, by the high probability that the galaxy contains a multitude of planetary systems of every possible variety, and such a large ensemble is almost certain to contain a subset (perhaps small, but not void) in which an earthlike planet is in a stable orbit within the habitable zone of its star. Since we can only have evolved and exist in such an environment, we should not be surprised to find ourselves living on one of these rare planets, even though such environments represent an infinitesimal fraction of the volume of the galaxy and universe.

As efforts to explain the particle physics and cosmological parameters have proved frustrating, and theoretical investigations into cosmic inflation and string theory have suggested that the values of the parameters may have simply been chosen at random by some process, theorists have increasingly been tempted to retrace the footsteps of Kepler and step back from trying to explain the values we observe, and instead view them, like the masses and the orbits of the planets, as the result of an historical process which could have produced very different results. The apparent fine-tuning for life is like the properties of the Earth's orbit—we can only measure the parameters of a universe which permits us to exist! If they didn't, we wouldn't be here to do the measuring.

But note that like the parallel argument for the fine-tuning of the orbit of the Earth, this only makes sense if there are a multitude of actually existing universes with different random settings of the parameters, just as only a large ensemble of planetary systems can contain a few like the one in which we find ourselves. This means that what we think of as our universe (everything we can observe or potentially observe within the Hubble volume) is just one domain in a vastly larger “multiverse”, most or all of which may remain forever beyond the scope of scientific investigation.

Now such a breathtaking concept provides plenty for physicists, cosmologists, philosophers, and theologians to chew upon, and macerate it they do in this thick (517 page), heavy (1.2 kg), and expensive (USD 85) volume, which is drawn from papers presented at conferences held between 2001 and 2005. Contributors include two Nobel laureates (Steven Weinberg and Frank Wilczek), and just about everybody else prominent in the multiverse debate, including Martin Rees, Stephen Hawking, Max Tegmark, Andrei Linde, Alexander Vilenkin, Renata Kallosh, Leonard Susskind, James Hartle, Brandon Carter, Lee Smolin, George Ellis, Nick Bostrom, John Barrow, Paul Davies, and many more. The editor's goal was that the papers be written for the intelligent layman: like articles in the pre-dumbed-down Scientific American or “front of book” material in Nature or Science. In fact, the chapters vary widely in technical detail and difficulty; if you don't follow this stuff closely, your eyes may glaze over in some of the more equation-rich chapters.

This book is far from a cheering section for multiverse theories: both sides are presented and, in fact, the longest chapter is that of Lee Smolin, which deems the anthropic principle and anthropic arguments entirely nonscientific. Many of these papers are available in preliminary form for free on the arXiv preprint server; if you can obtain a list of the chapter titles and authors from the book, you can read most of the content for free. Renata Kallosh's chapter contains an excellent example of why one shouldn't blindly accept the recommendations of a spelling checker. On p. 205, she writes “…the gaugino condensate looks like a fractional instant on effect…”—that's supposed to be “instanton”!


Wilson, Daniel H. Where's My Jetpack? New York: Bloomsbury, 2007. ISBN 1-59691-136-0.
One of the best things about the past was that the future was so much cooler then! I mean, here we are, more than halfway through the first decade of the flippin' twenty-first century for heaven's sake, and there's nary a flying car, robot servant, underwater city, orbital hotel, or high-speed slidewalk anywhere in sight, and many of the joyless scolds who pass for visionaries in this timid and unimaginative age think we'd all be better off renouncing technology and going back to being hunter-gatherers—sheesh.

This book, by a technology columnist for Popular Mechanics, wryly surveys the promise and present-day reality of a variety of wonders from the golden age of boundless technological optimism. You may be surprised at the slow yet steady progress being made toward some of these visionary goals (but don't hold your breath waiting for the Star Trek transporter!). I was completely unaware, for example, of the “anti-sleeping pill” modafinil, which, based upon tests by the French Foreign Legion, the UK Ministry of Defence, and the U.S. Air Force, appears to allow maintaining complete alertness for up to 40 hours with no sleep and minimal side effects. And they said programmer productivity had reached its limits!

The book is illustrated with stylish graphics, but there are no photos of the real-world gizmos mentioned in the text, nor are there source citations or links to Web sites describing them—you're on your own following up the details. To answer the question in the title, “Where's My Jetpack?”, look here and here.


LeBlanc, Steven A. with Katherine E. Register. Constant Battles. New York: St. Martin's Griffin, 2003. ISBN 0-312-31090-0.
Steven LeBlanc is the Director of Collections at Harvard University's Peabody Museum of Archaeology and Ethnology. When he began his fieldwork career in the early 1970s, he shared the view, held by most archaeologists and anthropologists of his generation and by present-day laymen, that traditional societies of the hunter-gatherer and tribal farming eras were largely peaceful and lived in balance with their environments. It was, according to this view, only with the emergence of large chiefdoms and state-level societies that environmental degradation began to appear and mass conflict to emerge, culminating in the industrialised slaughter of the 20th century.

But, to the author, as a dispassionate scientist, looking at the evidence on the ground or dug up from beneath it in expeditions in the American Southwest, Turkey, and Peru, and in the published literature, there were many discrepancies from this consensus narrative. In particular, why would “peaceful” farming people build hilltop walled citadels far from their fields and sources of water if not for defensibility? And why would hard-working farmers obsess upon defence were there not an active threat from their neighbours?

Further investigations argue convincingly that the human experience, inherited directly from our simian ancestors, has been one of relentless population growth beyond the carrying capacity of our local environment, degradation of the ecosystem, and the inevitable conflict with neighbouring bands over scarce resources. Ironically, many of the reports of early ethnographers which appeared to confirm perennially-wrong philosopher Rousseau's vision of the “noble savage” were based upon observations of traditional societies which had recently been impacted by contact with European civilisation: population collapse due to exposure to European diseases to which they had no immunity, and increases in carrying capacity of the land thanks to introduction of European technologies such as horses, steel tools, and domestic animals, which had temporarily eased the Malthusian pressure upon these populations and suspended resource wars. But the archaeological evidence is that such wars are the norm, not an aberration.

In fact, notwithstanding the horrific death toll of twentieth century warfare, the rate of violent death among the human population has fallen to an all-time low in the nation-state era. Hunter-gatherer (or, as the authors prefer to call them, “forager”) and tribal farming societies typically lose about 25% of their male population and 5% of the females to warfare with neighbouring bands. Even the worst violence of the nation-state era, averaged over a generation, has a death toll only one eighth this level.

Are present-day humans (or, more specifically, industrialised Western humans) unprecedented despoilers of our environment and aggressors against inherently peaceful native people? Nonsense, argues this extensively documented book. Unsustainable population growth, resource exhaustion, environmental degradation, and lethal conflict with neighbours are as human as bipedalism and speech. Conflict is not inevitable, however, and civilisation, sustainable environmental policy, and yield-improving and resource-conserving technology are the best means of reducing its causes. Dreaming of a nonexistent past of peaceful people living in harmony with their environment isn't.

You can read any number of books about military history, from antiquity to the present, without ever encountering a discussion of “Why we fight”—that's the subtitle of this book, and I've never encountered a better source to begin to understand the answer to this question than you'll find here.


Radin, Dean. Entangled Minds. New York: Paraview Pocket Books, 2006. ISBN 1-4165-1677-8.
If you're looking to read just one book about parapsychology, written from the standpoint of a researcher who judges the accumulated evidence from laboratory investigations overwhelmingly persuasive, this is your book. (The closest runner-up, in my estimation, is the same author's The Conscious Universe from 1997.) The evidence for a broad variety of paranormal (or psi) phenomena is presented, much of it from laboratory studies from the 1990s and the present decade, including functional MRI imaging of the brain during psi experiments and the presentiment experiments of Radin and Dick Bierman. The history of parapsychology research is sketched in chapter 4, but the bulk of the text is devoted to recent, well-controlled laboratory work. Anecdotal psi phenomena are mentioned only in passing, and other paranormal mainstays such as UFOs, poltergeists, Bigfoot, and the like are not discussed at all.

For each topic, the author presents a meta-analysis of unimpeached published experimental results, controlling for quality of experimental design and estimating the maximum impact of the “file drawer effect”: calculating how many unpublished experiments with chance results would have to exist to reduce the probability of the reported results to the chance expectation. All of the effects reported are very small, but a meta-meta-analysis across all 1,019 experiments studied yields odds against the results being due to chance of 1.3×10¹⁰⁴ to 1.
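The classic form of this file-drawer computation is Rosenthal's “fail-safe N”: given the published studies' combined Stouffer Z score, ask how many unpublished null results would have to be sitting in file drawers to drag the combined result back down to bare significance. A minimal sketch of that logic, with made-up numbers for illustration (not figures from the book):

```python
import math

def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished null-result studies
    (z = 0) must exist to pull the combined Stouffer Z of the published
    studies down to bare significance (z_crit, here p = 0.05 one-tailed)."""
    s = sum(z_scores)
    # With X extra null studies the combined Z is s / sqrt(k + X);
    # setting that equal to z_crit and solving gives X:
    return max(0.0, (s / z_crit) ** 2 - len(z_scores))

# Made-up illustration: 20 small studies, each only z = 1.0 on its own,
# jointly give Stouffer Z = 20 / sqrt(20), about 4.5.
zs = [1.0] * 20
stouffer_z = sum(zs) / math.sqrt(len(zs))
print(f"Combined Z over {len(zs)} studies: {stouffer_z:.2f}")
print(f"Hidden null studies needed to erase it: {failsafe_n(zs):.0f}")
```

The point of the exercise is that even individually weak studies, combined, can require an implausibly large number of hidden null results to explain away.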

Radin draws attention to the similarities between psi phenomena, where events separated in space and time appear to have a connection which can't be explained by known means of communication, and the entanglement of particles resulting in correlations measured at spacelike separated intervals in quantum mechanics, and speculates that there may be a kind of macroscopic form of entanglement in which the mind is able to perceive information in a shared consciousness field (for lack of a better term) as well as through the senses. The evidence for such a field from the Global Consciousness Project (to which I have contributed software and host two nodes) is presented in chapter 11. Forty pages of endnotes provide extensive source citations and technical details. On several occasions I thought the author was heading in the direction of the suggestion I make in my Notes toward a General Theory of Paranormal Phenomena, but he always veered away from it. Perhaps the full implications of the multiverse are weirder than those of psi!

There are a few goofs. On p. 215, a quote from Richard Feynman is dated from 1990, while Feynman died in 1988. Actually, the quote is from Feynman's 1985 book QED, which was reprinted in 1990. The discussion of the Quantum Zeno Effect on p. 259 states that “the act of rapidly observing a quantum system forces that system to remain in its wavelike, indeterminate state, rather than to collapse into a particular, determined state.” This is precisely backwards—rapidly repeated observations cause the system's state to repeatedly collapse, preventing its evolution. Consequently, this effect is also called the “quantum watched pot” effect, after the aphorism “a watched pot never boils”. On the other side of the balance, the discussion of Bell's theorem on pp. 227–231 is one of the clearest expositions for the layman I have ever read.
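The corrected behaviour is easy to demonstrate with the standard toy model: a two-level system which, left alone, drifts out of its initial state with survival probability cos²(ωt), but which, when measured N times along the way (each measurement collapsing the state), survives with probability [cos²(ωt/N)]^N, approaching 1 as N grows. A quick numerical sketch of the watched pot:

```python
import math

def survival(omega_t, n):
    """Probability the system is still found in its initial state after a
    total evolution angle omega_t, interrupted by n equally spaced
    projective measurements, each of which collapses the state."""
    return (math.cos(omega_t / n) ** 2) ** n

omega_t = math.pi / 2   # unwatched (n = 1), the system fully leaves its state

for n in (1, 10, 100, 1000):
    print(f"{n:5d} measurements: survival probability = {survival(omega_t, n):.4f}")
```

The survival probability climbs toward 1 as the measurements become more frequent: repeated collapse freezes the evolution, rather than preserving any “wavelike, indeterminate state”.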

I try to avoid the “Washington read”: picking up a book and immediately checking if my name appears in the index, but in the interest of candour since I am commending this book to your attention, I should note that it does here—I am mentioned on p. 195. If you'd like to experiment with this spooky stuff yourself, try Fourmilab's online RetroPsychoKinesis experiments, which celebrated their tenth anniversary on the Web in January of 2007 and to date have recorded 256,584 experiments performed by 24,862 volunteer subjects.


MacKinnon, Douglas. America's Last Days. New York: Leisure Books, 2007. ISBN 0-8439-5802-2.
There are some books which are perfect for curling up with in front of a fireplace. Then there are those which are best used, ripped apart, for kindling; this is one of the latter. The premise of the novel is that the “Sagebrush Rebellion” gets deadly serious when a secretive group funded by a billionaire nutcase CEO of a major defence contractor plots the secession of two Western U.S. states to re-found a republic on the principles of the Founders, by threatening the U.S. with catastrophe unless the government accedes to their demands. Kind of like the Free State Project, but with nukes.

To liken the characters, dialogue, and plotting of this story to a comic book would be to disparage the comics, some of which, though certainly not all, far surpass this embarrassingly amateurish effort. Although the author's biography describes him as a former White House and Pentagon “official” (he declines to state in what capacity), he appears to have done his research on how senior government and corporate executives behave and speak from watching reruns of “24”.

Spoiler warning: Plot and/or ending details follow.  
Ask yourself, is it plausible that the CEO of a billion dollar defence contractor would suggest, in an audience consisting not only of other CEOs, but a senior Pentagon staffer and an analyst for the CIA, that a Presidential candidate should be assassinated? Or that the director of the FBI would tell a foreign national in the employ of the arch-villain that the FBI was about to torture one of her colleagues?
Spoilers end here.  
I'm not going to bother with the numerous typos and factual errors—any number of acronyms appear to have been rendered phonetically based upon a flawed memory. The whole book is one big howler, and picking at details is like brushing flies off a decomposing elephant carcass. The writing is formulaic: like beginners' stories in a fiction workshop, each character is introduced with a little paragraph which fingerpaints the cardboard cut-out we're about to meet. Talented writers, or even writers with less talent but more experience, weave what background we need to know seamlessly into the narrative. There is a great deal of gratuitous obscenity, much of which is uttered in contexts where I would expect decorum to prevail. After dragging along for 331 pages devoid of character development and with little action, the whole thing gets wrapped up in the final six preposterously implausible pages. Perhaps, given the content, it's for the best that there is plenty of white space; the average chapter in this mass market paperback is less than five pages in length.

As evidence of the literary erudition and refinement of the political and media elite in the United States, this book bears laudatory blurbs from Larry King, James Carville, Bob Dole, Dee Dee Myers, and Tom Brokaw.


September 2007

Mead, Rebecca. One Perfect Day. New York: Penguin Press, 2007. ISBN 1-59420-088-2.
This book does for the wedding industry what Jessica Mitford's The American Way of Death did for that equally emotion-exploiting industry which preys upon the other end of adult life. According to the American Wedding Study, published annually by the Condé Nast Bridal Group, the average cost of a wedding in the United States in 2006 was US$27,852. Now, as the author points out on p. 25, this number, without any doubt, is overstated—it is compiled by the publisher of three bridal magazines, which has every incentive to show the market it reaches to be as large as possible, and is based upon a survey of those already in contact in one way or another with the wedding industry; those who skip all of the theatrics and expense and simply go to City Hall or have a quiet ceremony with close family at home or at the local church are “off the radar” in a survey of this kind and would, if included, bring down the average cost. Still, it's the only figure available, and it is representative of what the wedding industry manages to extract from those who engage (if I may use the word) with it.

To folks who have a sense of the time value of money, this is a stunning figure. The average age at which Americans marry has been increasing for decades and now stands at around 26 years for women and 27 years for men. So let's take US$27,000 and, instead of blowing it out on a wedding, assume the couple uses it to open an investment account at age 27, and that they simply leave the money in the account to compound, depositing nothing more until they retire at age 65. If the account has a compounded rate of return of 10% per annum (which is comparable to the long-term return of the U.S. stock market as a whole), then at age 65, that US$27,000 will have grown to just a bit over a million dollars—a pretty nice retirement nest egg as the couple embarks upon their next big change of life, especially since government Ponzi scheme retirement programs are likely to have collapsed by then. (The OpenOffice spreadsheet I used to make this calculation is available for downloading. It also allows you to forecast the alternative of opting for an inexpensive education and depositing the US$19,000 average student loan burden into an account at age 21—that ends up yielding more than US$1.2 million at age 65. The idea for this analysis came from Richard Russell's “Rich Man, Poor Man”, which is the single most lucid and important document on lifetime financial planning I have ever read.) The computation assumes the wedding costs are paid in cash by the couple and/or their families. If they're funded by debt, the financial consequences are even more dire, as the couple finds itself servicing a debt in the very years where saving for retirement has the largest ultimate payoff. Ever helpful, in this book we find the Bank of America marketing home equity loans to finance wedding blow-outs.
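The arithmetic behind both figures is just compound growth, FV = PV × (1 + r)ⁿ; a few lines suffice to check them under the 10% annual return assumed above:

```python
def future_value(principal, rate, years):
    """Lump sum left to compound annually at the given rate."""
    return principal * (1 + rate) ** years

# Skip the wedding: US$27,000 banked at age 27, left until 65 (38 years).
wedding = future_value(27_000, 0.10, 65 - 27)
# Skip the expensive education: US$19,000 banked at age 21 (44 years).
loan = future_value(19_000, 0.10, 65 - 21)

print(f"Wedding money at 65: US${wedding:,.0f}")   # just over a million
print(f"Loan money at 65:    US${loan:,.0f}")      # over 1.2 million
```

Note how strongly the result depends on the six extra years of compounding: the smaller principal ends up worth more.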

So how do you manage to spend twenty-seven thousand bucks on a one day party? Well, as the author documents, writing with a wry sense of irony which never descends into snarkiness, the resourceful wedding business makes it downright easy, and is continually inventing new ways to extract even more money from their customers. We learn the ways of the wedding planner, the bridal shop operator, the wedding media, resorts, photographers and videographers, à la carte “multi-faith” ministers, drive-through Las Vegas wedding chapels, and the bridal apparel industry, including a fascinating look inside one of the Chinese factories where “the product” is made. (Most Chinese factory workers are paid on a piecework basis. So how do you pay the person who removes the pins after lace has been sewed in place? By the weight of pins removed—US$2 per kilogram.)

With a majority of U.S. couples who marry already living together, some having one or more children attending the wedding, the ceremony and celebration, which once marked a major rite of passage and change in status within the community now means…precisely what? Well, not to worry, because the wedding industry has any number of “traditions” for sale to fill the void. The author tracks down the origins of a number of them: the expensive diamond engagement ring (invented by the N. W. Ayer advertising agency in the 1930s for their client, De Beers), the Unity Candle ceremony (apparently owing its popularity to a television soap opera in the 1970s), and the “Apache Indian Prayer”, a favourite of the culturally eclectic, which was actually penned by a Hollywood screenwriter for the 1950 film Broken Arrow.

The bottom line (and this book is very much about that) is that in the eyes of the wedding industry, and in the words of Condé Nast executive Peter K. Hunsinger, the bride is not so much a princess preparing for a magic day and embarking upon the lifetime adventure of matrimony, but (p. 31) “kind of the ultimate consumer, the drunken sailor. Everyone is trying to get to her.” There is an index, but no source citations; you'll have to find the background information on your own.


Barrow, John D. The Infinite Book. New York: Vintage Books, 2005. ISBN 1-4000-3224-5.
Don't panic—despite the title, this book is only 330 pages! Having written an entire book about nothing (The Book of Nothing, May 2001), I suppose it's only natural the author would take on the other end of the scale. Unlike Rudy Rucker's Infinity and the Mind, long the standard popular work on the topic, Barrow spends only about half of the book on the mathematics of infinity. Philosophical, metaphysical, and theological views of the infinite in a variety of cultures are discussed, as well as the history of the infinite in mathematics, including a biographical portrait of the ultimately tragic life of Georg Cantor. The physics of an infinite universe (and whether we can ever determine if our own universe is infinite), the paradoxes of an infinite number of identical copies of ourselves necessarily existing in an infinite universe, the possibility of machines which perform an infinite number of tasks in finite time, whether we're living in a simulation (and how we might discover we are), and the practical and moral consequences of immortality and time travel are also explored.

Mathematicians and scientists have traditionally been very wary of the infinite (indeed, the appearance of infinities is considered an indication of the limitations of theories in modern physics), and Barrow presents any number of paradoxes which illustrate that, as he titles chapter four, “infinity is not a big number”: it is fundamentally different and requires a distinct kind of intuition if nonsensical results are to be avoided. One of the most delightful examples is Zhihong Xia's five-body configuration of point masses which, under Newtonian gravitation, expands to infinite size in finite time. (Don't worry: the finite speed of light, formation of an horizon if two bodies approach too closely, and the emission of gravitational radiation keep this from working in the relativistic universe we inhabit. As the author says [p. 236], “Black holes might seem bad but, like growing old, they are really not so bad when you consider the alternatives.”)

This is an enjoyable and enlightening read, but I found it didn't come up to the standard set by The Book of Nothing and The Constants of Nature (June 2003). Like the latter book, this one is set in a hideously inappropriate font for a work on mathematics: the digit “1” is almost indistinguishable from the letter “I”. If you look very closely at the top serif on the “1” you'll note that it rises toward the right while the “I” has a horizontal top serif. But why go to the trouble of distinguishing the two characters and then making the two glyphs so nearly identical you can't tell them apart without a magnifying glass? In addition, the horizontal bar of the plus sign doesn't line up with the minus sign, which makes equations look awful.

This isn't the author's only work on infinity; he's also written a stage play, Infinities, which was performed in Milan in 2002 and 2003.


[Audiobook] Dickens, Charles. A Tale of Two Cities. (Audiobook, Unabridged). Hong Kong: Naxos Audiobooks, [1859] 2005. ISBN 9-62634-359-1.
Like many people whose high school years predated the abolition of western civilisation from the curriculum, I was compelled to read an abridgement of this work for English class, and only revisited it in this audiobook edition let's say…some years afterward. My rather dim memories of that first read were that it was one of the better novels I was forced to read, but my recollection was tarnished by my life-long aversion to compulsion of every kind. What I only realise now, after fourteen hours and forty-five minutes of listening to this superb unabridged audio edition, is how much injury is done to the masterful prose of Dickens by abridgement. Dickens frequently uses repetition as a literary device, acting like a basso continuo to set a tone of the inexorable playing out of fate. That very repetition is the first thing to go in abridgement, along with lengthy mood-setting descriptive passages, and they are sorely missed. Having now listened to every word Dickens wrote, I don't begrudge a moment I spent doing so—it's worth it.

The novel is narrated or, one might say, performed by British actor Anton Lesser, who adopts different dialects and voice pitches for each character's dialogue. It's a little odd at first to hear French paysans speaking in the accents of rustic Britons, but you quickly get accustomed to it and recognise who's speaking from the voice.

The audible.com download edition is sold in two separate “volumes”: volume 1 (7 hours 17 minutes) and volume 2 (7 hours 28 minutes), each about a 100 megabyte download at MP3 quality. An Audio CD edition (12 discs!) is available.


Lindley, David. Degrees Kelvin. Washington: Joseph Henry Press, 2004. ISBN 0-309-09618-9.
When 17-year-old William Thomson arrived at Cambridge University to study mathematics, Britain had become a backwater of research in science and mathematics—despite the technologically-driven industrial revolution being in full force, little had been done to build upon the towering legacy of Newton, and cutting-edge work had shifted to the Continent, principally France and Germany. Before beginning his studies at Cambridge, Thomson had already published three research papers in the Cambridge Mathematical Journal, one of which introduced Fourier's mathematical theory of heat to English-speaking readers, defending it against criticism from those opposed to the highly analytical French style of science which Thomson found congenial to his way of thinking.

Thus began a career which, by the end of the 19th century, made Thomson widely regarded as the preeminent scientist in the world: a genuine scientific celebrity. Over his long career Thomson fused the mathematical rigour of the Continental style of research with the empirical British attitude and made fundamental progress in the kinetic theory of heat, translated Michael Faraday's intuitive view of electricity and magnetism into a mathematical framework which set the stage for Maxwell's formal unification of the two in electromagnetic field theory, and calculated the age of the Earth based upon heat flow from the interior. The latter calculation, in which he estimated only 20 to 40 million years, proved to be wrong, but was so because he had no way to know about radioactive decay as the source of Earth's internal heat: he was explicit in stating that his result assumed no then-unknown source of heat or, as we'd now say, “no new physics”. Such was his prestige that few biologists and geologists whose own investigations argued for a far more ancient Earth stepped up and said, “Fine—so start looking for the new physics!” With Peter Tait, he wrote the Treatise on Natural Philosophy, the first unified exposition of what we would now call classical physics.
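Kelvin's age-of-the-Earth method modelled the planet as a conductively cooling half-space: the hotter the start and the longer the cooling, the gentler today's surface temperature gradient, giving t = T₀²/(πκG²) for initial temperature T₀, rock diffusivity κ, and observed gradient G. A sketch with round illustrative values (not Kelvin's exact parameter choices, which varied, which is why his published range varied too):

```python
import math

def kelvin_age(t0, kappa, gradient):
    """Age of a conductively cooling half-space: the time for an initially
    uniform excess temperature t0 to relax until the surface temperature
    gradient falls to the observed value.  t = t0^2 / (pi * kappa * G^2)."""
    return t0 ** 2 / (math.pi * kappa * gradient ** 2)

# Round illustrative values (not Kelvin's exact figures):
t0       = 2000.0   # initial excess temperature of molten rock, K
kappa    = 1.2e-6   # thermal diffusivity of rock, m^2/s
gradient = 0.037    # present geothermal gradient, K/m (~1 K per 27 m)

years = kelvin_age(t0, kappa, gradient) / 3.156e7   # seconds per year
print(f"Conductive-cooling age: {years / 1e6:.0f} million years")
```

With these inputs the model lands in the tens of millions of years, which is why, absent radioactive heating, Kelvin's physics was internally consistent yet wrong.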

Thomson believed that science had to be founded in observations of phenomena, then systematised into formal mathematics and tested by predictions and experiments. To him, understanding the mechanism, ideally based upon a mechanical model, was the ultimate goal. Although acknowledging that Maxwell's equations correctly predicted electromagnetic phenomena, he considered them incomplete because they didn't explain how or why electricity and magnetism behaved that way. Heaven knows what he would have thought of quantum mechanics (which was elaborated after his death in 1907).

He'd probably have been a big fan of string theory, though. Never afraid to add complexity to his mechanical models, he spent two decades searching for a set of 21 parameters which would describe the mechanical properties of the luminiferous ether (what string “landscape” believers might call the moduli and fluxes of the vacuum), and argued for a “vortex atom” model in which extended vortex loops replaced pointlike billiard-ball atoms to explain spectroscopic results. These speculations proved, as they say, not even wrong.

Thomson was not an ivory tower theorist. He viewed the occupation of the natural philosopher (he disliked the word “physicist”) as that of a problem solver, with the domain of problems encompassing the practical as well as fundamental theory. He was a central figure in the development of the first transatlantic telegraphic cable and invented the mirror galvanometer which made telegraphy over such long distances possible. He was instrumental in defining the units of electricity we still use today. He invented a mechanical analogue computer for computation of tide tables, and a compass compensated for the magnetic distortion of iron and steel warships which became the standard for the Royal Navy. These inventions made him wealthy, and he indulged his love of the sea by buying a 126-ton schooner and inviting his friends and colleagues on voyages.

In 1892, he was elevated to a peerage by Queen Victoria, made Baron Kelvin of Largs, the first scientist ever so honoured. (Numerous scientists, including Newton and Thomson himself in 1866, had been knighted, but the award of a peerage is an honour of an entirely different order.) When he died in 1907 at age 83, he was buried in Westminster Abbey next to the grave of Isaac Newton. For one who accomplished so much, and was so celebrated in his lifetime, Lord Kelvin is largely forgotten today, remembered mostly for the absolute temperature scale named in his honour and, perhaps, for the Kelvinator company of Detroit, Michigan, which used his still-celebrated name to promote their ice-boxes and refrigerators. While Thomson had his hand in much of the creation of the edifice of classical physics in the 19th century, there isn't a single enduring piece of work you can point to which is entirely his. This reflects no shortcoming on his part, but rather the maturation of science from rare leaps of insight by isolated geniuses into a collective endeavour by an international community reading each other's papers and building theories through the collaborative effort of many minds. Science was growing up, and Kelvin's reputation has suffered not because his contributions were deficient, but because they were so broad, as opposed to being identified with a single discovery entirely his own.

This is a delightful biography of a figure whose contributions to our knowledge of the world we live in are little remembered. Lord Kelvin never wavered from his belief that science consisted in collecting the data, developing a model and theory to explain what was observed, and following the implications of that theory to their logical conclusions. In doing so, he was often presciently right and occasionally spectacularly wrong, but he was always true to science as he saw it, which is how most scientists see their profession today.

Amusingly, the chapter titles are:

  1. Cambridge
  2. Conundrums
  3. Cable
  4. Controversies
  5. Compass
  6. Kelvin


Phares, Walid. Future Jihad. New York: Palgrave Macmillan, [2005] 2006. ISBN 1-4039-7511-6.
It seems to me that at the root of the divisive and rancorous dispute over the war on terrorism (or whatever you choose to call it), is an individual's belief in one of the following two mutually exclusive propositions.

  1. There is a broad-based, highly aggressive, well-funded, and effective jihadist movement which poses a dire threat not just to secular and pluralist societies in the Muslim world, but to civil societies in Europe, the Americas, and Asia.
  2. There isn't.

In this book, Walid Phares makes the case for the first of these two statements. Born in Lebanon, Phares immigrated to the United States in 1990, taught Middle East studies at several universities, and is currently a professor at Florida Atlantic University. He is the author of a number of books on Middle East history, and appears as a commentator on media outlets ranging from Fox News to Al Jazeera.

Ever since the early 1990s, the author has been warning of what he argued was a constantly growing jihadist threat, which was being overlooked and minimised by the academic experts to whom policy makers turn for advice, largely due to Saudi-funded and -indoctrinated Middle East Studies programmes at major universities. Meanwhile, Saudi funding also financed the radicalisation of Muslim communities around the world, particularly the large immigrant populations in many Western European countries. In parallel to this top-down approach by the Wahabi Saudis, the Muslim Brotherhood and its affiliated groups, including Hamas and the Front Islamique du Salut in Algeria, pursued a bottom-up strategy of radicalising the population and building a political movement seeking to take power and impose an Islamic state. Since the Iranian revolution of 1979, a third stream of jihadism has arisen, principally within Shiite communities, promoted and funded by Iran, including groups such as Hezbollah.

The present-day situation is placed in historical context dating back to the original conquests of Mohammed and the spread of Islam from the Arabian peninsula across three continents, and subsequent disasters at the hands of the Mongols and Crusaders, the reconquista of the Iberian peninsula, and the ultimate collapse of the Ottoman Empire and Caliphate following World War I. This allows the reader to grasp the world-view of the modern jihadist which, while seemingly bizarre from a Western standpoint, is entirely self-consistent from the premises whence the believers proceed.

Phares stresses that modern jihadism (which he dates from the abolition of the Ottoman Caliphate in 1923, an event which permitted free-lance, non-state actors to launch jihad unconstrained by the central authority of a caliph), is a political ideology with imperial ambitions: the establishment of a new caliphate and its expansion around the globe. He argues that this is only incidentally a religious conflict: although the jihadists are Islamic, their goals and methods are much the same as believers in atheistic ideologies such as communism. And just as one could be an ardent Marxist without supporting Soviet imperialism, one can be a devout Muslim and oppose the jihadists and intolerant fundamentalists. Conversely, this may explain the curious convergence of the extreme collectivist left and puritanical jihadists: red diaper baby and notorious terrorist Carlos “the Jackal” now styles himself an Islamic revolutionary, and the corpulent caudillo of Caracas has been buddying up with the squinty dwarf of Tehran.

The author believes that since the terrorist strikes against the United States in September 2001, the West has begun to wake up to the threat and begin to act against it, but that far more, both in realising the scope of the problem and acting to avert it, remains to be done. He argues, and documents from post-2001 events, that the perpetrators of future jihadist strikes against the West are likely to be home-grown second generation jihadists radicalised and recruited among Muslim communities within their own countries, aided by Saudi financed networks. He worries that the emergence of a nuclear armed jihadist state (most likely due to an Islamist takeover of Pakistan or Iran developing its own bomb) would create a base of operations for jihad against the West which could deter reprisal against it.

Chapter thirteen presents a chilling scenario of what might have happened had the West not had the wake-up call of the 2001 attacks and begun to mobilise against the threat. The scary thing is that events could still go this way should the threat be real and the West, through fatigue, ignorance, or fear, cease to counter it. While defensive measures at home and direct action against terrorist groups are required, the author believes that only the promotion of democratic and pluralistic civil societies in the Muslim world can ultimately put an end to the jihadist threat. Toward this end, a good first step would be, he argues, for the societies at risk to recognise that they are not at war with “terrorism” or with Islam, but rather with an expansionist ideology with a political agenda which attacks targets of opportunity and adapts quickly to countermeasures.

In all, I found the arguments somewhat over the top, but then, unlike the author, I haven't spent most of my career studying the jihadists, nor read their publications and Web sites in the original Arabic as he has. His warnings of cultural penetration of the West, misdirection by artful propaganda, and infiltration of policy making, security, and military institutions by jihadist covert agents read something like J. Edgar Hoover's Masters of Deceit, but then history, in particular the Venona decrypts, has borne out many of Hoover's claims which were scoffed at when the book was published in 1958. But still, one wonders how a “movement” composed of disparate factions, many of which hate one another (for example, while the Saudis fund propaganda promoting the jihadists, most of the latter seek eventually to depose the Saudi royal family and replace it with a Taliban-like regime; Sunni and Shiite extremists view each other as heretics), can effectively co-ordinate complex operations against their enemies.

A thirty-page afterword in this paperback edition provides updates on events through mid-2006. There are some curious things: while transliteration of Arabic and Farsi into English involves a degree of discretion, the author seems very fond of the letter “u”. He writes the name of the leader of the Iranian revolution as “Khumeini”, for example, which I've never seen elsewhere. The book is not well-edited: it occasionally uses “Khomeini”, spells Sayyid Qutb's last name as “Kutb” on p. 64, and on p. 287 refers to “Hezbollah” and “Hizbollah” in the same sentence.

The author maintains a Web site devoted to the book, as well as a personal Web site which links to all of his work.


October 2007

Scalzi, John. The Last Colony. New York: Tor, 2007. ISBN 0-7653-1697-8.
This novel concludes the Colonial Union trilogy begun with the breakthrough Old Man's War (April 2005), for which the author won the John W. Campbell Award for Best New Writer, and its sequel, The Ghost Brigades (August 2006), which fleshed out the shadowy Special Forces and set the stage for a looming three-way conflict among the Colonial Union, the Conclave of more than four hundred alien species, and the Earth. As this novel begins, John Perry and Jane Sagan, whom we met in the first two volumes, have completed their military obligations and, now back in normal human bodies, have married and settled into new careers on a peaceful human colony world. They are approached by a Colonial Defense Forces general with an intriguing proposition: to become administrators of a new colony, the first to be formed by settlers from other colony worlds instead of emigrants from Earth.

As we learnt in The Ghost Brigades, when it comes to deceit, disinformation, manipulation, and corruption, the Colonial Union is a worthy successor to its historical antecedents, the Soviet Union and the European Union, and the newly minted administrators quickly discover that all is not what it appears to be and before long find themselves in a fine pickle indeed. The story moves swiftly and plausibly toward a satisfying conclusion I would never have guessed even twenty pages from the end.

In the acknowledgements at the end, the author indicates that this book concludes the adventures of John Perry and Jane Sagan and, for the moment, the Colonial Union universe. He says he may revisit that universe someday, but at present has no plans to do so. So while we wait to see where he goes next, here's a neatly wrapped-up and immensely entertaining trilogy to savour. By the way, both Old Man's War and The Ghost Brigades are now available in inexpensive mass-market paperback editions. Unlike The Ghost Brigades, which can stand on its own, this book is best read after the first two volumes: you'll enjoy it, and understand the characters, much more for having done so.


Harsanyi, David. Nanny State. New York: Broadway Books, 2007. ISBN 0-7679-2432-0.
In my earlier review of The Case Against Adolescence (July 2007), I concluded by observing that perhaps the end state of the “progressive” vision of the future is “being back in high school—forever”. Reading this short book (just 234 pages of main text, with 55 pages of end notes, bibliography, and index) may lead you to conclude that view was unduly optimistic. As the author documents, seemingly well-justified mandatory seat belt and motorcycle helmet laws in the 1980s punched through the barrier which used to deflect earnest (or ambitious) politicians urging “We have to do something”. That barrier, the once near-universal consensus that “It isn't the government's business”, had been eroded to a paper-thin membrane by earlier encroachments upon individual liberty and autonomy. Once breached, a torrent of infantilising laws, regulations, and litigation was unleashed, much of it promoted by single-issue advocacy groups and trial lawyers with a direct financial interest in the outcome, and often backed by nonexistent or junk science. The consequence, as the slippery slope became a vertical descent in the nineties and oughties, is the emergence of a society which seems to be evolving into a giant kindergarten, where children never have the opportunity to learn to be responsible adults, and nominal adults are treated as production and consumption modules, wards of a state which regulates every aspect of their behaviour, and surveils their every action.

It seems to me that the author has precisely diagnosed the fundamental problem: that once you accept the premise that the government can intrude into the sphere of private actions for an individual's own good (or, Heaven help us, “for the children”), then there is no limit whatsoever on how far it can go. Why, you might have security cameras going up on every street corner, cities banning smoking in the outdoors, and police ticketing people for listening to their iPods while crossing the street—oh, wait. Having left the U.S. in 1991, I was unaware of the extent of the present madness and the lack of push-back from reasonable people, the very citizens who are seeing their scope of individual autonomy shrink with every session of the legislature. Another enlightening observation is that this is not, as some might think, entirely a phenomenon promoted by paternalist collectivists and manifest primarily in moonbat caves such as Seattle, San Francisco, and New York. The puritanical authoritarians of the right are just as willing to get into the act, as egregious examples from “red states” such as Texas and Alabama illustrate.

Just imagine how many more intrusions upon individual choice and lifestyle will be coming if the U.S. opts for socialised medicine. It's enough to make you go out and order a Hamdog!


Holland, Tom. Rubicon. London: Abacus, 2003. ISBN 0-349-11563-X.
Such is the historical focus on the final years of the Roman Republic and the emergence of the Empire that it's easy to forget that the Republic survived for more than four and a half centuries prior to the chaotic events beginning with Caesar's crossing the Rubicon which precipitated its transformation into a despotism, preserving the form but not the substance of the republican institutions. When pondering analogies between Rome and present-day events, it's worth keeping in mind that representative self-government in Rome endured about twice as long as the history of the United States to date. This superb history recounts the story of the end of the Republic, placing the events in historical context and, to an extent I have never encountered in any other work, allowing the reader to perceive the personalities involved and their actions through the eyes and cultural assumptions of contemporary Romans, which were often very different from those of people today.

The author demonstrates how far-flung territorial conquests and the obligations they imposed, along with the corrupting influence of looted wealth flowing into the capital, undermined the institutions of the Republic which had, after all, evolved to govern just a city-state and limited surrounding territory. Whether a republican form of government could work on a large scale was a central concern of the framers of the U.S. Constitution, and this narrative graphically illustrates why their worries were well-justified and raises the question of whether a modern-day superpower can resist the same drift toward authoritarian centralism which doomed consensual government in Rome.

The author leaves such inference and speculation to the reader. Apart from a few comments in the preface, he simply recounts the story of Rome as it happened and doesn't draw lessons from it for the present. And the story he tells is gripping; it may be difficult to imagine, but this work of popular history reads like a thriller (I mean that entirely as a compliment—historical integrity is never sacrificed in the interest of storytelling), and he makes the complex and often contradictory characters of figures such as Sulla, Cato, Cicero, Mark Antony, Pompey, and Marcus Brutus come alive and the shifting alliances among them comprehensible. Source citations are almost entirely to classical sources, although, as the author observes, ancient sources, though often referred to as primary, are not necessarily so: for example, Plutarch was born 90 years after the assassination of Caesar. A detailed timeline lists events from the foundation of Rome in 753 B.C. through the death of Augustus in A.D. 14.

A U.S. edition is now available.


Buckley, Christopher. Thank You for Smoking. New York: Random House, 1994. ISBN 0-8129-7652-5.
Nick Naylor lies for a living. As chief public “smokesman” for the Big Tobacco lobby in Washington, it's his job to fuzz the facts, deflect the arguments, and subvert the sanctimonious neo-prohibitionists, all with a smile. As in Buckley's other political farces, it seems to be an axiom that no matter how far down you are on the moral ladder in Washington, D.C., there are always an infinite number of rungs below you, all occupied, mostly by lawyers. Nick's idea of how to sidestep government advertising bans and make cigarettes cool again raises his profile to such an extent that some of those on the rungs below him start grasping for him with their claws, tentacles, and end-effectors, with humorous and delightfully ironic (at least if you aren't Nick) consequences, and then when things have gotten just about as bad as they can get, the FBI jumps in to demonstrate that things are never as bad as they can get.

About a third of the way through reading this book, I happened to see the 2005 movie made from it on the illuminatus. I've never done this before—watch a movie based on a book I was currently reading. The movie was enjoyable and very funny, and seeing it didn't diminish my enjoyment of the book one whit; this is a wickedly hilarious book which contains dozens of laugh out loud episodes and subplots that didn't make it into the movie.


Chesterton, Gilbert K. What's Wrong with the World. San Francisco: Ignatius Press, [1910] 1994. ISBN 0-89870-489-8.
Writing in the first decade of the twentieth century in his inimitable riddle-like paradoxical style, Chesterton surveys the scene around him as Britain faced the new century and didn't find much to his satisfaction. A thorough traditionalist, he finds contemporary public figures, both Conservative and Progressive/Socialist, equally contemptible, essentially disagreeing only upon whether the common man should be enslaved and exploited in the interest of industry and commerce, or by an all-powerful monolithic state. He further deplores the modernist assumption, shared by both political tendencies, that once a change in society is undertaken, it must always be pursued: “You can't put the clock back”. But, as he asks, why not? “A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.” (p. 33). He urges us not to blindly believe in “progress” or “modernisation”, but rather to ask whether these changes have made things better or worse and, if worse, to undertake to reverse them.

In five sections, he surveys the impact of industrial society on the common man, of imperialism upon the colonisers and colonised, of feminism upon women and the family, of education upon children, and of collectivism upon individuality and the human spirit. In each he perceives the pernicious influence of an intellectual elite upon the general population who, he believes, are far more sensible about how to live their lives than those who style themselves their betters. For a book published almost a hundred years ago, this analysis frequently seems startlingly modern (although I'm not sure that's a word Chesterton would take as a compliment) and relevant to the present-day scene. While some of the specific issues (for example, women's suffrage, teaching of classical languages in the schools, and eugenics) may seem quaint, much of the last century has demonstrated the disagreeable consequences of the “progress” he discusses and accurately anticipated.

This reprint edition includes footnotes which explain Chesterton's many references to contemporary and historical figures and events which would have been familiar to his audience in 1910 but may be obscure to readers almost a century later. A free electronic edition (but without the explanatory footnotes) is available from Project Gutenberg.


Cadbury, Deborah. Space Race. London: Harper Perennial, 2005. ISBN 0-00-720994-0.
This is an utterly compelling history of the early years of the space race, told largely through the parallel lives of mirror-image principals Sergei Korolev (anonymous Chief Designer of the Soviet space program, and beforehand a slave labourer in Stalin's Gulag) and Wernher von Braun, celebrity driving force behind the U.S. push into space, previously a Nazi party member, SS officer, and user of slave labour to construct his A-4/V-2 weapons. Drawing upon material not declassified by the United States until the 1980s and revealed after the collapse of the Soviet Union, the early years of these prime movers of space exploration are illuminated, along with how they were both exploited by, and deftly manipulated, their respective governments. I have never seen the story of the end-game between the British, Americans, and Soviets to spirit the V-2 hardware, technology, and team from Germany in the immediate post-surrender chaos told so well in a popular book. The extraordinary difficulties of trying to get things done in the Soviet command economy are also described superbly, and underline how inspired and indefatigable Korolev must have been to accomplish what he did.

Although the book covers the 1930s through the 1969 Moon landing, the main focus is on the competition between the U.S. and the Soviet Union between the end of World War II and the mid-1960s. Out of 345 pages of main text, the first 254 are devoted to the period ending with the flights of Yuri Gagarin and Alan Shepard in 1961. But then, that makes sense, given what we now know about the space race (and you'll know, if you don't already, after reading this book). Although nobody in the West knew at the time, the space race was really over when the U.S. made the massive financial commitment to Project Apollo and the Soviets failed to match it. Not only was Korolev compelled to work within budgets cut to half or less of his estimated requirements, the modest Soviet spending on space was divided among competing design bureaux whose chief designers engaged in divisive and counterproductive feuds. Korolev's N-1 Moon rocket used 30 first stage engines designed by a jet engine designer with modest experience with rockets because Korolev and supreme Soviet propulsion designer Valentin Glushko were not on speaking terms, and he was forced to test the whole grotesque lash-up for the first time in flight, as there wasn't the money for a ground test stand for the complete first stage. Unlike the “all-up” testing of the Apollo-Saturn program, where each individual component was exhaustively ground tested in isolation before being committed to flight, the N-1 approach didn't work. It wasn't just the Soviets who took risks in those wild and woolly days, however. When an apparent fuel leak threatened to delay the launch of Explorer-I, the U.S. reply to Sputnik, brass in the bunker asked for a volunteer “without any dependants” to go out and scope out the situation beneath the fully-fuelled rocket, possibly leaking toxic hydrazine (p. 175).

There are a number of factual goofs. I'm not sure the author fully understands orbital mechanics, which is, granted, a pretty geeky topic, but one which matters when you're writing about space exploration. She writes that the Jupiter C re-entry experiment reached a velocity (p. 154) of 1600 mph (actually 16,000 mph), that Yuri Gagarin's Vostok capsule orbited (p. 242) at 28,000 mph (actually 28,000 km/h), and that if Apollo 8's service module engine had failed to fire after arriving at the Moon (p. 325), the astronauts “would sail on forever, lost in space” (actually, they were on a “free return” trajectory, which would have taken them back to Earth even if the engine failed—the critical moment was actually when they fired the same engine to leave lunar orbit on Christmas Day 1968, which success caused James Lovell to radio after emerging from behind the Moon after the critical burn, “Please be informed, there is a Santa Claus”). Orbital attitude (the orientation of the craft) is confused with altitude (p. 267), and retro-rockets are described as “breaking rockets” (p. 183)—let's hope not! While these and other quibbles will irk space buffs, they shouldn't deter you from enjoying this excellent narrative.

A U.S. edition is now available. The author earlier worked on the production of a BBC docu-drama also titled Space Race, which is now available on DVD. Note, however, that this is a PAL DVD with a region code of 2, and will not play unless you have a compatible DVD player and television; I have not seen this programme.


November 2007

Krakauer, Jon. Into Thin Air. New York: Anchor Books, [1997] 1999. ISBN 0-385-49478-5.
It's amazing how much pain and suffering some people will endure in order to have a perfectly awful time. In 1996, the author joined a guided expedition to climb Mount Everest, on assignment by Outside magazine to report on the growing commercialisation of Everest, with guides taking numerous people, many inexperienced in alpinism, up the mountain every season. On May 10th, 1996, he reached the summit where, exhausted and debilitated by hypoxia and other effects of extreme altitude (although using supplementary oxygen), he found “I just couldn't summon the energy to care” (p. 7). This feeling of “whatever” while standing on the roof of the world was, nonetheless, the high point of the experience which quickly turned into a tragic disaster. While the climbers were descending from the summit to their highest camp, a storm, not particularly violent by Everest standards, reduced visibility to near zero and delayed progress until many climbers had exhausted their supplies of bottled oxygen. Of the six members of the expedition Krakauer joined who reached the summit, four died on the mountain, including the experienced leader of the team. In all, eight people died as a result of that storm, including the leader of another expedition which reached the summit that day.

Before joining the Everest expedition, the author had had extensive technical climbing experience but had never climbed as high as the Base Camp on Mount Everest: 17,600 feet. Most of the clients of his and other expeditions had far less mountaineering experience than the author. The wisdom of encouraging people with limited qualifications but large bank balances to undertake a potentially deadly adventure underlies much of the narrative: we encounter a New York socialite having a Sherpa haul a satellite telephone up the mountain to stay in touch from the highest camp. The supposed bond between climbers jointly confronting the hazards of a mountain at high altitude is called into question on several occasions: a Japanese expedition ascending from the Tibetan side via the Northeast Ridge passed three disabled climbers from an Indian expedition and continued on to the summit without offering to share food, oxygen, or water, nor to attempt a rescue: all of the Indians died on the mountain.

This is a disturbing account of adventure at the very edge of personal endurance, and the difficult life-and-death choices people make under such circumstances. A 1999 postscript in this paperback edition is a rebuttal to the alternative presentation of events in The Climb, which I have not read.


Siegel, Jerry and John Forte. Tales of the Bizarro World. New York: DC Comics, [1961, 1962] 2000. ISBN 1-56389-624-9.
In 1961, the almost Euclidean logic of the Superman comics went around a weird bend in reality, foretelling other events to transpire in that decade. Superman fans found their familiar axioms of super powers and kryptonite dissolving into pulsating phosphorescent Jello on the Bizarro World, populated by imperfect and uniformly stupid replicas of Superman, Lois Lane, and other denizens of Metropolis created by a defective duplicator ray. Everything is backwards, or upside-down, or inside-out on the Bizarro World, which itself is cubical, not spherical.

These stories ran in Adventure Comics in 1961 and 1962 and then disappeared into legend, remaining out of print for more than 35 years until this compilation was published. Not only are all of the Bizarro stories here, there are profiles of the people who created Bizarro, and even an interview with Bizarro himself.

I fondly remember the Bizarro stories from the odd comic books I came across in my youth, and looked forward to revisiting them, but I have to say that what seemed exquisitely clever in small doses to a twelve-year-old may seem a bit strained and tedious in a 190-page collection read by somebody, er…a tad more mature. Still, ya gotta chuckle at Bizarro starting a campfire (p. 170) by rubbing two boy scouts together—imagine the innuendos which would be read into that today!


Albrecht, Katherine and Liz McIntyre. Spychips. Nashville: Nelson Current, 2005. ISBN 0-452-28766-9.
Imagine a world in which every manufactured object, and even living creatures such as pets, livestock, and eventually people, had an embedded tag bearing a 96-bit code which uniquely identified it among all macroscopic objects on the planet and beyond. Further, imagine that these tiny, unobtrusive and non-invasive tags could be interrogated remotely, at a distance of up to several metres, by safe radio frequency queries which would provide power for them to transmit their identity. What could you do with this? Well, a heck of a lot. Imagine, for example, a refrigerator which sensed its entire contents, and was able to automatically place an order on the Internet for home delivery of whatever was running short, or warned you that the item you'd just picked up had passed its expiration date. Or think about breezing past the checkout counter at the Mall-Mart with a cart full of stuff without even slowing down—all of the goods would be identified by the portal at the door, and the total charged to the account designated by the tag in your customer fidelity card. When you're shopping, you could be automatically warned when you pick up a product which contains an ingredient to which you or a member of your family is allergic. And if a product is recalled, you'll be able to instantly determine whether you have one of the affected items, if your refrigerator or smart medicine cabinet hasn't already done so. The benefits just go on and on…imagine.
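To get a feel for why a mere 96 bits suffice to tag every macroscopic object on the planet, here is a back-of-the-envelope sketch in Python. The field split shown loosely follows the original Auto-ID Center layout (8-bit header, 28-bit manufacturer, 24-bit product class, 36-bit serial), but treat those widths as illustrative assumptions rather than the deployed EPC format.

```python
# How much can a 96-bit tag code distinguish?

TAG_BITS = 96

def capacity(bits: int) -> int:
    """Number of distinct codes expressible in the given number of bits."""
    return 1 << bits

total = capacity(TAG_BITS)  # roughly 7.9e28 distinct codes

# Hypothetical field split (illustrative assumption, loosely following
# the original Auto-ID Center proposal):
HEADER, MANUFACTURER, PRODUCT, SERIAL = 8, 28, 24, 36
assert HEADER + MANUFACTURER + PRODUCT + SERIAL == TAG_BITS

print(f"Total codes:         {total:.2e}")
print(f"Manufacturers:       {capacity(MANUFACTURER):,}")  # ~268 million
print(f"Products per maker:  {capacity(PRODUCT):,}")       # ~16.8 million
print(f"Serials per product: {capacity(SERIAL):,}")        # ~68.7 billion
```

Even divided this coarsely, the space leaves tens of billions of serial numbers for every product of every manufacturer, which is why proponents can speak of tagging literally everything, down to the individual item.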

This is the vision of an “Internet of Things”, in which all tangible objects are, in a real sense, on-line in real-time, with their position and status updated by ubiquitous and networked sensors. This is not a utopian vision. In 1994 I sketched Unicard, a unified personal identity document, and explored its consequences; people laughed: “never happen”. But just five years later, the Auto-ID Labs were formed at MIT, dedicated to developing a far more ubiquitous identification technology. With the support of major companies such as Procter & Gamble, Philip Morris, Wal-Mart, Gillette, and IBM, and endorsement by organs of the United States government, technology has been developed and commercialised to implement tagging everything and tracking its every movement.

As I alluded to obliquely in Unicard, this has its downsides. In particular, the utter and irrevocable loss of all forms of privacy and anonymity. From the moment you enter a store, or your workplace, or any public space, you are tracked. When you pick up a product, the amount of time you look at it before placing it in your shopping cart or returning it to the shelf is recorded (and don't even think about leaving the store without paying for it and having it logged to your purchases!). Did you pick the bargain product? Well, you'll soon be getting junk mail and electronic coupons on your mobile phone promoting the premium alternative with a higher profit margin to the retailer. Walk down the street, and any miscreant with a portable tag reader can “frisk” you without your knowledge, determining the contents of your wallet, purse, and shopping bag, and whether you're wearing a watch worth snatching. And even when you discard a product, that's a public event: garbage voyeurs can drive down the street and correlate what you throw out by the tags of items in your trash and the tags on the trashbags they're in.

“But we don't intend to do any of that”, the proponents of radio frequency identification (RFID) protest. And perhaps they don't, but if it is possible and the data are collected, who knows what will be done with it in the future, particularly by governments already installing surveillance cameras everywhere. If they don't have the data, they can't abuse them; if they do, they may; who do you trust with a complete record of everywhere you go, and everything you buy, sell, own, wear, carry, and discard?

This book presents, in a form that non-specialists can understand, the RFID-enabled future which manufacturers, retailers, marketers, academics, and government are co-operating to foist upon their consumers, clients, marks, coerced patrons, and subjects respectively. It is not a pretty picture. Regrettably, this book could be much better than it is. It's written in a kind of breathy muckraking rant style, with numerous paragraphs like (p. 105):

Yes, you read that right, they plan to sell data on our trash. Of course. We should have known that BellSouth was just another megacorporation waiting in the wings to swoop down on the data revealed once its fellow corporate cronies spychip the world.
I mean, I agree entirely with the message of this book, having warned of modest steps in that direction eleven years before its publication, but prose like this makes me feel like I'm driving down the road in a 1964 Vance Packard getting all righteously indignant about things we'd be better advised to coldly and deliberately draw our plans against. This shouldn't be so difficult, in principle: polls show that once people grasp the potential invasion of privacy possible with RFID, between 2/3 and 3/4 oppose it. The problem is that it's being deployed via stealth, starting with bulk pallets in the supply chain and, once proven there, migrated down to the individual product level.

Visibility is a precious thing, and one of the most insidious properties of RFID tags is their very invisibility. Is there a remotely-powered transponder sandwiched into the sole of your shoe, linked to the credit card number and identity you used to buy it, which “phones home” every time you walk near a sensor which activates it? Who knows? See how the paranoia sets in? But it isn't paranoia if they're really out to get you. And they are—for our own good, naturally, and for the children, as always.

In the absence of a policy fix for this (and the extreme unlikelihood of any such being adopted given the natural alliance of business and the state in tracking every move of their customers/subjects), one extremely handy technical fix would be a broadband receiver, perhaps a software radio, which listened on the frequency bands used by RFID tag readers and snooped on the transmissions of tags back to them. Passing the data stream to a package like RFDUMP would allow decoding the visible information in the RFID tags which were detected. First of all, this would allow people to know if they were carrying RFID-tagged products unbeknownst to them. Second, a portable sniffer connected to a PDA would identify tagged products in stores, which clients could take to customer service desks and ask to be returned to the shelves because they were unacceptable for privacy reasons. After this happens several tens of thousands of times, it may have an impact, given the razor-thin margins in retailing. Finally, there are “active measures”. These RFID tags have large antennas which are connected to a super-cheap and hence fragile chip. Once we know the frequency it's talking on, why we could…. But you can work out the rest, and since these are all unlicensed radio bands, there may be nothing wrong with striking an electromagnetic blow for privacy.

Don't you put,
your tag on me!


[Audiobook] Bryson, Bill. A Short History of Nearly Everything (Audiobook, Unabridged). Westminster, MD: Books on Tape, 2003. ISBN 0-7366-9320-3.
What an astonishing achievement! Toward the end of the 1990s, Bill Bryson, a successful humorist and travel writer, found himself on a flight across the Pacific and, looking down on the ocean, suddenly realised that he didn't know how it came to be, how it affected the clouds above it, what lived in its depths, or hardly anything else about the world and universe he inhabited, despite having lived in an epoch in which science made unprecedented progress in understanding these and many other things. Shortly thereafter, he embarked upon a three-year quest of reading popular science books and histories of science, meeting with their authors and with scientists in numerous fields all around the globe, and trying to sort it all out into a coherent whole.

The result is this stunning book, which neatly packages the essentials of human knowledge about the workings of the universe, along with how we came to know all of these things and the stories of the often fascinating characters who figured it all out, into one lucid, engaging, and frequently funny package. Unlike many popular works, Bryson takes pains to identify what we don't know, of which there is a great deal, not just in glamorous fields like particle physics but in stuffy endeavours such as plant taxonomy. People who find themselves in Bryson's position at the outset—entirely ignorant of science—can, by reading this single work, end up knowing more about more things than even most working scientists who specialise in one narrow field. The scope is encyclopedic: from quantum mechanics and particles to galaxies and cosmology, with chemistry, the origin of life, molecular biology, evolution, genetics, cell biology, paleontology and paleoanthropology, geology, meteorology, and much, much more, all delightfully told, with only rare errors, and with each put into historical context. I like to think of myself as reasonably well informed about science, but as I listened to this audiobook over a period of several weeks on my daily walks, I found that every day, in the 45 to 60 minutes I listened, there was at least one and often several fascinating things of which I was completely unaware.

This audiobook is distributed in three parts, totalling 17 hours and 48 minutes. The book is read by British narrator Richard Matthews, who imparts an animated and light tone appropriate to the text. He does, however, mispronounce the names of several scientists, for example physicists Robert Dicke (whose last name he pronounces “Dick”, as opposed to the correct “Dickey”) and Richard Feynman (“Fane-man” instead of “Fine-man”), and when he attempts to pronounce French names or phrases, his accent is fully as affreux as my own, but these are minor quibbles which hardly detract from an overall magnificent job. If you'd prefer to read the book, it's available in paperback now, and there's an illustrated edition, which I haven't seen. I would probably never have considered this book, figuring I already knew it all, had I not read Hugh Hewitt's encomium to it and excerpts therefrom he included (parts 1, 2, 3).


Walton, Jo. Farthing. New York: Tor, 2006. ISBN 0-7653-5280-X.
This is an English country house murder mystery in the classic mould, but set in an alternative history timeline in which the European war of 1939 ended in the “Peace with Honour”, when Britain responded to Rudolf Hess's flight to Scotland in May 1941 with a diplomatic mission which ended the war, with Hitler ceding the French colonies in Africa to Britain in return for a free hand to turn east and attack the Soviet Union. In 1949, when the story takes place, the Reich and the Soviets are still at war, in a seemingly endless and bloody stalemate. The United States, never drawn into the war, remains at peace, adopting an isolationist stance under President Charles Lindbergh; continental Europe has been consolidated into the Greater Reich.

When the architect of the peace between Britain and the Reich is found murdered with a yellow star of David fixed to his chest with a dagger, deep political, family, financial, racial, and sexual currents converge to muddle a situation which a stolid although atypical Scotland Yard inspector must sort through under political pressure and a looming deadline.

The story is told in alternating chapters, the odd numbered being the first-person narrative of one of the people in the house at the time of the murder and the even numbered in the voice of an omniscient narrator following the inspector. We can place the story precisely in (alternative) time: on p. 185 the year is given as 1949, and on p. 182 we receive information which places the murder as on the night of 7–8 May of that year. I'm always impressed when an author makes the effort to get the days of the week right in an historical novel, and that's the case here. There is, however, a little bit of bad astronomy. On p. 160, as the inspector is calling it a day, we read, “It was dusk; the sky was purple and the air was cool. … Venus was just visible in the east.” Now, I'm impressed, because at dusk on that day Venus was visible near the horizon—that is admirable atmosphere and attention to detail! But Venus can never be visible in the east at dusk: it's an inner planet and never gets further than 48° from the Sun, so in the evening sky it's always in the west; on that night, near Winchester, England, it would be near the west-northwest horizon, with Mercury higher in the sky.
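The 48° limit follows from simple geometry: for an inner planet on a (roughly) circular orbit, the greatest angle it can make with the Sun as seen from Earth satisfies sin(e) = r_inner / r_outer. A toy calculation, assuming circular orbits at the mean orbital radii, lands just under the cited figure; Venus's slightly eccentric orbit accounts for the difference.

```python
# Maximum elongation of an inner planet, assuming circular orbits:
# the line of sight from Earth is tangent to the inner orbit, so
# sin(elongation) = r_inner / r_outer.
from math import asin, degrees

r_venus = 0.723   # semi-major axis of Venus's orbit, in AU
r_earth = 1.000   # Earth's orbital radius, in AU

max_elongation = degrees(asin(r_venus / r_earth))
print(f"{max_elongation:.1f} degrees")   # about 46.3 degrees
```

Since Venus can never stand more than this far from the Sun, at dusk (Sun just below the western horizon) it must always appear in the western half of the sky, which is the error the review catches.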

The dénouement is surprising and chilling at the same time. The story illustrates how making peace with tyranny can lead to successive, seemingly well-justified, compromises which can inoculate the totalitarian contagion within even the freest and most civil of societies.


Sinclair, Upton. Dragon's Teeth. Vol. 1. Safety Harbor, FL: Simon Publications, [1942] 2001. ISBN 1-931313-03-2.
Between 1940 and 1953, Upton Sinclair published a massive narrative of current events, spanning eleven lengthy novels, in which real-world events between 1913 and 1949 were seen through the eyes of Lanny Budd, scion of a U.S. munitions manufacturer family become art dealer and playboy husband of an heiress whose fortune dwarfs his own. His extended family and contacts in the art and business worlds provide a window into the disasters and convulsive changes which beset Europe and America in two world wars and the period between them and afterward.

These books were huge bestsellers in their time, and this one won the Pulitzer Prize, but today they are largely forgotten. Simon Publications have made them available in facsimile reprint editions, with each original novel published in two volumes of approximately 300 pages each. This is the third novel in the saga, covering the years 1929–1934; this volume, comprising the first three books of the novel, begins shortly after the Wall Street crash of 1929 and ends with the Nazi consolidation of power in Germany after the Reichstag fire in 1933.

It's easy to understand both why these books were such a popular and critical success at the time and why they have since been largely forgotten. In each book, we see events of a few years before the publication date from the perspective of socialites and people in a position of power (in this book Lanny Budd meets “Adi” Hitler and gets to see both his attraction and irrationality first-hand), but necessarily the story is written without the perspective of knowing how it's going to come out, which makes it “current events fiction”, not historical fiction in the usual sense. Necessarily, that means it's going to be dated not long after the books scroll off the bestseller list. Also, the viewpoint characters are mostly rather dissipated and shallow idlers, wealthy dabblers in “pink” or “red” politics, who, with hindsight, seem not so dissimilar to the feckless politicians in France and Britain who did nothing as Europe drifted toward another sanguinary catastrophe.

Still, I enjoyed this book. You get the sense that this is how the epoch felt to the upper-class people who lived through it, and it was written so shortly after the events it chronicles that it avoids the simplifications that retrospection engenders. I will certainly read the second half of this reprint, which currently sits on my bookshelf, but I doubt if I'll read any of the others in the epic.


Bernstein, Jeremy. Plutonium. Washington: Joseph Henry Press, 2007. ISBN 0-309-10296-0.
When the Manhattan Project undertook to produce a nuclear bomb using plutonium-239, the world's inventory of the isotope was on the order of a microgram, all produced by bombarding uranium with neutrons produced in cyclotrons. It wasn't until August of 1943 that enough had been produced to be visible under a microscope. When, in that month, the go-ahead was given to build the massive production reactors and separation plants at the Hanford site on the Columbia River, virtually nothing was known of the physical properties, chemistry, and metallurgy of the substance they were undertaking to produce. In fact, it was only in 1944 that it was realised that the elements starting with thorium formed a second group of “rare earth” elements: the periodic table before World War II had uranium in the column below tungsten and predicted that the chemistry of element 94 would resemble that of osmium. When the large-scale industrial production of plutonium was undertaken, neither the difficulty of separating the element from the natural uranium matrix in which it was produced nor the contamination with Pu-240 which would necessitate an implosion design for the plutonium bomb were known. Notwithstanding, by the end of 1947 a total of 500 kilograms of the stuff had been produced, and today there are almost 2000 metric tons of it, counting both military inventories and that produced in civil power reactors, which crank out about 70 more metric tons a year.

These are among the fascinating details gleaned and presented in this history and portrait of the most notorious of artificial elements by physicist and writer Jeremy Bernstein. He avoids getting embroiled in the building of the bomb, which has been well-told by others, and concentrates on how scientists around the world stumbled onto nuclear fission and transuranic elements, puzzled out what they were seeing, and figured out the bizarre properties of what they had made. Bizarre is not too strong a word for the chemistry and metallurgy of plutonium, which remains an active area of research today with much still unknown. When you get that far down on the periodic table, both quantum mechanics and special relativity get into the act (as they start to do even with gold), and you end up with six allotropic phases of the metal (in two of which volume decreases with increasing temperature), a melting point of just 640° C and an anomalous atomic radius which indicates its 5f electrons are neither localised nor itinerant, but somewhere in between.

As the story unfolds, we meet some fascinating characters, including Fritz Houtermans, whose biography is such that, as the author notes (p. 86), “if one put it in a novel, no one would find it plausible.” We also meet stalwarts of the elite 26-member UPPU Club: wartime workers at Los Alamos whose exposure to plutonium was sufficient that it continues to be detectable in their urine. (An epidemiological study of these people which continues to this day has found no elevated rates of mortality, which is not to say that plutonium is not a hideously hazardous substance.)

The text is thoroughly documented in the end notes, and there is an excellent index; the entire book is just 194 pages. I have two quibbles. On p. 110, the author states of the Little Boy gun-assembly uranium bomb dropped on Hiroshima, “This is the only weapon of this design that was ever detonated.” Well, I suppose you could argue that it was the only such weapon of that precise design detonated, but the implication is that it was the first and last gun-type bomb to be detonated, and this is not the case. The U.S. W9 and W33 weapons, among others, were gun-assembly uranium bombs, which between them were tested three times at the Nevada Test Site. The price for plutonium-239 quoted on p. 155, US$5.24 per milligram, seems to imply that the plutonium for a critical mass of about 6 kg costs about 31 million dollars. But this is because the price quoted is for 99–99.99% isotopically pure Pu-239, which has been electromagnetically separated from the isotopic mix you get from the production reactor. Weapons-grade plutonium can have up to 7% Pu-240 contamination, which doesn't require the fantastically expensive isotope separation phase, just chemical extraction of plutonium from reactor fuel. In fact, you can build a bomb from so-called “reactor-grade” plutonium—the U.S. tested one in 1962.
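The arithmetic behind the price quibble is worth making explicit: at the quoted price per milligram, scaling up to a rough bare critical mass confirms the figure of about 31 million dollars.

```python
# Back-of-the-envelope check of the plutonium price quibble:
# US$5.24 per milligram of isotopically pure Pu-239 (p. 155),
# scaled up to a critical mass of roughly 6 kg.
price_per_mg = 5.24            # US dollars per milligram, as quoted
critical_mass_kg = 6           # approximate critical mass of Pu-239

milligrams = critical_mass_kg * 1_000_000   # 1 kg = 1,000,000 mg
cost = price_per_mg * milligrams
print(f"US${cost:,.0f}")       # about 31 million dollars
```

The point of the quibble stands: that price buys electromagnetically separated, 99–99.99% pure Pu-239, whereas weapons-grade material, tolerating up to 7% Pu-240, needs only chemical extraction and costs far less.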


December 2007

Johnson, Steven. The Ghost Map. New York: Riverhead Books, 2006. ISBN 1-59448-925-4.
From the dawn of human civilisation until sometime in the nineteenth century, cities were net population sinks—the increased mortality from infectious diseases, compounded by the unsanitary conditions, impure water, and food transported from the hinterland and stored without refrigeration so shortened the lives of city-dwellers (except for the ruling class and the wealthy, a small fraction of the population) that a city's population was maintained only by a constant net migration to it from the countryside. In densely-packed cities, not only does an infected individual come into contact with many more potential victims than in a rural environment, but highly virulent strains of infectious agents which would “burn out” due to rapidly killing their hosts in farm country or a small village can prosper in a city, since each infected host still has the opportunity to infect many others before succumbing. Cities can be thought of as Petri dishes for evolving killer microbes.

No civic culture medium was as hospitable to pathogens as London in the middle of the 19th century. Its population, 2.4 million in 1851, had exploded from just one million at the start of the century, and all of these people had been accommodated in a sprawling metropolis almost devoid of what we would consider a public health infrastructure. Sewers, where they existed, were often open and simply dumped into the Thames, whence other Londoners drew their drinking water, downstream. Other residences dumped human waste in cesspools, emptied occasionally (or maybe not) by “night-soil men”. Imperial London was a smelly and a deadly place. Observing it first-hand is what motivated Friedrich Engels to document and deplore The Condition of the Working Class in England (January 2003).

Among the diseases which cut down inhabitants of cities, one of the most feared was cholera. In 1849, an outbreak killed 14,137 in London, and nobody knew when or where it might strike next. The prevailing theory of disease at this epoch was that infection was caused by and spread through “miasma”: contaminated air. Given how London stank and how deadly it was to its inhabitants, this would have seemed perfectly plausible to people living before the germ theory of disease was propounded. Edwin Chadwick, head of the General Board of Health in London at the epoch, went so far as to assert (p. 114) “all smell is disease”. Chadwick was, in many ways, one of the first advocates and implementers of what we have come to call “big government”—that the state should take an active role in addressing social problems and providing infrastructure for public health. Relying upon the accepted “miasma” theory and empowered by an act of Parliament, he spent the 1840s trying to eliminate the stink of the cesspools by connecting them to sewers which drained their offal into the Thames. Chadwick was, by doing so, to provide one of the first demonstrations of that universal concomitant of big government, unintended consequences: “The first defining act of a modern, centralized public-health authority was to poison an entire urban population.” (p. 120).

When, in 1854, a singularly virulent outbreak of cholera struck the Soho district of London, physician and pioneer in anæsthesia John Snow found himself at the fulcrum of a revolution in science and public health toward which he had been working for years. Based upon his studies of the 1849 cholera outbreak, Snow had become convinced that the pathogen spread through contamination of water supplies by the excrement of infected individuals. He had published a monograph laying out this theory in 1849, but it swayed few readers from the prevailing miasma theory. He was continuing to document the case when cholera exploded in his own neighbourhood. Snow's mind was not only prepared to consider a waterborne infection vector, he was also one of the pioneers of the emerging science of epidemiology: he was a founding member of the London Epidemiological Society in 1850. Snow's real-time analysis of the epidemic caused him to believe that the vector of infection was contaminated water from the Broad Street pump, and his persuasive presentation of the evidence to the Board of Governors of St. James Parish caused them to remove the handle from that pump, after which the contagion abated. (As the author explains, the outbreak was already declining at the time, and in all probability the water from the Broad Street pump was no longer contaminated then. However, due to subsequent events and discoveries made later, had the handle not been removed there would have likely been a second wave of the epidemic, with casualties comparable to the first.)

Afterward, Snow, with the assistance of initially-sceptical clergyman Henry Whitehead, whose intimate knowledge of the neighbourhood and its residents allowed compiling the data which not only confirmed Snow's hypothesis but identified what modern epidemiologists would call the “index case” and “vector of contagion”, revised his monograph to cover the 1854 outbreak, illustrated by a map of its casualties which has become a classic of on-the-ground epidemiology and the graphical presentation of data. Most brilliant was Snow's use (and apparent independent invention) of a Voronoi diagram to show the boundary, street by street, of the area closer, not in Euclidean distance but in walking time, to the Broad Street pump than to any other in the neighbourhood. (Oddly, the complete map with this crucial detail does not appear in the book: only a blow-up of the central section without the boundary. The full map is here; depending on your browser, you may have to click on the map image to display it at full resolution. The dotted and dashed line is the Voronoi cell enclosing the Broad Street pump.)
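The nearest-pump partition underlying Snow's diagram can be sketched in a few lines: assign each address (here, each recorded death) to its closest pump, and the set of points assigned to a given pump is exactly its Voronoi cell. Plain Euclidean distance stands in below for Snow's walking time along streets, and all coordinates are made-up illustrative values, not the real Soho geography.

```python
# Minimal sketch of the nearest-pump (Voronoi) partition behind
# Snow's map.  Pump positions and death locations are invented
# illustrative coordinates; straight-line distance substitutes for
# Snow's walking-time metric along the streets.
from math import hypot

pumps = {
    "Broad Street":        (0.0, 0.0),
    "Rupert Street":       (3.0, 1.0),
    "Little Marlborough":  (-2.0, 3.0),
}

def nearest_pump(x, y):
    """Return the name of the pump closest to the point (x, y)."""
    return min(pumps, key=lambda name: hypot(x - pumps[name][0],
                                             y - pumps[name][1]))

# Tally hypothetical deaths by the Voronoi cell they fall in.
deaths = [(0.5, 0.2), (2.5, 1.1), (-1.8, 2.9), (0.1, -0.4)]
tally = {}
for x, y in deaths:
    pump = nearest_pump(x, y)
    tally[pump] = tally.get(pump, 0) + 1
print(tally)
```

A cluster of deaths concentrated in one pump's cell, as Snow and Whitehead found around Broad Street, is exactly the signature that points to that pump as the vector of contagion.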

In the following years, London embarked upon a massive program to build underground sewers to transport the waste of its millions of residents downstream to the tidal zone of the Thames and later, directly to the sea. There would be one more cholera outbreak in London in 1866—in an area not yet connected to the new sewers and water treatment systems. Afterward, there has not been a single epidemic of cholera in London. Other cities in the developed world learned this lesson and built the infrastructure to provide their residents clean water. In the developing world, cholera continues to take its toll: in the 1990s an outbreak in South America infected more than a million people and killed almost 10,000. Fortunately, administration of rehydration therapy (with electrolytes) has drastically reduced the likelihood of death from a cholera infection. Still, you have to wonder why, in a world where billions of people lack access to clean water and third world mega-cities are drawing millions to live in conditions not unlike London in the 1850s, some believe that laptop computers are the top priority for children growing up there.

A paperback edition is now available.


Hoagland, Richard C. and Mike Bara. Dark Mission. Los Angeles: Feral House, 2007. ISBN 1-932595-26-0.
Author Richard C. Hoagland first came to prominence as an “independent researcher” and advocate that “the face on Mars” was an artificially-constructed monument built by an ancient extraterrestrial civilisation. Hoagland has established himself as one of the most indefatigable and imaginative pseudoscientific crackpots on the contemporary scene, and this œuvre pulls it all together into a side-splittingly zany compendium of conspiracy theories, wacky physics, imaginative image interpretation, and feuds within the “anomalist” community—a tempest in a crackpot, if you like.

Hoagland seems to possess a visual system which endows him with a preternatural ability, undoubtedly valuable for an anomalist, of seeing things that aren't there. Now you may look at a print of a picture taken on the lunar surface by an astronaut with a Hasselblad camera and see, in the black lunar sky, negative scratches, film smudges, lens flare, and, in contrast-stretched and otherwise manipulated digitally scanned images, artefacts of the image processing filters applied, but Hoagland immediately perceives “multiple layers of breathtaking ‘structural construction’ embedded in the NASA frame; multiple surviving ‘cell-like rooms,’ three-dimensional ‘cross-bracing,’ angled ‘stringers,’ etc… all following logical structural patterns for a massive work of shattered, but once coherent, glass-like mega-engineering” (p. 153, emphasis in the original). You can see these wonders for yourself on Hoagland's site, The Enterprise Mission. From other Apollo images Hoagland has come to believe that much of the near side of the Moon is covered by the ruins of glass and titanium domes, some which still reach kilometres into the lunar sky and towered over some of the Apollo landing sites.

Now, you might ask why the Apollo astronauts never remarked upon these prodigies, whether while presumably dodging them when landing and flying back to orbit, on the surface, or afterward. Well, you see, they must have been sworn to secrecy at the time and later (p. 176) hypnotised to cause them to forget the obvious evidence of a super-civilisation they were tripping over on the lunar surface. Yeah, that'll work.

Now, Occam's razor advises us not to unnecessarily multiply assumptions when formulating our hypotheses. On the one hand, we have the mainstream view that NASA missions have honestly reported the data they obtained to the public, and that these data, to date, include no evidence (apart from the ambiguous Viking biology tests on Mars) for extraterrestrial life nor artefacts of another civilisation. On the other, Hoagland argues:

  • NASA has been, from inception, ruled by three contending secret societies, all of which trace their roots to the gods of ancient Egypt: the Freemasons, unrepentant Nazi SS, and occult disciples of Aleister Crowley.
  • These cults have arranged key NASA mission events to occur at “ritual” times, locations, and celestial alignments. The Apollo 16 lunar landing was delayed due to a faked problem with the SPS engine so as to occur on Hitler's birthday.
  • John F. Kennedy was assassinated by a conspiracy including Lyndon Johnson and Congressman Albert Thomas of Texas because Kennedy was about to endorse a joint Moon mission with the Soviets, revealing to them the occult reasons behind the Apollo project.
  • There are two factions within NASA: the “owls”, who want to hide the evidence from the public, and the “roosters”, who are trying to get it out by covert data releases and cleverly coded clues.

    But wait, there's more!

  • The energy of the Sun comes, at least in part, from a “hyperdimensional plane” which couples to rotating objects through gravitational torsion (you knew that was going to come in sooner or later!). This energy expresses itself through a tetrahedral geometry, and explains, among other mysteries, the Great Red Spot of Jupiter, the Great Dark Spot of Neptune, Olympus Mons on Mars, Mauna Kea in Hawaii, and the precession of isolated pulsars.
  • The secrets of this hyperdimensional physics, glimpsed by James Clerk Maxwell in his quaternion (check off another crackpot checklist item) formulation of classical electrodynamics, were found by Hoagland to be encoded in the geometry of the “monuments” of Cydonia on Mars.
  • Mars was once the moon of a “Planet V”, which exploded (p. 362).

    And that's not all!

  • NASA's Mars rover Opportunity imaged a fossil in a Martian rock and then promptly ground it to dust.
  • The terrain surrounding the rover Spirit is littered with artificial objects.
  • Mars Pathfinder imaged a Sphinx on Mars.

    And if that weren't enough!

  • Apollo 17 astronauts photographed the head of an anthropomorphic robot resembling C-3PO lying in Shorty Crater on the Moon (p. 487).

It's like Velikovsky meets The Illuminatus! Trilogy, with some of the darker themes of “Millennium” thrown in for good measure.

Now, I'm sure, as always happens when I post a review like this, the usual suspects are going to write to demand whatever possessed me to read something like this and/or berate me for giving publicity to such hyperdimensional hogwash. Lighten up! I read for enjoyment, and to anybody with a grounding in the Actual Universe™, this stuff is absolutely hilarious: there's a chortle every few pages and a hearty guffaw or two in each chapter. The authors actually write quite well: this is not your usual semi-literate crank-case sludge, although like many on the far fringes of rationality they seem to be unduly challenged by the humble apostrophe. Hoagland is inordinately fond of the word “infamous”, but this becomes rather charming after the first hundred or so, kind of like the verbal tics of your crazy uncle, who Hoagland rather resembles. It's particularly amusing to read the accounts of Hoagland's assorted fallings out and feuds with other “anomalists”; when Tom Van Flandern concludes you're a kook, then you know you're out there, and I don't mean hanging with the truth.


Gurstelle, William. Whoosh Boom Splat. New York: Three Rivers Press, 2007. ISBN 0-307-33948-3.
So you've read The Dangerous Book for Boys and now you're wondering, “Where's the dangerous book for adults?”. Well, here you go. Subtitled “The Garage Warrior's Guide to Building Projectile Shooters”, in just 160 pages with abundant illustrations, the author shows how with inexpensive materials, handyman tools, and only the most modest of tinkering skills, you can build devices including a potato cannon which can shoot a spud more than 200 metres powered by hairspray, a no-moving-parts pulse jet built from a mason jar and pipe fittings, a steam cannon, a “snap shooter” made from an ordinary spring-type wooden clothespin which can launch small objects across a room (or, should that not be deemed dangerous enough, flaming matches [outside, please!]), and more. The detailed instructions for building the devices and safety tips for operating them are accompanied by historical anecdotes and background on the science behind the gadgets. Ever-versatile PVC pipe is used in many of the projects, and no welding or metalworking skills (beyond drilling holes) are required.

If you find these projects still lacking that certain frisson, you might want to check out the author's Adventures from the Technology Underground (February 2006), which you can think of as The Absurdly Dangerous Book for Darwin Award Candidates, albeit without the detailed construction plans of the present volume. Enough scribbling—time to get back to work on that rail gun.


Edwards-Jones, Imogen. Fashion Babylon. London: Corgi Books, 2006. ISBN 0-552-15443-1.
This is a hard-to-classify but interesting and enjoyable book. I'm not even sure whether to call it fiction or nonfiction: the author has invented a notional co-author, “Anonymous”, who relates, condensed into a single six-month fashion season, anecdotes from a large collection of sources within the British fashion industry, all of which the author vouches for as authentic. Celebrities appear under their own names, and the stories involving them (often bizarre) are claimed to be genuine.

If you're looking for snark, cynicism, cocaine, cigarettes, champagne, anorexia, and other decadence and dissipation, you'll find it, but you'll also take away a thorough grounding in the economics of a business fully as bizarre as the software industry. The gross margin is almost as high and, except for the brand name and associated logos, there is essentially zero protection of intellectual property (as long as you don't counterfeit the brand, you can knock off any design, just as you can create a work-alike for almost any non-patent-protected software product and sell it for a tiny fraction of the price of the prototype). The vertiginous plunge from the gross margin to the meagre bottom line is mostly promotional hype: blow-outs to “build the brand”. The software business may increasingly come to resemble this, as added functionality appeals to an ever smaller fraction of the customer base, or even reduces usability (Windows Vista, anybody?).

A U.S. edition will be published in February 2008.


Zubrin, Robert. Energy Victory. Amherst, NY: Prometheus Books, 2007. ISBN 1-59102-591-5.
This is a tremendous book—jam-packed with nerdy data of every kind. The author presents a strategy aiming for the total replacement of petroleum as a liquid fuel and chemical feedstock with an explicit goal of breaking the back of OPEC and, as he says, rendering the Middle East's near-monopoly on oil as significant on the world economic stage as its near-monopoly on camel milk.

The central policy recommendation is a U.S. mandate that all new vehicles sold in the U.S. be “flex-fuel” capable: able to run on gasoline, ethanol, or methanol in any mix whatsoever. This is a proven technology; there are more than 6 million gasoline/ethanol vehicles on the road at present, more than five times the number of gasoline/electric hybrids (p. 27), and the added cost over a gas-only vehicle is negligible. Gasoline/ethanol flex-fuel vehicles are approaching 100% of all new sales in Brazil (pp. 165–167), and that without a government mandate. Present flex vehicles are either gasoline/ethanol or gasoline/methanol, not tri-fuel, but according to Zubrin that's just a matter of tweaking the exhaust gas sensor and reprogramming the electronic fuel injection computer.

Zubrin argues that methanol capability in addition to ethanol is essential because methanol can be made from coal or natural gas, which the U.S. has in abundance, and it enables utilisation of natural gas which is presently flared due to being uneconomical to bring to market in gaseous form. This means that it isn't necessary to wait for a biomass ethanol economy to come on line. Besides, even if you do produce ethanol from, say, maize, you can still convert the cellulose “waste” into methanol economically. You can also react methanol into dimethyl ether, an excellent diesel fuel that burns cleaner than petroleum-based diesel. Coal-based methanol production produces greenhouse gases, but less than burning the coal to make electricity, then distributing it and using it in plug-in hybrids, given the losses along the generation and transmission chain.

With full-flex, the driver becomes a genuine market player: you simply fill up from whatever pump has the cheapest fuel among those available wherever you happen to be: the car will run fine on any mix you end up with in the tank. People in Brazil have been doing this for the last several years, and have been profiting from their flex-fuel vehicles now that domestic ethanol is cheaper than gasoline. Brazil, in fact, reduced its net petroleum imports to zero in 2005 (from 80% in 1974), and is now a net exporter of energy (p. 168), rendering the Brazilian economy entirely immune to the direct effects of OPEC price shocks.

Zubrin also demolishes the argument that ethanol is energy neutral or a sink: recent research indicates that corn ethanol multiplies the energy input by a factor between 6 and 20. Did you know that of the two authors of an oft-cited 2005 “ethanol energy sink” paper, one (David Pimentel) is a radical Malthusian who wants to reduce the world population by a factor of three and the other (Tadeusz Patzek) comes out of the “all bidness” (pp. 126–135)?

The geopolitical implications of energy dependence and independence are illustrated with examples from both world wars and the present era, and a hopeful picture is sketched in which the world transitions from looting developed countries to fill the coffers of terror masters and kleptocrats to a future in which the funds for the world's liquid fuel needs flow instead to farmers in the developing world, who create sustainable, greenhouse-neutral fuel by their own labour and intellect rather than pumping expendable resources from underground.

Here we have an optimistic, pragmatic, and open-ended view of the human prospect. The post-petroleum era could be launched on a global scale by a single act of the U.S. Congress which would cost U.S. taxpayers nothing and have negligible drag on the domestic or world economy. The technologies required date mostly from the 19th century and are entirely mature today, and the global future advocated has already been prototyped in a large, economically and socially diverse country, with stunning success. Perhaps people in the second half of the 21st century will find present-day prophets of “peak oil” and “global warming” as quaint as the doomsayers who foresaw the end of civilisation when firewood supplies were exhausted, just years before coal mines began to fuel the industrial revolution.


Brown, Paul. The Rocketbelt Caper. Newcastle upon Tyne: Tonto Press, 2007. ISBN 0-95521-837-3.
Few things are as iconic of the 21st century imagined by visionaries and science fictioneers of the 20th as the personal rocketbelt: just strap one on and take to the air, without complications such as wings, propellers, pilots, fuselage, or landing gear. Flying belts were a fixture of Buck Rogers comic strips and movie serials, and in 1965 Isaac Asimov predicted that by 1990 office workers would beat the traffic by commuting to work in their personal rocketbelts.

The possibilities of a personal flying machine did not escape the military, which imagined infantry soaring above the battlefield and outflanking antiquated tanks and troops on the ground. In the 1950s, engineers at the Bell Aircraft Corporation, builders of the X-1, the first plane to break the sound barrier, built prototypes of rocketbelts powered by monopropellant hydrogen peroxide, and eventually won a U.S. Army contract to demonstrate such a device. On April 20th, 1961, the first free flight occurred, and a public demonstration was performed the following June 8th. The rocketbelt was an immediate sensation. The Bell rocketbelt appeared in the James Bond film Thunderball, and was showcased at the 1964 World's Fair in New York, at Disneyland, and at the first Super Bowl of American football in 1967. Although able to fly for only twenty-odd seconds and reach an altitude of about 20 metres, here was Buck Rogers made real—certainly before long engineers would work out the remaining wrinkles and everybody would be taking to the skies.

And then a funny thing happened—nothing. Wendell Moore, creator of the rocketbelt at Bell, died in 1969 at age 51, and with no follow-up interest from the U.S. Army, the project was cancelled and the Bell rocketbelt never flew again. Enter Nelson Tyler, engineer and aerial photographer, who on his own initiative built a copy of the Bell rocketbelt which, under his ownership and that of subsequent proprietors, made numerous promotional appearances around the world, including the opening ceremony of the 1984 Olympics in Los Angeles, before a television audience estimated in excess of two billion.

All of this is prologue to the utterly bizarre story of the RB-2000 rocketbelt, launched by three partners in 1992, motivated both by their individual obsessions with flying a rocketbelt and by dreams of the fortune they'd make from public appearances: the owners of the Tyler rocketbelt were getting US$25,000 per flight at the time. Obsession is not a good thing to bring to a business venture, and things rapidly went from bad to worse to truly horrid. Even before the RB-2000's first and last public flight in June 1995 (which was a complete success), one of the partners had held a gun to the head of another, who, in return, assaulted the first with a hammer, inflicting serious wounds. In July of 1998, the third partner was brutally murdered in his home, and to this day no charges have been filed in the case. Not long thereafter one of the two surviving partners sued the other and won a judgement in excess of US$10 million and custody of the RB-2000, which had disappeared immediately after its sole public flight. When no rocketbelt or money was forthcoming, the plaintiff kidnapped the defendant and imprisoned him in a wooden box for eight days, until fortuitous circumstances permitted the victim to escape. The kidnapper was quickly apprehended and subsequently sentenced to life plus ten years for the crime (the sentence was later reduced to eight years). The kidnappee later spent more than five months in jail for contempt of court for failing to produce the RB-2000 in a civil suit. To this day, the whereabouts of the RB-2000, if it still exists, are unknown.

Now, you don't need to be a rocket scientist to figure out that flitting through the sky with a contraption powered by highly volatile and corrosive propellant, with total flight time of 21 seconds, and no backup systems of any kind is a perilous undertaking. But who would have guessed that trying to do so would entail the kinds of consequences the RB-2000 venture inflicted upon its principals?

A final chapter covers recent events in rocketbelt land, including the first International Rocketbelt Convention in 2006. The reader is directed to Peter Gijsberts' www.rocketbelt.nl site for news and additional information on present-day rocketbelt projects, including commercial ventures attempting to bring rocketbelts to market. One of the most remarkable things about the curious history of rocketbelts is that, despite occasional claims and ambitious plans, in the more than 45 years which have elapsed since the first flight of the Bell rocketbelt, nobody has substantially improved upon its performance.

A U.S. edition was published in 2005, but is now out of print.


Lileks, James. Gastroanomalies. New York: Crown Publishers, 2007. ISBN 0-307-38307-5.
Should you find this delightful book under your tree this Christmas Day, let me offer you this simple plea: do not curl up with it late at night after the festivities are over and you're winding down. If you do:

  1. You will not get to sleep until you've finished it.
  2. Your hearty guffaws will keep everybody else awake as well.
  3. And finally, when you do drift off to sleep, visions of the culinary concoctions collected here may impede digestion of your holiday repast.

This sequel to The Gallery of Regrettable Food (April 2004) presents hundreds of examples of tasty treats from cookbooks and popular magazines from the 1930s through the 1960s. Perusal of these execrable entrées will make it immediately obvious why the advertising of the era featured so many patent remedies for each and every part of the alimentary canal. Most illustrations are in ghastly colour, with a few in merciful black and white. It wasn't just Americans who outdid themselves crafting dishes in the kitchen to do themselves in at the dinner table—a chapter is devoted to Australian delicacies, including some of the myriad ways to consume “baiycun”. There's something for everybody: mathematicians will savour the countably infinite beans-and-franks open-face sandwich (p. 95), goths will delight in discovering the dish Satan always brings to the pot luck (p. 21), political wonks need no longer wonder which appetiser won the personal endorsement of Earl Warren (p. 23), movie buffs will finally learn the favourite Bisquick recipes of Joan Crawford, Clark Gable, Bing Crosby, and Bette Davis (pp. 149–153), and all of the rest of us who've spent hours in the kitchen trying to replicate grandma's chicken feet soup will find the secret revealed here (p. 41). Revel in the rediscovery of aspic: the lost secret of turning unidentifiable food fragments into a gourmet treat by entombing them in jiggly meat-flavoured Jell-O. Bon appétit!

Many other vintage images of all kinds are available on the author's Web site.


Hellman, Hal. Great Feuds in Mathematics. Hoboken, NJ: John Wiley & Sons, 2006. ISBN 0-471-64877-9.
Since antiquity, many philosophers have looked upon mathematics as one thing, perhaps the only thing, that we can know for sure, “the last fortress of certitude” (p. 200). Certainly then, mathematicians must be dispassionate explorers of this frontier of knowledge, and mathematical research a grand collaborative endeavour, building upon the work of the past and weaving the various threads of inquiry into a seamless intellectual fabric. Well, not exactly….

Mathematicians are human, and mathematical research is a human activity like any other, so regardless of the austere crystalline perfection of the final product, the process of getting there can be as messy, contentious, and consequently entertaining as any other enterprise undertaken by talking apes. This book chronicles ten of the most significant and savage disputes in the history of mathematics. The bones of contention range from the tried-and-true question of priority (Tartaglia vs. Cardano on the solution of cubic equations, Newton vs. Leibniz on the origin of the differential and integral calculus) to the relation of mathematics to the physical sciences (Sylvester vs. Huxley), the legitimacy of the infinite in mathematics (Kronecker vs. Cantor, Borel vs. Zermelo), the proper foundation for mathematics (Poincaré vs. Russell, Hilbert vs. Brouwer), and even sibling rivalry (Jakob vs. Johann Bernoulli). A final chapter recounts the incessantly disputed question of whether mathematicians discover structures that are “out there” (as John D. Barrow puts it, “Pi in the Sky”) or invent what is ultimately as much a human construct as music or literature.

The focus is primarily on people and events, less so on the mathematical questions behind the conflict; if you're unfamiliar with the issues involved, you may want to look them up in other references. The stories presented here are an excellent antidote to the retrospective view of many accounts which present mathematical history as a steady march forward, with each generation building upon the work of the previous. The reality is much more messy, with the directions of inquiry chosen for reasons of ego and national pride as often as inherent merit, and the paths not taken often as interesting as those which were. Even if you believe (as I do) that mathematics is “out there”, the human struggle to discover and figure out how it all fits together is interesting and ultimately inspiring, and this book provides a glimpse into that ongoing quest.