Biography

Aagaard, Finn. Aagaard's Africa. Washington: National Rifle Association, 1991. ISBN 0-935998-62-4.
The author was born in Kenya in 1932 and lived there until 1977 when, after Kenya's ban on game hunting destroyed his livelihood as a safari guide, he emigrated to the United States, where he died in April 2000. This book recounts his life in Kenya, from boyhood through his career as a professional hunter and guide. If you find the thought of hunting African wildlife repellent, this is not the book for you. It does provide a fine look at Africa and its animals by a man who clearly cherished the land and the beasts which roam it, and viewed the responsible hunter as an integral part of a sustainable environment. A little forensic astronomy allows us to determine the day on which the kudu hunt described on page 124 took place. Aagaard writes, “There was a total eclipse of the sun that afternoon, but it seemed a minor event to us. Laird and I will always remember that day as ‘The Day We Shot The Kudu’.” Checking the canon of 20th century solar eclipses shows that the only total solar eclipse crossing Kenya during the years when Aagaard was hunting there was on June 30th, 1973, a seven-minute totality and a once-in-a-lifetime spectacle. So the kudu hunt had to have been that morning. To this amateur astronomer, no total solar eclipse is a minor event, and the one I saw in Africa will forever remain a major event in my life. A solar eclipse with seven minutes of totality is something I shall never live to see (the next occurring on June 25th, 2150), so I would have loved to have seen that one and would never have deemed it a “minor event”, but then I've never shot a kudu the morning of an eclipse!

This book is out of print and used copies, at this writing, are offered at outrageous prices. I bought this book directly from the NRA more than a decade ago—books sometimes sit on my shelf a long time before I read them. I wouldn't pay more than about USD 25 for a used copy.

July 2005 Permalink

Aldrin, Buzz. Magnificent Desolation. London: Bloomsbury, 2009. ISBN 978-1-4088-0416-2.
What do you do with the rest of your life when you were one of the first two humans to land on the Moon before you celebrated your fortieth birthday? This relentlessly candid autobiography answers that question for Buzz Aldrin (please don't write to chastise me for misstating his name: while born as Edwin Eugene Aldrin, Jr., he legally changed his name to Buzz Aldrin in 1979). Life after the Moon was not easy for Aldrin. While NASA trained its astronauts for every imaginable in-flight contingency, it did nothing to prepare them for celebrity once the mission was accomplished: detail-oriented engineers were suddenly thrust into the public sphere, sent around the world as goodwill ambassadors with little or no concern for the effects upon their careers or family lives.

None of this came easily to him, and in this book he chronicles his marriages (3), divorces (2), battles against depression and alcoholism, and search for a post-Apollo career, which included commanding the U.S. Air Force test pilot school at Edwards Air Force Base, writing novels, serving as a corporate board member, and selling Cadillacs. In the latter part of the book he describes his recent efforts to promote space tourism, develop affordable private sector access to space, and design an architecture which will permit exploration and exploitation of the resources of the Moon, Mars and beyond with budgets well below those of the Apollo era.

This book did not work for me. Buzz Aldrin has lived an extraordinary life: he developed the techniques for orbital rendezvous used to this day in space missions, pioneered underwater neutral buoyancy training for spacewalks, then performed the first completely successful extra-vehicular activity on Gemini 12, demonstrating that astronauts can do useful work in the void, and was the second man to set foot on the Moon. But all of this is completely covered in the first three chapters, and then we have 19 more chapters describing his life after the Moon. While I'm sure it's fascinating if you've lived through it yourself, it isn't necessarily all that interesting to other people. Aldrin comes across as, and admits to being, self-centred, and this is much in evidence here. His adventures, ups, downs, triumphs, and disappointments in the post-Apollo era are those that many experience in their own lives, and I don't find them compelling to read just because the author landed on the Moon forty years ago.

Buzz Aldrin is not just an American hero, but a hero of the human species: he was there when the first naked apes reached out and set foot upon another celestial body (hear what he heard in his headphones during the landing). His life after that epochal event has been a life well-lived, and his efforts to open the high frontier to ordinary citizens are to be commended. This book is his recapitulation of his life so far, but I must confess I found the post-Apollo narrative tedious. But then, they wouldn't call him Buzz if there wasn't a buzz there! Buzz is 80 years old and envisions living another 20 or so. Works for me: I'm around 60, so that gives me 40 or so to work with. Given any remotely sane space policy, Buzz could be the first man to set foot on Mars in the next 15 years, and Lois could be the first woman. Maybe I and the love of my life will be among the crew to deliver them their supplies and the essential weasels for their planetary colonisation project.

A U.S. edition is available.

January 2011 Permalink

Aron, Leon. Yeltsin: A Revolutionary Life. New York: St. Martin's, 2000. ISBN 0-312-25185-8.

November 2001 Permalink

Bin Ladin, Carmen. The Veiled Kingdom. London: Virago Press, 2004. ISBN 1-84408-102-8.
Carmen Bin Ladin, a Swiss national with a Swiss father and Iranian mother, married Yeslam Bin Ladin in 1974 and lived in Jeddah, Saudi Arabia from 1976 to 1985. Yeslam Bin Ladin is one of the 54 sons and daughters sired by that randy old goat Sheikh Mohamed Bin Laden on his twenty-two wives including, of course, murderous nutball Osama. (There is no unique transliteration of Arabic into English. Yeslam spells his name “Bin Ladin”, while other members of the clan use “Bin Laden”, the most common spelling in the media. This book uses “Bin Ladin” when referring to Yeslam, Carmen, and their children, and “Bin Laden” when referring to the clan or other members of it.) This autobiography provides a peek, through the eyes of a totally Westernised woman, into the bizarre medieval life of Saudi women and the arcane customs of that regrettable kingdom. The author separated from her husband in 1988 and presently lives in Geneva. The link above is to a U.K. paperback edition. I believe the same book is available in the U.S. under the title Inside the Kingdom: My Life in Saudi Arabia, but at the present time only in hardcover.

September 2004 Permalink

Brookhiser, Richard. Founding Father. New York: Free Press, 1996. ISBN 0-684-83142-2.
This thin (less than 200 pages of main text) volume is an enlightening biography of George Washington. It is very much a moral biography in the tradition of Plutarch's Lives; the focus is on Washington's life in the public arena and the events in his life which formed his extraordinary character. Reading Washington's prose, one might assume that he, like many other framers of the U.S. Constitution, had an extensive education in the classics, but in fact his formal education ended at age 15, when he became an apprentice surveyor—among U.S. presidents, only Andrew Johnson had less formal schooling. Washington's intelligence and voracious reading—his library numbered more than 900 books at his death—made him the intellectual peer of his just sprouting Ivy League contemporaries. One historical footnote I'd never before encountered is the tremendous luck the young U.S. republic had in escaping the risk of dynasty—among the first five U.S. presidents, only John Adams had a son who survived to adulthood (and his eldest son, John Quincy Adams, became the sixth president).

May 2005 Permalink

Brown, Brandon R. Planck. Oxford: Oxford University Press, 2015. ISBN 978-0-19-021947-5.
Theoretical physics is usually a young person's game. Many of the greatest breakthroughs have been made by researchers in their twenties, just having mastered existing theories while remaining intellectually flexible and open to new ideas. Max Planck, born in 1858, was an exception to this rule. He spent most of his twenties living with his parents and despairing of finding a paid position in academia. He was thirty-six when he took on the project of understanding heat radiation, and forty-two when he explained it in terms which would launch the quantum revolution in physics. He was in his fifties when he discovered the zero-point energy of the vacuum, and remained engaged and active in science until shortly before his death in 1947 at the age of 89. As theoretical physics editor for the then most prestigious physics journal in the world, Annalen der Physik, in 1905 he approved publication of Einstein's special theory of relativity, embraced the new ideas from a young outsider with neither a Ph.D. nor an academic position, extended the theory in his own work in subsequent years, and was instrumental in persuading Einstein to come to Berlin, where he became a close friend.

Sometimes the simplest puzzles lead to the most profound of insights. At the end of the nineteenth century, the radiation emitted by heated bodies was such a conundrum. All objects emit electromagnetic radiation due to the thermal motion of their molecules. If an object is sufficiently hot, such as the filament of an incandescent lamp or the surface of the Sun, some of the radiation will fall into the visible range and be perceived as light. Cooler objects emit in the infrared or lower frequency bands and can be detected by instruments sensitive to them. The radiation emitted by a hot object has a characteristic spectrum (the distribution of energy by frequency), and has a peak which depends only upon the temperature of the body. One of the simplest cases is that of a black body, an ideal object which perfectly absorbs all incident radiation. Consider an ideal closed oven which loses no heat to the outside. When heated to a given temperature, its walls will absorb and re-emit radiation, with the spectrum depending upon its temperature. But the equipartition theorem, a cornerstone of statistical mechanics, predicted that the absorption and re-emission of radiation in the closed oven would pump ever more energy into ever higher frequencies, with the total energy diverging to infinity: the so-called ultraviolet catastrophe. Not only did this violate the law of conservation of energy, it was an affront to common sense: closed ovens do not explode like nuclear bombs. And yet the theory which predicted this behaviour, the Rayleigh-Jeans law, made perfect sense based upon the motion of atoms and molecules, correctly predicted numerous physical phenomena, and agreed with measurements of thermal radiation at low frequencies.

At the time Planck took up the problem of thermal radiation, experimenters in Germany were engaged in measuring the radiation emitted by hot objects with ever-increasing precision, confirming the discrepancy between theory and reality, and falsifying several attempts to explain the measurements. In December 1900, Planck presented his new theory of black body radiation and what is now called Planck's Law at a conference in Berlin. Written in modern notation, his formula for the energy emitted by a body of temperature T at frequency ν is:

$$ B_\nu(T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h \nu / k_B T} - 1} $$

This equation not only correctly predicted the results measured in the laboratories, it avoided the ultraviolet catastrophe, as it predicted that emission falls off exponentially above a frequency set by an object's temperature. This meant that the absorption and re-emission of radiation in the closed oven could never run away to infinity because essentially no energy could be emitted above the limit imposed by the temperature.
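To see the contrast numerically, here is a minimal sketch (my own illustration, not from the book) comparing the Rayleigh-Jeans approximation with Planck's law at a few frequencies; the 5800 K temperature, roughly that of the solar photosphere, is an assumed value. At low frequencies the two agree closely, while at high frequencies the Rayleigh-Jeans value grows without bound and the Planck value collapses toward zero.

```python
import math

# Physical constants (SI units)
h  = 6.62607015e-34   # Planck's constant, J·s
kB = 1.380649e-23     # Boltzmann's constant, J/K
c  = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Spectral radiance of a black body (Planck's law)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans law: accurate at low frequencies, divergent at high."""
    return 2 * nu**2 * kB * T / c**2

T = 5800.0   # assumed temperature, kelvin
for nu in (1e12, 1e14, 1e15, 1e16):
    print(f"{nu:8.1e} Hz   Planck {planck(nu, T):.3e}   "
          f"Rayleigh-Jeans {rayleigh_jeans(nu, T):.3e}")
```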

Fine: the theory explained the measurements. But what did it mean? More than a century later, we're still trying to figure that out.

Planck modeled the walls of the oven as a series of resonators, but unlike earlier theories in which each could emit energy at any frequency, he constrained them to produce discrete chunks of energy with a value determined by the frequency emitted. This had the result of imposing a limit on the frequency due to the available energy. While this assumption yielded the correct result, Planck, deeply steeped in the nineteenth century tradition of the continuum, did not initially suggest that energy was actually emitted in discrete packets, considering this aspect of his theory “a purely formal assumption.” Planck's 1900 paper generated little reaction: it was observed to fit the data, but the theory and its implications went over the heads of most physicists.

In 1905, in his capacity as editor of Annalen der Physik, he read and approved the publication of Einstein's paper on the photoelectric effect, which explained another physics puzzle by assuming that light was actually emitted in discrete bundles with an energy determined by its frequency. But Planck, whose equation manifested the same property, wasn't ready to go that far. As late as 1913, he wrote of Einstein, “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.” Only in the 1920s did Planck fully accept the implications of his work as embodied in the emerging quantum theory.

The equation for Planck's Law contained two new fundamental physical constants: Planck's constant (h) and Boltzmann's constant (kB). (Boltzmann's constant was named in memory of Ludwig Boltzmann, the pioneer of statistical mechanics, who committed suicide in 1906. The constant was first introduced by Planck in his theory of thermal radiation.) Planck realised that these new constants, which related the worlds of the very large and very small, together with other physical constants such as the speed of light (c), the gravitational constant (G), and the Coulomb constant (ke), allowed defining a system of units for quantities such as length, mass, time, electric charge, and temperature which were truly fundamental: derived from the properties of the universe we inhabit, and therefore comprehensible to intelligent beings anywhere in the universe. Most systems of measurement are derived from parochial anthropocentric quantities such as the temperature of somebody's armpit or the supposed distance from the north pole to the equator. Planck's natural units have no such dependencies, and when one does physics using them, equations become simpler and more comprehensible. The magnitudes of the Planck units are so far removed from the human scale they're unlikely to find any application outside theoretical physics (imagine speed limit signs expressed in a fraction of the speed of light, or road signs giving distances in Planck lengths of 1.62×10⁻³⁵ metres), but they reflect the properties of the universe and may indicate the limits of our ability to understand it (for example, it may not be physically meaningful to speak of a distance smaller than the Planck length or an interval shorter than the Planck time [5.39×10⁻⁴⁴ seconds]).
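As a check on the figures quoted above, here is a minimal sketch (again my own, not from the book) computing a few Planck units from CODATA values of the constants; following modern convention it uses the reduced Planck constant ħ rather than h.

```python
import math

# CODATA values, SI units
hbar = 1.054571817e-34   # reduced Planck constant, J·s
G    = 6.67430e-11       # Newtonian gravitational constant, m³·kg⁻¹·s⁻²
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ≈ 1.62×10⁻³⁵ m
planck_time   = math.sqrt(hbar * G / c**5)   # ≈ 5.39×10⁻⁴⁴ s
planck_mass   = math.sqrt(hbar * c / G)      # ≈ 2.18×10⁻⁸ kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```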

Planck's life was long and productive, and he enjoyed robust health (he continued his long hikes in the mountains into his eighties), but it was marred by tragedy. His first wife, Marie, died of tuberculosis in 1909. He outlived four of his five children. His son Karl was killed in 1916 in World War I. His two daughters, Grete and Emma, both died in childbirth, in 1917 and 1919. His son and close companion Erwin, who survived capture and imprisonment by the French during World War I, was arrested and executed by the Nazis in 1945 on suspicion of involvement in the Stauffenberg plot to assassinate Hitler. (There is no evidence Erwin was a part of the conspiracy, but he was anti-Nazi and knew some of those involved in the plot.)

Planck was repulsed by the Nazis, especially after a private meeting with Hitler in 1933, but continued in his post as the head of the Kaiser Wilhelm Society until 1937. He considered himself a German patriot and never considered emigrating (and doubtless his being 75 years old when Hitler came to power was a consideration). He opposed and resisted the purging of Jews from German scientific institutions and the campaign against “Jewish science”, but when ordered to dismiss non-Aryan members of the Kaiser Wilhelm Society, he complied. When Heisenberg approached him for guidance, he said, “You have come to get my advice on political questions, but I am afraid I can no longer advise you. I see no hope of stopping the catastrophe that is about to engulf all our universities, indeed our whole country. … You simply cannot stop a landslide once it has started.”

Planck's house near Berlin was destroyed in an Allied bombing raid in February 1944, and with it a lifetime of his papers, photographs, and correspondence. (He and his second wife Marga had evacuated to Rogätz in 1943 to escape the raids.) As a result, historians have only limited primary sources from which to work, and the present book does an excellent job of recounting the life and science of a man whose work laid part of the foundations of twentieth century science.

January 2017 Permalink

Bryson, Bill. Shakespeare. London: Harper Perennial, 2007. ISBN 978-0-00-719790-3.
This small, thin (200 page) book contains just about every fact known for certain about the life of William Shakespeare, which isn't very much. In fact, if the book restricted itself only to those facts, and excluded descriptions of Elizabethan and Jacobean England, Shakespeare's contemporaries, actors and theatres of the time, and the many speculations about Shakespeare and the deliciously eccentric characters who sometimes promoted them, it would probably be a quarter of its present length.

For a figure whose preeminence in English literature is rarely questioned today, and whose work shaped the English language itself—2035 English words appear for the first time in the works of Shakespeare, of which about 800 continue in common use today, including critical, frugal, horrid, vast, excellent, lonely, leapfrog, and zany (pp. 112–113)—very little is known apart from the content of his surviving work. We know the dates of his birth, marriage, and death, something of his parents, siblings, wife, and children, but nothing of his early life, education, travel, reading, or any of the other potential sources of the extraordinary knowledge and insight into the human psyche which informs his work. Between the years 1585 and 1592 he drops entirely from sight: no confirmed historical record has been found, then suddenly he pops up in London, at the peak of his powers, writing, producing, and performing in plays and quickly gaining recognition as one of the preeminent dramatists of his time. We don't even know (although there is no shortage of speculation) which plays were his early works and which were later: there is no documentary evidence for the dates of the plays nor the order in which they were written, apart from a few contemporary references which allow placing a play as no later than the mention of it. We don't even know how he spelt or pronounced his name: of six extant signatures believed to be in his hand, no two spell his name the same way, and none uses the “Shakespeare” spelling in use today.

Shakespeare's plays brought him fame and a substantial fortune during his life, but plays were regarded as ephemeral things at the time, and were the property of the theatrical company which commissioned them, not the author, so no authoritative editions of the plays were published during his life. Had it not been for the efforts of his colleagues John Heminges and Henry Condell, who published the “First Folio” edition of his collected works seven years after his death, it is probable that the eighteen plays which first appeared in print in that edition would have been lost to history, with subsequent generations deeming Shakespeare, based upon surviving quarto editions of uneven (and sometimes laughable) quality of a few plays, one of a number of Elizabethan playwrights but not the towering singular figure he is now considered to be. (One wonders if there were others of Shakespeare's stature who were not as lucky in the dedication of their friends, of whose work we shall never know.) Nobody really knows how many copies of the First Folio were printed, but guesses run between 750 and 1000. Around 300 copies in various states of completeness have survived to the present, and around eighty copies are in a single room at the Folger Shakespeare Library in Washington, D.C., about two blocks from the U.S. Capitol. Now maybe decades of computer disasters have made me obsessively preoccupied with backup and geographical redundancy, but that just makes me shudder. Is there anybody there who wonders whether this is really a good idea? After all, the last time I was a few blocks from the U.S. Capitol, I spotted an ACME MISSILE BOMB right in plain sight!

A final chapter is devoted to theories that someone other than the scantily documented William Shakespeare wrote the works attributed to him. The author points out the historical inconsistencies and implausibilities of most frequently proffered claimants, and has a good deal of fun with some of the odder of the theorists, including the exquisitely named J. Thomas Looney, Sherwood E. Silliman, and George M. Battey.

Bill Bryson fans who have come to cherish his lighthearted tone and quirky digressions on curious details and personalities from such works as A Short History of Nearly Everything (November 2007) will not be disappointed. If one leaves the book not knowing a great deal about Shakespeare, because so little is actually known, it is with a rich sense of having been immersed in the England of his time and the golden age of theatre to which he so mightily contributed.

A U.S. edition is available, but at this writing only in hardcover.

July 2008 Permalink

Bryson, Bill. The Life and Times of the Thunderbolt Kid. London: Black Swan, 2007. ISBN 978-0-552-77254-9.
What could be better than growing up in the United States in the 1950s? Well, perhaps being a kid with super powers as the American dream reached its apogee and before the madness started! In this book, humorist, travel writer, and science populariser extraordinaire Bill Bryson provides a memoir of his childhood (and, to a lesser extent, coming of age) in Des Moines, Iowa in the 1950s and '60s. It is a thoroughly engaging and charming narrative which, if you were a kid there, then, will bring back a flood of fond memories (as well as some acutely painful ones) and, if you weren't, will let you appreciate, as the author closes the book, “What a wonderful world it was. We won't see its like again, I'm afraid.”

The 1950s were the golden age of comic books, and whilst shopping at the local supermarket, Bryson's mother would drop him in the (unsupervised) Kiddie Corral where he and other offspring could indulge for free to their heart's content. It's only natural a red-blooded Iowan boy would discover himself to be a superhero, The Thunderbolt Kid, endowed with ThunderVision, which enabled his withering gaze to vapourise morons. Regrettably, the power seemed to lack permanence, and the morons so dispersed into particles of the luminiferous æther had a tedious way of reassembling themselves and further vexing our hero and his long-suffering schoolmates. But still, more work for The Thunderbolt Kid!

This was a magic time in the United States—when prosperity not only returned after depression and war, but exploded to such an extent that mean family income more than doubled in the 1950s while most women still remained at home raising their families. What had been considered luxuries just a few years before: refrigerators and freezers, cars and even second cars, single family homes, air conditioning, television, all became commonplace (although kids would still gather in the yard of the neighbourhood plutocrat to squint through his window at the wonder of colour TV and chuckle at why he paid so much for it).

Although the transformation of the U.S. from an agrarian society to a predominantly urban and industrial nation was well underway, most families were no more than one generation removed from the land, and Bryson recounts his visits to his grandparents' farm which recall what was lost and gained as that pillar of American society went into eclipse.

There are relatively few factual errors, but from time to time Bryson's narrative swallows counterfactual left-wing conventional wisdom about the Fifties. For example, writing about atomic bomb testing:

Altogether between 1946 and 1962, the United States detonated just over a thousand nuclear warheads, including some three hundred in the open air, hurling numberless tons of radioactive dust into the atmosphere. The USSR, China, Britain, and France detonated scores more.

Sigh…where do we start? Well, the obvious subtext is that the U.S. started the arms race and that other nuclear powers responded in a feeble manner. In fact, the U.S. conducted a total of 1030 nuclear tests up until testing was suspended in 1992, of which 215 were detonated in the atmosphere, with the balance conducted underground with no release of radioactivity. The Soviet Union (USSR) did, indeed, conduct “scores” of tests, to be precise 35.75 score for a total of 715 tests, with 219 in the atmosphere—more than the U.S.—including Tsar Bomba, with a yield of 50 megatons. “Scores” indeed—surely the arms race was entirely at the instigation of the U.S.

If you grew up in the U.S. in the 1950s, or wish you had, you'll want to read this book. I had totally forgotten the radioactive toilets you had to pay to use but kids could wiggle under the door to bask in their actinic glare, the glories of automobiles you could understand piece by piece and which were your ticket to exploring a broad continent where every town, every city was completely different: not just another configuration of the same franchises and strip malls (and yet recall how exciting it was when they first arrived: “We're finally part of the great national adventure!”).

In the 1950s, when privation gave way to prosperity yet Leviathan had not yet supplanted family, community, and civil society, it was utopia to be a kid (although, having been there, then, I'd have deemed it boring, but if I'd been confined inside as present-day embryonic taxpayers in safetyland are, I'd have probably blown things up. Oh wait—Willoughby already did that, twelve hours too early!). If you grew up in the '50s, enjoy spending a few pleasant hours back there; if you're a parent of the baby boomers, exult in the childhood and opportunities you entrusted to them. And if you're a parent of a child in this constrained century? Seek to give your child the unbounded opportunities and unsupervised freedom to explore the world which Bryson and this humble scribbler experienced as we grew up.

Vapourising morons with ThunderVision—we need you more than ever, Thunderbolt Kid!

A U.S. edition is available.

January 2010 Permalink

Burns, Jennifer. Goddess of the Market. New York: Oxford University Press, 2009. ISBN 978-0-19-532487-7.
For somebody who built an entire philosophical system founded on reason, and insisted that even emotion was ultimately an expression of rational thought which could be arrived at from first principles, Ayn Rand has inspired a passion among readers, disciples, enemies, and critics, in fields ranging from literature, politics, philosophy, religion, architecture, and music to economics and human relationships, matched by few modern writers. Her two principal novels, The Fountainhead and Atlas Shrugged (April 2010), remain among the best selling fiction titles more than half a century after their publication, with in excess of ten million copies sold. More than half a million copies of Atlas Shrugged were sold in 2009 alone.

For all the commercial success of her works, which made this refugee from the Soviet Union, writing in a language she barely knew when she arrived in the United States, wealthy before her fortieth birthday, her writing was generally greeted with derision among the literary establishment, reviewers in major newspapers, and academics. By the time Atlas Shrugged was published in 1957, she saw herself primarily as the founder of an all-encompassing philosophical system she named Objectivism, and her fiction as a means to demonstrate the validity of her system and communicate it to a broad audience. Academic philosophers, for the most part, did not even reject her work but simply ignored it, deeming it unworthy of their consideration. And Rand did not advance her cause by refusing to enter into the give and take of philosophical debate, instead insisting that her system was self-evidently correct and had to be accepted as a package deal with no modifications.

As a result, she did not so much attract followers as disciples, who looked to her words as containing the answer to all of their questions, and whose self-worth was measured by how close they became to, as it were, the fountainhead whence they sprang. Some of these people were extremely bright, and went on to distinguished careers in which they acknowledged Rand's influence on their thinking. Alan Greenspan was a member of Rand's inner circle in the 1960s, making the case for a return to the gold standard in her newsletter, before becoming the maestro of paper money decades later.

Although her philosophy claimed that contradiction was impossible, her life and work were full of contradictions. While arguing that everything of value sprang from the rational creativity of free minds, she created a rigid system of thought which she insisted her followers adopt without any debate or deviation, and banished them from her circle if they dared dissent. She claimed to have created a self-consistent philosophical and moral system which was self-evidently correct, and yet she refused to debate those championing other systems. Her novels portray the state and its minions in the most starkly negative light of perhaps any broadly read fiction, and yet she detested libertarians and anarchists, defended the state as necessary to maintain the rule of law, and exulted in the success of Apollo 11 (whose launch she was invited to observe).

The passion that Ayn Rand inspires has coloured most of the many investigations of her life and work published to date. Finally, in this volume, we have a more or less dispassionate examination of her career and œuvre, based on original documents in the collection of the Ayn Rand Institute and a variety of other archives. Based upon the author's Ph.D. dissertation (and with the wealth of footnotes and source citations customary in such writing), this book makes an effort to tell the story of Ayn Rand's life, work, and their impact upon politics, economics, philosophy, and culture to date, and her lasting legacy, without taking sides. The author is neither a Rand follower nor a confirmed opponent, and pretty much lets each reader decide where they come down based on the events described.

At the outset, the author writes, “For over half a century, Rand has been the ultimate gateway drug to life on the right.” I initially found this very off-putting, and resigned myself to enduring another disdainful dismissal of Rand (to whose views the vast majority of the “right” over that half a century would have taken violent exception: Rand was vehemently atheist, opposing any mixing of religion and politics; a staunch supporter of abortion rights; opposed the Vietnam War and conscription; and although she rejected the legalisation of marijuana, cranked out most of her best known work while cranked on Benzedrine), but as I read the book the idea began to grow on me. Indeed, many people in the libertarian and conservative worlds got their introduction to thought outside the collectivist and statist orthodoxy pervading academia and the legacy media by reading one of Ayn Rand's novels. This may have been the moment at which they first began to, as the hippies exhorted, “question authority”, and investigate other sources of information and ways of thinking and looking at the world. People who grew up with the Internet will find it almost impossible to imagine how difficult this was back in the 1960s, when even discovering the existence of a dissenting newsletter (amateurishly produced, irregularly issued, and with a tiny subscriber base) was entirely a hit or miss matter. But Ayn Rand planted the seed in the minds of millions of people, a seed which might sprout when they happened upon a like mind, or a like-minded publication.

The life of Ayn Rand is simultaneously a story of an immigrant living the American dream: success in Hollywood and Broadway and wealth beyond even her vivid imagination; the frustration of an author out of tune with the ideology of the times; the political education of one who disdained politics and politicians; the birth of one of the last “big systems” of philosophy in an age where big systems had become discredited; and a life filled with passion lived by a person obsessed with reason. The author does a thorough job of pulling this all together into a comprehensible narrative which, while thoroughly documented and eschewing enthusiasm in either direction, will keep you turning the pages. The author is an academic, and writes in the contemporary scholarly idiom: the term “right-wing” appears 15 times in the book, while “left-wing” is used not at all, even when describing officials and members of the Communist Party USA. Still, this does not detract from the value of this work: a serious, in-depth, and agenda-free examination of Ayn Rand's life, work, and influence on history, today, and tomorrow.

December 2010 Permalink

Carlson, W. Bernard. Tesla: Inventor of the Electrical Age. Princeton: Princeton University Press, 2013. ISBN 978-0-691-16561-5.
Nikola Tesla was born in 1858 in a village in what is now Croatia, then part of the Austro-Hungarian Empire. His father and grandfather were both priests in the Orthodox church. The family was of Serbian descent, but had lived in Croatia since the 1690s among a community of other Serbs. His parents wanted him to enter the priesthood and enrolled him in school to that end. He excelled in mathematics and, building on a boyhood fascination with machines and tinkering, wanted to pursue a career in engineering. After completing high school, Tesla returned to his village where he contracted cholera and was near death. His father promised him that if he survived, he would “go to the best technical institution in the world.” After nine months of illness, Tesla recovered and, in 1875, entered the Joanneum Polytechnic School in Graz, Austria.

Tesla's university career started out brilliantly, but he came into conflict with one of his physics professors over the feasibility of designing a motor which would operate without the troublesome and unreliable commutator and brushes of existing motors. He became addicted to gambling, lost his scholarship, and dropped out in his third year. He worked as a draftsman, taught in his old high school, and eventually ended up in Prague, intending to continue his study of engineering at the Karl-Ferdinand University. He took a variety of courses, but eventually his uncles withdrew their financial support.

Tesla then moved to Budapest, where he found employment as chief electrician at the Budapest Telephone Exchange. He quickly distinguished himself as a problem solver and innovator and, before long, came to the attention of the Continental Edison Company of France, which had designed the equipment used in Budapest. He was offered and accepted a job at their headquarters in Ivry, France. Most of Edison's employees had practical, hands-on experience with electrical equipment, but lacked Tesla's formal education in mathematics and physics. Before long, Tesla was designing dynamos for lighting plants and earning a handsome salary. With his language skills (by that time, Tesla was fluent in Serbian, German, and French, and was improving his English), the Edison company sent him into the field as a trouble-shooter. This further increased his reputation and, in 1884, he was offered a job at Edison headquarters in New York. He arrived and, years later, described the formalities of entering the U.S. as an immigrant: a clerk saying “Kiss the Bible. Twenty cents!”

Tesla had never abandoned the idea of a brushless motor. Almost all electric lighting systems in the 1880s used direct current (DC): electrons flowed in only one direction through the distribution wires. This is the kind of current produced by batteries, and the first electrical generators (dynamos) produced direct current by means of a device called a commutator. As the generator is turned by its power source (for example, a steam engine or water wheel), power is extracted from the rotating commutator by fixed brushes which press against it. The contacts on the commutator are wired to the coils in the generator in such a way that a constant direct current is maintained. When direct current is used to drive a motor, the motor must also contain a commutator which converts the direct current into a reversing flow to maintain the motor in rotary motion.

Commutators, with brushes rubbing against them, are inefficient and unreliable. Brushes wear and must eventually be replaced, and as the commutator rotates and the brushes make and break contact, sparks may be produced which waste energy and degrade the contacts. Further, direct current has a major disadvantage for long-distance power transmission. There was, at the time, no way to efficiently change the voltage of direct current. This meant that the circuit from the generator to the user of the power had to run at the same voltage the user received, say 120 volts. But at such a voltage, resistance losses in copper wires are such that over long runs most of the energy would be lost in the wires, not delivered to customers. You can increase the size of the distribution wires to reduce losses, but before long this becomes impractical due to the cost of copper it would require. As a consequence, Edison electric lighting systems installed in the 19th century had many small powerhouses, each supplying a local set of customers.

Alternating current (AC) solves the problem of power distribution. In 1881 the electrical transformer had been invented, and by 1884 high-efficiency transformers were being manufactured in Europe. Powered by alternating current (they don't work with DC), a transformer efficiently converts power from one combination of voltage and current to another. For example, power might be transmitted from the generating station to the customer at 12000 volts and 1 ampere, then stepped down to 120 volts and 100 amperes by a transformer at the customer location. Resistive losses in a wire depend on the square of the current flowing through it, not on the voltage, so for a given level of transmission loss, cables distributing power at 12000 volts need only a tiny fraction of the copper required at 120 volts. For electric lighting, alternating current works just as well as direct current (as long as the frequency of the alternating current is sufficiently high that lamps do not flicker). But electricity was increasingly used to power motors, replacing steam power in factories. All existing practical motors ran on DC, so this was seen as an advantage to Edison's system.
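To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the book) of the I²R loss incurred when delivering the same power over a line of fixed resistance at the two voltages in the example; the 0.1 ohm line resistance is an assumed figure.

```python
# I²R loss for delivering the same power over the same pair of wires
# at two different voltages.  The 12 kW load and 0.1 ohm line resistance
# are assumed figures chosen only to illustrate the scaling.

def line_loss(power_delivered_w, line_voltage_v, line_resistance_ohm):
    """Resistive loss in the transmission line for a given delivered power."""
    current = power_delivered_w / line_voltage_v   # amperes
    return current**2 * line_resistance_ohm        # watts

P = 12_000.0     # 12 kW delivered to the customer
R = 0.1          # assumed total resistance of the line, ohms

for volts in (120.0, 12_000.0):
    loss = line_loss(P, volts, R)
    print(f"{volts:8.0f} V: {P/volts:6.1f} A, line loss {loss:10.3f} W "
          f"({100*loss/P:.5f} % of delivered power)")
```

For the same wire and the same delivered power, the loss ratio is the square of the voltage ratio: stepping up from 120 to 12000 volts cuts the resistive loss by a factor of ten thousand.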

Tesla worked only six months for Edison. After developing an arc lighting system only to have Edison put it on the shelf after acquiring the rights to a system developed by another company, he quit in disgust. He then continued to work on an arc light system in New Jersey, but the company to which he had licensed his patents failed, leaving him only with a worthless stock certificate. To support himself, Tesla worked repairing electrical equipment and even digging ditches, where one of his foremen introduced him to Alfred S. Brown, who had made his career in telegraphy. Tesla showed Brown one of his patents, for a “thermomagnetic motor”, and Brown contacted Charles F. Peck, a lawyer who had made his fortune in telegraphy. Together, Peck and Brown saw the potential for the motor and other Tesla inventions and in April 1887 founded the Tesla Electric Company, with its laboratory in Manhattan's financial district.

Tesla immediately set to work making his dream of a brushless AC motor a practical reality and, by using multiple AC currents, out of phase with one another (the polyphase system), he was able to create a magnetic field which itself rotated. The rotating magnetic field induced a current in the rotating part of the motor, which would start and turn without any need for a commutator or brushes. Tesla had invented what we now call the induction motor. He began to file patent applications for the motor and the polyphase AC transmission system in the fall of 1887, and by May of the following year had been granted a total of seven patents on various aspects of the motor and polyphase current.
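Here is a minimal numerical sketch (my own, not from the book) of the rotating field idea: three stator coils spaced 120° apart, each carrying a current 120° out of phase with the next, sum to a field vector of constant magnitude which rotates at the supply frequency. The 60 Hz frequency and unit current amplitudes are assumptions for illustration only.

```python
import math

# Three stator coils, spaced 120° apart, each carrying a sinusoidal current
# shifted by 120° in time.  Summing their contributions gives a net field
# vector of constant magnitude rotating at the supply frequency — a rotating
# magnetic field produced with no moving contacts at all.

def net_field(t, omega=2 * math.pi * 60):   # 60 Hz supply (assumed)
    bx = by = 0.0
    for k in range(3):
        spatial = 2 * math.pi * k / 3            # orientation of coil k
        current = math.cos(omega * t - spatial)  # current in phase k
        bx += current * math.cos(spatial)
        by += current * math.sin(spatial)
    return bx, by

for i in range(5):
    t = i * 0.002                                # sample every 2 ms
    bx, by = net_field(t)
    magnitude = math.hypot(bx, by)               # stays constant at 1.5
    angle = math.degrees(math.atan2(by, bx)) % 360
    print(f"t = {t*1000:4.1f} ms  |B| = {magnitude:.3f}  angle = {angle:6.1f}°")
```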

One disadvantage of the polyphase system and motor was that it required multiple pairs of wires to transmit power from the generator to the motor, which increased cost and complexity. Also, existing AC lighting systems, which were beginning to come into use, primarily in Europe, used a single phase and two wires. Tesla invented the split-phase motor, which would run on a two wire, single phase circuit, and this was quickly patented.

Unlike Edison, who had built an industrial empire based upon his inventions, Tesla, Peck, and Brown had no interest in founding a company to manufacture Tesla's motors. Instead, they intended to shop around and license the patents to an existing enterprise with the resources required to exploit them. George Westinghouse had developed his inventions of air brakes and signalling systems for railways into a successful and growing company, and was beginning to compete with Edison in the electric light industry, installing AC systems. Westinghouse was a prime prospect to license the patents, and in July 1888 a deal was concluded for cash, notes, and a royalty for each horsepower of motors sold. Tesla moved to Pittsburgh, where he spent a year working in the Westinghouse research lab improving the motor designs. While there, he filed an additional fifteen patent applications.

After leaving Westinghouse, Tesla took a trip to Europe where he became fascinated with Heinrich Hertz's discovery of electromagnetic waves. Produced by alternating current at frequencies much higher than those used in electrical power systems (Hertz used a spark gap to produce them), here was a demonstration of transmission of electricity through thin air—with no wires at all. This idea was to inspire much of Tesla's work for the rest of his life. By 1891, he had invented a resonant high frequency transformer which we now call a Tesla coil, and before long was performing spectacular demonstrations of artificial lightning, illuminating lamps at a distance without wires, and demonstrating new kinds of electric lights far more efficient than Edison's incandescent bulbs. Tesla's reputation as an inventor was equalled by his talent as a showman in presentations before scientific societies and the public in both the U.S. and Europe.

Oddly, for someone with Tesla's academic and practical background, there is no evidence that he mastered Maxwell's theory of electromagnetism. He believed that the phenomena he observed with the Tesla coil and other apparatus were not due to the Hertzian waves predicted by Maxwell's equations, but rather something he called “electrostatic thrusts”. He was later to build a great edifice of mistaken theory on this crackpot idea.

By 1892, plans were progressing to harness the hydroelectric power of Niagara Falls. Transmission of this power to customers was central to the project: around one fifth of the American population lived within 400 miles of the falls. Westinghouse bid Tesla's polyphase system and, with Tesla's help in persuading the committee charged with evaluating proposals, was awarded the contract in 1893. By November of 1896, power from Niagara reached Buffalo, twenty miles away, and over the next decade extended throughout New York. The success of the project made polyphase power transmission the technology of choice for most electrical distribution systems, and it remains so to this day. In 1895, the New York Times wrote:

Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan, he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's invention of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.

February 2016 Permalink

Carpenter, [Malcolm] Scott and Kris Stoever. For Spacious Skies. New York: Harcourt, 2002. ISBN 0-15-100467-6.
This is the most detailed, candid, and well-documented astronaut memoir I've read (Collins' Carrying the Fire is a close second). Included is a pointed riposte to “the man malfunctioned” interpretation of Carpenter's MA-7 mission given in Chris Kraft's autobiography Flight (May 2001). Co-author Stoever is Carpenter's daughter.

June 2003 Permalink

Chaikin, Andrew. John Glenn: America's Astronaut. Washington: Smithsonian Books, 2014. ISBN 978-1-58834-486-1.
This short book (around 126 pages print equivalent), available only for the Kindle as a “Kindle single” at a modest price, chronicles the life and space missions of the first American to orbit the Earth. John Glenn grew up in a small Ohio town, the son of a plumber, and matured during the first great depression. His course in life was set when, in 1929, his father took his eight-year-old son on a joy ride in a Waco biplane offered by a pilot at a local airfield. After that, Glenn filled up his room with model airplanes, intently followed news of air racers and pioneers of exploration by air, and in 1938 attended the Cleveland Air Races. There seemed little hope of his achieving his dream of becoming an airman himself: pilot training was expensive, and his family, while making ends meet during the depression, couldn't afford such a luxury.

With the war in Europe underway and the U.S. beginning to rearm and prepare for possible hostilities, Glenn heard of a government program, the Civilian Pilot Training Program, which would pay for his flying lessons and give him college credit for taking them. He entered the program immediately and received his pilot's license in May 1942. By then, the world was a very different place. Glenn dropped out of college in his junior year and applied for the Army Air Corps. When they dawdled accepting him, he volunteered for the Navy, which immediately sent him to flight school. After completing advanced flight training, he transferred to the Marine Corps, which was seeking aviators.

Sent to the South Pacific theatre, he flew 59 combat missions, mostly in the close air support of ground troops in which Marine pilots specialise. With the end of the war, he decided to make the Marines his career and rotated through a number of stateside posts. After the outbreak of the Korean War, he hoped to see action in the jet combat emerging there and in 1953 arrived in country, again flying close air support. But an exchange program with the Air Force finally allowed him to achieve his ambition of engaging in air-to-air combat at ten miles a minute. He completed 90 combat missions in Korea, and emerged as one of the Marine Corps' most distinguished pilots.

Glenn parlayed his combat record into a test pilot position, which allowed him to fly the newest and hottest aircraft of the Navy and Marines. When NASA went looking for pilots for its Mercury manned spaceflight program, Glenn was naturally near the top of the list, and was among the 110 military test pilots invited to the top secret briefing about the project. Despite not meeting all of the formal selection criteria (he lacked a college degree), he performed superbly in all of the harrowing tests to which candidates were subjected, made cut after cut, and was among the seven selected to be the first astronauts.

This book, with copious illustrations and two embedded videos, chronicles Glenn's career, his harrowing first flight into space, his 1998 return to space on Space Shuttle Discovery on STS-95, and his 24 year stint in the U.S. Senate. I found the picture of Glenn after his pioneering flight somewhat airbrushed. It is said that while in the Senate, “He was known as one of NASA's strongest supporters on Capitol Hill…”, and yet in fact, while not one of the rabid Democrats who tried to kill NASA like Walter Mondale, he did not speak out as an advocate for a more aggressive space program aimed at expanding the human presence in space. His return to space is presented as the result of his assiduously promoting the benefits of space research for gerontology rather than a political junket by a senator which would generate publicity for NASA at a time when many people had tuned out its routine missions. (And if there was so much to be learned by flying elderly people in space, why was it never done again?)

John Glenn was a quintessential product of the old, tough America. A hero in two wars, test pilot when that was one of the most risky of occupations, and first to ride the thin-skinned pressure-stabilised Atlas rocket into orbit, his place in history is assured. His subsequent career as a politician was not particularly distinguished: he initiated few pieces of significant legislation and never became a figure on the national stage. His campaign for the 1984 Democratic presidential nomination went nowhere, and he was implicated in the “Keating Five” scandal. John Glenn accomplished enough in the first forty-five years of his life to earn him a secure place in American history. This book does an excellent job of recounting those events and placing them in the context of the time. If it goes a bit too far in lionising his subsequent career, that's understandable: a biographer shouldn't always succumb to balance when dealing with a hero.

April 2014 Permalink

Chambers, Whittaker. Witness. Washington: Regnery Publishing, [1952] 2002. ISBN 0-89526-789-6.

September 2003 Permalink

Chertok, Boris E. Rockets and People. Vol. 1. Washington: National Aeronautics and Space Administration, [1999] 2005. ISBN 978-1-4700-1463-6 NASA SP-2005-4110.
This is the first book of the author's monumental four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Chertok saw it all: the earliest Soviet experiments with rocketry in the 1930s, uncovering the secrets of the German V-2 amid the rubble of postwar Germany (he was the director of the Institute RABE, where German and Soviet specialists worked side by side laying the foundations of postwar Soviet rocketry), the glory days of Sputnik and Gagarin, the anguish of losing the Moon race, and the emergence of Soviet preeminence in long-duration space station operations.

The first volume covers Chertok's career up to the conclusion of his work in Germany in 1947. Unlike Challenge to Apollo, which is a scholarly institutional and technical history (and consequently rather dry reading), Chertok gives you a visceral sense of what it was like to be there: sometimes chilling, as in his descriptions of the 1930s where he matter-of-factly describes his supervisors and colleagues as having been shot or sent to Siberia just as an employee in the West would speak of somebody being transferred to another office, and occasionally funny, as when he recounts the story of the imperious Valentin Glushko showing up at his door in a car belching copious smoke. It turns out that Glushko had driven all the way with the handbrake on, and his subordinate hadn't dared mention it because Glushko didn't like to be distracted when at the wheel.

When the Soviets began to roll out their space spectaculars in the late 1950s and early '60s, some in the West attributed their success to the Soviets having gotten the “good German” rocket scientists while the West ended up with the second team. Chertok's memoir puts an end to such speculation. By the time the Americans and British vacated the V-2 production areas, they had packed up and shipped out hundreds of rail cars of V-2 missiles and components and captured von Braun and all of his senior staff, who delivered extensive technical documentation as part of their surrender. This left the Soviets with pretty slim pickings, and Chertok and his staff struggled to find components, documents, and specialists left behind. This put them at a substantial disadvantage compared to the U.S., but forced them to reverse-engineer German technology and train their own people in the disciplines of guided missilery rather than rely upon a German rocket team.

History owes a great debt to Boris Chertok, not only for the achievements of his six-decade career (for which he was awarded Hero of Socialist Labour, the Lenin Prize, the Order of Lenin [twice], and the USSR State Prize), but for living so long and undertaking to document the momentous events he witnessed at the earliest moment when such a candid account was possible. Only after the fall of the Soviet Union could the events chronicled here be freely discussed, and the merits and shortcomings of the Soviet system in accomplishing large technological projects be weighed.

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. Words are occasionally run together and capitalisation is missing on some proper nouns. The index cites page numbers from the print edition, which do not appear in the Kindle version, rendering it completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

May 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 2. Washington: National Aeronautics and Space Administration, [1999] 2006. ISBN 978-1-4700-1508-4 NASA SP-2006-4110.
This is the second book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of Chertok's chronicle begins with his return from Germany to the Soviet Union, where he discovers, to his dismay, that day-to-day life in the victorious workers' state is much harder than in the land of the defeated fascist enemy. He becomes part of the project, mandated by Stalin, to first launch captured German V-2 missiles and then produce an exact Soviet copy, designated the R-1. Chertok and his colleagues discover that making a copy of foreign technology may be more difficult than developing it from scratch—the V-2 used a multitude of steel and non-ferrous metal alloys, as well as numerous non-metallic components (seals, gaskets, insulation, etc.) which were not produced by Soviet industry. But without the experience of the German rocket team (which, by this time, was in the United States), there was no way to know whether the choice of a particular material was because its properties were essential to its function or simply because it was readily available in Germany. Thus, making an “exact copy” involved numerous difficult judgement calls where the designers had to weigh the risk of deviation from the German design against the cost of standing up a Soviet manufacturing capacity which might prove unnecessary.

After the difficult start which is the rule for missile projects, the Soviets managed to turn the R-1 into a reliable missile and, through patience and painstaking analysis of telemetry, solved a mystery which had baffled the Germans: why between 10% and 20% of V-2 warheads had detonated in a useless airburst high above the intended target. Chertok's instrumentation proved that aerodynamic heating during re-entry caused the high-explosive warhead to outgas, deform, and trigger the detonator.

As the Soviet missile program progresses, Chertok is a key player, participating in the follow-on R-2 project (essentially a Soviet Redstone—a V-2 derivative, but entirely of domestic design), the R-5 (an intermediate-range ballistic missile eventually armed with nuclear warheads), and the R-7, the world's first intercontinental ballistic missile, which launched Sputnik and Gagarin, and whose derivatives remain in service today, providing the only crewed access to the International Space Station as of this writing.

Not only did the Soviet engineers have to build ever larger and more complicated hardware, they essentially had to invent the discipline of systems engineering all by themselves. While even in aviation it is often possible to test components in isolation and then integrate them into a vehicle, working out interface problems as they manifest themselves, in rocketry everything interacts, and when something goes wrong, you have only the telemetry and wreckage upon which to base your diagnosis. Consider: a rocket ascending may have natural frequencies in its tankage structure excited by vibration due to combustion instabilities in the engine. This can, in turn, cause propellant delivery to the engine to oscillate, which will cause pulses in thrust, which can cause further structural stress. These excursions may cause control actuators to be over-stressed and possibly fail. When all you have to go on is a ragged cloud in the sky, bits of metal raining down on the launch site, and some telemetry squiggles for a second or two before everything went pear shaped, it can be extraordinarily difficult to figure out what went wrong. And none of this can be tested on the ground. Only a complete systems approach can begin to cope with problems like this, and building that kind of organisation required a profound change in Soviet institutions, which had previously been built around imperial chief designers with highly specialised missions. When everything interacts, you need a different structure, and it was part of the genius of Sergei Korolev to create it. (Korolev, who was the author's boss for most of the years described here, is rightly celebrated as a great engineer and champion of missile and space projects, but in Chertok's view at least equally important was his talent in quickly evaluating the potential of individuals and filling jobs with the people [often improbable candidates] best able to do them.)

In this book we see the transformation of the Soviet missile program from slavishly copying German technology to world-class innovation, producing, in short order, the first ICBM, earth satellite, lunar impact, images of the lunar far side, and interplanetary probes. The missile men found themselves vaulted from an obscure adjunct of Red Army artillery to the vanguard of Soviet prestige in the world, with the Soviet leadership urging them on to ever greater exploits.

There is a tremendous amount of detail here—so much that some readers have deemed it tedious; I found it enlightening. The author dissects the Nedelin disaster in forensic detail, as well as the much less well-known 1980 catastrophe at Plesetsk, where 48 died because a component of the rocket used the wrong kind of solder. Rocketry is an exacting business, and it is a gift to generations about to embark upon it to imbibe the wisdom of one who was present at its creation and learned, by decades of experience, just how careful one must be to succeed at it. I could go on regaling you with anecdotes from this book but, hey, if you've made it this far, you're probably going to read it yourself, so what's the point? (But if you do, I'd suggest you read Volume 1 [May 2012] first.)

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index cites page numbers from the print edition, which do not appear in the Kindle version, rendering it completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

August 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 3. Washington: National Aeronautics and Space Administration, [1999] 2009. ISBN 978-1-4700-1437-7 NASA SP-2009-4110.
This is the third book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of this memoir chronicled the achievements which thrust the Soviet Union's missile and space program into the consciousness of people world-wide and sparked the space race with the United States: the development of the R-7 ICBM, Sputnik and its successors, and the first flights which photographed the far side of the Moon and impacted on its surface. In this volume, the author describes the projects and accomplishments which built upon this base and persuaded many observers of the supremacy of Soviet space technology. Since the author's speciality was control systems and radio technology, he had an almost unique perspective upon these events. Unlike other designers who focussed upon one or a few projects, he was involved in almost all of the principal efforts: intermediate-range, intercontinental, and submarine-launched ballistic missiles; air and anti-missile defence; piloted spaceflight; reconnaissance, weather, and navigation satellites; communication satellites; deep space missions and the ground support for them; soft landing on the Moon; and automatic rendezvous and docking. He was present when it looked like the rudimentary R-7 ICBM might be launched in anger during the Cuban missile crisis, was at the table as chief designers battled over whether combat missiles should use cryogenic or storable liquid propellants or solid fuel, and sat on endless boards of inquiry after mission failures—the first eleven attempts to soft-land on the Moon failed, and Chertok was there for each launch, the subsequent tracking, and the sorting out of what went wrong.

This was a time of triumph for the Soviet space program: the first manned flight, endurance record after endurance record, dual flights, the first woman in space, the first flight with a crew of more than one, and the first spacewalk. But from Chertok's perspective inside the programs, and with the freedom he had to write candidly in the 1990s about his experiences, it is clear that the seeds of tragedy were being sown. With the quest for one spectacular after another, each surpassing the last, the Soviets became infected with what NASA came to call “go fever”—a willingness to brush anomalies under the rug and normalise the abnormal because you'd gotten away with it before.

One of the most stunning examples of this is Gagarin's flight. The Vostok spacecraft consisted of a spherical descent module (basically a cannonball covered with ablative thermal protection material) and an instrument compartment containing the retro-rocket, attitude control system, and antennas. After firing the retro-rocket, the instrument compartment was supposed to separate, allowing the descent module's heat shield to protect it through atmospheric re-entry. (The Vostok performed a purely ballistic re-entry, and had no attitude control thrusters in the descent module; stability was maintained exclusively by an offset centre of gravity.) In the two unmanned test flights which preceded Gagarin's mission, the instrument compartment had failed to separate cleanly from the descent module, but the connection burned through during re-entry and the descent module survived. Gagarin was launched in a spacecraft of the same design, and the same thing happened: there were wild oscillations, but after the link burned through his spacecraft stabilised. Astonishingly, Vostok 2 was launched with Gherman Titov on board with precisely the same flaw, and suffered the same failure during re-entry. Once again, the cosmonaut won this orbital game of Russian roulette. One wonders what lessons were learned from this. In this narrative, Chertok is simply aghast at the decision-making here, but one gets the sense that you had to be there, then, to appreciate what was going through people's heads.

The author was extensively involved in the development of the first Soviet communications satellite, Molniya, and provides extensive insights into its design, testing, and early operations. It is often said that the Molniya orbit was chosen because it made the satellite visible from the Soviet far North where geostationary satellites would be too close to the horizon for reliable communication. It is certainly true that today this orbit continues to be used for communications with Russian arctic territories, but its adoption for the first Soviet communications satellite had an entirely different motivation. Due to the high latitude of the Soviet launch site in Kazakhstan, Korolev's R-7 derived booster could place only about 100 kilograms into a geostationary orbit, which was far too little for a communication satellite with the technology of the time, but it could loft 1,600 kilograms into a high-inclination Molniya orbit. The only alternative would have been for Korolev to have approached Chelomey to launch a geostationary satellite on his UR-500 (Proton) booster, which was unthinkable because at the time the two were bitter rivals. So much for the frictionless efficiency of central planning!

In engineering, one learns that every corner cut will eventually come back to cut you. Korolev died, at just the time the Soviet space program needed him most, from a botched operation for a routine condition, performed by a surgeon who had spent most of his time as a Minister of the Soviet Union rather than in the operating room. Gagarin died in a jet fighter training accident which has been the subject of such extensive and multi-layered cover-up and spin that the author simply cites various accounts and leaves it to the reader to judge. Komarov died in Soyuz 1 due to a parachute problem which would have been discovered had an unmanned flight preceded his. He was a victim of “go fever”.

There is so much insight and wisdom here I cannot possibly summarise it all; you'll have to read this book to fully appreciate it, ideally after having first read Volume 1 (May 2012) and Volume 2 (August 2012). Apart from his unique insider's perspective on the Soviet missile and space program, Chertok, who was elected a corresponding member of the Soviet Academy of Sciences in 1968 and a full member (academician) of the Russian Academy of Sciences in 2000, provides a candid view of the politics of selecting members of the Academy and of how they influence policy and projects at the national level. Even as one who survived Stalin's purges, Chertok believes there were merits to the Soviet system which have been lost in the “new Russia”. His observations are worth pondering by those who instinctively believe the market will always converge upon the optimal solution.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index cites page numbers from the print edition, which do not appear in the Kindle version, rendering it completely useless. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They are not only better formatted but free.

The original Russian edition is available online.

December 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 4. Washington: National Aeronautics and Space Administration, [1999] 2011. ISBN 978-1-4700-1437-7 NASA SP-2011-4110.
This is the fourth and final book of the author's autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. As he says in this volume, “I was born in the Russian Empire, grew up in Soviet Russia, achieved a great deal in the Soviet Union, and continue to work in Russia.” After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

This work covers the Soviet manned lunar program and the development of long-duration space stations and orbital rendezvous, docking, and assembly. As always, Chertok was there: he participated in design and testing, was present for launches and in the control centre during flights, and all too often took part in accident investigations.

In retrospect, the Soviet manned lunar program seems almost bizarre. It did not begin in earnest until two years after NASA's Apollo program was underway, and while the Gemini and Apollo programs were a step-by-step process of developing and proving the technologies and gaining the operational experience for lunar missions, the Soviet program was a chaotic bag of elements seemingly driven more by the rivalries of the various chief designers than by any coherent plan for getting to the Moon. First of all, there were two manned lunar programs, each using entirely different hardware and mission profiles. The Zond program used a modified Soyuz spacecraft launched on a Proton booster, intended to send two cosmonauts on a circumlunar mission. They would simply loop around the Moon and return to Earth without going into orbit. A total of eight of these missions were launched unmanned, and only one completed a flight which would have been safe for cosmonauts on board. After Apollo 8 accomplished a much more ambitious lunar orbital mission in December 1968, a Zond flight would simply have demonstrated how far behind the Soviets were, and the program was cancelled in 1970.

The N1-L3 manned lunar landing program was even more curious. In the Apollo program, the choice of mission mode and determination of the mass required for the lunar craft came first, and the specifications of the booster rocket followed from that. Work on Korolev's N-1 heavy lifter did not get underway until 1965, four years after the Saturn V, and it was envisioned as a general-purpose booster for a variety of military and civil space missions. Korolev wanted to use very high thrust kerosene engines on the first stage and hydrogen engines on the upper stages, as did the Saturn V, but he was involved in a feud with Valentin Glushko, who championed the use of hypergolic, high boiling point, toxic propellants and refused to work on the engines Korolev requested. Hydrogen propellant technology in the Soviet Union was in its infancy at the time, and Korolev realised that waiting for it to mature would add years to the schedule.

In need of engines, Korolev approached Nikolai Kuznetsov, a celebrated designer of jet turbine engines who had no previous experience at all with rocket engines. Kuznetsov's engines were much smaller than Korolev desired, and obtaining the required thrust took thirty engines on the first stage alone, each with its own turbomachinery and plumbing. Instead of gimballing the engines to change the thrust vector, pairs of engines on opposite sides of the stage were throttled up and down. The gargantuan scale of the lower stages of the N-1 meant they were too large to transport on the Soviet rail network, so fabrication of the rocket was done in a huge assembly hall adjacent to the launch site. A small city had to be built to accommodate the work force.

All Soviet rockets since the R-2 in 1949 had used “integral tanks”: the walls of the propellant tanks were load-bearing and formed the skin of the rocket. The scale of the N-1 was such that load-bearing tanks would have required a wall thickness which exceeded the capability of Soviet welding technology at the time, forcing a design with an external load-bearing shell and separate propellant tanks within it. This increased the complexity of the rocket and added dead weight to the design. (NASA's contractors had great difficulty welding the integral tanks of the Saturn V, but NASA simply kept throwing money at the problem until they figured out how to do it.)

The result was a rocket which was simultaneously huge, crude, and bewilderingly complicated. There was neither money in the budget nor time in the schedule to build a test stand to permit ground firings of the first stage. The first time those thirty engines fired up would be on the launch pad. Further, Kuznetsov's engines were not reusable. After every firing, they had to be torn down and overhauled, and hence were essentially new and untested engines every time they fired. The Saturn V engines, by contrast, while expended in each flight, could be and were individually test-fired, then ground-tested together, installed on the flight stage, before being stacked into a launch vehicle.

The weight and less efficient fuel of the N-1 made its performance anæmic. While it had almost 50% more thrust at liftoff than the Saturn V, its payload to low Earth orbit was 25% less. This meant that performing a manned lunar landing mission in a single launch was just barely possible. The architecture would have launched two cosmonauts in a lunar orbital ship. After entering orbit around the Moon, one would spacewalk to the separate lunar landing craft (an internal docking tunnel as used in Apollo would have been too heavy) and descend to the Moon. Fuel constraints meant the cosmonaut had only ten to fifteen seconds to choose a landing spot. After the footprints, the flag, and a few grabbed rocks, it was back to the lander for liftoff to rejoin the orbiter. Then it took another spacewalk to get back inside. Everybody involved at the time was acutely aware how marginal and risky this was, but given that the N-1 design was already frozen, and that changing it or re-architecting the mission around two or three launches would push the landing date out by four or five years, it was the only option that would not forfeit the Moon race to the Americans.

They didn't even get close. In each of its test flights, the N-1 never reached second-stage ignition (although on its last flight it came within seven seconds of that milestone). On the second test flight the engines cut off shortly after liftoff and the vehicle fell back onto the launch pad, completely obliterating it in the largest artificial non-nuclear explosion known to date: the equivalent of 7 kilotons of TNT. After four consecutive launch failures, with the Moon race lost, no other mission requiring its capabilities, and the military opposed to an expensive program for which they had no use, work on the N-1 was suspended in 1974 and the program was officially cancelled in 1976.

When I read Challenge to Apollo, what struck me was the irony that the Apollo program was the very model of a centrally-planned state-directed effort along Soviet lines, while the Soviet Moon program was full of the kind of squabbling, turf wars, and duplicative competitive efforts which Marxists decry as flaws of the free market. What astounded me in reading this book is that the Soviets were acutely aware of this in 1968. In chapter 9, Chertok recounts a Central Committee meeting in which Minister of Defence Dmitriy Ustinov remarked:

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

In addition to the Moon program, there is extensive coverage of the development of automated rendezvous and docking and the long duration orbital station programs (Almaz, Salyut, and Mir). There is also an enlightening discussion, building on Chertok's career focus on control systems, of the challenges in integrating humans and automated systems into the decision loop and coping with off-nominal situations in real time.

I could go on and on, but there is so much to learn from this narrative that I'll just urge you to read it. Even if you are not particularly interested in space, there is much experience and wisdom to be gained from it, applicable to all kinds of large, complex systems, as well as insight into how things were done in the Soviet Union. It's best to read Volume 1 (May 2012), Volume 2 (August 2012), and Volume 3 (December 2012) first, as they will introduce you to the cast of characters and the events which set the stage for those chronicled here.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is much better produced than the Kindle editions of the first three volumes. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They're free.

The original Russian edition is available online.

March 2013 Permalink

Ciszek, Walter J. with Daniel L. Flaherty. He Leadeth Me. San Francisco: Ignatius Press, [1973] 1995. ISBN 978-0-89870-546-1.
Shortly after joining the Jesuit order in 1928, the author volunteered for the “Russian missions” proclaimed by Pope Pius XI. Consequently, he received most of his training at a newly-established centre in Rome, where, in addition to the usual preparation for the Jesuit priesthood, he mastered the Russian language and the sacraments of the Byzantine rite as well as those of the Latin. At the time of his ordination in 1937, Stalin's policy prohibited the entry of priests of any kind into the Soviet Union, so Ciszek was assigned to a Jesuit mission in eastern Poland (as the Polish-American son of first-generation immigrants, he was acquainted with the Polish language). When Germany and the Soviet Union invaded Poland in 1939 at the outbreak of what was to become World War II, he found himself in the Soviet-occupied region, subject to increasingly stringent curbs on religious activities imposed by the occupation authorities.

The Soviets began to recruit labour brigades in Poland to work in factories and camps in the Urals, and the author and another priest from the mission decided to volunteer for one of these brigades, concealing their identity as priests, so as to continue their ministry to the Polish labourers and to pursue their ultimate goal of embarking on the intended mission to Russia. Upon arriving at a lumbering camp, the incognito priests found that the incessant, backbreaking work and the intense scrutiny of the camp bosses made it impossible to minister to the other labourers.

When Hitler double-crossed Stalin and invaded the Soviet Union in 1941, the Red Army was initially in disarray and Stalin apparently paralysed, but the NKVD (later to become the KGB) did what it has always done best, with great efficiency: Ciszek, along with hundreds of other innocents, was rounded up as a “German spy” and thrown into prison. When it was discovered that he was, in fact, a Catholic priest, the charge was changed to “Vatican spy”, and he was sent to the Lubyanka, where he was held throughout the entire war—five years, most of it in solitary confinement—and subjected to the relentless, incessant, and brutal interrogations for which the NKVD never seemed to lack resources, even as the Soviet Union was fighting for its survival.

After refusing to be recruited as a spy, he was sentenced to 15 years of hard labour in Siberia and shipped in a boxcar filled with hardened criminals to the first of a series of camps where only the strongest in body and spirit could survive. He served the entire 15 years less three months, and was then released with a restricted internal passport which only permitted him to live in specific areas and required him to register with the police everywhere he went. In 1947, the Jesuit order listed him as having died in a Soviet prison, but he remained on the books of the KGB, and in 1963 he was offered to the U.S. in exchange for two Soviet spies in U.S. custody, arriving back in the U.S. after twenty-three years in the Soviet Union.

In this book, as in his earlier With God in Russia, he recounts the events of his extraordinary life and provides a first-hand look at the darkest parts of a totalitarian society. Unlike the earlier book, which is more biographical, in the present volume the author uses the events he experienced as the point of departure for a very Jesuit exploration of topics including the body and soul, the priesthood, the apostolate, the kingdom of God on Earth, humility, and faith. He begins the chapter on the fear of death by observing, “Facing a firing squad is a pretty good test, I guess, of your theology of death” (p. 143).

As he notes in the Epilogue, on the innumerable occasions after his return to the U.S. when he was asked “How did you manage to survive?” and replied along the lines explained herein (by consigning his destiny to the will of God and accepting whatever came as God's will for him), many responded that “my beliefs in this matter are too simple, even naïve; they may find that my faith is not only childlike but childish.” To this he replies, “I am sorry if they feel this way, but I have written only what I know and what I have experienced. … My answer has always been—and can only be—that I survived on the basis of the faith others may find too simple and naïve” (p. 199).

Indeed, to this reader, it seemed that Ciszek's ongoing discovery that fulfillment and internal peace lay in complete submission to the will of God as revealed in the events one faces from day to day sometimes verged upon a fatalism I associate more with Islam than Catholicism. But this is the philosophy developed by an initially proud and ambitious man which permitted him not only to survive the almost unimaginable, but to achieve, to some extent, his mission to bring the word of God to those living in the officially atheist Soviet Union.

A more detailed biography with several photographs of Father Ciszek is available. Since 1990, he has been a candidate for beatification and sainthood.

May 2009 Permalink

DiLorenzo, Thomas J. The Real Lincoln. Roseville, CA: Prima Publishing, 2002. ISBN 0-7615-3641-8.

August 2002 Permalink

Doran, Jamie and Piers Bizony. Starman: the Truth Behind the Legend of Yuri Gagarin. London: Bloomsbury, 1998. ISBN 0-7475-3688-0.

January 2001 Permalink

Einstein, Albert. Autobiographical Notes. Translated and edited by Paul Arthur Schilpp. La Salle, Illinois: Open Court, [1949] 1996. ISBN 0-8126-9179-2.

July 2001 Permalink

Evans, M. Stanton. Blacklisted by History. New York: Three Rivers Press, 2007. ISBN 978-1-4000-8106-6.
In this book, the author, one of the lions of conservatism in the second half of the twentieth century, undertakes one of the most daunting tasks a historian can attempt: a dispassionate re-examination of one of the most reviled figures in modern American history, Senator Joseph McCarthy. So universal is the disdain for McCarthy among figures across the political spectrum, and so uniform is his presentation as an ogre in historical accounts, the media, and popular culture, that he has grown into a kind of legend used to scare people and intimidate those who shudder at being accused of “McCarthyism”. If you ask people about McCarthy, you'll often hear that he used the House Un-American Activities Committee to conduct witch hunts, smearing the reputations of innocent people with accusations of communism, that he destroyed the careers of people in Hollywood and caused the notorious blacklist of screen writers, and so on. None of this is so: McCarthy was in the Senate, and hence had nothing to do with the activities of the House committee, which was entirely responsible for the investigation of Hollywood, in which McCarthy played no part whatsoever. The focus of his committee, the Permanent Subcommittee on Investigations of the Government Operations Committee of the U.S. Senate, was on security policy and enforcement, first within the State Department and later within the Signal Corps of the U.S. Army. McCarthy's hearings were not focussed on smoking out covert communists in the government, but rather on investigating why communists and other security risks who had already been identified by investigations by the FBI and their employers' own internal security apparatus remained on the payroll, in sensitive policy-making positions, for years after evidence of their dubious connections and activities was brought to the attention of their employers, and in direct contravention of the published security policies of both the Truman and Eisenhower administrations.

Any book about McCarthy published in the present environment must first cut through a great deal of misinformation and propaganda which is simply false on the face of it, but which is accepted as conventional wisdom by a great many people. The author starts by telling the actual story of McCarthy, which is little known and pretty interesting. McCarthy was born on a Wisconsin farm in 1908 and dropped out of junior high school at the age of 14 to help his parents with the farm. At age 20, he entered a high school and managed to complete the full four-year curriculum in nine months, earning his diploma. Between 1930 and 1935 he worked his way through college and law school, receiving his law degree and being admitted to the Wisconsin bar in 1935. In 1939 he ran for the elective post of circuit judge and defeated a well-known incumbent, becoming, at age 30, the youngest judge in the state of Wisconsin. In 1942, after the U.S. entered World War II following Pearl Harbor, McCarthy, although exempt from the draft due to his position as a sitting judge, resigned from the bench and enlisted in the Marine Corps, and was commissioned as a second lieutenant (based upon his education) upon completion of boot camp. He served in the South Pacific as an intelligence officer with a dive bomber squadron, and flew a dozen missions as a tailgunner/photographer, earning the sobriquet “Tail-Gunner Joe”.

While still in the Marine Corps, McCarthy sought the Wisconsin Republican Senate nomination in 1944 and lost, but then in 1946 mounted a primary challenge to three-term incumbent senator Robert M. La Follette, Jr., scion of Wisconsin's first family of Republican politics, narrowly defeating him in the primary, and then won the general election in a landslide, with more than 61% of the vote. Arriving in Washington, McCarthy was perceived as a rather undistinguished moderate Republican back-bencher, and garnered little attention from the press.

All of this changed on February 9th, 1950, when he gave a speech in Wheeling, West Virginia, in which he accused the State Department of being infested with communists, and claimed to have in his hand a list of known communists who continued to work at State after their identities had been made known to the Secretary of State. Just what McCarthy actually said in Wheeling remains a matter of controversy to this day, and is covered in gruelling detail in this book. This speech, and encore performances a few days later in Salt Lake City and Reno, catapulted McCarthy onto the public stage, with intense scrutiny in the press and an uproar in Congress, leading to duelling committee investigations: those exploring the charges he made, and those looking into McCarthy himself, precisely what he said where and when, and how he obtained his information on security risks within the government. Oddly, from the outset, the focus within the Senate and executive branch seemed to be more on the latter than the former, with one inquiry digging into McCarthy's checkbook and his income tax returns and those of members of his family dating back to 1935—more than a decade before he was elected to the Senate.

The content of the hearings chaired by McCarthy is also often misreported and misunderstood. McCarthy was not primarily interested in uncovering Reds and their sympathisers within the government: that had already been done by investigations by the FBI and agency security organisations and duly reported to the executive departments involved. The focus of McCarthy's investigation was why, once these risks were identified, often with extensive documentation covering a period of many years, nothing was done, with those identified as security risks remaining on the job or, in some cases, allowed to resign without any note in their employment file, often to immediately find another post in a different government agency or in one of the international institutions which were burgeoning in the postwar years. Such an inquiry was a fundamental exercise of the power of congressional oversight over executive branch agencies, but McCarthy (and other committees looking into such matters) ran into an impenetrable stonewall of assertions of executive privilege by both the Truman and Eisenhower administrations. In 1954, the Washington Post editorialised, “The President's authority under the Constitution to withhold from Congress confidences, presidential information, the disclosure of which would be incompatible with the public interest, is altogether beyond question”. The situational ethics of the legacy press is well illustrated by comparing this Post editorial to those of two decades later, when Nixon asserted the same privilege against a congressional investigation.

Indeed, the entire McCarthy episode reveals how well established, already at the mid-century point, the ruling class government/media/academia axis was. Faced with an assault largely directed at “their kind” (East Coast, Ivy League, old money, creatures of the capital) by an uncouth self-made upstart from the windswept plains, they closed ranks, launched serial investigations and media campaigns, covered up, destroyed evidence, stonewalled, and otherwise aimed to obstruct and finally destroy McCarthy. This came to fruition when McCarthy was condemned by a Senate resolution on December 2nd, 1954. (Oddly, the usual word “censure” was not used in the resolution.) Although McCarthy remained in the Senate until his death at age 48 in 1957, he was shunned in the Senate and largely ignored by the press.

The perspective of half a century allows a retrospective on the rise and fall of McCarthy which wasn't possible in earlier accounts. Many documents relevant to McCarthy's charges, including the VENONA decrypts of Soviet cable traffic, FBI security files, and agency loyalty board investigations, have been declassified in recent years (albeit, in some cases, with lengthy “redactions”—blacked-out passages), and the author makes extensive use of these primary sources in the present work. In essence, what they demonstrate is that McCarthy was right: the documents he sought in vain, blocked by claims of executive privilege, gag orders, cover-ups, and destruction of evidence, were, in fact, persuasive evidence that the individuals he identified were genuine security risks who, under existing policy, should not have been employed in the sensitive positions they held. Because the entire “McCarthy era”, from his initial speech to his condemnation and downfall, was less than five years in length, and involved numerous investigations, counter-investigations, and re-investigations of many of the same individuals, regarding which abundant source documents have become available, the detailed accounts in this massive book (672 pages in the trade paperback edition) can become tedious on occasion. Still, if you want to understand what really happened in this crucial episode of the early Cold War, and the background behind the defining moment of the era, the conquest of China by Mao's communists, this is an essential source.

In the Kindle edition, the footnotes, which appear at the bottom of the page in the print edition, are linked to reference numbers in the text with a numbering scheme distinct from that used for source references. Each note contains a link to return to the text at the location of the note. Source citations appear at the end of the book and are not linked in the main text. The Kindle edition includes no index.

November 2010 Permalink

Farmelo, Graham. The Strangest Man. New York: Basic Books, 2009. ISBN 978-0-465-02210-6.
Paul Adrien Maurice Dirac was born in 1902 in Bristol, England. His father, Charles, was a Swiss-French immigrant who made his living as a French teacher at a local school and as a private tutor in French. His mother, Florence (Flo), had given up her job as a librarian upon marrying Charles. The young Paul and his older brother Felix found themselves growing up in a very unusual, verging upon bizarre, home environment. Their father was as strict a disciplinarian at home as in the schoolroom, and spoke only French to his children, requiring them to answer in that language and abruptly correcting them if they committed any faute de français. Flo spoke to the children only in English, and since the Diracs rarely received visitors at home, before going to school Paul got the idea that men and women spoke different languages. At dinner time Charles and Paul would eat in the dining room, speaking French exclusively (with any error swiftly chastised) while Flo, Felix, and younger daughter Betty ate in the kitchen, speaking English. Paul quickly learned that the less he said, the fewer opportunities for error and humiliation, and he traced his famous reputation for taciturnity to his childhood experience.

(It should be noted that the only account we have of Dirac's childhood comes from Dirac himself, much later in life. He made no attempt to conceal the extent to which he despised his father [who was respected by his colleagues and acquaintances in Bristol], and there is no way to know whether Paul exaggerated or embroidered upon the circumstances of his childhood.)

After a primary education in which he was regarded as a sound but not exceptional pupil, Paul followed his brother Felix into the Merchant Venturers' School, a Bristol technical school ranked among the finest in the country. There he quickly distinguished himself, ranking near the top in most subjects. The instruction was intensely practical, eschewing Latin, Greek, and music in favour of mathematics, science, geometric and mechanical drawing, and practical skills such as operating machine tools. Dirac learned physics and mathematics with the engineer's eye to “getting the answer out” as opposed to finding the most elegant solution to the problem. He then pursued his engineering studies at Bristol University, where he excelled in mathematics but struggled with experiments.

Dirac graduated with a first-class honours degree in engineering, only to find the British economy in a terrible post-war depression, the worst economic downturn since the start of the Industrial Revolution. Unable to find employment as an engineer, he returned to Bristol University to do a second degree in mathematics, where it was arranged that he could skip the first year of the programme and pay no tuition fees. Dirac quickly established himself as the star of the mathematics programme, and also attended lectures on the enigmatic quantum theory.

His father had been working in the background to secure a position at Cambridge for Paul, and after cobbling together scholarships and a gift from Charles, Dirac arrived at the university in October 1923 to pursue a doctorate in theoretical physics. Even on arrival, Dirac would have seemed strange to his fellow students. While most were scions of the upper class, classically trained, with plummy accents, Dirac knew no Latin or Greek, spoke with a Bristol accent, and approached problems as an engineer or mathematician, not a physicist. He had hoped to study Einstein's general relativity, the discovery of which had first interested him in theoretical physics, but his supervisor was interested in quantum mechanics and directed his work into that field.

It was an auspicious time for a talented researcher to undertake work in quantum theory. The “old quantum theory”, elaborated in the early years of the 20th century, had explained puzzles like the distribution of energy in heat radiation and the photoelectric effect, but by the 1920s it was clear that nature was much more subtle. For example, the original quantum theory could not fully account for the spectrum of even hydrogen, the simplest atom. Dirac began working on modest questions related to quantum theory, but his life was changed when he read Heisenberg's 1925 paper, now considered one of the pillars of the new quantum mechanics. After initially dismissing the paper as overly complicated and artificial, he came to believe that it pointed the way forward, abandoning Bohr's concept of atoms as little solar systems in favour of a probability density function which gives the likelihood that an electron will be observed at a given position. This represented not just a change in the model of the atom but the discarding of models entirely, in favour of a mathematical formulation which permitted calculating what could be observed without providing any mechanism whatsoever explaining how it worked.

After reading and fully appreciating the significance of Heisenberg's work, Dirac embarked on one of the most productive bursts of discovery in the history of modern physics. Between 1925 and 1933 he published one foundational paper after another. His Ph.D. in 1926, the first granted by Cambridge for work in quantum mechanics, linked Heisenberg's theory to the classical mechanics he had learned as an engineer and provided a framework which made Heisenberg's work more accessible. Scholarly writing did not come easily to Dirac, but he mastered the art to such an extent that his papers are still read today as examples of pellucid exposition. At a time when many contributions to quantum mechanics were rough-edged and difficult to understand even by specialists, Dirac's papers were, in the words of Freeman Dyson, “like exquisitely carved marble statues falling out of the sky, one after another.”

In 1928, Dirac took the first step to unify quantum mechanics and special relativity in the Dirac equation. The consequences of this equation led Dirac to predict the existence of a positively-charged electron, which had never been observed. This was the first time a theoretical physicist had predicted the existence of a new particle. This “positron” was observed in debris from cosmic ray collisions in 1932. The Dirac equation also interpreted the spin (angular momentum) of particles as a relativistic phenomenon.

Dirac, along with Enrico Fermi, elaborated the statistics of particles with half-integral spin (now called “fermions”). The behaviour of ensembles of one such particle, the electron, is essential to the devices you are using to read this article. He took the first steps toward a relativistic theory of light and matter and coined the name “quantum electrodynamics” for the field, but never found a theory sufficiently simple and beautiful to satisfy himself. He published The Principles of Quantum Mechanics in 1930, for many years the standard textbook on the subject and still read today. He worked out the theory of magnetic monopoles (none detected to this date) and speculated on the origin of, and possible links between, large numbers in physics and cosmology.

The significance of Dirac's work was recognised at the time. He was elected a Fellow of the Royal Society in 1930, became the Lucasian Professor of Mathematics (Newton's chair) at Cambridge in 1932, and shared the Nobel Prize in Physics for 1933 with Erwin Schrödinger. After rejecting a knighthood because he disliked being addressed by his first name, he was awarded the Order of Merit in 1973. He is commemorated by a plaque in Westminster Abbey, close to that of Newton; the plaque bears his name and the Dirac equation, the only equation so honoured.

Many physicists consider Dirac the second greatest theoretical physicist of the 20th century, after Einstein. While Einstein produced great leaps of intellectual achievement in fields neglected by others, Dirac, working alone, contributed to the grand edifice of quantum mechanics, which occupied many of the most talented theorists of a generation. You have to dig a bit deeper into the history of quantum mechanics to fully appreciate Dirac's achievement, which probably accounts for his name not being as well known as it deserves.

There is much more to Dirac, all described in this extensively-documented scientific biography. While declining to join the British atomic weapons project during World War II because he refused to work as part of a collaboration, he spent much of the war doing consulting work for the project on his own, including inventing a new technique for isotope separation. (Dirac's process proved less efficient than those eventually chosen by the Manhattan Project and was not used.) Nobody expected such an extreme introvert ever to marry, and he astonished even his closest associates when he married Manci, the sister of his fellow physicist Eugene Wigner and a Hungarian divorcée with two children by her first husband. Manci was as extroverted as Dirac was reserved, and their marriage in 1937 lasted until Dirac's death in 1984. They had two daughters together, and lived a remarkably normal family life. Dirac, who disdained philosophy in his early years, became intensely interested in the philosophy of science later in life, even arguing that mathematical beauty, not experimental results, could best guide theorists to the correct expression of the laws of nature.

Paul Dirac was a very complicated man, and this is a complicated and occasionally self-contradictory biography (but the contradiction is in the subject's life, not the fault of the biographer). This book provides a glimpse of a unique intellect whom even many of his closest associates never really felt they completely knew.

January 2015 Permalink

Fort, Adrian. Prof: The Life of Frederick Lindemann. London: Jonathan Cape, 2003. ISBN 0-224-06317-0.
Frederick Lindemann is best known as Winston Churchill's scientific advisor in the years prior to and during World War II. He was the central figure in what Churchill called the “Wizard War”, including the development and deployment of radar, antisubmarine warfare technologies, the proximity fuze, area bombing techniques, and nuclear weapons research (which was well underway in Britain before the Manhattan Project began in the U.S.). Lindemann's talents were so great and his range of interests so broad that if he had settled into the cloistered life of an Oxford don after his appointment as Professor of Experimental Philosophy and chief of the Clarendon Laboratory in 1919, he would still be remembered for his scientific work in quantum mechanics, X-ray spectra, cryogenics, photoelectric photometry in astronomy, and isotope separation, as well as for restoring Oxford's reputation in the natural sciences, which over the previous half century “had sunk almost to zero” in Lindemann's words.

Educated in Germany, he spoke German and French like a native. He helped organise the historic first Solvay Conference in 1911, which brought together the pioneers of the relativity and quantum revolutions in physics. There he met Einstein, beginning a life-long friendship. Lindemann was a world-class tennis champion and an expert golfer and squash player, as well as a virtuoso on the piano. Although a lifetime bachelor, he was known as a ladies' man and never lacked female companionship.

In World War I Lindemann tackled the problem of spin recovery in aircraft, then thought to be impossible (this in an era when pilots were not issued parachutes!). To collect data and test his theories, he learned to fly, deliberately induced spins in some of the most notoriously dangerous aircraft types, and confirmed his recovery procedure by putting his own life on the line. The procedure he developed is still taught to pilots today.

With his close contacts in Germany, Lindemann was instrumental in arranging and funding the emigration of Jewish and other endangered scientists after Hitler took power in 1933. The scientists he enabled to escape not only helped bring Oxford into the first rank of research universities; many ended up contributing to the British and U.S. atomic projects and other war research. About the only thing he ever failed at was his run for Parliament in 1937, yet his influence as confidant and advisor to Churchill vastly exceeded that of a Tory backbencher. With the outbreak of war in 1939, he joined Churchill at the Admiralty, where he organised and ran the Statistical Branch, which applied what is now called Operations Research to the conduct of the war, a rôle he expanded as chief of “S Department” after Churchill became Prime Minister in May 1940. Many of the wartime “minutes” quoted in Churchill's The Second World War were drafted by Lindemann and sent out verbatim over Churchill's signature, sometimes with the addition “Action this day”. Lindemann finally sat in Parliament, in the House of Lords, after being made Lord Cherwell in 1941; he joined the Cabinet in 1942 and became a Privy Counsellor in 1943.

After the war, Lindemann returned to Oxford, continuing to champion scientific research, and took leave to serve in Churchill's cabinet from 1951 to 1953, where he almost single-handedly, and successfully, fought the floating of the pound and advocated the establishment of an Atomic Energy Authority, on which he served for the rest of his life.

There's an atavistic tendency when writing history to focus exclusively on the person at the top, as if we still lived in the age of warrior kings, neglecting those who obtain and filter the information and develop the policies upon which the exalted leader must ultimately decide. (This is as common, if not more so, in the business press, where the cult of the CEO is well entrenched.) This biography, of somebody many people have never heard of, shows that the one essential skill a leader must have is choosing the right people to listen to and paying attention to what they say.

A paperback edition is now available.

March 2005 Permalink

Fraser, George MacDonald. Quartered Safe Out Here. New York: Skyhorse Publishing, [1992, 2001] 2007. ISBN 978-1-60239-190-1.
George MacDonald Fraser is best known as the author of the Flashman historical novels set in the 19th century. This autobiographical account of his service in the British Army in Burma during World War II is fictionalised only in that he has changed the names of those who served with him, tried to reconstruct dialogue from memory, and reconstructed events as best he can from the snapshots the mind retains from the chaos of combat and the boredom of army life between contacts with the enemy.

Fraser, though born to Scottish parents, grew up in Carlisle, England, in the region of Cumbria. When he enlisted in the army, it was in the Border Regiment, composed almost entirely of Cumbrian troops. As the author notes, “…Cumbrians of old lived by raid, cattle theft, extortion, and murder; in war they were England's vanguard, and in peace her most unruly and bloody nuisance. They hadn't changed much in four centuries, either…”. Cumbrians of the epoch retained their traditional dialect, which may seem nearly incomprehensible to those accustomed to BBC English:

No offence, lad, but ye doan't 'alf ga broon. Admit it, noo. Put a dhoti on ye, an' ye could get a job dishin 'oot egg banjoes at Wazir Ali's. Any roads, w'at Ah'm sayin' is that if ye desert oot 'ere — Ah mean, in India, ye'd 'ev to be dooally to booger off in Boorma — the ridcaps is bound to cotch thee, an' court-martial gi'es thee the choice o' five years in Teimulghari or Paint Joongle, or coomin' oop t'road to get tha bollicks shot off. It's a moog's game. (p. 71)

A great deal of the text is dialogue in dialect, and if you find that difficult to get through, it may be rough going. I usually dislike reading dialect, but agree with the author that if it had been rendered into standard English the whole flavour of his experience would have been lost. Soldiers swear, and among Cumbrians profanity is as much a part of speech as nouns and verbs; if this offends you, this is not your book.

This is one of the most remarkable accounts of infantry combat I have ever read. Fraser was a grunt—he never rose above the rank of lance corporal during the events chronicled in the book and usually was busted back to private before long. The campaign in Burma was largely ignored by the press while it was underway and forgotten thereafter, but for those involved it was warfare at the most visceral level: combat harking back to the colonial era, fought by riflemen without armour or air support. Kipling of the 1890s would have understood precisely what was going on. On the ground, Fraser and his section had little idea of the larger picture or where their campaign fit into the overall war effort. All they knew was that they were charged with chasing the Japanese out of Burma and that “Jap” might be “half-starved and near naked, and his only weapon was a bamboo stake, but he was in no mood to surrender.” (p. 191)

This was a time when the most ordinary men from Britain and the Empire fought to defend what they confidently believed was the pinnacle of civilisation from the forces of barbarism and darkness. While constantly griping about everything, as soldiers are wont to do, when the time came they shouldered their packs, double-checked their rifles, and went out to do the job. From time to time the author reflects on how far Britain, and the rest of the West, has fallen: “One wonders how Londoners survived the Blitz without the interference of unqualified, jargon-mumbling ‘counsellors’, or how an overwhelming number of 1940s servicemen returned successfully to civilian life without benefit of brain-washing.” (p. 89)

Perhaps it helps that the author is a master of the historical novel: this account does a superb job of relating events as they happened and were perceived at the time without relying on hindsight to establish a narrative. While he doesn't abjure the occasional reflexion from decades later or reference to regimental history documents, for most of the account you are there—hot, wet, filthy, constantly assailed by insects, and never knowing whether that little sound you heard was just a rustle in the jungle or a Japanese patrol ready to attack with the savagery which comes when an army knows its cause is lost, evacuation is impossible, and surrender is unthinkable.

But this is not all boredom and grim combat. The account of the air drop of supplies starting on p. 96 is one of the funniest passages I've ever read in a war memoir. Cumbrians will be Cumbrians!

August 2013 Permalink

Freeh, Louis J. with Howard Means. My FBI. New York: St. Martin's Press, 2005. ISBN 0-312-32189-9.
This may be one of the most sanctimonious and self-congratulatory books ever written by a major U.S. public figure who is not Jimmy Carter. Not only is the book titled “My FBI” (gee, I always thought it was supposed to belong to the U.S. taxpayers who pay the G-men's salaries and buy the ammunition they expend), in the preface, where the author explains why he reversed his original decision not to write a memoir of his time at the FBI, he uses the words “I”, “me”, “my”, and “myself” a total of 91 times in four pages.

Only about half of the book covers Freeh's 1993–2001 tenure as FBI director; the rest is a straightforward autohagiography of his years as an altar boy, Eagle Scout, idealistic but apolitical law student during the turbulent early 1970s, FBI agent, crusading anti-Mafia federal prosecutor in New York City, and hard-working U.S. district judge, before being appointed to the FBI job by Bill Clinton, who promised him independence and freedom from political interference in the work of the Bureau. Little did Freeh expect, when accepting the job, that he would spend much of his time in the coming years investigating the Clintons and their cronies. The tawdry and occasionally bizarre stories of those events as seen from the FBI fill a chapter and set the background for the tense relations between the White House and FBI on other matters such as terrorism and counter-intelligence. The Oklahoma City and Saudi Arabian Khobar Towers bombings, the Atlanta Olympics bomb, the identification and arrest of Unabomber Ted Kaczynski, and the discovery of long-term Soviet mole Robert Hanssen in the FBI all occurred on Freeh's watch; he provides a view of these events and the governmental turf battles they engendered from the perspective of the big office in the Hoover Building, but there's little or no new information about the events themselves. Freeh resigned the FBI directorship in June 2001, and September 11th of that year was his first day at his new job. (What do you do after nine years running the FBI? Go to work for a credit card company!) In a final chapter, he provides a largely exculpatory account of the FBI's involvement in counter-terrorism and what might have been done to prevent such terrorist strikes. He directly attacks Richard A. Clarke and his book Against All Enemies as a self-aggrandising account by a minor player including some outright fabrications.

Freeh's book provides a peek into the mind of a self-consciously virtuous top cop—if only those foolish politicians and their paranoid constituents would sign over the last shreds of their liberties and privacy (on p. 304 he explicitly pitches for key escrow and back doors in encryption products, arguing “there's no need for this technology to be any more intrusive than a wiretap on a phone line”—indeed!), the righteous and incorruptible enforcers of the law and impartial arbiters of justice could make their lives ever so much safer and fret-free. And perhaps if the human beings in possession of those awesome powers were, in fact, as righteous as Mr. Freeh seems to believe himself to be, then there would be nothing to worry about. But evidence suggests cause for concern. On the next to last page of the book, p. 324, near the end of six pages of acknowledgements set in small type with narrow leading (didn't think we'd read that far, Mr. Freeh?), we find the author naming, as an exemplar of one of the “courageous and honorable men who serve us”, who “deserve the nation's praise and lasting gratitude”, one Lon Horiuchi, the FBI sniper who shot and killed Vicki Weaver (who was accused of no crime) while she was holding her baby in her arms during the Ruby Ridge siege in August of 1992. Horiuchi later pled the Fifth Amendment in testimony before the U.S. Senate Judiciary Committee in 1995, ten years prior to Freeh's commendation of him here.

March 2006 Permalink

Gémignani, Anne-Marie. Une femme au royaume des interdits. Paris: Presses de la Renaissance, 2003. ISBN 2-85616-888-4.

March 2003 Permalink

Gergel, Max G. Excuse Me Sir, Would You Like to Buy a Kilo of Isopropyl Bromide? Rockford, IL: Pierce Chemical Company, 1979. OCLC 4703212.
Throughout Max Gergel's long career he has been an unforgettable character for all who encountered him in the many rôles he has played: student, bench chemist, instructor of aviation cadets, entrepreneur, supplier to the Manhattan Project, buyer and seller of obscure reagents for a global clientele, consultant to industry, travelling salesman peddling products ranging from exotic halocarbons to roach killer and toilet bowl cleaner, and evangelist persuading young people to pursue careers in chemistry. With family and friends (and no outside capital) he founded Columbia Organic Chemicals, a specialty chemical supplier concentrating on halocarbons but, operating on a shoestring, willing to make almost anything a customer was ready to purchase (even Max drew the line, however, when the silver-tongued director of the Naval Research Laboratory tried to persuade him to make pentaborane).

The narrative is as rambling and entertaining as one imagines sharing a couple (or a couple dozen) drinks with Max at an American Chemical Society meeting would have been. He jumps from family to friends to finances to business to professional colleagues to suppliers to customers to nuggets of wisdom for starting and building a business to eccentric characters he has met and worked with to his love life to the exotic and sometimes bone-chilling chemical syntheses he did in his company's rough and ready facilities. Many of Columbia's contracts involved production of moderate quantities (between a kilogram and several 55 gallon drums) of substances previously made only in test tube batches. This “medium scale chemistry”—situated between the laboratory bench and an industrial facility making tank car loads of the stuff—involves as much art (or, failing that, brute force and cunning) as it does science and engineering, and this leads to many of the adventures and misadventures chronicled here. For example, an exothermic reaction may be simple to manage when you're making a few grams of something—the liberated heat is simply conducted to the walls of the test tube and dissipated; at worst you may only need to add the reagent slowly, stir well, and/or place the reaction vessel in a water bath. But when DuPont placed an order for allene in gallon quantities, this posed a problem which Max resolved as follows.

When one treats 1,2,3-Trichloropropane with alkali and a little water the reaction is violent; there is a tendency to deposit the reaction product, the raw materials and the apparatus on the ceiling and the attending chemist. I solved this by setting up duplicate 12 liter flasks, each equipped with double reflux condensers and surrounding each with half a dozen large tubs. In practice, when the reaction “took off” I would flee through the door or window and battle the eruption with water from a garden hose. The contents flying from the flasks were deflected by the ceiling and collected under water in the tubs. I used towels to wring out the contents which separated, shipping the lower level to DuPont. They complained of solids suspended in the liquid, but accepted the product and ordered more. I increased the number of flasks to four, doubled the number of wash tubs and completed the new order.

They ordered a 55 gallon drum. … (p. 127)
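
The scale-up problem Gergel faced here is, at bottom, the square–cube law (my gloss, not his): for a given reaction, the heat released grows with the volume of the batch while the surface available to shed it grows only with the area of the vessel walls, so for a vessel of characteristic size $L$,

$$\frac{\text{heat generated}}{\text{heat dissipated}} \propto \frac{L^{3}}{L^{2}} = L,$$

which is why a preparation that is docile in a test tube can erupt from a 12 liter flask onto the ceiling and the attending chemist.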

All of this was in the days before the EPA, OSHA, and the rest of the suffocating blanket of soft despotism descended upon entrepreneurial ventures in the United States that actually did things and made stuff. In the 1940s and '50s, when Gergel was building his business in South Carolina, he was free to adopt the “whatever it takes” attitude which is the quintessential ingredient for success in start-ups and small business. The flexibility and ingenuity which allowed Gergel not only to compete with the titans of the chemical industry but become a valued supplier to them is precisely what is extinguished by intrusive regulation, which accounts for why sclerotic dinosaurs are so comfortable with it. On the other hand, Max's experience with methyl iodide illustrates why some of these regulations were imposed:

There is no description adequate for the revulsion I felt over handling this musky smelling, high density, deadly liquid. As residue of the toxicity I had chronic insomnia for years, and stayed quite slim. The government had me questioned by Dr. Rotariu of Loyola University for there had been a number of cases of methyl bromide poisoning and the victims were either too befuddled or too dead to be questioned. He asked me why I had not committed suicide which had been the final solution for some of the afflicted and I had to thank again the patience and wisdom of Dr. Screiber. It is to be noted that another factor was our lack of a replacement worker. (p. 130)

Whatever it takes.

This book was published by Pierce Chemical Company and was never, as best I can determine, assigned either an ISBN or Library of Congress catalogue number. I cite it above by its OCLC Control Number. The book is hopelessly out of print, and used copies, when available, sell for forbidding prices. Your only alternative to lay hands on a print copy is an inter-library loan, for which the OCLC number is a useful reference. (I hear members of the write-off generation asking, “What is this ‘library’ of which you speak?”) I found a scanned PDF edition in the library section of the Sciencemadness.org Web site; the scanned pages are sometimes a little gnarly around the bottom, but readable. You will also find the second volume of Gergel's memoirs, The Ageless Gergel, among the works in this collection.

May 2012 Permalink

Gleick, James. Isaac Newton. New York: Pantheon Books, 2003. ISBN 0-375-42233-1.
Fitting a satisfying biography of one of the most towering figures in the history of the human intellect into fewer than 200 pages is a formidable undertaking, which James Gleick has accomplished magnificently here. Newton's mathematics and science are well covered, placing each in the context of the “shoulders of Giants” which he said helped him see further, but also his extensive (and little known, prior to the twentieth century) investigations into alchemy, theology, and ancient history. His battles with Hooke, Leibniz, and Flamsteed, autocratic later years as Master of the Royal Mint and President of the Royal Society and ceaseless curiosity and investigation are well covered, as well as his eccentricity and secretiveness. I'm a little dubious of the discussion on pp. 186–187 where Newton is argued to have anticipated or at least left the door open for relativity, quantum theory, equivalence of mass and energy, and subatomic forces. Newton wrote millions of words on almost every topic imaginable, most for his own use with no intention of publication, few examined by scholars until centuries after his death. From such a body of text, it may be possible to find sentences here and there which “anticipate” almost anything when you know from hindsight what you're looking for. In any case, the achievements of Newton, who not only laid the foundation of modern physical science, invented the mathematics upon which much of it is based, and created the very way we think about and do science, need no embellishment. The text is accompanied by 48 pages of endnotes (the majority citing primary sources) and an 18 page bibliography. A paperback edition is now available.

November 2004 Permalink

Goldsmith, Barbara. Obsessive Genius. New York: W. W. Norton, 2005. ISBN 978-0-393-32748-9.
Maria Salomea Skłodowska was born in 1867 in Warsaw, Poland, then part of the Russian Empire. She was the fifth and last child born to her parents, Władysław and Bronisława Skłodowski, both teachers. Both parents were members of a lower class of the aristocracy called the Szlachta, but had lost their wealth through involvement in the Polish nationalist movement opposed to Russian rule. They retained the love of learning characteristic of their class, and had independently obtained teaching appointments before meeting and marrying. Their children were raised in an intellectual atmosphere, with their father reading books aloud to them in Polish, Russian, French, German, and English, all languages in which he was fluent.

During Maria's childhood, her father lost his teaching position after his anti-Russian sentiments and activities were discovered, and supported himself by operating a boarding school for boys from the provinces. In cramped and less than sanitary conditions, one of the boarders infected two of the children with typhus: Maria's sister Zofia died. Three years later, her mother, Bronisława, died of tuberculosis. Maria experienced her first episode of depression, a malady which would haunt her throughout life.

Despite having graduated from secondary school with honours, Maria and her sister Bronisława could not pursue their education in Poland, as the universities did not admit women. Maria made an agreement with her older sister: she would support Bronisława's medical education at the Sorbonne in Paris in return for Bronisława supporting Maria's studies there after she graduated and entered practice. Maria worked as a governess, supporting Bronisława. Finally, in 1891, she was able to travel to Paris and enroll in the Sorbonne. On the registration forms, she signed her name as “Marie”.

One of just 23 women among the two thousand enrolled in the School of Sciences, Marie studied physics, chemistry, and mathematics under an eminent faculty including luminaries such as Henri Poincaré. In 1893, she earned her degree in physics, one of only two women to graduate with a science degree that year, and in 1894 obtained a second degree in mathematics, ranking second in her class.

Finances remained tight, and Marie was delighted when one of her professors, Gabriel Lippmann, arranged for her to receive a grant to study the magnetic properties of different kinds of steel. She set to work on the project but made little progress because the equipment she was using in Lippmann's laboratory was cumbersome and insensitive. A friend recommended she contact a little-known physicist who was an expert on magnetism in metals and had developed instruments for precision measurements. Marie arranged to meet Pierre Curie to discuss her work.

Pierre was working at the School of Industrial Physics and Chemistry of the City of Paris (EPCI), an institution much less prestigious than the Sorbonne, in a laboratory which the visiting Lord Kelvin described as “a cubbyhole between the hallway and a student laboratory”. Still, he had major achievements to his credit. In 1880, with his brother Jacques, he had discovered the phenomenon of piezoelectricity, the interaction between electricity and mechanical stress in solids. Piezoelectricity is now the foundation of many technologies; the brothers used it to build an electrometer much more sensitive than previous instruments. Pierre's doctoral dissertation on the effects of temperature on the magnetism of metals introduced the concept of a critical temperature, different for each metal or alloy, at which permanent magnetism is lost. This is now called the Curie temperature.

When Pierre and Marie first met, they were immediately taken with one another: both from families of modest means, largely self-educated, and fascinated by scientific investigation. Pierre rapidly fell in love and was determined to marry Marie, but she, having been rejected in an earlier relationship in Poland, was hesitant and still planned to return to Warsaw. Pierre eventually persuaded Marie, and the two were married in July 1895. Marie was given a small laboratory space in the EPCI building to pursue work on magnetism, and henceforth the Curies would be a scientific team.

In the final years of the nineteenth century “rays” were all the rage. In 1895, Wilhelm Conrad Röntgen discovered penetrating radiation produced by accelerating electrons (then known only as “cathode rays”, since the electron itself would not be identified until 1897) into a metal target. He called them “X-rays”, using “X” as the symbol for the unknown. The following year, Henri Becquerel discovered that a sample of uranium salts could expose a photographic plate even if the plate were wrapped in a black cloth. In 1897 he published six papers on these “Becquerel rays”. Both discoveries were completely accidental.

The year that Marie was ready to begin her doctoral research, 65 percent of the papers presented at the Academy of Sciences in Paris were devoted to X-rays. Pierre suggested that Marie investigate the Becquerel rays produced by uranium, as they had been largely neglected by other scientists. She began a series of experiments using an electrometer designed by Pierre. The instrument was sensitive but exasperating to operate: Lord Rayleigh later wrote that electrometers were “designed by the devil”. Patiently, Marie measured the rays produced by uranium and then moved on to test samples of other elements. Among them, only thorium produced detectable rays.

She then made a puzzling observation. Uranium was produced from an ore called pitchblende. When she tested a sample of the residue of pitchblende from which all of the uranium had been extracted, she measured rays four times as energetic as those from pure uranium. She inferred that there must be a substance, perhaps a new chemical element, remaining in the pitchblende residue which was more radioactive than uranium. She then tested a thorium ore and found it also to produce rays more energetic than pure thorium. Perhaps here was yet another element to be discovered.

In March 1898, Marie wrote a paper in which she presented her measurements of the uranium and thorium ores, introduced the word “radioactivity” to describe the phenomenon, put forth the hypothesis that one or more undiscovered elements were responsible, suggested that radioactivity could be used to discover new elements, and, based upon her observations that radioactivity was unaffected by chemical processes, argued that it must be “an atomic property”. Neither Pierre nor Marie was a member of the Academy of Sciences; Marie's former professor, Gabriel Lippmann, presented the paper on her behalf.

It was one thing to hypothesise the existence of a new element or elements, and entirely another to isolate the element and determine its properties. Ore, like pitchblende, is a mix of chemical compounds. Starting with ore from which the uranium had been extracted, the Curies undertook a process to chemically separate these components. Those found to be radioactive were then distilled to increase their purity. With each distillation their activity increased. They finally found two of these fractions contained all the radioactivity. One was chemically similar to barium, while the other resembled bismuth. Measuring the properties of the fractions indicated they must be a mixture of the new radioactive elements and other, lighter elements.

To isolate the new elements, a process called “fractionation” was undertaken. When crystals form from a solution, the lighter elements tend to crystallise first. By repeating this process, the heavier elements could slowly be concentrated. With each fractionation the radioactivity increased. Working with the fraction which behaved like bismuth, the Curies eventually purified it to be 400 times as radioactive as uranium. No spectrum of the new element could yet be determined, but the Curies were sufficiently confident in the presence of a new element to publish a paper in July 1898 announcing the discovery and naming the new element “polonium” after Marie's native Poland. In December, working with the fraction which chemically resembled barium, they produced a sample 900 times as radioactive as uranium. This time a clear novel spectral line was found, and at the end of December 1898 they announced the discovery of a second new element, which they named “radium”.

Two new elements had been discovered, with evidence sufficiently persuasive that their existence was generally accepted. But the existing samples were known to be impure. The physical and chemical properties of the new elements, allowing their places in the periodic table to be determined, would require removal of the impurities and isolation of pure samples. The same process of fractionation could be used, but since it quickly became clear that the new radioactive elements were a tiny fraction of the samples in which they had been discovered, it would be necessary to scale up the process to something closer to an industrial scale. (The sample in which radium had been identified was 900 times more radioactive than uranium. Pure radium was eventually found to be ten million times as radioactive as uranium.)
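
A rough back-of-the-envelope calculation (my own arithmetic, not from the book, and assuming measured activity scales roughly linearly with radium content) shows just how dilute the radium in the discovery sample was:

$$\frac{900}{10^{7}} \approx 9\times10^{-5},$$

that is, on the order of one part in ten thousand, which is why tons of residue had to be processed to yield a tenth of a gram of pure radium salt.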

Pierre learned that the residue from extracting uranium from pitchblende was dumped in a forest near the uranium mine. He arranged to have the Austrian government donate the material at no cost, and found the funds to ship it to the laboratory in Paris. Now, instead of test tubes, they were working with tons of material. Pierre convinced a chemical company to perform the first round of purification, persuading them that other researchers would be eager to buy the resulting material. Eventually, they delivered twenty kilogram lots of material to the Curies which were fifty times as radioactive as uranium. From there the Curie laboratory took over the subsequent purification. After four years of processing ten tons of pitchblende residue and hundreds of tons of rinsing water through thousands of fractionations, the Curies obtained one tenth of a gram of radium chloride sufficiently pure to measure its properties. In July 1902 Marie announced the isolation of radium and placed it on the periodic table as element 88.

In June of 1903, Marie defended her doctoral thesis, becoming the first woman in France to obtain a doctorate in science. With the discovery of radium, the source of the enormous energy it and other radioactive elements released became a major focus of research. Ernest Rutherford argued that radioactivity was a process of “atomic disintegration” in which one element was spontaneously transmuting to another. The Curies originally doubted this hypothesis, but after repeating the experiments of Rutherford, accepted his conclusion as correct.

In 1903, the Nobel Prize for Physics was shared by Marie and Pierre Curie and Henri Becquerel, awarded for the discovery of radioactivity. The discovery of radium and polonium was not mentioned. Marie embarked on the isolation of polonium, and within two years produced a sample sufficiently pure to place it as element 84 on the periodic table with an estimate of its half-life of 140 days (the modern value is 138.4 days). Polonium is about 5000 times as radioactive as radium. Polonium and radium found in nature are the products of decay of primordial uranium and thorium. Their half-lives are so short (radium's is 1600 years) that any present at the Earth's formation has long since decayed.
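
The “about 5000 times” figure follows directly from the half-lives, since specific activity scales inversely with half-life (and, more weakly, with atomic mass). A quick order-of-magnitude check, my own rather than the author's:

$$\frac{a_{\mathrm{Po}}}{a_{\mathrm{Ra}}} \approx \frac{t_{1/2}(\mathrm{Ra\text{-}226})}{t_{1/2}(\mathrm{Po\text{-}210})}\cdot\frac{226}{210} = \frac{1600 \times 365.25\ \mathrm{d}}{138.4\ \mathrm{d}}\times\frac{226}{210} \approx 4.5\times10^{3}.$$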

After the announcement of the discovery of radium and the Nobel prize, the Curies, and especially Marie, became celebrities. Awards, honorary doctorates, and memberships in the academies of science of several countries followed, along with financial support and the laboratory facilities they had lacked while performing the work which won them such acclaim. Radium became a popular fad, hailed as a cure for cancer and other diseases, a fountain of youth, and promoted by quacks promising all kinds of benefits from the nostrums they peddled, some of which, to the detriment of their customers, actually contained minute quantities of radium.

Tragedy struck in April 1906 when Pierre was killed in a traffic accident: run over on a Paris street in a heavy rainstorm by a wagon pulled by two horses. Marie was inconsolable, immersing herself in laboratory work and neglecting her two young daughters. Her spells of depression returned. She continued to explore the properties of radium and polonium and worked to establish a standard unit to measure radioactive decay, calibrated by radium. (This unit is now called the curie, but is no longer defined based upon radium and has been replaced by the becquerel, which is simply an inverse second.) Marie Curie was not interested or involved in the work to determine the structure of the atom and its nucleus or the development of quantum theory. The Curie laboratory continued to grow, but focused on production of radium and its applications in medicine and industry. Lise Meitner applied for a job at the laboratory and was rejected. Meitner later said she believed that Marie thought her a potential rival to Curie's daughter Irène. Meitner joined the Kaiser Wilhelm Institute in Berlin and went on to co-discover nuclear fission. The only two chemical elements named in whole or part for women are curium (element 96, named for both Pierre and Marie) and meitnerium (element 109).
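
For reference (modern definitions, my addition rather than the book's), the unit named for the Curies was eventually fixed at a value chosen to approximate the activity of one gram of radium, while the SI becquerel is one disintegration per second:

$$1\ \mathrm{Ci} = 3.7\times10^{10}\ \mathrm{Bq}.$$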

In 1910, after three years of work with André-Louis Debierne, Marie managed to produce a sample of metallic radium, allowing a definitive measurement of its properties. In 1911, she won a second Nobel prize, unshared, in chemistry, for the isolation of radium and polonium. At the moment of triumph, news broke of a messy affair she had been carrying on with Pierre's successor at the EPCI, Paul Langevin, a married man. The popular press, who had hailed Marie as a towering figure of French science, went after her with bared fangs and mockery, and she went into seclusion under an assumed name.

During World War I, she invented and promoted the use of mobile field X-ray units (called “Les Petites Curies”) and won acceptance for women to operate them near the front, with her daughter Irène assisting in the effort. After the war, her reputation largely rehabilitated, Marie not only accepted but contributed to the growth of the Curie myth, seeing it as a way to fund her laboratory and research. Irène took the lead at the laboratory.

As co-discoverer of the phenomenon of radioactivity and two chemical elements, Curie's achievements were well recognised. She was the first woman to win a Nobel prize, the first person to win two Nobel prizes, and the only person so far to win Nobel prizes in two different sciences. (The third woman to win a Nobel prize was her daughter, Irène Joliot-Curie, for the discovery of artificial radioactivity.) She was the first woman to be appointed a full professor at the Sorbonne.

Marie Curie died of anæmia in 1934, probably brought on by exposure to radiation over her career. She took few precautions, and her papers and personal effects remain radioactive to this day. Her legacy is one of dedication and indefatigable persistence in achieving the goals she set for herself, regardless of the scientific and technical challenges and the barriers women faced at the time. She demonstrated that pure persistence, coupled with a brilliant intellect, can overcome formidable obstacles.

April 2016 Permalink

Guderian, Heinz. Panzer Leader. New York: Da Capo Press, 1996. ISBN 0-306-80689-4.

February 2001 Permalink

Haffner, Sebastian [Raimund Pretzel]. Defying Hitler. New York: Picador, [2000] 2003. ISBN 978-0-312-42113-7.
In 1933, the author was pursuing his ambition to follow his father into a career in the Prussian civil service. While completing his law degree, he had obtained a post as a Referendar, the lowest rank in the civil service, performing what amounted to paralegal work for higher ranking clerks and judges. He enjoyed the work, especially doing research in the law library and drafting opinions, and was proud to be a part of the Prussian tradition of an independent judiciary. He had no strong political views nor much interest in politics. But, as he says, “I have a fairly well developed figurative sense of smell, or to put it differently, a sense of the worth (or worthlessness!) of human, moral, political views and attitudes. Most Germans unfortunately lack this sense almost completely.”

When Hitler came to power in January 1933, “As for the Nazis, my nose left me with no doubts. … How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset. What was not at all clear to me was what terrible enemies they would turn out to be.” Initially, little changed: it was a “matter for the press”. The new chancellor might rant to enthralled masses about the Jews, but in the court where Haffner clerked, a Jewish judge continued to sit on the bench and work continued as before. He hoped that the political storm on the surface would leave the depths of the civil service unperturbed. This was not to be the case.

Haffner was a boy during the First World War, and, like many of his schoolmates, saw the war as a great adventure which unified the country. Coming of age in the Weimar Republic, he experienced the great inflation of 1921–1924 as up-ending the society: “Amid all the misery, despair, and poverty there was an air of light-headed youthfulness, licentiousness, and carnival. Now, for once, the young had money and the old did not. Its value lasted only a few hours. It was spent as never before or since; and not on the things old people spend their money on.” A whole generation whose ancestors had grown up in a highly structured society where most decisions were made for them now were faced with the freedom to make whatever they wished of their private lives. But they had never learned to cope with such freedom.

After the Reichstag fire and the Nazi-organised boycott of Jewish businesses (enforced by SA street brawlers standing in doors and intimidating anybody who tried to enter), the fundamental transformation of the society accelerated. Working in the library at the court building, Haffner is shocked to see this sanctum of jurisprudence defiled by the SA, who had come to eject all Jews from the building. A Jewish colleague is expelled from university, fired from the civil service, and opts to emigrate.

The chaos of the early days of the Nazi ascendency gives way to Gleichschaltung, the systematic takeover of all institutions by placing Nazis in key decision-making positions within them. Haffner sees the Prussian courts, which famously stood up to Frederick the Great a century and a half before, meekly toe the line.

Haffner begins to consider emigrating from Germany, but his father urges him to complete his law degree before leaving. His close friends among the Referendars run the gamut from Communist sympathisers to ardent Nazis. As he is preparing for the Assessor examination (the next rank in the civil service, and the final step for a law student), he is called up for mandatory political and military indoctrination now required for the rank. The barrier between the personal, professional, and political had completely fallen. “Four weeks later I was wearing jackboots and a uniform with a swastika armband, and spent many hours each day marching in a column in the vicinity of Jüterbog.”

He discovers that, despite his viewing the Nazis as essentially absurd, there is something about order, regimentation, discipline, and forced camaraderie that resonates in his German soul.

Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue.

That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain and questions were raised in Parliament why the author of such an anti-Nazi work was interned: he was released in August, 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout. After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”.

This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that.

This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available.

Thanks to Glenn Beck for recommending this book.

June 2017 Permalink

Harden, Blaine. Escape from Camp 14. New York: Viking Penguin, 2012. ISBN 978-0-14-312291-3.
Shin Dong-hyuk was born in a North Korean prison camp. The doctrine of that collectivist Hell-state, as enunciated by tyrant Kim Il Sung, is that “[E]nemies of class, whoever they are, their seed must be eliminated through three generations.” Shin (I refer to him by his family name, as he prefers) committed no crime, but was born into slavery in a labour camp because his parents had been condemned to servitude there due to supposed offences. Shin grew up in an environment so anti-human it would send shivers of envy down the spines of Western environmentalists. In school, he saw a teacher beat a six-year-old classmate to death with a blackboard pointer because she had stolen and hidden five kernels of maize. He witnessed the hanging of his mother and the execution by firing squad of his brother because they were caught contemplating escape from the camp, and he felt only detestation of them because their actions would harm him.

Shin was imprisoned and tortured due to association with his mother and brother, and assigned to work details where accidents which killed workers were routine. Shin accepted this as simply the way life was—he knew nothing of life outside the camp or in the world beyond his slave state. This changed when he made the acquaintance of Park Yong Chul, sent to the camp for some reason after a career which had allowed him to travel abroad and meet senior people in the North Korean ruling class. While working together in the camp's garment factory, Park introduced Shin to a wider world and set him to thinking about escaping the camp. The fact that Shin, who had been recruited to observe Park and inform upon any disloyalty he observed, instead began to conspire with him to escape the camp was the signal act of defiance against tyranny which changed Shin's life.

Shin pulled off a harrowing escape from the camp which left him severely injured, lived by his wits crossing the barren countryside of North Korea, and made it across the border to China, where he worked as a menial farm hand and yet lived in luxury unheard of in North Korea. Raised in the camp, his expectations for human behaviour had nothing to do with the reality outside. As the author observes, “Freedom, in Shin's mind, was just another word for grilled meat.”

Freedom, beyond grilled meat, was something Shin found difficult to cope with, even after making his way to South Korea (where the state has programs to integrate North Korean escapees into the society) and then to the United States (where, as the only person born in a North Korean prison camp ever to escape, he was a celebrity among groups advocating for human rights in North Korea). Growing up in an intensely anti-human environment, cut off from all information about the outside world, made it difficult for him to cope with normal human interactions and the flood of information those born into liberty consider normal.

Much as with Nothing to Envy (September 2011), this book made my blood boil. It is not just the injustice visited upon Shin and all the prisoners of the regime who did not manage to escape, but those in our own societies who would condemn us to comparable servitude in the interest of a “higher good” as they define it.

May 2013 Permalink

Hayward, Steven F. The Real Jimmy Carter. Washington: Regnery Publishing, 2004. ISBN 0-89526-090-5.
In the acknowledgements at the end, the author says one of his motivations for writing this book was to acquaint younger readers and older folks who've managed to forget with the reality of Jimmy Carter's presidency. Indeed, unless one lived through it, it's hard to appreciate how Carter's formidable intellect allowed him to quickly grasp the essentials of a situation, absorb vast amounts of detailed information, and then immediately, intuitively leap to the absolutely worst conceivable course of action. It's all here: his race-baiting 1970 campaign for governor of Georgia; the Playboy interview; “ethnic purity”; “I'll never lie to you”; the 111 page list of campaign promises; alienating the Democratic controlled House and Senate before inaugural week was over; stagflation; gas lines; the Moral Equivalent of War (MEOW); turning down the thermostat; spending Christmas with the Shah of Iran, “an island of stability in one of the more troubled areas of the world”; Nicaragua; Afghanistan; “malaise” (which he actually never said, but will be forever associated with his presidency); the cabinet massacre; kissing Brezhnev; “Carter held Hostage”, and more. There is a side-splitting account of the “killer rabbit” episode on page 155. I'd have tried to work in Billy Beer, but I guess you gotta stop somewhere. Carter's post-presidential career, with its hobnobbing with dictators, loose-cannon freelance diplomacy, and connections with shady middle-east financiers including BCCI, is covered along with his admirable humanitarian work with Habitat for Humanity. That this sanctimonious mountebank whom The New Republic, hardly a right wing mouthpiece, called “a vain, meddling, amoral American fool” in 1995 after he expressed sympathy for Serbian ethnic cleanser Radovan Karadzic, managed to win the Nobel Peace Prize, only bears out the assessment of Carter made decades earlier by notorious bank robber Willie Sutton: “I've never seen a bigger confidence man in my life, and I've been around some of the best in the business.”

October 2004 Permalink

Hayward, Steven F. Greatness. New York: Crown Forum, 2005. ISBN 0-307-23715-X.
This book, subtitled “Reagan, Churchill, and the Making of Extraordinary Leaders”, examines the parallels between the lives and careers of these two superficially very different men, in search of the roots of the greatness they achieved despite both having been underestimated and disdained by their contemporaries (which historical distance has caused many to forget in the case of Churchill, a fact of which Hayward usefully reminds the reader) and being considered too old for the challenges facing them when they arrived at the summit of power.

The beginning of the Cold War was effectively proclaimed by Churchill's 1946 “Iron Curtain” speech in Fulton, Missouri, and its end foretold by Reagan's “Tear Down this Wall” speech at the Berlin wall in 1987. (Both speeches are worth reading in their entirety, as they have much more to say about the world of their times than the sound bites from them you usually hear.) Interestingly, both speeches were greeted with scorn, and much of Reagan's staff considered it fantasy to imagine and an embarrassment to suggest the Berlin wall falling in the foreseeable future.

Only one chapter of the book is devoted to the Cold War; the bulk explores the experiences which formed the character of these men, their self-education in the art of statecraft, their remarkably similar evolution from youthful liberalism in domestic policy to stalwart confrontation of external threats, and their ability to talk over the heads of the political class directly to the population and instill their own optimism when so many saw only disaster and decline ahead for their societies. Unlike the vast majority of their contemporaries, neither Churchill nor Reagan considered Communism as something permanent—both believed it would eventually collapse due to its own, shall we say, internal contradictions. This short book provides an excellent insight into how they came to that prophetic conclusion.

January 2006 Permalink

Herken, Gregg. Brotherhood of the Bomb. New York: Henry Holt, 2002. ISBN 0-8050-6589-X.
What more's to be said about the tangled threads of science, politics, ego, power, and history that bound together the lives of Ernest O. Lawrence, J. Robert Oppenheimer, and Edward Teller from the origin of the Manhattan Project through the postwar controversies over nuclear policy and the development of thermonuclear weapons? In fact, a great deal, as declassification of FBI files, including wiretap transcripts, release of decrypted Venona intercepts of Soviet espionage cable traffic, and the opening of Moscow archives to researchers since the collapse of the Soviet Union have provided a wealth of original source material illuminating previously dark corners of the epoch.

Gregg Herken, a senior historian and curator at the National Air and Space Museum, draws upon these resources to explore the accomplishments, conflicts, and controversies surrounding Lawrence, Oppenheimer, and Teller, and the cold war era they played such a large part in defining. The focus is almost entirely on the period in which the three were active in weapons development and policy—there is little discussion of their prior scientific work, nor of Teller's subsequent decades on the public stage. This is a serious academic history, with almost 100 pages of source citations and bibliography, but the story is presented in an engaging manner which leaves the reader with a sense of the personalities involved, not just their views and actions. The author writes with no discernible ideological bias, and I noted only one insignificant technical goof.

May 2005 Permalink

Hickam, Homer H., Jr. Rocket Boys. New York: Doubleday, 1998. ISBN 0-385-33321-8.
The author came of age in southern West Virginia during the dawn of the space age. Inspired by science fiction and the sight of Sputnik gliding through the patch of night sky between the mountains which surrounded his coal mining town, he and a group of close friends decided to build their own rockets. Counselled by the author's mother, “Don't blow yourself up”, they managed not only to avoid that downside of rocketry (although Mom's garden fence was not so lucky), but succeeded in building and launching more than thirty rockets powered by, as they progressed, first black powder, then melted saltpetre and sugar (“rocket candy”), and finally “zincoshine”, a mixture of powdered zinc and sulphur bound by 200 proof West Virginia mountain moonshine, which propelled their final rocket almost six miles into the sky. Their efforts won them the Gold and Silver award at the National Science Fair in 1960, and a ticket out of coal country for the author, who went on to a career as a NASA engineer. This is a memoir by a member of the last generation when the U.S. was still free enough for boys to be boys, and boys with dreams were encouraged to make them come true. This book will bring back fond memories for any member of that generation, and inspire envy among those who postdate that golden age.

This book served as the basis for the 1999 film October Sky, which I have not seen.

July 2005 Permalink

Hirshfeld, Alan. The Electric Life of Michael Faraday. New York: Walker and Company, 2006. ISBN 978-0-8027-1470-1.
Of post-Enlightenment societies, one of the most rigidly structured by class and tradition was that of Great Britain. Those aspiring to the life of the mind were overwhelmingly the well-born, educated in the classics at Oxford or Cambridge, with the wealth and leisure to pursue their interests on their own. The career of Michael Faraday stands as a monument to what can be accomplished, even in such a stultifying system, by the pure power of intellect, dogged persistence, relentless rationality, humility, endless fascination with the intricacies of creation, and confidence that it was ultimately knowable through clever investigation.

Faraday was born in 1791, the third child of a blacksmith who had migrated to London earlier that year in search of better prospects, which he never found due to fragile health. In his childhood, Faraday's family occasionally got along only thanks to the charity of members of the fundamentalist church to which they belonged. At age 14, Faraday was apprenticed to a French émigré bookbinder, setting himself on the path to a tradesman's career. But Faraday, while almost entirely unschooled, knew how to read, and read he did—as many of the books which passed through the binder's shop as he could manage. As with many who read widely, Faraday eventually came across a book that changed his life, The Improvement of the Mind by Isaac Watts, and from the pragmatic and inspirational advice in that volume, along with the experimental approach to science he learned from Jane Marcet's Conversations in Chemistry, Faraday developed his own philosophy of scientific investigation and began to do his own experiments with humble apparatus in the bookbinder's shop.

Faraday seemed to be on a trajectory which would frustrate his curiosity forever amongst the hammers, glue, and stitches of bookbindery when, thanks to his assiduous note-taking at science lectures, his employer passing on his notes, and a providential vacancy, he found himself hired as the assistant to the eminent Humphry Davy at the Royal Institution in London. Learning chemistry and the emerging field of electrochemistry at the side of the master, he developed the empirical experimental approach which would inform all of his subsequent work.

Faraday originally existed very much in Davy's shadow, even serving as his personal valet as well as scientific assistant on an extended tour of the Continent, but slowly (and over Davy's opposition) rose to become a Fellow of the Royal Institution and director of its laboratory. Seeking to shore up the shaky finances of the Institution, in 1827 he launched the Friday Evening Discourses, public lectures on a multitude of scientific topics by Faraday and other eminent scientists, which he would continue to supervise until 1862.

Although trained as a chemist, and having made his reputation in that field, his electrochemical investigations with Davy had planted in his mind the idea that electricity was not a curious phenomenon demonstrated in public lectures involving mysterious “fluids”, but an essential component in understanding the behaviour of matter. In 1831, he turned his methodical experimental attention to the relationship between electricity and magnetism, and within months had discovered electromagnetic induction: that an electric current was induced in a conductor only by a changing magnetic field: the principle used by every electrical generator and transformer in use today. He built the first dynamo, using a spinning copper disc between the poles of a strong magnet, and thereby demonstrated the conversion of mechanical energy into electricity for the first time. Faraday's methodical, indefatigable investigations, failures along with successes, were chronicled in a series of papers eventually collected into the volume Experimental Researches in Electricity, which is considered to be one of the best narratives ever written of science as it is done.
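
In the compact notation later introduced by Maxwell and his successors (my rendering; Faraday himself worked entirely without equations), the law of induction Faraday discovered is written

$$\mathcal{E} = -\frac{d\Phi_B}{dt},$$

where $\mathcal{E}$ is the electromotive force induced around a circuit and $\Phi_B$ is the magnetic flux threading it: a steady field, however strong, induces nothing; only change does.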

Knowing little mathematics, Faraday expressed the concepts he discovered in elegant prose. His philosophy of science presaged that of Karl Popper and the positivists of the next century—he considered all theories as tentative, advocated continued testing of existing theories in an effort to falsify them and thereby discover new science beyond them, and he had no use whatsoever for the unobservable: he detested concepts such as “action at a distance”, which he considered mystical obfuscation. If some action occurred, there must be some physical mechanism which causes it, and this led him to formulate what we would now call field theory: that physical lines of force extend from electrically charged objects and magnets through apparently empty space, and it is the interaction of objects with these lines of force which produces the various effects he had investigated. This flew in the face of the scientific consensus of the time, and while universally admired for his experimental prowess, many regarded Faraday's wordy arguments as verging on the work of a crank. It wasn't until 1857 that the ageing Faraday made the acquaintance of the young James Clerk Maxwell, who had sent him a copy of a paper in which Maxwell made his first attempt to express Faraday's lines of force in rigorous mathematical form. By 1864 Maxwell had refined his model into his monumental field theory, which demonstrated that light was simply a manifestation of the electromagnetic field, something that Faraday had long suspected (he wrote repeatedly of “ray-vibrations”) but had been unable to prove.
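
Maxwell's identification of light with electromagnetism came down to a single striking number (again in modern notation, which postdates both men): disturbances of the combined electric and magnetic field propagate at

$$c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx 3\times10^{8}\ \mathrm{m/s},$$

which agreed with the measured speed of light, turning Faraday's suspicion about “ray-vibrations” into a quantitative prediction.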

The publication of Maxwell's theory marked a great inflection point between the old physics of Faraday and the new, emerging, highly mathematical style of Maxwell and his successors. While discovering the mechanism through experiment was everything to Faraday, correctly describing the behaviour and correctly predicting the outcome of experiments with a set of equations was all that mattered in the new style, which made no effort to explain why the equations worked. As Heinrich Hertz said, “Maxwell's theory is Maxwell's equations” (p. 190). Michael Faraday lived in an era in which a humble-born person with no formal education or knowledge of advanced mathematics could, purely through intelligence, assiduous self-study, clever and tireless experimentation with simple apparatus he made with his own hands, make fundamental discoveries about the universe and rise to the top rank of scientists. Those days are now forever gone, and while we now know vastly more than those of Faraday's time, one also feels we've lost something. Aldous Huxley once remarked, “Even if I could be Shakespeare, I think I should still choose to be Faraday.” This book is an excellent way to appreciate how science felt when it was all new and mysterious, acquaint yourself with one of the most admirable characters in its history, and understand why Huxley felt as he did.

July 2008 Permalink

Hitchens, Christopher. The Missionary Position: Mother Teresa in Theory and Practice. London: Verso, 1995. ISBN 1-85984-054-X.

March 2003 Permalink

Hoover, Herbert. The Crusade Years. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2013. ISBN 978-0-8179-1674-9.
In the modern era, most former U.S. presidents have largely retired from the public arena, lending their names to charitable endeavours and acting as elder statesmen rather than active partisans. One striking counter-example to this rule was Herbert Hoover who, from the time of his defeat by Franklin Roosevelt in the 1932 presidential election until shortly before his death in 1964, remained in the arena, giving hundreds of speeches, many broadcast nationwide on radio, writing multiple volumes of memoirs and analyses of policy, collecting and archiving a multitude of documents regarding World War I and its aftermath which became the core of what is now the Hoover Institution collection at Stanford University, working in famine relief during and after World War II, and raising funds and promoting benevolent organisations such as the Boys' Clubs. His strenuous work to keep the U.S. out of World War II is chronicled in his “magnum opus”, Freedom Betrayed (June 2012), which presents his revisionist view of U.S. entry into and conduct of the war, and the tragedy which ensued after victory had been won. Freedom Betrayed was largely completed at the time of Hoover's death, but for reasons difficult to determine at this remove, was not published until 2011.

The present volume was intended by Hoover to be a companion to Freedom Betrayed, focussing on domestic policy in his post-presidential career. Over the years, he envisioned publishing the work in various forms, but by the early 1950s he had given the book its present title and accumulated 564 pages of typeset page proofs. Due to other duties, and Hoover's decision to concentrate his efforts on Freedom Betrayed, little was done on the manuscript after he set it aside in 1955. It is only through the scholarship of the editor, drawing upon Hoover's draft, but also documents from the Hoover Institution and the Hoover Presidential Library, that this work has been assembled in its present form. The editor has also collected a variety of relevant documents, some of which Hoover cited or incorporated in earlier versions of the work, into a comprehensive appendix. There are extensive source citations and notes about discrepancies between Hoover's quotation of documents and speeches and other published versions of them.

Of all the crusades chronicled here, the bulk of the work is devoted to “The Crusade Against Collectivism in American Life”, and Hoover's words on the topic are so pithy and relevant to the present state of affairs in the United States that one suspects that a brave, ambitious, but less than original politician who simply cut and pasted Hoover's words into his own speeches would rapidly become the darling of liberty-minded members of the Republican party. I cannot think of any present-day Republican, even darlings of the Tea Party, who draws the contrast between the American tradition of individual liberty and enterprise and the grey uniformity of collectivism as Hoover does here. And Hoover does it with a firm intellectual grounding in the history of America and the world, personal knowledge from having lived and worked in countries around the world, and an engineer's pragmatism about doing what works, not what sounds good in a speech or makes people feel good about themselves.

This is something of a surprise. Hoover was, in many ways, a progressive—Calvin Coolidge called him “wonder boy”. He was an enthusiastic believer in trust-busting and regulation as a counterpoise to concentration of economic power. He was a protectionist who supported the tariff to protect farmers and industry from foreign competition. He supported income and inheritance taxes “to regulate over-accumulations of wealth.” He was no libertarian, nor even a “light hand on the tiller” executive like Coolidge.

And yet he totally grasped the threat to liberty which the intrusive regulatory and administrative state represented. It's difficult to start quoting Hoover without retyping the entire book, as there is line after line, paragraph after paragraph, and page after page which is not only completely applicable to the current predicament of the U.S., but a guaranteed applause line were it uttered before a crowd of freedom-loving citizens of that country. Please indulge me in a few (comments in italics are my own).

(On his electoral defeat)   Democracy is not a polite employer.

We cannot extend the mastery of government over the daily life of a people without somewhere making it master of people's souls and thoughts.

(On JournoList, vintage 1934)   I soon learned that the reviewers of the New York Times, the New York Herald Tribune, the Saturday Review and of other journals of review in New York kept in touch to determine in what manner they should destroy books which were not to their liking.

Who then pays? It is the same economic middle class and the poor. That would still be true if the rich were taxed to the whole amount of their fortunes….

Blessed are the young, for they shall inherit the national debt….

Regulation should be by specific law, that all who run may read.

It would be far better that the party go down to defeat with the banner of principle flying than to win by pussyfooting.

The seizure by the government of the communications of persons not charged with wrong-doing justifies the immoral conduct of every snooper.

I could quote dozens more. Should Hoover re-appear and give a composite of what he writes here as a keynote speech at the 2016 Republican convention, then, provided the hall hasn't been packed with establishment cronies, I expect he would be interrupted every few lines with chants of “Hoo-ver, Hoo-ver” and nominated by acclamation.

It is sad that in the U.S. in the age of Obama there is no statesman with the stature, knowledge, and eloquence of Hoover making the case for liberty and warning of the inevitable tyranny which awaits at the end of the road to serfdom. There are voices articulating the message which Hoover expresses so pellucidly here, but in today's media environment they don't have access to the kind of platform Hoover enjoyed when his post-presidential policy speeches were routinely broadcast nationwide. Given that he has been reviled ever since his presidency, not just by Democrats but by many in his own party, it's odd to feel nostalgia for Hoover, but Obama will do that to you.

In the Kindle edition the index cites page numbers in the hardcover edition which, since the Kindle edition does not include real page numbers, are completely useless.

April 2014 Permalink

Horowitz, David. Radical Son. New York: Touchstone Books, 1997. ISBN 0-684-84005-7.
One of the mysteries I have never been able to figure out—I remember discussing it with people before I left the U.S., so that makes it at least fifteen years of bewilderment on my part—is why so many obviously highly intelligent people, some of whom have demonstrated initiative and achieved substantial success in productive endeavours, are so frequently attracted to collectivist ideologies which deny individual excellence, suppress individualism, and seek to replace achievement with imposed equality in mediocrity. Even more baffling is why so many people remain attracted to these ideas, which are as thoroughly discredited by the events of the twentieth century as any in the entire history of human intellectual endeavour, in a seeming willingness to ignore evidence, even when it takes the form of a death toll in the tens of millions of human beings.

This book does not supply a complete answer, but it provides several important pieces of the puzzle. It is the most enlightening work on this question I've read since Hayek's The Fatal Conceit (March 2005), and complements it superbly. While Hayek's work is one of philosophy and economics, Radical Son is a searching autobiography by a person who was one of the intellectual founders and leaders of the New Left in the 1960s and 70s. The author was part of the group which organised the first demonstration against the Vietnam war in Berkeley in 1962, published the standard New Left history of the Cold War, The Free World Colossus, in 1965, and in 1968, the very apogee of the Sixties, joined Ramparts magazine, where he rapidly rose to a position of effective control, setting its tone through the entire period of radicalisation and revolutionary chaos which ensued. He raised the money for the Black Panther Party's “Learning Center” in Oakland, California, and became an adviser and regular companion of Huey Newton. Throughout all of this his belief in the socialist vision of the future, the necessity of revolution even in a democratic society, and support for the “revolutionary vanguard”, however dubious some of their actions seemed, never wavered.

He came to these convictions almost in the cradle. Like many of the founders of the New Left (Tom Hayden was one of the rare exceptions), Horowitz was a “red diaper baby”. In his case both his mother and father were members of the Communist Party of the United States and met through political activity. Although the New Left rejected the Communist Party as a neo-Stalinist anachronism, so many of its founders had parents who were involved with it, directly or knowingly through front organisations, that they formed part of a network of acquaintances even before they met as radicals in their own right. It is somewhat ironic that these people, who believed themselves to be and were portrayed in the press as rebels and revolutionaries, were, perhaps more than their contemporaries, truly their parents' children, carrying on their radical utopian dream without ever questioning anything beyond the means to the end.

It was only in 1974, when Betty Van Patter, a former Ramparts colleague he had recommended for a job helping the Black Panthers sort out their accounts, was abducted and later found brutally murdered, obviously by the Panthers (who expressed no concern when she disappeared, and had complained of her inquisitiveness), that Horowitz was confronted with the true nature of those he had been supporting. Further, when he approached others who were, from the circumstances of their involvement, well aware of the criminality and gang nature of the Panthers well before he was, they continued to either deny the obvious reality or, even worse, deliberately cover it up because they still believed in the Panther mission of revolution. (To this day, nobody has been charged with Van Patter's murder.)

The contemporaneous conquest of Vietnam and Cambodia and the brutal and bloody aftermath, the likelihood of which had also been denied by the New Left (as late as 1974, Tom Hayden and Jane Fonda released a film titled Introduction to the Enemy which forecast a bright future of equality and justice when Saigon fell), reinforced the author's second thoughts, leading eventually to a complete break with the Left in the mid-1980s and his 1989 book with Peter Collier, Destructive Generation, the first sceptical look at the beliefs and consequences of Sixties radicalism by two of its key participants.

Radical Son mixes personal recollection, politics, philosophy, memoirs of encounters with characters ranging from Bertrand Russell to Abbie Hoffman, and a great deal of painful introspection to tell the story of how reality finally shattered second-generation utopian illusions. Even more valuable, the reader comes to understand the power those delusions have over those who share them, and why seemingly no amount of evidence suffices to induce doubt among those in their thrall, and why the reaction to any former believer who declares their “apostasy” is so immediate and vicious.

Horowitz is a serious person, and this is a serious, and often dismaying and tragic, narrative. But one cannot help but be amused by the accounts of New Leftists trying to put their ideology into practice in running communal households, publishing enterprises, and political movements. Inevitably, before long everything blows up in the tediously familiar ways of such things, as imperfect human beings fail to meet the standards of a theory which requires them to deny their essential humanity. And yet they never learn; it's always put down to “errors”, blamed on deviant individuals, oppression, subversion, external circumstances, or some other cobbled-up excuse. And still they want to try again, betting the entire society and human future on it.

March 2007 Permalink

Jenkins, Roy. Churchill: A Biography. New York: Plume, 2001. ISBN 0-452-28352-3.
This is a splendid biography of Churchill. The author, whose 39 year parliamentary career overlapped 16 of Churchill's almost 64 years in the House of Commons, focuses more on the political aspects of Churchill's career, as opposed to William Manchester's The Last Lion (in two volumes: Visions of Glory and Alone) which delves deeper into the British and world historical context of Churchill's life. Due to illness, Manchester abandoned plans for the third volume of The Last Lion, so his biography regrettably leaves the story in 1940. Jenkins covers Churchill's entire life in one volume (although at 1001 pages including end notes, it could easily have been two) and occasionally assumes familiarity with British history and political figures which may send readers not well versed in twentieth century British history, particularly the Edwardian era, scurrying to other references. Having read both Manchester and Jenkins, I find they complement each other well. If I were going to re-read them, I'd probably start with Manchester.

February 2004 Permalink

Kauffman, Bill. Forgotten Founder, Drunken Prophet. Wilmington: ISI Books, 2008. ISBN 978-1-933859-73-6.
It is a cliché to observe that history is written by the victors, but rarely is it as evident as in the case of the drafting and ratification of the United States Constitution, where the proponents of a strong national government, some of whom, including Alexander Hamilton, wished to “annihilate the State distinctions and State operations” (p. 30), conducted the proceedings in secret, carefully managed the flow of information to the public, and concealed their nationalist, nay imperial, ambitions from the state conventions which were to vote on ratification. Indeed, just like modern-day collectivists in the U.S. who have purloined the word “liberal”, which used to mean a champion of individual freedom, the covert centralisers at the Constitutional Convention styled themselves “Federalists”, while promoting a supreme government which was anything but federal in nature. The genuine champions of a federal structure allowed themselves to be dubbed “Anti-Federalists” and, as always, were slandered as opposing “progress” (but toward what?). The Anti-Federalists counted among their ranks men such as Samuel Adams, Patrick Henry, George Mason, Samuel Chase, and Elbridge Gerry: these were not reactionary bumpkins but heroes, patriots, and intellectuals the equal of any of their opponents. And then there was Luther Martin, fervent Anti-Federalist and perhaps the least celebrated of the Founding Fathers.

Martin's long life was a study in contradictions. He was considered one of the most brilliant trial lawyers of his time, and yet his courtroom demeanour was universally described as long-winded, rambling, uncouth, and ungrammatical. He often appeared in court obviously inebriated, was slovenly in appearance and dress, when excited would flick spittle from his mouth, and let's not get into his table manners. At the Constitutional Convention he was a fierce opponent of the Virginia Plan which became the basis of the Constitution and, with Samuel Adams and Mason, urged the adoption of a Bill of Rights. He argued vehemently for the inclusion of an immediate ban on the importation of slaves and a plan to phase out slavery while, as of 1790, owning six slaves himself yet serving as Honorary-Counselor to a Maryland abolitionist society.

After the Constitution was adopted by the convention (Martin had walked out by then and did not sign the document), he led the fight against its ratification by Maryland. Maryland ratified the Constitution over his opposition, but he did manage to make the ratification conditional upon the adoption of a Bill of Rights.

Martin was a man with larger-than-life passions. Although philosophically close to Thomas Jefferson in his view of government, he detested the man because he believed Jefferson had slandered one of his wife's ancestors as a murderer of Indians. When Jefferson became President, Martin the Anti-Federalist became Martin the ardent Federalist, bent on causing Jefferson as much anguish as possible. When a law student studying with him eloped with and married his daughter, Martin turned incandescent, wrote, and self-published a 163-page full-tilt tirade against the bounder, titled Modern Gratitude.

Lest Martin come across as a kind of buffoon, bear in mind that after his singular performance at the Constitutional Convention, he went on to serve as Attorney General of the State of Maryland for thirty years (a tenure never equalled in all the years which followed), argued forty cases before the U.S. Supreme Court, and appeared for the defence in two of the epochal trials of early U.S. jurisprudence: the impeachment trial of Supreme Court Justice Samuel Chase before the U.S. Senate, and the treason trial of Aaron Burr—and won acquittals on both occasions.

The author is an unabashed libertarian, and considers prescient Martin's diagnosis that the Constitution would inevitably lead, to the detriment of individual liberty, to the concentration of power in a Federal City (which his fellow Anti-Federalist George Clinton foresaw “would be the asylum of the base, idle, avaricious, and ambitious” [p. xiii]). One wishes that Martin had been listened to, while sympathising with those who actually had to endure his speeches.

The author writes with an exuberantly vast vocabulary which probably would have sent the late William F. Buckley to the dictionary on several occasions: every few pages you come across a word like “roorback”, “eftsoons”, “sennight”, or “fleer”. For a complete list of those which stumped me, open the vault of the spoilers.

Spoiler warning: Plot and/or ending details follow.  
Here are the delightfully obscure words used in this book. To avoid typographic fussiness, I have not quoted them. Each is linked to its definition. Vocabulary ho!

malison, exordium, eristic, roorback, tertium quid, bibulosity, eftsoons, vendue, froward, pococurante, disprized, toper, cerecloth, sennight, valetudinarian, variorum, concinnity, plashing, ultimo, fleer, recusants, scrim, flagitious, indurated, truckling, linguacious, caducity, prepotency, natheless, dissentient, placemen, lenity, burke, plangency, roundelay, hymeneally, mesalliance, divagation, parti pris, anent, comminatory, descry, minatory
Spoilers end here.  

This is a wonderful little book which, if your view of the U.S. Constitution has been solely based on the propaganda of those who promulgated it, is an excellent and enjoyable antidote.

November 2008 Permalink

Kotkin, Stephen. Stalin, Vol. 1: Paradoxes of Power, 1878–1928. New York: Penguin Press, 2014. ISBN 978-0-14-312786-4.
In a Levada Center poll in 2017, Russians who responded named Joseph Stalin the “most outstanding person” in world history. Now, you can argue about the meaning of “outstanding”, but it's pretty remarkable that the citizens of a country would bestow that title on a chief of government (albeit several regimes ago) who presided over an entirely avoidable famine which killed millions of citizens of his country, and who ordered purges which executed more than 700,000 people, including senior military leadership, leaving his nation unprepared for the German attack in 1941, a war which would, until the final victory, claim the lives of around 27 million Soviet citizens, military and civilian. “Super-villain” would seem closer to the mark than “outstanding person”.

The story of Stalin's career is even less plausible, and should give pause to those who believe history can be predicted without the contingency of things that “just happen”. Ioseb Besarionis dze Jughashvili (the author uses Roman alphabet transliterations of all individuals' names in their native languages, which can occasionally be confusing when they later Russified their names) was born in 1878 in the town of Gori in the Caucasus. Gori, part of the territory of Georgia which had long been ruled by the Ottoman Empire, had been seized by Imperial Russia in a series of bloody conflicts ending in the 1860s with complete incorporation of the territory into the Czar's empire. Ioseb, who was called by the Georgian diminutive “Sosa” throughout his youth, was the third son born to his parents, but, as both of his older brothers had died not long after birth, was raised as an only child.

Sosa's father, Besarion Jughashvili (often written in the Russian form, Vissarion), was a shoemaker with his own shop in Gori but, as time passed, his business fell on hard times and he closed the shop and sought other work, ending his life as a vagrant. Sosa's mother, Ketevan “Keke” Geladze, was ambitious and wanted the best for her son, and left her husband and took a variety of jobs to support the family. She arranged for eight-year-old Sosa to attend Russian-language lessons given to the children of a priest in whose house she was boarding. Knowledge of Russian was the key to advancement in Czarist Georgia, and he had a head start when Keke arranged for him to be enrolled in the parish school's preparatory and four-year programs. He was the first member of either side of his family to attend school, and he rose to the top of his class under the patronage of a family friend, “Uncle Yakov” Egnatashvili. After graduation, his options were limited. The Russian administration, wary of the emergence of a Georgian intellectual class that might champion independence, refused to establish a university in the Caucasus. Sosa's best option was the highly selective Theological Seminary in Tiflis, where he would prepare, in a six-year course, for life as a parish priest or teacher in Georgia, which, for those who graduated near the top, could lead to a scholarship at a university in another part of the empire.

He took the examinations and easily passed, gaining admission and winning, by petition, a partial scholarship that paid most of his fees. “Uncle Yakov” paid the rest, and he plunged into his studies. Georgia was in the midst of an intense campaign of Russification, and Sosa further perfected his skills in the Russian language. Although completely fluent in spoken and written Russian along with his native Georgian (the languages are completely unrelated, having no more in common than Finnish and Italian), he would speak Russian with a Georgian accent all his life and did not publish in the Russian language until he was twenty-nine years old.

Long a voracious reader, at the seminary Sosa joined a “forbidden literature” society which smuggled in and read works, not banned by the Russian authorities, but deemed unsuitable for priests in training. He read classics of Russian, French, English, and German literature and science, including Capital by Karl Marx. The latter would transform his view of the world and path in life. He made the acquaintance of a former seminarian and committed Marxist, Lado Ketskhoveli, who would guide his studies. In August 1898, he joined the newly formed “Third Group of Georgian Marxists”—many years later Stalin would date his “party card” to then.

Prior to 1905, imperial Russia was an absolute autocracy. The Czar ruled with no limitations on his power. What he decreed and ordered his functionaries to do was law. There was no parliament, there were no political parties or elected officials of any kind, and no permanent administrative state existed that did not serve at the pleasure of the monarch. Political activity and agitation were illegal, as were publishing and distributing any kind of political literature deemed to oppose imperial rule. As Sosa became increasingly radicalised, it was only a short step from devout seminarian to underground agitator. He began to neglect his studies, became increasingly disrespectful to authority figures, and, in April 1899, left the seminary before taking his final examinations.

Saddled with a large debt to the seminary for leaving without becoming a priest or teacher, he drifted into writing articles for small, underground publications associated with the Social Democrat movement, at the time the home of most Marxists. He took to public speaking and, while eschewing fancy flights of oratory, spoke directly to the meetings of workers he addressed in their own dialect and terms. Inevitably, he was arrested for “incitement to disorder and insubordination against higher authority” in April 1902 and jailed. After fifteen months in prison at Batum, he was sentenced to three years of internal exile in Siberia. In January 1904 he escaped and made it back to Tiflis, in Georgia, where he resumed his underground career. By this time the Social Democratic movement had fractured into Lenin's Bolshevik faction and the larger Menshevik group. Sosa, who during his imprisonment had adopted the revolutionary nickname “Koba”, after the hero in a Georgian novel of revenge, continued to write and speak and, in 1905, after the Czar was compelled to cede some of his power to a parliament, organised Battle Squads which stole printing equipment, attacked government forces, and raised money through protection rackets targeting businesses.

In 1905, Koba Jughashvili was elected one of three Bolshevik delegates from Georgia to attend the Third Congress of the Russian Social Democratic Workers' Party in Tampere, Finland, then part of the Russian empire. It was there he first met Lenin, who had been living in exile in Switzerland. Koba had read Lenin's prolific writings and admired his leadership of the Bolshevik cause, but was unimpressed in this first in-person encounter. He vocally took issue with Lenin's position that Bolsheviks should seek seats in the newly-formed State Duma (parliament). When Lenin backed down in the face of opposition, he said, “I expected to see the mountain eagle of our party, a great man, not only politically but physically, for I had formed for myself a picture of Lenin as a giant, as a stately representative figure of a man. What was my disappointment when I saw the most ordinary individual, below average height, distinguished from ordinary mortals by, literally, nothing.”

Returning to Georgia, he resumed his career as an underground revolutionary, including, famously, organising a robbery of the Russian State Bank in Tiflis in which three dozen people were killed and two dozen more injured, “expropriating” 250,000 rubles for the Bolshevik cause. Koba did not participate directly, but he was the mastermind of the heist. This and other banditry, criminal enterprises, and unauthorised publications resulted in multiple arrests, imprisonments, exiles to Siberia, escapes, re-captures, and life underground in the years that followed. In 1912, while living underground in Saint Petersburg after yet another escape, he was named the first editor of the Bolshevik party's new daily newspaper, Pravda, although his name was kept secret. In 1913, with the encouragement of Lenin, he wrote an article titled “Marxism and the National Question” in which he addressed how a Bolshevik regime should approach the diverse ethnicities and national identities of the Russian Empire. As a Georgian Bolshevik, Jughashvili was seen as uniquely qualified and credible to address this thorny question. He published the article under the nom de plume “K. [for Koba] Stalin”, which, literally translated, meant “Man of Steel” and paralleled Lenin's pseudonym. He would use this name for the rest of his life, reverting to the Russified form of his given name, “Joseph”, instead of the nickname Koba (by which his close associates would continue to address him informally). I shall, like the author, refer to him subsequently as “Stalin”.

When Russia entered the Great War in 1914, events were set into motion which would lead to the end of Czarist rule, but Stalin was on the sidelines: in exile in Siberia, where he spent much of his time fishing. In late 1916, as manpower shortages became acute, exiled Bolsheviks including Stalin received notices of conscription into the army, but when he appeared at the induction centre he was rejected due to a crippled left arm, the result of a childhood injury. It was only after the abdication of the Czar in the February Revolution of 1917 that he returned to Saint Petersburg, now renamed Petrograd, and resumed his work for the Bolshevik cause. In April 1917, in elections to the Bolshevik Central Committee, Stalin came in third after Lenin (who had returned from exile in Switzerland) and Zinoviev. Despite having been out of circulation for several years, Stalin's reputation from his writings and editorship of Pravda, which he resumed, elevated him to among the top rank of the party.

As Kerensky's Provisional Government attempted to consolidate its power and continue the costly and unpopular war, Stalin and Trotsky joined Lenin's call for a Bolshevik coup to seize power, and Stalin was involved in all aspects of the eventual October Revolution, although often behind the scenes, while Lenin was the public face of the Bolshevik insurgency.

After seizing power, the Bolsheviks faced challenges from all directions. They had to disentangle Russia from the Great War without leaving the country open to attack and territorial conquest by Germany or Poland. Despite their ambitious name, they were a minority party and had to subdue domestic opposition. They took over a country which the debts incurred by the Czar to fund the war had effectively bankrupted. They had to exert their control over a sprawling, polyglot empire in which, outside of the big cities, their party had little or no presence. They needed to establish their authority over a military in which the officer corps largely regarded the Czar as their legitimate leader. They had to restore agricultural production, severely disrupted by levies of manpower for the war, before famine brought instability and the risk of a counter-coup. And for facing these formidable problems, all at the same time, they were utterly unprepared.

The Bolsheviks were, to a man (and they were all men), professional revolutionaries. Their experience was in writing and publishing radical tracts and works of Marxist theory, agitating and organising workers in the cities, carrying out acts of terror against the regime, and funding their activities through banditry and other forms of criminality. There was not a military man, agricultural expert, banker, diplomat, logistician, transportation specialist, or administrator among them, and suddenly they needed all of these skills and more, plus the ability to recruit and staff an administration for a continent-wide empire. Further, although Lenin's leadership was firmly established and undisputed, his subordinates were all highly ambitious men seeking to establish and increase their power in the chaotic and fluid situation.

It was in this environment that Stalin made his mark as the reliable “fixer”. Whether it was securing levies of grain from the provinces, putting down resistance from counter-revolutionary White forces, stamping out opposition from other parties, developing policies for dealing with the diverse nations incorporated into the Russian Empire (indeed, in a real sense, it was Stalin who invented the Soviet Union as a nominal federation of autonomous republics which, in fact, were subject to Party control from Moscow), or implementing Lenin's orders, even when he disagreed with them, Stalin was on the job. Lenin recognised Stalin's importance as his right hand man by creating the post of General Secretary of the party and appointing him to it.

This placed Stalin at the centre of the party apparatus. He controlled who was hired, fired, and promoted. He controlled access to Lenin (only Trotsky could see Lenin without going through Stalin). This was a finely-tuned machine which allowed Lenin to exercise absolute power through a party machine which Stalin had largely built and operated.

Then, in May of 1922, the unthinkable happened: Lenin was felled by a stroke which left him partially paralysed. He retreated to his dacha at Gorki to recuperate, and his communication with the other senior leadership was almost entirely through Stalin. There had been no thought of or plan for a succession after Lenin (he was only fifty-two at the time of his first stroke, although he had been unwell for much of the previous year). As Lenin's health declined, ending in his death in January 1924, Stalin increasingly came to run the party and, through it, the government. He had appointed loyalists in key positions, who saw their own careers as linked to that of Stalin. By the end of 1924, Stalin began to move against the “Old Bolsheviks” who he saw as rivals and potential threats to his consolidation of power. When confronted with opposition, on three occasions he threatened to resign, each exercise in brinksmanship strengthening his grip on power, as the party feared the chaos that would ensue from a power struggle at the top. His status was reflected in 1925 when the city of Tsaritsyn was renamed Stalingrad.

This ascent to supreme power was not universally applauded. Felix Dzierzynski (Polish born, he is often better known by the Russian spelling of his name, Dzerzhinsky) who, as the founder of the Soviet secret police (Cheka/GPU/OGPU) knew a few things about dictatorship, warned in 1926, the year of his death, that “If we do not find the correct line and pace of development our opposition will grow and the country will get its dictator, the grave digger of the revolution irrespective of the beautiful feathers on his costume.”

With or without feathers, the dictatorship was beginning to emerge. In 1926 Stalin published “On Questions of Leninism”, in which he introduced the concept of “Socialism in One Country” which, presented as orthodox Leninist doctrine (which it wasn't), argued that world revolution was unnecessary to establish communism in a single country. This set the stage for the collectivisation of agriculture and rapid industrialisation which was to come. In 1928, what was to be the prototype of the show trials of the 1930s, the Shakhty trial, opened in Moscow, complete with accusations of industrial sabotage (“wrecking”), denunciations of class enemies, and Andrei Vyshinsky presiding as chief judge. Of the fifty-three engineers accused, five were executed and forty-four imprisoned. A country desperately short of the professionals its industry needed to develop had begun to devour them.

It is a mistake to regard Stalin purely as a dictator obsessed with accumulating and exercising power and destroying rivals, real or imagined. The one consistent theme throughout Stalin's career was that he was a true believer. He was a devout believer in the Orthodox faith while at the seminary, and he seamlessly transferred his allegiance to Marxism once he had been introduced to its doctrines. He had mastered the difficult works of Marx and could cite them from memory (as he often did spontaneously to buttress his arguments in policy disputes), and went on to similarly internalise the work of Lenin. These principles guided his actions, and motivated him to apply them rigidly, whatever the cost may be.

Starting in 1921, Lenin had introduced the New Economic Policy, which lightened state control over the economy and, in particular, introduced market reforms in the agricultural sector, resulting in a mixed economy in which socialism reigned in big city industries, but in the countryside the peasants operated under a kind of market economy. This policy had restored agricultural production to pre-revolutionary levels and largely ended food shortages in the cities and countryside. But to a doctrinaire Marxist, it seemed to risk destruction of the regime. Marx believed that the political system was determined by the means of production. Thus, accepting what was essentially a capitalist economy in the agricultural sector was to infect the socialist government with its worst enemy.

Once Stalin had completed his consolidation of power, he then proceeded as Marxist doctrine demanded: abolish the New Economic Policy and undertake the forced collectivisation of agriculture. This began in 1928.

And it is with this momentous decision that the present volume comes to an end. This massive work (976 pages in the print edition) is just the first in a planned three volume biography of Stalin. The second volume, Stalin: Waiting for Hitler, 1929–1941, was published in 2017 and the concluding volume is not yet completed.

Reading this book, and the entire series, is a major investment of time in a single historical figure. But, as the author observes, if you're interested in the phenomenon of twentieth century totalitarian dictatorship, Stalin is the gold standard. He amassed more power, exercised by a single person with essentially no checks or limits, over more people and a larger portion of the Earth's surface than any individual in human history. He ruled for almost thirty years, transformed the economy of his country, presided over deliberate famines, ruthless purges, and pervasive terror that killed tens of millions, led his country to victory at enormous cost in the largest land conflict in history and ended up exercising power over half of the European continent, and built a military which rivaled that of the West in a bipolar struggle for global hegemony.

It is impossible to relate the history of Stalin without describing the context in which it occurred, and this is as much a history of the final days of imperial Russia, the revolutions of 1917, and the establishment and consolidation of Soviet power as of Stalin himself. Indeed, in this first volume, there are lengthy parts of the narrative in which Stalin is largely offstage: in prison, in internal exile, or occupied with matters peripheral to the main historical events. The level of detail is breathtaking: the Bolsheviks seem to have been record-keepers as compulsive as the Germans are reputed to be, and not only are the votes of seemingly every committee meeting recorded, but also who voted which way and why. There are more than two hundred pages of end notes, source citations, bibliography, and index.

If you are interested in Stalin, the Soviet Union, the phenomenon of Bolshevism, totalitarian dictatorship, or how destructive madness can grip a civilised society for decades, this is an essential work. It is unlikely it will ever be equalled.

December 2018 Permalink

Kotkin, Stephen. Stalin, Vol. 2: Waiting for Hitler, 1929–1941. New York: Penguin Press, 2017. ISBN 978-1-59420-380-0.
This is the second volume in the author's monumental projected three-volume biography of Joseph Stalin. The first volume, Stalin: Paradoxes of Power, 1878–1928 (December 2018), covers the period from Stalin's birth through the consolidation of his sole power atop the Soviet state after the death of Lenin. The third volume, which will cover the period from the Nazi invasion of the Soviet Union in 1941 through the death of Stalin in 1953, has yet to be published.

As this volume begins in 1928, Stalin is securely in the supreme position of the Communist Party of the Soviet Union and, having over the years staffed the senior ranks of the party and the Soviet state (which the party operated like the puppet it was) with loyalists who owed their positions to him, has no serious rivals who might challenge him. (It is often claimed that Stalin was paranoid and feared a coup, but would a despot fearing for his position regularly take summer holidays, months in length, in Sochi, far from the capital?)

By 1928, the Soviet Union had largely recovered from the damage inflicted by the Great War, Bolshevik revolution, and subsequent civil war. Industrial and agricultural production were back to around their 1914 levels, and most measures of well-being had similarly recovered. To be sure, compared to the developed industrial economies of countries such as Germany, France, or Britain, Russia remained a backward economy largely based upon primitive agriculture, but at least it had undone the damage inflicted by years of turbulence and conflict.

But in the eyes of Stalin and his close associates, who were ardent Marxists, there was a dangerous and potentially deadly internal contradiction in the Soviet system as it then stood. In 1921, in response to the chaos and famine following the 1917 revolution and years-long civil war, Lenin had proclaimed the New Economic Policy (NEP), which tempered the pure collectivism of original Bolshevik doctrine by introducing a mixed economy, where large enterprises would continue to be owned and managed by the state, but small-scale businesses could be privately owned and run for profit. More importantly, agriculture, which had previously been managed under a top-down system of coercive requisitioning of grain and other products by the state, was replaced by a market system where farmers could sell their products freely, subject to a tax, payable in product, proportional to their production (and thus creating an incentive to increase production).

The NEP was a great success, and shortages of agricultural products were largely eliminated. There was grousing about the growing prosperity of the so-called NEPmen, but the results of freeing the economy from the shackles of state control were evident to all. But according to Marxist doctrine, it was a dagger pointed at the heart of the socialist state.

By 1928, the Soviet economy could be described, in Marxist terms, as socialism in the industrial cities and capitalism in the agrarian countryside. But, according to Marx, the form of politics was determined by the organisation of the means of production—paraphrasing Breitbart, politics is downstream of economics. This meant that preserving capitalism in a large sector of the country, one employing a large majority of its population and necessary to feed the cities, was an existential risk. In such a situation it would only be normal for the capitalist peasants to eventually prevail over the less numerous urbanised workers and destroy socialism.

Stalin was a Marxist. He was not an opportunist who used Marxism-Leninism to further his own ambitions. He really believed this stuff. And so, in 1928, he proclaimed an end to the NEP and began the forced collectivisation of Soviet agriculture. Private ownership of land would be abolished, and the 120 million peasants essentially enslaved as “workers” on collective or state farms, with planting, delivery quotas, and management controlled by the party. After an initial lucky year, the inevitable catastrophe ensued. Between 1931 and 1933, famine and the epidemics resulting from it killed between five and seven million people. The country lost around half of its cattle and two thirds of its sheep. In 1929, the average family in Kazakhstan owned 22.6 cattle; in 1933, just 3.7. This was a calamity on the same order as the Jewish Holocaust in Germany, and just as man-made: during this period there was a global glut of food, but Stalin refused to admit the magnitude of the disaster, for fear of inciting enemies to attack and because doing so would concede the failure of his collectivisation project. In addition to the famine, the process of collectivisation resulted in between four and five million people being arrested, executed, deported to other regions, or jailed.

Many in the starving countryside said, “If only Stalin knew, he would do something.” But the evidence is overwhelming: Stalin knew, and did nothing. Marxist theory said that agriculture must be collectivised, and by pure force of will he pushed through the project, whatever the cost. Many in the senior Soviet leadership questioned this single-minded pursuit of a theoretical goal at horrendous human cost, but they did not act to stop it. But Stalin remembered their opposition and would settle scores with them later.

By 1936, it appeared that the worst of the period of collectivisation was over. The peasants, preferring to live in slavery than starve to death, had acquiesced to their fate and resumed production, and the weather co-operated in producing good harvests. And then, in 1937, a new horror, also completely man-made and driven by the will of Stalin, was unleashed upon the Soviet people: the Great Terror. Starting slowly in the aftermath of the assassination of Sergey Kirov in 1934, by 1937 the absurd devouring of those most loyal to the Soviet regime, all over Stalin's signature, reached a crescendo. In 1937 and 1938, 1,557,259 people would be arrested and 681,692 executed, the overwhelming majority for political offences, this in a country with a working-age population of 100 million. Counting deaths from other causes as a result of the secret police, the overall death toll was probably around 830,000. This was so bizarre, and so unprecedented in human history, that it is difficult to find any comparable situation, even in Nazi Germany. As the author remarks,

To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, 8 of the 9 admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies.

And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”. (In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.)

But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. As the vanguard of the proletarian revolution, by definition those states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to or actively bent upon the destruction of the peoples' state.

With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of its territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which might give Britain supremacy on the seas and its far-flung colonies, while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union.

Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine. Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated to eventually fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course.

Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the Czar's empire which had been lost after the 1917 revolution.

Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.

On September 1st, 1939, Nazi Germany invaded Poland, just a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside, continue to trade with Nazi Germany, while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border.

After the start of hostilities, a state of “phoney war” existed until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared to be precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Czars. Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere.

But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East. After consolidating the position in the west and indefinitely deferring an invasion of Britain due to inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier. Disinformation, spread where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, said that the troop concentrations were in preparation for an assault on British positions in the Near East or to blackmail the Soviet Union to obtain, for example, a long term lease on its breadbasket, the Ukraine.

Hitler, acutely aware that it was a two-front war which spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but its only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, which took just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it had previously been paying market prices, but which would now be available for the cost of extraction and shipping.

The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs that this was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important. Indeed, in 1940, 40 percent of nickel, 55 percent of manganese, 65 percent of chromium, 67 percent of asbestos, 34 percent of petroleum, and a million tonnes of grain and timber which supported the Nazi war machine were delivered by the Soviet Union. Hours before the Nazi onslaught began, well after the order for it was given, a Soviet train delivering grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with dawn.

This is a magisterial work. It is unlikely it will ever be equalled. There is abundant rich detail on every page. Want to know what the telephone number for the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses.

Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you have to ultimately understand Stalin, who succeeded at it for more than thirty years until ultimately felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower. Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people. He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being whatever the cost may be. So it is today, and let us hope those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia.

Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.

May 2019 Permalink

Kraft, Christopher C. Flight: My Life in Mission Control. New York: Dutton, 2001. ISBN 0-525-94571-7.

May 2001 Permalink

Kranz, Gene. Failure Is Not an Option. New York: Simon & Schuster, 2000. ISBN 0-7432-0079-9.

April 2001 Permalink

Krauss, Lawrence. Quantum Man. New York: W. W. Norton, 2011. ISBN 978-0-393-34065-5.
A great deal has been written about the life, career, and antics of Richard Feynman, but until the present book there was not a proper scientific biography of his work in physics and its significance in the field and consequences for subsequent research. Lawrence Krauss has masterfully remedied this lacuna with this work, which provides, at a level comprehensible to the intelligent layman, both a survey of Feynman's work, successful and not, and a sense of how Feynman achieved what he did and what ultimately motivated him in his often lonely quest to understand.

One often-neglected contributor to Feynman's success is discussed at length: his extraordinary skill in mathematical computation, intuitive sense of the best way to proceed toward a solution (he would often skip several intermediate steps and only fill them in when preparing work for publication), and tireless perseverance in performing daunting calculations which occupied page after page of forbidding equations. This talent was quickly recognised by those with whom he worked, and as one of the most junior physicists on the project, he was placed in charge of all computation at Los Alamos during the final phases of the Manhattan Project. Eugene Wigner said of Feynman, “He's another Dirac. Only this time human.”

Feynman's intuition and computational prowess were best demonstrated by his work on quantum electrodynamics, for which he shared a Nobel prize in 1965. (Initially Feynman didn't think too much of this work—he considered it mathematical mumbo-jumbo which swept the infinities which had plagued earlier attempts at a relativistic quantum theory of light and matter under the carpet. Only later did it become apparent that Feynman's work had laid the foundation upon which a comprehensive quantum field theory of the strong and electroweak interactions could be built.) His invention of Feynman diagrams defined the language now universally used by particle physicists to describe events in which particles interact.

Feynman was driven to understand things, and to him understanding meant being able to derive a phenomenon from first principles. Often he ignored the work of others and proceeded on his own, reinventing as he went. In numerous cases, he created new techniques and provided alternative ways of looking at a problem which provided a deeper insight into its fundamentals. A monumental illustration of Feynman's ability to do this is The Feynman Lectures on Physics, based on an undergraduate course in physics Feynman taught at Caltech in 1961–1964. Few physicists would have had the audacity to reformulate all of basic physics, from vectors and statics to quantum mechanics from scratch, and probably only Feynman could have pulled it off, which he did magnificently. As undergraduate pedagogy, the course was less than successful, but the transcribed lectures have remained in print ever since, and working physicists (and even humble engineers like me) are astounded at the insights to be had in reading and re-reading Feynman's work.

Even when Feynman failed, he failed gloriously and left behind work that continues to inspire. His unsuccessful attempt to find a quantum theory of gravitation showed that Einstein's geometric theory was completely equivalent to a field theory developed from first principles and knowledge of the properties of gravity. Feynman's foray into computation produced the Feynman Lectures On Computation, one of the first comprehensive expositions of the theory of quantum computation.

A chapter is devoted to the predictions of Feynman's 1959 lecture, “Plenty of Room at the Bottom”, which is rightly viewed as the founding document of molecular nanotechnology, but, as Krauss describes, also contained the seeds of genomic biotechnology, ultra-dense data storage, and quantum material engineering. Work resulting in more than fifteen subsequent Nobel prizes is suggested in this blueprint for research. Although Feynman would go on to win his own Nobel for other work, one gets the sense he couldn't care less that others pursued the lines of investigation he sketched and were rewarded for doing so. Feynman was in the game to understand, and often didn't seem to care whether what he was pursuing was of great importance or mundane, or whether the problem he was working on from his own unique point of departure had already been solved by others long before.

Feynman was such a curious character that his larger than life personality often obscures his greatness as a scientist. This book does an excellent job of restoring that balance and showing how much his work contributed to the edifice of science in the 20th century and beyond.

April 2013 Permalink

Lefevre, Edwin. Reminiscences of a Stock Operator. New York: John Wiley & Sons, [1923] 1994. ISBN 0-471-05970-6.
This stock market classic is a thinly fictionalised biography of the exploits of the legendary speculator Jesse Livermore, written in the form of an autobiography of “Larry Livingston”. (In 1940, shortly before his death, Livermore claimed that he had actually written the book himself, with writer Edwin Lefevre acting as editor and front-man; I know of no independent confirmation of this claim.) In any case, there are few books you can read which contain so much market wisdom packed into 300 pages of entertaining narrative. The book was published in 1923, and covers Livermore/Livingston's career from his start in the bucket shops of Boston to a millionaire market mover as the great 1920s bull market was just beginning to take off.

Trading was Livermore's life; he ended up making and losing four multi-million dollar fortunes, and was blamed for every major market crash from 1917 through the year of his death, 1940. Here is a picture of the original wild and woolly Wall Street—before the SEC, Glass-Steagall, restrictions on insider trading, and all the other party-pooping innovations of later years. Prior to 1913, there were not even any taxes on stock market profits. Market manipulation was considered (chapter 19) “no more than common merchandising processes”, and if the public gets fleeced, well, that's what they're there for! If you think today's financial futures, options, derivatives, and hedge funds are speculative, check out the description of late 19th century “bucket shops”: off-track betting parlours for stocks, which actually made no transactions in the market at all. Some things never change, however, and anybody who read chapter 23 about media hyping of stocks in the early decades of the last century would have been well cautioned against the “perma-bull” babblers who sucked the public into the dot-com bubble near the top.

July 2005 Permalink

Lindley, David. Degrees Kelvin. Washington: Joseph Henry Press, 2004. ISBN 0-309-09618-9.
When 17 year old William Thomson arrived at Cambridge University to study mathematics, Britain had become a backwater of research in science and mathematics—despite the technologically-driven industrial revolution being in full force, little had been done to build upon the towering legacy of Newton, and cutting edge work had shifted to the Continent, principally France and Germany. Before beginning his studies at Cambridge, Thomson had already published three research papers in the Cambridge Mathematical Journal, one of which introduced Fourier's mathematical theory of heat to English speaking readers, defending it against criticism from those opposed to the highly analytical French style of science which Thomson found congenial to his way of thinking.

Thus began a career which, by the end of the 19th century, made Thomson widely regarded as the preeminent scientist in the world: a genuine scientific celebrity. Over his long career Thomson fused the mathematical rigour of the Continental style of research with the empirical British attitude and made fundamental progress in the kinetic theory of heat, translated Michael Faraday's intuitive view of electricity and magnetism into a mathematical framework which set the stage for Maxwell's formal unification of the two in electromagnetic field theory, and calculated the age of the Earth based upon heat flow from the interior. The latter calculation, in which he estimated only 20 to 40 million years, proved to be wrong, but was so because he had no way to know about radioactive decay as the source of Earth's internal heat: he was explicit in stating that his result assumed no then-unknown source of heat or, as we'd now say, “no new physics”. Such was his prestige that few biologists and geologists whose own investigations argued for a far more ancient Earth stepped up and said, “Fine—so start looking for the new physics!” With Peter Tait, he wrote the Treatise on Natural Philosophy, the first unified exposition of what we would now call classical physics.
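
Kelvin's age-of-the-Earth argument is simple enough to sketch in a few lines. The following back-of-the-envelope calculation uses the standard conductive cooling model for a half-space, with representative numbers of my own choosing (they are assumptions for illustration, not figures from the book or from Thomson's papers):

    # Rough reconstruction of Kelvin's conductive-cooling estimate of the
    # age of the Earth.  All numbers are illustrative assumptions, not
    # values taken from the book or from Thomson's papers.
    from math import pi

    kappa = 1.2e-6      # thermal diffusivity of rock, m^2/s (assumed)
    T_init = 3900.0     # assumed initial temperature of the molten Earth, kelvin
    gradient = 0.037    # present near-surface temperature gradient, K/m (assumed)

    # For a half-space cooling from a uniform initial temperature, the surface
    # gradient decays as T_init / sqrt(pi * kappa * t); solve for t.
    t_seconds = (T_init / gradient) ** 2 / (pi * kappa)
    print(t_seconds / 3.156e7 / 1e6, "million years")   # roughly 90-100 million years

With these numbers the estimate comes out near 100 million years; assuming a stiffer crust and higher conductivity, as Kelvin later did, pulls the figure down toward the 20 to 40 million year range cited above. The model's fatal flaw, of course, is its assumption of no internal heat source, the very "new physics" which radioactive decay later supplied.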

Thomson believed that science had to be founded in observations of phenomena, then systematised into formal mathematics and tested by predictions and experiments. To him, understanding the mechanism, ideally based upon a mechanical model, was the ultimate goal. Although acknowledging that Maxwell's equations correctly predicted electromagnetic phenomena, he considered them incomplete because they didn't explain how or why electricity and magnetism behaved that way. Heaven knows what he would have thought of quantum mechanics (which was elaborated after his death in 1907).

He'd probably have been a big fan of string theory, though. Never afraid to add complexity to his mechanical models, he spent two decades searching for a set of 21 parameters which would describe the mechanical properties of the luminiferous ether (what string “landscape” believers might call the moduli and fluxes of the vacuum), and argued for a “vortex atom” model in which extended vortex loops replaced pointlike billiard ball atoms to explain spectrographic results. These speculations proved, as they say, not even wrong.

Thomson was not an ivory tower theorist. He viewed the occupation of the natural philosopher (he disliked the word “physicist”) as that of a problem solver, with the domain of problems encompassing the practical as well as fundamental theory. He was a central figure in the development of the first transatlantic telegraphic cable and invented the mirror galvanometer which made telegraphy over such long distances possible. He was instrumental in defining the units of electricity we still use today. He invented a mechanical analogue computer for computation of tide tables, and a compass compensated for the magnetic distortion of iron and steel warships which became the standard for the Royal Navy. These inventions made him wealthy, and he indulged his love of the sea by buying a 126 ton schooner and inviting his friends and colleagues on voyages.

In 1892, he was elevated to a peerage by Queen Victoria, made Baron Kelvin of Largs, the first scientist ever so honoured. (Numerous scientists, including Newton and Thomson himself in 1866, had been knighted, but the award of a peerage is an honour of an entirely different order.) When he died in 1907 at age 83, he was buried in Westminster Abbey next to the grave of Isaac Newton. For one who accomplished so much, and was so celebrated in his lifetime, Lord Kelvin is largely forgotten today, remembered mostly for the absolute temperature scale named in his honour and, perhaps, for the Kelvinator company of Detroit, Michigan, which used his still-celebrated name to promote their ice-boxes and refrigerators. While Thomson had his hand in much of the creation of the edifice of classical physics in the 19th century, there isn't a single enduring piece of work you can point to which is entirely his. This isn't indicative of any shortcoming on his part, but rather of the maturation of science from rare leaps of insight by isolated geniuses to a collective endeavour by an international community reading each other's papers and building a theory by the collaborative effort of many minds. Science was growing up, and Kelvin's reputation has suffered, perhaps, not because his contributions were lacking, but because they were so broad, rather than being identified with a single discovery which was entirely his own.

This is a delightful biography of a figure whose contributions to our knowledge of the world we live in are little remembered. Lord Kelvin never wavered from his belief that science consisted in collecting the data, developing a model and theory to explain what was observed, and following the implications of that theory to their logical conclusions. In doing so, he was often presciently right and occasionally spectacularly wrong, but he was always true to science as he saw it, which is how most scientists see their profession today.

Amusingly, the chapter titles are:

  1. Cambridge
  2. Conundrums
  3. Cable
  4. Controversies
  5. Compass
  6. Kelvin

September 2007 Permalink

Linenger, Jerry M. Off the Planet. New York: McGraw-Hill, 2000. ISBN 0-07-137230-X.

November 2001 Permalink

Macintyre, Ben. Agent Zigzag. New York: Three Rivers Press, 2007. ISBN 978-0-307-35341-2.
I'm not sure I'd agree with the cover blurb by the Boston Globe reviewer who deemed this “The best book ever written”, but it's a heck of a great read and will keep you enthralled from start to finish. Imagine the best wartime espionage novel you've ever read, stir in exploits from a criminal caper yarn, leaven with an assortment of delightfully eccentric characters, and then make the whole thing totally factual, exhaustively documented from archives declassified decades later by MI5, and you have this compelling story.

The protagonist, Eddie Chapman, was, over his long and convoluted career, a British soldier; deserter; safecracker; elite criminal; prisoner of His Majesty, the government of the Isle of Jersey, and the Nazi occupation in Paris; volunteer spy and saboteur for the German Abwehr; parachute spy in Britain; double agent for MI5; instructor at a school for German spies in Norway; spy once again in Britain, deceiving the Germans about V-1 impact locations; participant in fixed dog track races; serial womaniser married to the same woman for fifty years; and for a while an “honorary crime correspondent” to the Sunday Telegraph. That's a lot to fit into even a life as long as Chapman's, and a decade after his death, those who remember him still aren't sure where his ultimate allegiance lay or even if the concept applied to him. If you simply look at him as an utterly amoral person who managed to always come up standing, even after intensive interrogations by MI5, the Abwehr, Gestapo, and SS, you miss his engaging charm, whether genuine or feigned, which engendered deeply-felt and long-lasting affection among his associates, both British and Nazi, criminal and police, all of whom describe him as a unique character.

Information on Chapman's exploits has been leaking out ever since he started publishing autobiographical information in 1953. Dodging the Official Secrets Act, in 1966 he published a more detailed account of his adventures, which was made into a very bad movie starring Christopher Plummer as Eddie Chapman. Since much of this information came from Chapman, it's not surprising that a substantial part of it was bogus. It is only with the release of the MI5 records, and through interviews with surviving participants in Chapman's exploits that the author was able to piece together an account which, while leaving many questions of motivation uncertain, at least pins down the facts and chronology.

This is a thoroughly delightful story of a totally ambiguous character: awarded the Iron Cross for his services to the Nazi Reich, having mistresses simultaneously supported in Britain and Norway by MI5 and the Abwehr, covertly pardoned for his high-profile criminal record for his service to the Crown, and unreconstructed rogue in his long life after the war. If published as spy fiction, this would be considered implausible in the extreme; the fact that it really happened makes this one of the most remarkable wartime stories I've read and an encounter with a character few novelists could invent.

November 2008 Permalink

Magueijo, João. A Brilliant Darkness. New York: Basic Books, 2009. ISBN 978-0-465-00903-9.
Ettore Majorana is one of the most enigmatic figures in twentieth century physics. The son of a wealthy Sicilian family and a domineering mother, he was a mathematical prodigy who, while studying for a doctorate in engineering, was recruited to join Enrico Fermi's laboratory: the “Via Panisperna boys”. (Can't read that without seeing “panspermia”? Me neither.) Majorana switched to physics, and received his doctorate at the age of 22.

At Fermi's lab, he almost immediately became known as the person who could quickly solve intractable mathematical problems others struggled with for weeks. He also acquired a reputation for working on whatever interested him, declining to collaborate with others. Further, he would often investigate a topic to his own satisfaction, speak of his conclusions to his colleagues, but never get around to writing a formal article for publication—he seemed almost totally motivated by satisfying his own intellectual curiosity and not at all by receiving credit for his work. This infuriated his fiercely competitive boss Fermi, who saw his institute scooped on multiple occasions by others who independently discovered and published work Majorana had done and left to languish in his desk drawer or discarded as being “too obvious to publish”. Still, Fermi regarded Majorana as one of those wild talents who appear upon rare occasions in the history of science. He said,

There are many categories of scientists, people of second and third rank, who do their best, but do not go very far. There are also people of first class, who make great discoveries, which are of capital importance for the development of science. But then there are the geniuses, like Galileo and Newton. Well, Ettore was one of these.

In 1933, Majorana visited Werner Heisenberg in Leipzig and quickly became a close friend of this physicist who was, in most personal traits, his polar opposite. Afterward, he returned to Rome and flip-flopped from his extroversion in the company of Heisenberg to the life of a recluse, rarely leaving his bedroom in the family mansion for almost four years. Then something happened, and he jumped into the competition for the position of full professor at the University of Naples, bypassing the requirement for an examination due to his “exceptional merit”. He emerged from his reclusion, accepted the position, and launched into his teaching career, albeit giving lectures at a level which his students often found bewildering.

Then, on March 26th, 1938, he boarded a ship in Palermo, Sicily, bound for Naples and was never seen again. Before his departure he had posted enigmatic letters to his employer and family, sent a telegram, and left a further letter in his hotel room which some interpreted as suicide notes, but which forensic scientists who have read thousands of suicide notes say resemble none they've ever seen (but then, would a note by a Galileo or Newton read like that of the run-of-the-mill suicide?). This event set in motion investigation and speculation which continues to this very day. Majorana was said to have withdrawn a large sum of money from his bank a few days before: is this plausible for one bent on self-annihilation (we'll get back to that infra)? Based on his recent interest in religion and reports of his having approached religious communities to join them, members of his family spent a year following up reports that he'd joined a monastery; despite “sightings”, none of these leads panned out. Years later, multiple credible sources with nothing apparently to gain reported that Majorana had been seen on numerous occasions in Argentina and, having abandoned physics (which he had said “was on the wrong path” before his disappearance), had pursued a career as an engineer.

This only scratches the surface of the legends which have grown up around Majorana. His disappearance, occurring after nuclear fission had already been produced in Fermi's laboratory (though none of the “boys” had yet realised what they'd seen), spawns speculation that Majorana, as he often did, figured it out, worked out the implications, spoke of it to someone, and was kidnapped by the Germans (maybe he mentioned it to his friend Heisenberg), the Americans, or the Soviets. There is an Italian comic book in which Majorana is abducted by Americans, spirited off to Los Alamos to work on the Manhattan Project, only to be abducted again (to his great relief) by aliens in a flying saucer. Nobody knows—this is just one of the many mysteries bearing the name Majorana.

Today, Majorana is best known for his work on the neutrino. He responded to Paul Dirac's theory of the neutrino (which he believed unnecessarily complicated and unphysical) with his own, in which, as opposed to there being neutrinos and antineutrinos, the neutrino is its own antiparticle and hence neutrinos of the same flavour can annihilate one another. At the time these theories were proposed the neutrino had not been detected, nor would it be for twenty years. When the existence of the neutrino was confirmed (although few doubted its existence by the time Reines and Cowan detected it in 1956), few believed it would ever be possible to distinguish the Dirac and Majorana theories of the neutrino, because that particle was almost universally believed to be massless. But then the “scientific consensus” isn't always the way to bet.

Starting with solar neutrino experiments in the 1960s, and continuing to the present day, it became clear that neutrinos did have mass, albeit very little compared to the electron. This meant that the distinction between the Dirac and Majorana theories of the neutrino was accessible to experiment, and could, at least in principle, be resolved. “At least in principle”: what a clarion call to the bleeding edge experimentalist! If the neutrino is a Majorana particle, as opposed to a Dirac particle, then neutrinoless double beta decay should occur, and we'll know whether Majorana's model, proposed more than seven decades ago, was correct. I wish there'd been more discussion of the open controversy over experiments which claim a 6σ signal for neutrinoless double beta decay in ⁷⁶Ge, but then one doesn't want to date one's book with matters actively disputed.

To the book: this may be the first exemplar of a new genre I'll dub “gonzo scientific biography”. Like the “new journalism” of the 1960s and '70s, this is as much about the author as the subject; the author figures as a central character in the narrative, whether transcribing his queries in pidgin Italian to the Majorana family:

“Signora wifed a brother of Ettore, Luciano?”
“What age did signora owned at that time”
“But he was olded fifty years!”
“But in end he husbanded you.”

Besides humorously trampling on the language of Dante, the author employs profanity as a superlative, as do so many “new journalists”. I find this unseemly in a scientific biography of an ascetic, deeply-conflicted individual who spent most of his short life in a search for the truth and, if he erred, erred always on the side of propriety, self-denial, and commitment to the dignity of all people.

Should you read this? Well, if you've come this far, of course you should! This is an excellent, albeit flawed, biography of a singular, albeit flawed, genius whose intellectual legacy motivates massive experiments conducted deep underground and in the seas today. Suppose a neutrinoless double beta decay experiment should confirm the Majorana theory? Should he receive the Nobel prize for it? On the merits, absolutely: many physics Nobels have been awarded for far less, and let's not talk about the “soft Nobels”. But under the rules a Nobel prize can't be awarded posthumously. Which then compels one to ask, “Is Ettore dead?” Well, sure, that's the way to bet: he was born in 1906 and while many people have lived longer, most don't. But how can you be certain? I'd say, should an experiment for neutrinoless double beta decay prove conclusive, award him the prize and see if he shows up to accept it. Then we'll all know for sure.

Heck, if he did, it'd probably make Drudge.

December 2009 Permalink

Mahon, Basil. The Man Who Changed Everything. Chichester, UK: John Wiley & Sons, 2003. ISBN 978-0-470-86171-4.
In the 19th century, science in general and physics in particular grew up, assuming their modern form which is still recognisable today. At the start of the century, the word “scientist” was not yet in use, and the natural philosophers of the time were often amateurs. University research in the sciences, particularly in Britain, was rare. Those working in the sciences were often occupied by cataloguing natural phenomena, and apart from Newton's monumental achievements, few people focussed on discovering mathematical laws to explain the new physical phenomena which were being discovered such as electricity and magnetism.

One person, James Clerk Maxwell, was largely responsible for creating the way modern science is done and the way we think about theories of physics, while simultaneously restoring Britain's standing in physics compared to work on the Continent, and he created an institution which would continue to do important work from the time of his early death until the present day. While every physicist and electrical engineer knows of Maxwell and his work, he is largely unknown to the general public, and even those who are aware of his seminal work in electromagnetism may be unaware of the extent his footprints are found all over the edifice of 19th century physics.

Maxwell was born in 1831 to a Scottish lawyer, John Clerk, and his wife Frances Cay. Clerk subsequently inherited a country estate, and added “Maxwell” to his name in honour of the noble relatives from whom he inherited it. His son's first name, then, was “James” and his surname “Clerk Maxwell”: this is why his full name is always used instead of “James Maxwell”. From childhood, James was curious about everything he encountered, and instead of asking “Why?” over and over like many children, he drove his parents to distraction with “What's the go o' that?”. His father did not consider science a suitable occupation for his son and tried to direct him toward the law, but James's curiosity did not extend to legal tomes and he concentrated on topics that interested him. He published his first scientific paper, on curves with more than two foci, at the age of 14. He pursued his scientific education first at the University of Edinburgh and later at Cambridge, where he graduated in 1854 with a degree in mathematics. He came in second in the prestigious Tripos examination, earning the title of Second Wrangler.

Maxwell was now free to begin his independent research, and he turned to the problem of human colour vision. It had been established that colour vision worked by detecting the mixture of three primary colours, but Maxwell was the first to discover that these primaries were red, green, and blue, and that by mixing them in the correct proportion, white would be produced. This was a matter to which Maxwell would return repeatedly during his life.

In 1856 he accepted an appointment as a full professor and department head at Marischal College in Aberdeen, Scotland. In 1857, the topic for the prestigious Adams Prize was the nature of the rings of Saturn. Maxwell's submission was a tour de force which proved that the rings could be neither solid nor liquid, and hence had to be made of an enormous number of individually orbiting bodies. Maxwell was awarded the prize, the significance of which was magnified by the fact that his was the only submission: all of the others who aspired to solve the problem had abandoned it as too difficult.

Maxwell's next post was at King's College London, where he investigated the properties of gases and strengthened the evidence for the molecular theory of gases. It was here that he first undertook to explain the relationship between electricity and magnetism which had been discovered by Michael Faraday. Working in the old style of physics, he constructed an intricate mechanical thought experiment model which might explain the lines of force that Faraday had introduced but which many scientists thought were mystical mumbo-jumbo. Maxwell believed the alternative of action at a distance without any intermediate mechanism was wrong, and was able, with his model, to explain the phenomenon of rotation of the plane of polarisation of light by a magnetic field, which had been discovered by Faraday. While at King's College, to demonstrate his theory of colour vision, he took and displayed the first colour photograph.

Maxwell's greatest scientific achievement came while he was living the life of a country gentleman at his estate, Glenlair. In his textbook, A Treatise on Electricity and Magnetism, he presented his famous equations which showed that electricity and magnetism were two aspects of the same phenomenon. This was the first of the great unifications of physical laws which have continued to the present day. But that isn't all they showed. The speed of light appeared as a conversion factor between the units of electricity and magnetism, and the equations allowed solutions of waves oscillating between an electric and magnetic field which could propagate through empty space at the speed of light. It was compelling to deduce that light was just such an electromagnetic wave, and that waves of other frequencies outside the visual range must exist. Thus was laid the foundation of wireless communication, X-rays, and gamma rays. The speed of light is a constant in Maxwell's equations, not depending upon the motion of the observer. This appears to conflict with Newton's laws of mechanics, and it was not until Einstein's 1905 paper on special relativity that the mystery would be resolved. In essence, faced with a dispute between Newton and Maxwell, Einstein decided to bet on Maxwell, and he chose wisely. Finally, when you look at Maxwell's equations (in their modern form, using the notation of vector calculus), they appear lopsided. While they unify electricity and magnetism, the symmetry is imperfect in that while a moving electric charge generates a magnetic field, there is no magnetic charge which, when moved, generates an electric field. Such a charge would be a magnetic monopole, and despite extensive experimental searches, none has ever been found. The existence of monopoles would make Maxwell's equations even more beautiful, but sometimes nature doesn't care about that. By all evidence to date, Maxwell got it right.
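
In modern SI notation (my gloss, not the book's), the vacuum form of the equations combines into a wave equation whose propagation speed is fixed entirely by the electric and magnetic constants:

    \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2},
    \qquad
    c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}\ \mathrm{m/s},

and the measured values of the two constants, obtained from purely electrical and magnetic experiments, yield a speed indistinguishable from that of light, the observation which convinced Maxwell that light itself is an electromagnetic wave.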

In 1871 Maxwell came out of retirement to accept a professorship at Cambridge and found the Cavendish Laboratory, which would focus on experimental science and elevate Cambridge to world-class status in the field. To date, 29 Nobel Prizes have been awarded for work done at the Cavendish.

Maxwell's theoretical and experimental work on heat and gases revealed discrepancies which were not explained until the development of quantum theory in the 20th century. His suggestion of Maxwell's demon posed a deep puzzle in the foundations of thermodynamics which eventually, a century later, showed the deep connections between information theory and statistical mechanics. His practical work on automatic governors for steam engines foreshadowed what we now call control theory. He played a key part in the development of the units we use for electrical quantities.

By all accounts Maxwell was a modest, generous, and well-mannered man. He wrote whimsical poetry, discussed a multitude of topics (although he had little interest in politics), was an enthusiastic horseman and athlete (he would swim in the sea off Scotland in the winter), and was happily married, with his wife Katherine an active participant in his experiments. All his life, he supported general education in science, founding a working men's college in Cambridge and lecturing at such colleges throughout his career.

Maxwell lived only 48 years—he died in 1879 of the same cancer which had killed his mother when he was only eight years old. When he fell ill, he was engaged in a variety of research while presiding at the Cavendish Laboratory. We shall never know what he might have done had he been granted another two decades.

Apart from the significant achievements Maxwell made in a wide variety of fields, he changed the way physicists look at, describe, and think about natural phenomena. After using a mental model to explore electromagnetism, he discarded it in favour of a mathematical description of its behaviour. There is no theory behind Maxwell's equations: the equations are the theory. To the extent they produce the correct results when experimental conditions are plugged in, and predict new phenomena which are subsequently confirmed by experiment, they are valuable. If they err, they should be supplanted by something more precise. But they say nothing about what is really going on—they only seek to model what happens when you do experiments. Today, we are so accustomed to working with theories of this kind: quantum mechanics, special and general relativity, and the standard model of particle physics, that we don't think much about it, but it was revolutionary in Maxwell's time. His mathematical approach, like Newton's, eschewed explanation in favour of prediction: “We have no idea how it works, but here's what will happen if you do this experiment.” This is perhaps Maxwell's greatest legacy.

This is an excellent scientific biography of Maxwell which also gives the reader a sense of the man. He was such a quintessentially normal person there aren't a lot of amusing anecdotes to relate. He loved life, loved his work, cherished his friends, and discovered the scientific foundations of the technologies which allow you to read this. In the Kindle edition, at least as read on an iPad, the text appears in a curious, spidery, almost vintage, font in which periods are difficult to distinguish from commas. Numbers sometimes have spurious spaces embedded within them, and the index cites pages in the print edition which are useless since the Kindle edition does not include real page numbers.

August 2014 Permalink

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.
In 1861, when Oliver Heaviside was eleven, his family, supported by his father's irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and a day school for girls operated by his mother in the family's house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student he dismissed the teaching of Euclid's geometry in mathematics and English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his career throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today's A Levels. He was the youngest of the 538 candidates to take the examination and scored fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother's elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver's favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today's money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off to Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship's position (the cable would work in the good direction but fail in that containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea's propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that knowing the resistance and length of the functioning cable, which could be easily measured, it would be possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the short appeared. He was able to cancel out the resistance of the fault, creating a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent out to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
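
The book doesn't reproduce Heaviside's algebra, but the flavour of the calculation can be sketched with the classical Blavier-style test for an earth fault, in which the unknown fault resistance drops out and the distance emerges from a quadratic. Everything below (the function name, the numbers, the two ohms per mile figure) is my own illustrative assumption, not material from the book:

    # Locating a cable fault from resistance measurements alone (Blavier-style
    # test).  A sketch with invented numbers, not Heaviside's original working.
    import math

    def fault_distance(r_open, r_earthed, r_total, ohms_per_mile):
        """Miles from the testing end to an earth fault.

        r_open    -- resistance seen with the far end disconnected
        r_earthed -- resistance seen with the far end earthed
        r_total   -- end-to-end conductor resistance of the healthy cable
        The unknown fault resistance cancels, leaving a quadratic whose
        physically meaningful root is taken below.
        """
        a = r_earthed - math.sqrt((r_open - r_earthed) * (r_total - r_earthed))
        return a / ohms_per_mile

    # A hypothetical 300 mile cable at 2 ohms per mile (600 ohms end to end):
    print(fault_distance(r_open=350.0, r_earthed=250.0, r_total=600.0,
                         ohms_per_mile=2.0))    # about 31.5 miles from the test end

The point is that nothing more than a resistance bridge at the shore station is needed: no ship, no grappling hook, until you already know roughly where to look.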

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside's work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purpose be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He began his own investigations and experiments and began to publish his results, first in English Mechanic, and then, in 1873, the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside's paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton's Principia, and still in print today. Heady stuff, indeed, for a twenty-two year old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell's Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell's abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, such as earlier concepts like an “electric fluid” or William Thomson's intricate mechanical models of the “luminiferous ether”) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who would die in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, who had invented and previously published his own duplex system (which would not work), that was not acknowledged in Heaviside's paper. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside's ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver's employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell's work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem that remained for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s would increase the need to address this problem. While telegraphic transmission on a long line slowed down the speed at which a message could be sent, with the telephone voice became increasingly distorted the longer the line, to the point where, after around 100 miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell's theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender's key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell's theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.

In telephony, it's much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell's theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher's equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
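
A minimal numerical sketch of that result, using the standard telegrapher's-equation propagation constant with per-unit-length line constants I've invented for illustration (they are not from the book), shows what “distortionless” means in practice:

    # Heaviside's distortionless condition, L/R = C/G, checked numerically.
    # The per-unit-length line constants below are invented for illustration.
    import numpy as np

    R, L, G, C = 5.0, 1.0e-3, 1.0e-6, 200.0e-12   # ohm, henry, siemens, farad

    def gamma(f):
        """Propagation constant sqrt((R + jwL)(G + jwC)) at frequency f."""
        w = 2 * np.pi * f
        return np.sqrt((R + 1j * w * L) * (G + 1j * w * C))

    for f in (300.0, 1000.0, 3400.0):              # voice-band frequencies, Hz
        g = gamma(f)
        print(f"{f:6.0f} Hz  attenuation {g.real:.6f}  velocity {2 * np.pi * f / g.imag:.3e}")

With these values L/R and C/G are both 2×10⁻⁴, so the printed attenuation and velocity come out identical at every frequency: each component of a voice signal is weakened by the same amount and arrives at the same time, and the waveform keeps its shape. Perturb any one of the four constants and the figures diverge, which is exactly the dispersion that garbled long-distance telephony; the loading coils described in the next paragraph raise the inductance of real lines to restore the balance.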

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside's work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside's work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside's life-long enemies.

You might wonder why what seems such a simple result (which can be written in modern notation as the equation L/R = C/G), with such immediate technological utility, eluded so many people for so long (recall that the problem with slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell's theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell's original theory which, written in modern notation less cumbersome than that employed by Maxwell, looks like:

Maxwell's equations: original form

into the four famous vector equations we today think of as Maxwell's.

Maxwell's equations: vector form
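
For reference, in modern notation (my rendering, not the book's figure), the four vector equations are:

    \nabla \cdot \mathbf{D} = \rho, \qquad
    \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}.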

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell's theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn't because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton who was effortlessly solving problems such as the brachistochrone using the calculus he'd invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder but, hey, it worked and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside's work and long after engineers were routinely using it in their calculations.)
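
As a flavour of the method (my own toy example, not one taken from the book): treat d/dt as an algebraic symbol p. The equation dy/dt + ay = 1 with y(0) = 0 becomes (p + a)y = 1, and Heaviside would simply divide:

    y = \frac{1}{p+a}\,1
      = \frac{1}{p}\left(1 - \frac{a}{p} + \frac{a^{2}}{p^{2}} - \cdots\right)1
      = t - \frac{a t^{2}}{2!} + \frac{a^{2} t^{3}}{3!} - \cdots
      = \frac{1 - e^{-at}}{a},

using the rule that 1/pⁿ applied to the unit step gives tⁿ/n!. The answer agrees with the one obtained by solving the differential equation conventionally, but it arrives by pure algebra, which is exactly why engineers adopted the method long before mathematicians made it respectable.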

Heaviside's intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein's special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn't have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain's highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer from AT&T of compensation for his invention of loading coils when they refused to acknowledge his sole responsibility for the invention. He never married, and in his elder years became somewhat of a recluse and, although he welcomed visits from other scientists, hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigour. He never sought the limelight and, among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it will stand as a monument to the life and work of Oliver Heaviside.

November 2018 Permalink

Manchester, William and Paul Reid. The Last Lion. Vol. 3. New York: Little, Brown, 2012. ISBN 978-0-316-54770-3.
William Manchester's monumental three-volume biography of Winston Churchill, The Last Lion, began with the 1984 publication of the first volume, Visions of Glory, 1874–1932, and continued with the second in 1989, Alone, 1932–1940. I devoured these books when they came out, and eagerly awaited the concluding volume which would cover Churchill's World War II years and subsequent career and life. This was to be a wait of more than two decades. By 1988, William Manchester had concluded his research for the present volume, subtitled Defender of the Realm, 1940–1965, and began to write a draft of the work. Failing health caused him to set the project aside after about a hundred pages covering events up to the start of the Battle of Britain. In 2003, Manchester, no longer able to write, invited Paul Reid to audition to complete the work by writing a chapter on the London Blitz. The result being satisfactory to Manchester, his agent, and the publisher, Reid began work in earnest on the final volume, with the intent that Manchester would edit the manuscript as it was produced. Alas, Manchester died in 2004, and Reid was forced to interpret Manchester's research notes, intended for his own use and not to guide another author, without the assistance of the person who compiled them. This required much additional research and the collection of original source documents which Manchester had examined. The result is that this book took almost another decade of work by Reid before its publication. It has been a protracted wait, especially for those who admired the first two volumes, but ultimately worth it. This is a thoroughly satisfying conclusion to what will likely remain the definitive biography of Churchill for the foreseeable future.

When Winston Churchill became prime minister in the dark days of May 1940, he was already sixty-five years old, retirement age for most of his generation, and he faced a Nazi Germany which was consolidating its hold on Western Europe with only Britain to oppose its hegemony. Had Churchill retired from public life in 1940, he would still be remembered as one of the most consequential British public figures of the twentieth century; what he did in the years to come elevated him to the stature of one of the preeminent statesmen of modern times. These events are chronicled in this book, dominated by World War II, which occupies three quarters of the text. In fact, although the focus is on Churchill, the book serves also as a reasonably comprehensive history of the war in the theatres in which British forces were engaged, and of the complex relations among the Allies.

It is often forgotten at this remove that at the time Churchill came to power he was viewed by many, including those of his own party and military commanders, as a dangerous and erratic figure given to enthusiasm for harebrained schemes and with a propensity for disaster (for example, his resignation in disgrace after the Gallipoli catastrophe in World War I). Although admired for his steadfastness and ability to rally the nation to the daunting tasks before it, Churchill's erratic nature continued to exasperate his subordinates, as is extensively documented here from their own contemporary diaries.

Churchill's complex relationships with the other leaders of the Grand Alliance, Roosevelt and Stalin, are explored in depth. Although Churchill had great admiration for Roosevelt and desperately needed the assistance the U.S. could provide to prosecute the war, Roosevelt comes across as a lightweight: ill-informed, not particularly engaged in military affairs, and blind to the geopolitical consequences of the Red Army's occupying eastern and central Europe at war's end. (This was not just Churchill's view, but one widely shared in senior British political and military circles.) While despising Bolshevism, Churchill developed a grudging respect for Stalin, considering his grasp of strategy excellent and finding him, while infuriating to deal with, reliable in keeping his commitments to the other allies.

As the war drew to a close, Churchill was one of the first to warn of the great tragedy about to befall those countries behind what he dubbed the “iron curtain” and the peril Soviet power posed to the West. By July 1950, the Soviets fielded 175 divisions, of which 25 were armoured, against a Western force of 12 divisions (2 armoured). Given the correlation of forces, only Soviet postwar exhaustion and unwillingness to roll the dice in the face of U.S. nuclear retaliation kept the Red Army from marching west to the Atlantic.

After the war, in opposition once again as the disastrous Attlee Labour government set Britain on an irreversible trajectory of decline, he thundered against the dying of the light and the retreat from Empire, not, as in the 1930s, as a back-bencher, but as leader of the opposition. In 1951 he led the Tories to victory and became prime minister once again, for the first time with the mandate of winning a general election as party leader. He remained prime minister until 1955, when he resigned in favour of Anthony Eden. His second tenure as P.M. was frustrating, with little he could do to reverse Britain's economic decline and shrinkage on the world stage. In 1953 he suffered a serious stroke, which was concealed from all but his inner circle. While he largely recovered, approaching his eightieth birthday he acknowledged the inevitable and gave up the party leadership and the premiership.

Churchill remained a member of Parliament for Woodford until 1964. In January 1965 he suffered another severe stroke and died at age 90 on the 24th of that month.

It's been a long time coming, but this book is a grand conclusion of the work Manchester envisioned. It is a sprawling account of a great sprawling life engaged with great historical events over most of a century: from the last cavalry charge of the British Army to the hydrogen bomb. Churchill was an extraordinarily complicated and in many ways conflicted person, and this grand canvas provides the scope to explore his character and its origins in depth. Manchester and Reid have created a masterpiece. It is daunting to contemplate a three volume work totalling three thousand pages, but if you are interested in the subject, it is a uniquely rewarding read.

January 2013 Permalink

McCullough, David. The Wright Brothers. New York: Simon & Schuster, 2015. ISBN 978-1-4767-2874-2.
On December 8th, 1903, all was in readiness. The aircraft was perched on its launching catapult, the brave airman at the controls. The powerful internal combustion engine roared to life. At 16:45 the catapult hurled the craft into the air. It rose straight up, flipped, and with its wings coming apart, plunged into the Potomac river just 20 feet from the launching point. The pilot was initially trapped beneath the wreckage but managed to free himself and swim to the surface. After being rescued from the river, he emitted what one witness described as “the most voluble series of blasphemies” he had ever heard.

So ended the last flight of Samuel Langley's “Aerodrome”. Langley was a distinguished scientist and secretary of the Smithsonian Institution in Washington D.C. Funded by the U.S. Army and the Smithsonian for a total of US$ 70,000 (equivalent to around 1.7 million present-day dollars), the Aerodrome crashed immediately on both of its test flights, and was the subject of much mockery in the press.

Just nine days later, on December 17th, two brothers, sons of a churchman, with no education beyond high school, and proprietors of a bicycle shop in Dayton, Ohio, readied their own machine for flight near Kitty Hawk, on the windswept sandy hills of North Carolina's Outer Banks. Their craft, called just the Flyer, took to the air with Orville Wright at the controls. With the 12 horsepower engine driving the twin propellers and brother Wilbur running alongside to stabilise the machine as it moved down the launching rail into the wind, Orville lifted the machine into the air and achieved the first manned heavier-than-air powered flight, demonstrating the Flyer was controllable in all three axes. The flight lasted just 12 seconds and covered a distance of 120 feet.

After the first flight, the brothers took turns flying the machine three more times on the 17th. On the final flight Wilbur flew a distance of 852 feet in a flight of 59 seconds (a strong headwind was blowing, and this flight covered more than half a mile through the air). After the fourth flight, while the machine was being prepared to fly again, a gust of wind caught it and dragged it, along with assistant John T. Daniels, down the beach toward the ocean. Daniels escaped, but the Flyer was damaged beyond repair and never flew again. (The Flyer which can be seen in the Smithsonian's National Air and Space Museum today has been extensively restored.)

Orville sent a telegram to his father in Dayton announcing the success, and the brothers packed up the remains of the aircraft to be shipped back to their shop. The 1903 season was at an end. The entire budget for the project from 1900 through the successful first flights was less than US$ 1000 (24,000 dollars today), funded entirely by profits from the brothers' bicycle business.

How did two brothers with no formal education in aerodynamics or engineering succeed on a shoestring budget while Langley, with public funds at his disposal and the resources of a major scientific institution, failed so embarrassingly? Ultimately it was because the Wright brothers identified the key problem of flight and patiently worked on solving it through a series of experiments. Perhaps it was because they were in the bicycle business. (Although they are often identified as proprietors of a “bicycle shop”, they also manufactured their own bicycles and had acquired the machine tools, skills, and co-workers for that business, later applied to building the flying machine.)

The Wrights believed the essential problem of heavier than air flight was control. The details of how a bicycle is built don't matter much: you still have to learn to ride it. And the problem of control in free flight is much more difficult than riding a bicycle, where the only controls are the handlebars and, to a lesser extent, shifting the rider's weight. In flight, an airplane must be controlled in three axes: pitch (up and down), yaw (left and right), and roll (wings' angle to the horizon). The means for control in each of these axes must be provided, and what's more, just as for a child learning to ride a bike, the would-be aeronaut must master the skill of using these controls to maintain his balance in the air.

Through a patient program of subscale experimentation, first with kites controlled from the ground by lines manipulated by the operators, then with gliders flown by a pilot on board, the Wrights developed their system of pitch control by a front-mounted elevator, yaw by a rudder at the rear, and roll by warping the wings of the craft. Further, they needed to learn how to fly using these controls and verify that the resulting plane would be stable enough that a person could master the skill of flying it. With powerless kites and gliders, this required a strong, consistent wind. After inquiries to the U.S. Weather Bureau, the brothers selected the Kitty Hawk site on the North Carolina coast. Just getting there was an adventure, but the wind was as promised and the sand and lack of large vegetation were ideal for their gliding experiments. They were definitely “roughing it” at this remote site, and at times were afflicted by clouds of mosquitos of Biblical plague proportions, but starting in 1900 they tested a series of successively larger gliders and by 1902 had a design which provided three axis control, stability, and the controls for a pilot on board. In the 1902 season they made more than 700 flights and were satisfied the control problem had been mastered.

Now all that remained was to add an engine and propellers to the successful glider design, again scaling it up to accommodate the added weight. In 1903, you couldn't just go down to the hardware store and buy an engine, and automobile engines were much too heavy, so the Wrights' resourceful mechanic, Charlie Taylor, designed and built the four cylinder motor from scratch, using the new-fangled material aluminium for the engine block. The finished engine weighed just 152 pounds and produced 12 horsepower. The brothers could find no references for the design of air propellers and argued intensely over the topic, but eventually concluded they'd just have to make a best guess and test it on the real machine.

The Flyer worked on the second attempt (an earlier try on December 14th ended in a minor crash when Wilbur over-controlled at the moment of take-off). But this stunning success was the product of years of incremental refinement of the design, practical testing, and mastery of airmanship through experience.

Those four flights in December of 1903 are now considered one of the epochal events of the twentieth century, but at the time they received little notice. Only a few accounts of the flights appeared in the press, and some of them were garbled and/or sensationalised. The Wrights knew that the Flyer (whose wreckage was now in storage crates at Dayton), while a successful proof of concept and the basis for a patent filing, was not a practical flying machine. It could only take off into the strong wind at Kitty Hawk and had not yet demonstrated long-term controlled flight, including aerial maneuvers such as turns or flying around a closed course. It was simply too difficult to travel to Kitty Hawk, and the facilities of their camp there didn't permit rapid modification of the machines based upon experimentation.

They arranged to use an 84 acre cow pasture called Huffman Prairie located eight miles from Dayton along an interurban trolley line which made it easy to reach. The field's owner let them use it without charge as long as they didn't disturb the livestock. The Wrights devised a catapult to launch their planes, powered by a heavy falling weight, which would allow them to take off in still air. It was here, in 1904, that they refined the design into a practical flying machine and fully mastered the art of flying it over the course of about fifty test flights. Still, there was little note of their work in the press, and the first detailed account was published in the January 1905 edition of Gleanings in Bee Culture. Amos Root, the author of the article and publisher of the magazine, sent a copy to Scientific American, saying they could republish it without fee. The editors declined, and a year later mocked the achievements of the Wright brothers.

For those accustomed to the pace of technological development more than a century later, the leisurely pace of progress in aviation and lack of public interest in the achievement of what had been a dream of humanity since antiquity seems odd. Indeed, the Wrights, who had continued to refine their designs, would not become celebrities nor would their achievements be widely acknowledged until a series of demonstrations Wilbur would perform at Le Mans in France in the summer of 1908. Le Figaro wrote, “It was not merely a success, but a triumph…a decisive victory for aviation, the news of which will revolutionize scientific circles throughout the world.” And it did: stories of Wilbur's exploits were picked up by the press on the Continent, in Britain, and, belatedly, by papers in the U.S. Huge crowds came out to see the flights, and the intrepid American aviator's name was on every tongue.

Meanwhile, Orville was preparing for a series of demonstration flights for the U.S. Army at Fort Myer, Virginia. The army had agreed to buy a machine if it passed a series of tests. Orville's flights also began to draw large crowds from nearby Washington and extensive press coverage. All doubts about what the Wrights had wrought were now gone. During a demonstration flight on September 17, 1908, a propeller broke in flight. Orville tried to recover, but the machine plunged to the ground from an altitude of 75 feet, severely injuring him and killing his passenger, Lieutenant Thomas Selfridge, who became the first person to die in an airplane crash. Orville's recuperation would be long and difficult, aided by his sister, Katharine.

In early 1909, Orville and Katharine would join Wilbur in France, where he was to do even more spectacular demonstrations in the south of the country, training pilots for the airplanes he was selling to the French. Upon their return to the U.S., the Wrights were awarded medals by President Taft at the White House. They were feted as returning heroes in a two day celebration in Dayton. The diligent Wrights continued their work in the shop between events.

The brothers would return to Fort Myer, the scene of the crash, and complete their demonstrations for the army, securing the contract for the sale of an airplane for US$ 30,000. The Wrights would continue to develop their company, defend their growing portfolio of patents against competitors, and innovate. Wilbur was to die of typhoid fever in 1912, aged only 45 years. Orville sold his interest in the Wright Company in 1915 and, in his retirement, served for 28 years on the National Advisory Committee for Aeronautics, the precursor of NASA. He died in 1948. Neither brother ever married.

This book is a superb evocation of the life and times of the Wrights and their part in creating, developing, promoting, and commercialising one of the key technologies of the modern world.

February 2016 Permalink

Mowat, Farley. And No Birds Sang. Vancouver: Douglas & McIntyre, [1975] 2012. ISBN 978-1-77100-030-7.
When Canadians were warriors: a personal account of military training and the brutal reality of infantry combat in Italy during World War II.

April 2020 Permalink

Mullane, Mike. Riding Rockets. New York: Scribner, 2006. ISBN 0-7432-7682-5.
Mike Mullane joined NASA in 1978 as one of the first group of astronauts recruited specifically for the space shuttle program. An Air Force veteran of 134 combat missions in Vietnam as a back-seater in the RF-4C reconnaissance version of the Phantom fighter (imperfect eyesight disqualified him from pilot training), he served as a mission specialist and eventually flew on three shuttle missions: STS-41D in 1984, STS-27 in 1988, and STS-36 in 1990, the latter two classified Department of Defense missions for which he was twice awarded the National Intelligence Medal of Achievement. (Receipt of this medal was, at the time, itself a secret, but was declassified after the collapse of the Soviet Union. The work for which the medals were awarded remains secret to this day.)

As a mission specialist, Mullane never maneuvered the shuttle in space nor landed it on Earth, nor did he perform a spacewalk, mark any significant “first” in space exploration or establish any records apart from being part of the crew of STS-36 which flew the highest inclination (62°) orbit of any human spaceflight so far. What he has done here is write one of the most enlightening, enthralling, and brutally honest astronaut memoirs ever published, far and away the best describing the shuttle era. All of the realities of NASA in the 1980s which were airbrushed out by Public Affairs Officers with the complicity of an astronaut corps who knew that to speak to an outsider about what was really going on would mean they'd never get another flight assignment are dealt with head-on: the dysfunctional, intimidation- and uncertainty-based management culture, the gap between what astronauts knew about the danger and unreliability of the shuttle and what NASA was telling Congress and the public, the conflict between battle-hardened military astronauts and perpetual student post-docs recruited as scientist-astronauts, the shameless toadying to politicians, and the perennial over-promising of shuttle capabilities and consequent corner-cutting and workforce exhaustion. (Those of a libertarian bent might wish they could warp back in time, shake the author by the shoulders, and remind him, “Hey dude, you're working for a government agency!”)

The realities of flying a space shuttle mission are described without any of the sugar-coating or veiled references common in other astronaut accounts, and always with a sense of humour. The deep-seated dread of strapping into an experimental vehicle with four million pounds of explosive fuel and no crew escape system is discussed candidly, along with the fact that, while universally shared by astronauts, it was, of course, never hinted to outsiders, even passengers on the shuttle who were told it was a kind of very fast, high-flying airliner. Even if the shuttle doesn't kill you, there's still the toilet to deal with, and any curiosity you've had about that particular apparatus will not outlast your finishing this book (the on-orbit gross-out prank on p. 179 may be too much even for “South Park”). Barfing in space and the curious and little-discussed effects of microgravity on the male and female anatomy which may someday contribute mightily to the popularity of orbital tourism are discussed in graphic detail. A glossary of NASA jargon and acronyms is included but there is no index, which would be a valuable addition.

February 2006 Permalink

Neven, Thomas E. Sir, The Private Don't Know! Seattle: Amazon Digital Services, 2013. ASIN B00D5EO5EU.
The author, a self-described “[l]onghaired surfer dude” from Florida, wasn't sure what he wanted to do with his life after graduating from high school, but he was certain he didn't want to go directly to college—he didn't have the money for it and had no idea what he might study. He had thought about a military career, but was unimpressed when a Coast Guard recruiter never got back to him. He arrived at the Army recruiter's office only to find the recruiter a no-show. While standing outside, he was approached by a Marine recruiter, whose own office was next door. He was receptive to the highly polished pitch and signed enlistment papers on March 10, 1975.

This was just about the lowest ebb in 20th century U.S. military history. On that very day, North Vietnam launched the offensive which would, two months later, result in the fall of Saigon and the humiliating images of the U.S. embassy being evacuated by helicopter. Opposition to the war had reduced public support for the military to all-time lows, and the image of veterans as drug-addicted, violence-prone sociopaths was increasingly reinforced by the media. In this environment, military recruiters found it increasingly difficult to meet their quotas (which failure could torpedo their careers), and were motivated and sometimes encouraged to bend the rules. Physical fitness, intelligence, and even criminal records were often ignored or covered up in order to make quota. This meant that the recruits arriving for basic training, even for a supposedly elite force such as the Marines, included misfits, some of whom were “dumb as a bag of hammers”.

Turning this flawed raw material into Marines had become a matter of tearing the recruits' individuality and personality down to ground level and then rebuilding them as Marines. When the author arrived at Parris Island a month after graduating from high school, he found himself fed into the maw of this tree chipper of the soul. Within minutes he, and his fellow recruits, all shared the thought, “What have I gotten myself into?”, as the mental and physical stress mounted higher and higher. “The DIs [drill instructors] were gods; they had absolute power and were capricious and cruel in exercising it.” It was only in retrospect that the author appreciated that this was not just hazing or sadism (although there were plenty of those), but a deliberate part of the process to condition the recruits to instantly obey any order without questioning it and to submit entirely to authority.

This is a highly personal account of one individual's experience in Marine basic training. The author served seven years in the Marine Corps, retiring with the rank of staff sergeant. He then went on to college and graduate school, and later was associate editor of the Marine Corps Gazette, the professional journal of the Corps.

The author was one of the last Marines to graduate from the “old basic training”. Shortly thereafter, a series of scandals involving mistreatment of recruits at the hands of drill instructors brought public and Congressional scrutiny of Marine practices, and there was increasing criticism among the Marine hierarchy that “Parris Island was graduating recruits, not Marines.” A great overhaul of training was begun toward the end of the 1970s and has continued to the present day, swinging back and forth between leniency and rigour. Marine basic has never been easy, but today there is less overt humiliation and make-work and more instruction and testing of actual war-fighting skills. An epilogue (curiously set in a monospace typewriter font) describes the evolution of basic training in the years after the author's own graduation from Parris Island. For a broader-based perspective on Marine basic training, see Thomas Ricks's Making the Corps (February 2002).

This book is available only in electronic form for the Kindle as cited above, under the given ASIN. No ISBN has been assigned to it.

June 2013 Permalink

Noonan, Peggy. When Character Was King. New York: Viking, 2001. ISBN 0-670-88235-6.

March 2002 Permalink

O'Leary, Brian. The Making of an Ex-Astronaut. Boston: Houghton Mifflin, 1970. LCCN 70-112277.
This book is out of print. The link above will search for used copies at abebooks.com.

July 2003 Permalink

Pais, Abraham. The Genius of Science. Oxford: Oxford University Press, 2000. ISBN 0-19-850614-7.
In this volume Abraham Pais, distinguished physicist and author of Subtle Is the Lord, the definitive scientific biography of Einstein, presents a “portrait gallery” of eminent twentieth century physicists, including Bohr, Dirac, Pauli, von Neumann, Rabi, and others. If you skip the introduction, you may be puzzled at some of the omissions: Heisenberg, Fermi, and Feynman, among others. Pais wanted to look behind the physics to the physicist, and thus restricted his biographies to scientists he personally knew; those not included simply didn't cross his career path sufficiently to permit sketching them in adequate detail. Many of the chapters were originally written for publication in other venues and revised for this book; consequently the balance of scientific and personal biography varies substantially among them, as does the length of the pieces: the chapter on Victor Weisskopf, adapted from an honorary degree presentation, is a mere two and a half pages, while that on George Eugene Uhlenbeck, based on a lecture from a memorial symposium, is 33 pages long. The scientific focus is very much on quantum theory and particle physics, and the collected biographies provide an excellent view of the extent to which researchers groped in the dark before discovering phenomena which, presented in a modern textbook, seem obvious in retrospect. One wonders whether the mysteries of present-day physics will seem as straightforward a century from now.

April 2005 Permalink

Pendle, George. Strange Angel. New York: Harcourt, 2005. ISBN 978-0-15-603179-0.
For those who grew up after World War II “rocket science” meant something extremely difficult, on the very edge of the possible, pursued by the brightest of the bright, often at risk of death or dire injury. In the first half of the century, however, “rocket” was a pejorative, summoning images of pulp magazines full of “that Buck Rogers stuff”, fireworks that went fwoosh—flash—bang if all went well, and often in the other order when it didn't, with aspiring rocketeers borderline lunatics who dreamed of crazy things like travelling to the Moon but usually ended up blowing things up, including, but not limited to, themselves.

This was the era in which John Whiteside “Jack” Parsons came of age. Parsons was born and spent most of his life in Pasadena, California, a community close enough to Los Angeles to participate in its frontier, “anything goes” culture, but also steeped in well-heeled old wealth, largely made in the East and seeking the perpetually clement climate of southern California. Parsons was attracted to things that went fwoosh and bang from the very start. While still a high school senior, he was hired by the Hercules Powder Company, and continued to support himself as an explosives chemist for the rest of his life. He never graduated from college, much less pursued an advanced degree, but his associates and mentors, including legends such as Theodore von Kármán, were deeply impressed by his knowledge and meticulously careful work with dangerous substances and gave him their highest recommendations. On several occasions he was called as an expert witness to testify in high-profile trials involving bombings.

And yet, at the time, to speak seriously about rockets was as outré as to admit one was a fan of “scientifiction” (later science fiction), or a believer in magic. Parsons was all-in on all of them. An avid reader of science fiction and member of the Los Angeles Science Fantasy Society, Parsons rubbed shoulders with Ray Bradbury, Robert Heinlein, and Forrest J. Ackerman. On the darker side, Parsons became increasingly involved in the Ordo Templi Orientis (OTO), followers of Aleister Crowley, and practitioners of his “magick”. One gets the sense that Parsons saw no conflict whatsoever among these pursuits—all were ways to transcend the prosaic everyday life and explore a universe enormously larger and stranger than even that of Los Angeles and its suburbs.

Parsons and his small band of rocket enthusiasts, “the suicide squad”, formed an uneasy alliance with the aeronautical laboratory of the California Institute of Technology, and with access to their resources and cloak of respectability, pursued their dangerous experiments first on campus, and then after a few embarrassing misadventures, in Arroyo Seco behind Pasadena. With the entry of the United States into World War II, the armed services had difficult problems to solve which overcame the giggle factor of anything involving the word “rocket”. In particular, the U.S. Navy had an urgent need to launch heavily-laden strike aircraft from short aircraft carrier decks (steam catapults were far in the future), and were willing to consider even Buck Rogers rockets to get them off the deck. Well, at least as long as you didn't call them “rockets”! So, the Navy sought to procure “Jet Assisted Take-Off” units, and Caltech created the “Jet Propulsion Laboratory” with Parsons as a founder to develop them, and then its members founded the Aerojet Engineering Corporation to build them in quantity. Nope, no rockets around here, nowhere—just jets.

Even as Parsons' rocket dreams came true and began to make him wealthy, he never forsook his other interests: they were all integral to him. He advanced in Crowley's OTO, became a regular correspondent of the Great Beast, and proprietor of the OTO lodge in Pasadena, home to a motley crew of bohemians who prefigured the beatniks and hippies of the 1950s and '60s. And he never relinquished his interest in science fiction, taking author L. Ron Hubbard into his community. Hubbard, a world class grifter even in his early days, took off with Parsons' girlfriend and most of his savings on the promise of buying yachts in Florida and selling them at a profit in California. Uh-huh! I'd put it down to destructive engrams.

Amidst all of this turmoil, Parsons made one of the most important inventions in practical rocketry of the 20th century. Apart from the work of Robert Goddard, which occurred largely disconnected from others because of Goddard's obsessive secrecy following his earlier humiliation by learned ignoramuses, and the work by the German rocket team, conducted in secrecy in Nazi Germany, rockets mostly meant solid rockets, and solid rockets were little changed from mediaeval China: tubes packed with this or that variant of black powder which went fwoosh all at once when ignited. Nobody before Parsons saw an alternative to this. When faced with the need for a reliable, storable, long-duration burn propellant for Navy JATO boosters, he came up with the idea of castable solid propellant (initially based upon asphalt and potassium perchlorate), which could be poured as a liquid into a booster casing with a grain shape which permitted tailoring the duration and thrust profile of the motor to the mission requirements. Every single solid rocket motor used today employs this technology, and Jack Parsons, high school graduate and self-taught propulsion chemist, invented it all by himself.

On June 17th, 1952, an explosion destroyed a structure on Pasadena's Orange Grove Avenue where Jack Parsons had set up his home laboratory prior to his planned departure with his wife to Mexico. He said he had just one more job to do for his client, a company producing explosives for Hollywood special effects. Parsons was gravely injured and pronounced dead at the hospital.

The life of Jack Parsons was one which could only have occurred in the time and place he lived it. It was a time when a small band of outcasts could have seriously imagined building a rocket and travelling to the Moon; a time when the community they lived in was aboil with new religions, esoteric cults, and alternative lifestyles; and a time when an entirely new genre of fiction was exploring the ultimate limits of the destiny of humanity and its descendants. Jack swam in this sea and relished it. His short life (just 37 years) was lived in a time and place which has never existed before and likely will never exist again. The work he did, the people he influenced, and the consequences cast a long shadow still visible today (every time you see a solid rocket booster heave a launcher off the pad, its coruscant light, casting that shadow, is Jack Parsons' legacy). This is a magnificent account of a singular life which changed our world, and is commemorated on the rock next door. On the lunar far side the 40 kilometre diameter crater Parsons is named for the man who dreamt of setting foot, by rocketry or magick, upon that orb and, in his legacy, finally did with a big footprint indeed—more than eight times larger than the one named for that Armstrong fellow.

July 2012 Permalink

Raimondo, Justin. An Enemy of the State. Amherst, NY: Prometheus Books, 2000. ISBN 978-1-57392-809-0.
Had Murray Rothbard been a man of the Left, he would probably be revered today as one of the towering intellects of the twentieth century. Certainly, there was every reason from his origin and education to have expected him to settle on the Left: the child of Jewish immigrants from Poland and Russia, he grew up in a Jewish community in New York City where, as he later described it, the only question was whether one would join the Communist Party or settle for being a fellow traveller. He later remarked that, “I had two sets of Communist Party uncles and aunts, on both sides of my family.” While studying for his B.A., M.A., and Ph.D. in economics from Columbia University in the 1940s and '50s, he was immersed in a political spectrum which ranged from “Social Democrats on the ‘right’ to Stalinists on the left”.

Yet despite the political and intellectual milieu surrounding him, Rothbard followed his own compass, perhaps inherited in part from his fiercely independent father. From an early age, he came to believe individual liberty was foremost among values, and that based upon that single desideratum one could deduce an entire system of morality, economics, natural law, and governance which optimised the individual's ability to decide his or her own destiny. In the context of the times, he found himself aligned with the Old Right: the isolationist, small government, and hard money faction of the Republican Party which was, in the Eisenhower years, approaching extinction as “conservatives” acquiesced to the leviathan “welfare-warfare state” as necessary to combat the Soviet menace. Just as Rothbard began to put the foundations of the Old Right on a firm intellectual basis, the New Right of William F. Buckley and his “coven of ex-Communists” at National Review drove the stake through that tradition, one of the first among many they would excommunicate from the conservative cause as they defined it.

Rothbard was a disciple of Ludwig von Mises, and applied his ideas and those of other members of the Austrian school of economics to all aspects of economics, politics, and culture. His work, both scholarly and popular, is largely responsible for the influence of Austrian economics today. (Here is a complete bibliography of Rothbard's publications.)

Rothbard's own beliefs scarcely varied over his life, and yet as the years passed and the political tectonic plates shifted, he found himself aligned with the Old Right, the Ayn Rand circle (from which he quickly extricated himself after diagnosing the totalitarian tendencies of Rand and the cult-like nature of her followers), the nascent New Left (before it was taken over by communists), the Libertarian Party, the Cato Institute, and finally back to the New Old Right, with several other zigs and zags along the way. In each case, Rothbard embraced his new allies and threw himself into the cause, only to discover that they were more interested in factionalism, accommodation with corrupt power structures, or personal ambition than the principles which motivated him.

While Rothbard's scholarly publications alone dwarf those of many in the field, he was anything but an ivory tower academic. He revelled in the political fray, participating in campaigns, writing speeches and position papers, formulating strategy, writing polemics aimed at the general populace, and was present at the creation of several of the key institutions of the contemporary libertarian movement. Fully engaged in the culture, he wrote book and movie reviews, satire, and commentary on current events. Never discouraged by the many setbacks he experienced, he was always a “happy warrior”, looking at the follies of the society around him with amusement and commenting wittily about them in his writings. While eschewing grand systems and theories of history in favour of an entirely praxeology-based view of the social sciences (among which he counted economics, rejecting entirely the mathematically-intense work of pseudoscientists who believed one could ignore human action when analysing the aggregate behaviour of human actors), he remained ever optimistic that liberty would triumph in the end simply because it works better, and will inevitably supplant authoritarian schemes which constrain the human potential.

This is a well-crafted overview of Rothbard's life, work, and legacy by an author who knew and worked with Rothbard in the last two decades of his career. Other than a coruscating animus toward Buckley and his minions, it provides a generally even-handed treatment of the many allies and adversaries (often the same individuals at different times) with which Rothbard interacted over his career. Chapter 7 provides an overview and reading guide to Rothbard's magisterial History of Economic Thought, which is so much more—essentially a general theory of the social sciences—that you'll probably be persuaded to add it to your reading list.

April 2011 Permalink

Reeves, Richard. A Force of Nature. New York: W. W. Norton, 2008. ISBN 978-0-393-33369-5.
In 1851, the Crystal Palace Exhibition opened in London. It was a showcase of the wonders of industry and culture of the greatest empire the world had ever seen and attracted a multitude of visitors. Unlike present-day “World's Fair” boondoggles, it made money, and the profits were used to fund good works, including endowing scholarships for talented students from the far reaches of the Empire to study in Britain. In 1895, Ernest Rutherford, hailing from a remote area of New Zealand and a recent graduate of Canterbury College in Christchurch, won a scholarship to study at Cambridge. Upon learning of the award in a field of his family's farm, he threw his shovel in the air and exclaimed, “That's the last potato I'll ever dig.” It was.

When he arrived at Cambridge, he could hardly have been more out of place. He and another scholarship winner were the first and only graduate students admitted who were not Cambridge graduates. Cambridge, at the end of the Victorian era, was a clubby, upper-class place, where even those pursuing mathematics were steeped in the classics, hailed from tony public schools, and spoke with refined accents. Rutherford, by contrast, was a rough-edged colonial, bursting with energy and ambition. He spoke with a bizarre accent (which he retained all his life) that blended the Scottish brogue of his ancestors with the curious intonations of the antipodes. He was anything but the ascetic intellectual so common at Cambridge—he had been a fierce competitor at rugby, spoke about three times as loud as was necessary (many years later, when the eminent Rutherford was tapped to make a radio broadcast from Cambridge, England to Cambridge, Massachusetts, one of his associates asked, “Why use radio?”), and spoke vehemently on any and all topics (again, long afterward, when a ceremonial portrait was unveiled, his wife said she was surprised the artist had caught him with his mouth shut).

But it quickly became apparent that this burly, loud New Zealander was extraordinarily talented, and under the leadership of J.J. Thomson, he began original research in radio, but soon abandoned the field to pursue atomic research, which Thomson had pioneered with his discovery of the electron. In 1898, with Thomson's recommendation, Rutherford accepted a professorship at McGill University in Montreal. While North America was considered a scientific backwater in the era, the generous salary would allow him to marry his fiancée, whom he had left behind in New Zealand until he could find a position which would support them.

At McGill, he and his collaborator Frederick Soddy, studying the radioactive decay of thorium, discovered that radioactive decay was characterised by a unique half-life, and was composed of two distinct components which he named alpha and beta radiation. He later named the most penetrating product of nuclear reactions gamma rays. Rutherford was the first to suggest, in 1902, that radioactivity resulted from the transformation of one chemical element into another—something previously thought impossible.

In 1907, Rutherford was offered, and accepted, a chair of physics at the University of Manchester where, with greater laboratory resources than he had had in Canada, he pursued the nature of the products of radioactive decay. There, by a clever experiment, he identified alpha radiation (or particles, as we now call them) with the nuclei of helium atoms—nuclear decay was heavy atoms being spontaneously transformed into a lighter element and a helium nucleus.

Based upon this work, Rutherford won the Nobel Prize in Chemistry in 1908. As a person who considered himself first and foremost an experimental physicist and who was famous for remarking, “All science is either physics or stamp collecting”, winning the Chemistry Nobel had to feel rather odd. He quipped that while he had observed the transmutation of elements in his laboratory, no transmutation was as startling as discovering he had become a chemist. Still, physicist or chemist, his greatest work was yet to come.

In 1909, along with Hans Geiger (later to invent the Geiger counter) and Ernest Marsden, he conducted an experiment where high-energy alpha particles were directed against a very thin sheet of gold foil. The expectation was that few would be deflected and those only slightly. To the astonishment of the experimenters, some alpha particles were found to be deflected through large angles, some bouncing directly back toward the source. Geiger exclaimed, “It was almost as incredible as if you fired a 15-inch [battleship] shell at a piece of tissue paper and it came back and hit you.” It took two years before Rutherford fully understood and published what was going on, and it forever changed the concept of the atom. The only way to explain the scattering results was to replace the early model of the atom with one in which a diffuse cloud of negatively charged electrons surrounded a tiny, extraordinarily dense, positively charged nucleus (that word was not used until 1913). This experimental result fed directly into the development of quantum theory and the elucidation of the force which bound the particles in the nucleus together, which was not fully understood until more than six decades later.

In 1919 Rutherford returned to Cambridge to become the head of the Cavendish Laboratory, the most prestigious position in experimental physics in the world. Continuing his research with alpha emitters, he discovered that bombarding nitrogen gas with alpha particles would transmute nitrogen into oxygen, liberating a proton (the nucleus of hydrogen). Rutherford simultaneously was the first to deliberately transmute one element into another, and also to discover the proton. In 1921, he predicted the existence of the neutron, completing the composition of the nucleus. The neutron was eventually discovered by his associate, James Chadwick, in 1932.

Rutherford's discoveries, all made with benchtop apparatus and a small group of researchers, were the foundation of nuclear physics. He not only discovered the nucleus, he also found or predicted its constituents. He was the first to identify natural nuclear transmutation and the first to produce it on demand in the laboratory. As a teacher and laboratory director his legacy was enormous: eleven of his students and research associates went on to win Nobel prizes. His students John Cockcroft and Ernest Walton built the first particle accelerator and ushered in the era of “big science”. Rutherford not only created the science of nuclear physics, he was the last person to make major discoveries in the field by himself, alone or with a few collaborators, and with simple apparatus made in his own laboratory.

In the heady years between the wars, there were, in the public mind, two great men of physics: Einstein the theoretician and Rutherford the experimenter. (This perception may have understated the contributions of the creators of quantum mechanics, but they were many and less known.) Today, we still revere Einstein, but Rutherford is less remembered (except in New Zealand, where everybody knows his name and achievements). And yet there are few experimentalists who have discovered so much in their lifetimes, with so little funding and the simplest apparatus. Rutherford, that boisterous, loud, and restless colonial, figured out much of what we now know about the atom, largely by himself, through a multitude of tedious experiments which often failed, and he should rightly be regarded as a pillar of 20th century physics.

This is the thousandth book to appear since I began to keep the reading list in January 2001.

February 2015 Permalink

Roberts, Andrew. Churchill: Walking with Destiny. New York: Viking, 2018. ISBN 978-1-101-98099-6.
At the point that Andrew Roberts sat down to write a new biography of Winston Churchill, there were a total of 1009 biographies of the man in print, examining every aspect of his life from a multitude of viewpoints. Works include the encyclopedic three-volume The Last Lion (January 2013) by William Manchester and Paul Reid, and Roy Jenkins' single-volume Churchill: A Biography (February 2004), which concentrates on Churchill's political career. Such books may seem to many readers to say just about everything about Churchill there is to be said from the abundant documentation available for his life. What could a new biography possibly add to the story?

As the author demonstrates in this magnificent and weighty book (1152 pages, 982 of main text), a great deal. Earlier Churchill biographers laboured under the constraint that many of Churchill's papers from World War II and the postwar era remained under the seal of official secrecy. These included the extensive notes taken by King George VI during his weekly meetings with the Prime Minister during the war and recorded in his personal diary. The classified documents were made public only fifty years after the end of the war, and the King's wartime diaries were made available to the author by special permission granted by the King's daughter, Queen Elizabeth II.

The royal diaries are an invaluable source on Churchill's candid thinking as the war progressed. As a firm believer in constitutional monarchy, Churchill withheld nothing in his discussions with the King. Even the deepest secrets, such as the breaking of the German codes, the information obtained from decrypted messages, and atomic secrets, which were shared with only a few of the most senior and trusted government officials, were discussed in detail with the King. Further, while Churchill was constantly on stage trying to hold the Grand Alliance together, encourage Britons to stay in the fight, and advance his geopolitical goals which were often at variance with even the Americans, with the King he was brutally honest about Britain's situation and what he was trying to accomplish. Oddly, perhaps the best insight into Churchill's mind as the war progressed comes not from his own six-volume history of the war, but rather the pen of the King, writing only to himself. In addition, sources such as verbatim notes of the war cabinet, diaries of the Soviet ambassador to the U.K. during the 1930s through the war, and other recently-disclosed sources resulted in, as the author describes it, there being something new on almost every page.

The biography is written in an entirely conventional manner: the author eschews fancy stylistic tricks in favour of an almost purely chronological recounting of Churchill's life, flipping back and forth from personal life, British politics, the world stage and Churchill's part in the events of both the Great War and World War II, and his career as an author and shaper of opinion.

Winston Churchill was an English aristocrat, but not a member of the nobility. A direct descendant of John Churchill, the 1st Duke of Marlborough, his father, Lord Randolph Churchill, was the third son of the 7th Duke of Marlborough. As only the first son inherits the title, although Randolph bore the honorific “Lord”, he was a commoner and his children, including first-born Winston, received no title. Lord Randolph was elected to the House of Commons in 1874, the year of Winston's birth, and would serve until his death in 1895, having been Chancellor of the Exchequer, Leader of the House of Commons, and Secretary of State for India. His death, aged just forty-five (rumoured at the time to be from syphilis, but now attributed to a brain tumour, as his other symptoms were inconsistent with syphilis), along with the premature deaths of three aunts and uncles at early ages, convinced the young Winston his own life might be short and that if he wanted to accomplish great things, he had no time to waste.

In terms of his subsequent career, his father's early death might have been an unappreciated turning point in Winston Churchill's life. Had his father retired from the House of Commons prior to his death, he would almost certainly have been granted a peerage in return for his long service. When he subsequently died, Winston, as eldest son, would have inherited the title and hence not been entitled to serve in the House of Commons. It is thus likely that had his father not died while still an MP, the son would never have had the political career he did nor have become prime minister in 1940.

From a distinguished family, wealthy (by the standards of the average Briton, but not compared to the landed aristocracy or titans of industry and finance), ambitious, and seeking novelty and adventures to the point of recklessness, the young Churchill believed he was meant to accomplish great things in however many years Providence might grant him on Earth. In 1891, at the age of just 16, he confided to a friend,

I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later, at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to the one he had imagined in boyhood.

Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file dispatches as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures.

He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described along with the rest of the Mahdist War in his book, The River War. In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. Covering the conflict, he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book.

Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,

Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat.

Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,

A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure.

Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said, after Stalin's moves against Latvia, Lithuania, and Estonia in 1939, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.”

Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,

You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.

And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,

No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated.

Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,

Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity. The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together) and won the Nobel Prize in Literature for 1953, plus another five million words of public speeches. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any historian who ever lived.

Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom.

Here is an Uncommon Knowledge interview with the author about Churchill and this biography.

This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.

May 2019 Permalink

Roberts, Siobhan. King of Infinite Space. New York: Walker and Company, 2006. ISBN 0-8027-1499-4.
Mathematics is often said to be a game for the young. The Fields Medal, the most prestigious prize in mathematics, is restricted to candidates 40 years or younger. While many older mathematicians continue to make important contributions in writing books, teaching, administration, and organising and systematising topics, most work on the cutting edge is done by those in their twenties and thirties. The life and career of Donald Coxeter (1907–2003), the subject of this superb biography, is a stunning and inspiring counter-example. Coxeter's publications (all of which are listed in an appendix to this book) span a period of eighty years, with the last, a novel proof of Beecroft's theorem, completed just a few days before his death.

Coxeter was one of the last generation to be trained in classical geometry, and he continued to do original work and make striking discoveries in that field for decades after most other mathematicians had abandoned it as mined out or insufficiently rigorous, and it had disappeared from the curriculum not only at the university level but, to a great extent, in secondary schools as well. Coxeter worked in an intuitive, visual style, frequently building models and kaleidoscopes and enriching his publications with numerous diagrams. Over the many decades his career spanned, mathematical research (at least in the West) seemed to be climbing an endless stairway toward ever greater abstraction and formalism, epitomised in the work of the Bourbaki group. (When the unthinkable happened and a diagram was included in a Bourbaki book, fittingly it was a Coxeter diagram.) Coxeter inspired an increasingly fervent group of followers who preferred to discover new structures and symmetry using the mind's powers of visualisation. Some, including Douglas Hofstadter (who contributed the foreword to this work) and John Horton Conway (who figures prominently in the text), were inspired by Coxeter to carry on his legacy. Coxeter's interactions with M. C. Escher and Buckminster Fuller are explored in two chapters, and illustrate how the purest of mathematics can both inspire and be enriched by art and architecture (or whatever it was that Fuller did, which Coxeter himself wasn't too sure about—on one occasion he walked out of a new-agey Fuller lecture, noting in his diary “Out, disgusted, after ¾ hour” [p. 178]).

When the “new math” craze took hold in the 1960s, Coxeter immediately saw it for the disaster it was to become and involved himself in efforts to preserve the intuitive and visual in mathematics education. Unfortunately, the power of a fad promoted by purists is difficult to counter, and a generation and more paid the price of which Coxeter warned. There is an excellent discussion at the end of chapter 9 of the interplay between the intuitive and formalist approaches to mathematics. Many modern mathematicians seem to have forgotten that one proves theorems in order to demonstrate that the insights obtained by intuition are correct. Intuition without rigour can lead to error, but rigour without intuition can blind one to beautiful discoveries in the mathematical objects which stand behind the austere symbols on paper.

The main text of this 400 page book is only 257 pages. Eight appendices expand upon technical topics ranging from phyllotaxis to the quilting of toilet paper and include a complete bibliography of Coxeter's publications. (If you're intrigued by “Morley's Miracle”, a novel discovery in the plane geometry of triangles made as late as 1899, check out this page and Java applet which lets you play with it interactively. Curiously, a diagram of Morley's theorem appears on the cover of Coxeter's and Greitzer's Geometry Revisited, but is misdrawn—the trisectors are inexact and the inner triangle is therefore not equilateral.) Almost 90 pages of endnotes provide both source citations (including Web links to MathWorld for technical terms and the University of St. Andrews biographical archive for mathematicians named in the text) and detailed amplification of numerous details. There are a few typos and factual errors (for example, on p. 101 the planets Uranus and Pluto are said to have been discovered in the nineteenth century when, in fact, neither was: Herschel discovered Uranus in 1781 and Tombaugh Pluto in 1930), but none are central to the topic nor detract from this rewarding biography of an admirable and important mathematician.
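For readers without Java, Morley's theorem is easy to verify numerically. The following short sketch (my own illustration, not code from the book or the applet) constructs, for each side of an arbitrary triangle, the two angle trisectors adjacent to that side, intersects them, and confirms that the three intersection points form an equilateral triangle:

    import numpy as np

    def interior_angle(P, Q, R):
        # Interior angle of the triangle at vertex P.
        u, v = Q - P, R - P
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def rotate(v, theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

    def turn_sign(P, Q, R):
        # +1 if R lies counter-clockwise of ray PQ, -1 otherwise.
        u, w = Q - P, R - P
        return 1.0 if u[0] * w[1] - u[1] * w[0] > 0 else -1.0

    def morley_vertex(P, Q, R):
        # Intersection of the two angle trisectors adjacent to side PQ.
        dP = rotate((Q - P) / np.linalg.norm(Q - P),
                    turn_sign(P, Q, R) * interior_angle(P, Q, R) / 3)
        dQ = rotate((P - Q) / np.linalg.norm(P - Q),
                    turn_sign(Q, P, R) * interior_angle(Q, P, R) / 3)
        t, _ = np.linalg.solve(np.column_stack([dP, -dQ]), Q - P)
        return P + t * dP

    # Any non-degenerate triangle will do; change the vertices and the result still holds.
    A, B, C = np.array([0.3, 2.0]), np.array([0.0, 0.0]), np.array([4.0, 0.0])
    M = [morley_vertex(B, C, A), morley_vertex(C, A, B), morley_vertex(A, B, C)]
    print([np.linalg.norm(M[i] - M[(i + 1) % 3]) for i in range(3)])  # three equal side lengths

Changing the triangle's vertices to anything non-degenerate leaves the three printed side lengths equal, which is exactly Morley's “miracle”.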

February 2007 Permalink

Robinson, Andrew. The Last Man Who Knew Everything. New York: Pi Press, 2006. ISBN 0-13-134304-1.
The seemingly inexorable process of specialisation in the sciences and other intellectual endeavours (the breaking down of knowledge into categories so narrow and yet so deep that their mastery at the professional level seems to demand forsaking anything beyond a layman's competence in other, even related, fields) is discouraging to those who believe that some of the greatest insights come from the cross-pollination of concepts from subjects previously considered unrelated. The twentieth century was inhospitable to polymaths—even within a single field such as physics, ever narrower specialities proliferated, with researchers interacting little with those working in other areas. The divide between theorists and experimentalists has become almost impassable; it is difficult to think of a single individual who achieved greatness in both since Fermi, and he was born in 1901.

As more and more becomes known, it is inevitable that it is increasingly difficult to cram it all into one human skull, and the investment in time to master a variety of topics becomes disproportionate to the length of a human life, especially since breakthrough science is generally the province of the young. And yet, one wonders whether the conventional wisdom that hyper-specialisation is the only way to go and that anybody who aspires to broad and deep understanding of numerous subjects must necessarily be a dilettante worthy of dismissal, might underestimate the human potential and discourage those looking for insights available only by synthesising the knowledge of apparently unrelated disciplines. After all, mathematicians have repeatedly discovered deep connections between topics thought completely unrelated to one another; why shouldn't this be the case in the sciences, arts, and humanities as well?

The life of Thomas Young (1773–1829) is an inspiration to anybody who seeks to understand as much as possible about the world in which they live. The eldest of ten children of a middle class Quaker family in southwest England (his father was a cloth merchant and later a banker), from childhood he immersed himself in every book he could lay his hands upon, and in his seventeenth year alone, he read Newton's Principia and Opticks, Blackstone's Commentaries, Linnaeus, Euclid's Elements, Homer, Virgil, Sophocles, Cicero, Horace, and many other classics in the original Greek or Latin. At age 19 he presented a paper on the mechanism by which the human eye focuses on objects at different distances, and on its merit was elected a Fellow of the Royal Society a week after his 21st birthday.

Young decided upon a career in medicine and studied in Edinburgh, Göttingen, and Cambridge, continuing his voracious reading and wide-ranging experimentation in whatever caught his interest, then embarked upon a medical practice in London and the resort town of Worthing, while pursuing his scientific investigations and publications, and popularising science in public lectures at the newly founded Royal Institution.

The breadth of Young's interests and contributions has caused some biographers, both contemporary and especially more recent, to dismiss him as a dilettante and dabbler, but his achievements give the lie to this. Had the Nobel Prize existed in his era, he would almost certainly have won two (Physics for the wave theory of light, explanation of the phenomena of diffraction and interference [including the double slit experiment], and birefringence and polarisation; plus Physiology or Medicine for the explanation of the focusing of the eye [based, in part, upon some cringe-inducing experiments he performed upon himself], the trireceptor theory of colour vision, and the discovery of astigmatism), and possibly three (Physics again, for the theory of elasticity of materials: “Young's modulus” is a standard part of the engineering curriculum to this day).
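Two of those results can still be written down in a line apiece. In modern notation (the standard textbook forms, not expressions quoted from this book), the fringe spacing in the double slit experiment and Young's modulus are

    \Delta y = \frac{\lambda L}{d}, \qquad E = \frac{\sigma}{\varepsilon} = \frac{F/A}{\Delta L / L_0}

where λ is the wavelength of the light, L the distance from the slits to the screen, d the separation of the slits, σ the applied stress, and ε the resulting strain.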

But he didn't leave it at that. He was fascinated by languages since childhood, and in addition to the customary Latin and Greek, by age thirteen had taught himself Hebrew and read thirty chapters of the Hebrew Bible all by himself. In adulthood he undertook an analysis of four hundred different languages (pp. 184–186) ranging from Chinese to Cherokee, with the goal of classifying them into distinct families. He coined the name “Indo-European” for the group to which most Western languages belong. He became fascinated with the enigma of Egyptian hieroglyphics, and his work on the Rosetta Stone provided the first breakthrough and the crucial insight that hieroglyphic writing was a phonetic alphabet, not a pictographic language like Chinese. Champollion built upon Young's work in his eventual deciphering of hieroglyphics. Young continued to work on the fiendishly difficult demotic script, and was the first person since the fall of the Roman Empire to be able to read some texts written in it.

He was appointed secretary of the Board of Longitude and superintendent of the Nautical Almanac, and was instrumental in the establishment of a Southern Hemisphere observatory at the Cape of Good Hope. He consulted with the admiralty on naval architecture, with the House of Commons on the design for a replacement to the original London Bridge, and served as chief actuary for a London life insurance company and did original research on mortality in different parts of Britain.

Stereotypical characters from fiction might cause you to expect that such an intellect might be a recluse, misanthrope, obsessive, or seeker of self-aggrandisement. But no…, “He was a lively, occasionally caustic letter writer, a fair conversationalist, a knowledgeable musician, a respectable dancer, a tolerable versifier, an accomplished horseman and gymnast, and throughout his life, a participant in the leading society of London and, later, Paris, the intellectual capitals of his day” (p. 12). Most of the numerous authoritative articles he contributed to the Encyclopedia Britannica, including “Bridge”, “Carpentry”, “Egypt”, “Languages”, “Tides”, and “Weights and measures”, as well as 23 biographies, were published anonymously. And he was happily married from age 31 until the end of his life.

Young was an extraordinary person, but he never seems to have thought of himself as exceptional in any way other than his desire to understand how things worked and his willingness to invest as much time and effort as it took to arrive at the goals he set for himself. Reading this book reminded me of a remark by Admiral Hyman G. Rickover, “The only way to make a difference in the world is to put ten times as much effort into everything as anyone else thinks is reasonable. It doesn't leave any time for golf or cocktails, but it gets things done.” Young's life is a testament to just how many things one person can get done in a lifetime, enjoying every minute of it and never losing balance, by judicious application of this principle.

March 2007 Permalink

Ronald Reagan Presidential Foundation. Ronald Reagan: An American Hero. New York: Dorling Kindersley, 2001. ISBN 0-7894-7992-3.
This is basically a coffee-table book. There are a multitude of pictures, many you're unlikely to have seen before, but the text is sparse and lightweight. If you're looking for a narrative, try Peggy Noonan's When Character Was King (March 2002).

July 2004 Permalink

Rumsfeld, Donald. Known and Unknown. New York: Sentinel, 2011. ISBN 978-1-59523-067-6.
In his career in public life and the private sector, spanning more than half a century, the author was:

  • A Naval aviator, reaching the rank of Captain.
  • A Republican member of the House of Representatives from Illinois spanning the Kennedy, Johnson, and Nixon administrations.
  • Director of the Office of Economic Opportunity and the Economic Stabilization Program in the Nixon administration, both agencies he voted against creating while in Congress.
  • Ambassador to NATO in Brussels.
  • White House Chief of Staff for Gerald Ford.
  • Secretary of Defense in the Ford administration, the youngest person to have ever held that office.
  • CEO of G. D. Searle, a multinational pharmaceutical company, which he arranged to be sold to Monsanto.
  • Special Envoy to the Middle East during the Reagan administration.
  • National chairman of Bob Dole's 1996 presidential campaign.
  • Secretary of Defense in the George W. Bush administration, the oldest person to have ever held that office.

This is an extraordinary trajectory through life, and Rumsfeld's memoir is correspondingly massive: 832 pages in the hardcover edition. The parts which will be most extensively dissected and discussed are those dealing with his second stint at DOD, and the contentious issues regarding the Afghanistan and Iraq wars, treatment of detainees, interrogation methods, and other issues which made him a lightning rod during the administration of Bush fils. While it was interesting to see his recollection of how these consequential decisions were made, documented by extensive citations of contemporary records, I found the overall perspective of how decision-making was done over his career most enlightening. Nixon, Ford, and Bush all had very different ways of operating their administrations, all of which were very unlike those of an organisation such as NATO or a private company, and Rumsfeld, who experienced all of them in a senior management capacity, has much wisdom to share about what works and what doesn't, and how one must adapt management style and the flow of information to the circumstances which obtain in each structure.

Many supportive outside observers of the G. W. Bush presidency were dismayed at how little effort was made by the administration to explain its goals, strategy, and actions to the public. Certainly, the fact that it was confronted with a hostile legacy media which often seemed to cross the line from being antiwar to rooting for the other side didn't help, but Rumsfeld, the consummate insider, felt that the administration forfeited opportunity after opportunity to present its own case, even by releasing source documents which would in no way compromise national security but show the basis upon which decisions were made in the face of the kind of ambiguous and incomplete information which confronts executives in all circumstances.

The author's Web site provides a massive archive of source documents cited in the book, along with a copy of the book's end notes which links to them. Authors, this is how it's done! A transcript of an extended interview with the author is available; it was hearing this interview which persuaded me to buy the book. Having read it, I recommend it to anybody who wishes to comprehend how difficult it is to be in a position where one must make decisions in a fog of uncertainty, knowing the responsibility for them will rest solely with the decider, and that not to decide is a decision in itself which may have even more dire consequences. As much as Bush's national security team was reviled at the time, one had the sense that adults were in charge.

A well-produced Kindle edition is available, with the table of contents, footnotes, and source citations all properly linked to the text. One curiosity in the Kindle edition is that in the last 40% of the book the word “after” is capitalised everywhere it appears, even in the middle of a sentence. It seems that somebody in the production process accidentally hit “global replace” when attempting to fix a single instance. While such fat-finger errors happen all the time whilst editing documents, it's odd that a prestigious publisher (Sentinel is a member of the Penguin Group) would not catch such a blunder in a high profile book which went on to top the New York Times best seller list.

April 2011 Permalink

Satrapi, Marjane. Persepolis: The Story of a Childhood. New York: Pantheon Books, [2000, 2001] 2003. ISBN 0-375-71457-X.
This story is told in comic strip form, but there's nothing funny about it. Satrapi was a 10 year old girl in Tehran when the revolution overthrew the Shah of Iran. Her well-off family detested the Shah, had several relatives active in leftist opposition movements, and supported the revolution, but were horrified when the mullahs began to turn the clock back to the middle ages. The terror and mass slaughter of the Iran/Iraq war are seen through the eyes of a young girl, along with the paranoia and repression of the Islamic regime. At age 14, her parents sent her to Vienna to escape Iran; she now lives and works in Paris. Persepolis was originally published in French in two volumes (1, 2). This edition combines the two volumes, with Satrapi's original artwork re-lettered with the English translation.

November 2004 Permalink

Satrapi, Marjane. Persepolis 2: The Story of a Return. New York: Pantheon Books, [2002, 2003] 2004. ISBN 0-375-42288-9.
Having escaped from Iran in the middle of Iran/Iraq war to secular, decadent Austria, Marjane Satrapi picks up her comic book autobiography with the culture shock of encountering the amoral West. It ends badly. She returns to Tehran in search of her culture, and finds she doesn't fit there either, eventually abandoning a failed marriage to escape to the West, where she has since prospered as an author and illustrator. This intensely personal narrative brings home both why the West is hated in much of the world, and why, at the same time, so many people dream of escaping the tyranny of dull conformity for the light of liberty and reason in the West. Like Persepolis: The Story of a Childhood (November 2004), this is a re-lettered English translation of the original French edition published in two volumes: (3, 4).

February 2005 Permalink

Scoles, Sarah. Making Contact. New York: Pegasus Books, 2017. ISBN 978-1-68177-441-1.
There are few questions in our scientific inquiry into the universe and our place within it more profound than “are we alone?” As we have learned more about our world and the larger universe in which it exists, this question has become ever more fascinating. We now know that our planet, once thought the centre of the universe, is but one of what may be hundreds of billions of planets in our own galaxy, which is one of hundreds of billions of galaxies in the observable universe. Not long ago, we knew only of the planets in our own solar system, and some astronomers believed planetary systems were rare, perhaps formed by freak encounters between two stars following their orbits around the galaxy. But now, thanks to exoplanet hunters and, especially, the Kepler spacecraft, we know that it's “planets, planets, everywhere”—most stars have planets, and many stars have planets where conditions may be suitable for the origin of life.

If this be the case, then when we gaze upward at the myriad stars in the heavens, might there be other eyes (or whatever sense organs they use for the optical spectrum) looking back from planets of those stars toward our Sun, wondering if they are alone? Many are the children, and adults, who have asked themselves that question when standing under a pristine sky. For the ten year old Jill Tarter, it set her on a path toward a career which has been almost coterminous with humanity's efforts to discover communications from extraterrestrial civilisations—an effort which continues today, benefitting from advances in technology unimagined when she undertook the quest.

World War II had seen tremendous advancements in radio communications, in particular the short wavelengths (“microwaves”) used by radar to detect enemy aircraft and submarines. After the war, this technology provided the foundation for the new field of radio astronomy, which expanded astronomers' window on the universe from the traditional optical spectrum into wavelengths that revealed phenomena never before observed nor, indeed, imagined, and hinted at a universe which was much larger, complicated, and violent than previously envisioned.

In 1959, Philip Morrison and Giuseppe Cocconi published a paper in Nature in which they calculated that using only technologies and instruments already existing on the Earth, intelligent extraterrestrials could send radio messages across the distances to the nearby stars, and that these messages could be received, detected, and decoded by terrestrial observers. This was the origin of SETI—the Search for Extraterrestrial Intelligence. In 1960, Frank Drake used a radio telescope to search for signals from two nearby star systems; he heard nothing.

As they say, absence of evidence is not evidence of absence, and this is acutely the case in SETI. First of all, consider that you must first decide what kind of signal aliens might send. If it's something which can't be distinguished from natural sources, there's little hope you'll be able to tease it out of the cacophony which is the radio spectrum. So we must assume they're sending something that doesn't appear natural. But what is the variety of natural sources? There's a dozen or so Ph.D. projects just answering that question, including some surprising discoveries of natural sources nobody imagined, such as pulsars, which were sufficiently strange that when first observed they were called “LGM” sources for “Little Green Men”. On what frequency are they sending (in other words, where do we have to turn our dial to receive them, for those geezers who remember radios with dials)? The most efficient signals will be those with a very narrow frequency range, and there are billions of possible frequencies the aliens might choose. We could be pointed in the right place, at the right time, and simply be tuned to the wrong station.

Then there's that question of “the right time”. It would be absurdly costly to broadcast a beacon signal in all directions at all times: that would require energy comparable to that emitted by a star (which, if you think about it, does precisely that). So it's likely that any civilisation with energy resources comparable to our own would transmit in a narrow beam to specific targets, switching among them over time. If we didn't happen to be listening when they were sending, we'd never know they were calling.

If you put all of these constraints together, you come up with what's called an “observational phase space”—a multidimensional space of frequency, intensity, duration of transmission, angular extent of transmission, bandwidth, and other parameters which determine whether you'll detect the signal. And that assumes you're listening at all, which depends upon people coming up with the money to fund the effort and pursue it over the years.
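To get a feel for just how large that phase space is, here is a deliberately crude back-of-the-envelope sketch (my own illustration, not a calculation from the book); the 1–10 GHz “microwave window”, the 1 Hz channel width, and the beam size are all assumed round numbers, and time, polarisation, and signal strength are ignored entirely:

    # Rough size of a radio SETI search grid: illustrative numbers only.
    microwave_window_hz = 10e9 - 1e9   # ~1-10 GHz, the quietest part of the radio sky
    channel_width_hz = 1.0             # a narrowband beacon might be ~1 Hz wide
    channels = microwave_window_hz / channel_width_hz

    beam_width_deg = 0.1               # assumed telescope beam, order of magnitude
    sky_area_deg2 = 41253              # the whole sky in square degrees
    pointings = sky_area_deg2 / beam_width_deg**2

    print(f"frequency channels: {channels:.1e}")                     # ~9e9
    print(f"sky pointings:      {pointings:.1e}")                    # ~4e6
    print(f"cells, ignoring time and power: {channels * pointings:.1e}")  # ~4e16

Even with time, signal strength, and bandwidth left out, the grid already has tens of quadrillions of cells to examine.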

It's beyond daunting. The space to be searched is so large, and our ability to search it so limited, that negative results, even after decades of observation, are equivalent to walking down to the seashore, sampling a glass of ocean water, and concluding that based on the absence of fish, the ocean contained no higher life forms. But suppose you find a fish? That would change everything.

Jill Tarter began her career in the mainstream of astronomy. Her Ph.D. research at the University of California, Berkeley was on brown dwarfs (bodies more massive than gas giant planets but too small to sustain the nuclear fusion reactions which cause stars to shine—a brown dwarf emits weakly in the infrared as it slowly radiates away the heat from the gravitational contraction which formed it). Her work was supported by a federal grant, which made her uncomfortable—what relevance did brown dwarfs have to those compelled to pay taxes to fund investigating them? During her Ph.D. work, she was asked by a professor in the department to help with an aged computer she'd used in an earlier project. To acquaint her with the project, the professor asked her to read the Project Cyclops report. It was a conversion experience.

Project Cyclops was a NASA study conducted in 1971 on how to perform a definitive search for radio communications from intelligent extraterrestrials. Its report [18.2 Mb PDF], issued in 1972, remains the “bible” for radio SETI, although advances in technology, particularly in computing, have rendered some of its recommendations obsolete. The product of a NASA which was still conducting missions to the Moon, it was grandiose in scale, envisioning a large array of radio telescope dishes able to search for signals from stars up to 1000 light years in distance (note that this is still a tiny fraction of the stars in the galaxy, which is around 150,000 light years in diameter). The estimated budget for the project was between 6 and 10 billion dollars (multiply those numbers by around six to get present-day funny money) spent over a period of ten to fifteen years. The report cautioned that there was no guarantee of success during that period, and that the project should be viewed as a long-term endeavour with ongoing funding to operate the system and continue the search.

The Cyclops report arrived at a time when NASA was downsizing and scaling back its ambitions: the final three planned lunar landing missions had been cancelled in 1970, and production of additional Saturn V launch vehicles had been terminated the previous year. The budget climate wasn't hospitable to Apollo-scale projects of any description, especially those which wouldn't support lots of civil service and contractor jobs in the districts and states of NASA's patrons in congress. Unsurprisingly, Project Cyclops simply landed on the pile of ambitious NASA studies that went nowhere. But to some who read it, it was an inspiration. Tarter thought, “This is the first time in history when we don't just have to believe or not believe. Instead of just asking the priests and philosophers, we can try to find an answer. This is an old and important question, and I have the opportunity to change how we try to answer it.” While some might consider searching the sky for “little green men” frivolous and/or absurd, to Tarter this, not the arcana of brown dwarfs, was something worthy of support, and of her time and intellectual effort, “something that could impact people's lives profoundly in a short period of time.”

The project to which Tarter had been asked to contribute, Project SERENDIP (a painful acronym of Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) was extremely modest compared to Cyclops. It had no dedicated radio telescopes at all, nor even dedicated time on existing observatories. Instead, it would “piggyback” on observations made for other purposes, listening to the feed from the telescope with an instrument designed to detect the kind of narrow-band beacons envisioned by Cyclops. To cope with the problem of not knowing the frequency on which to listen, the receiver would monitor 100 channels simultaneously. Tarter's job was programming the PDP 8/S computer to monitor the receiver's output and search for candidate signals. (Project SERENDIP is still in operation today, employing hardware able to simultaneously monitor 128 million channels.)
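The heart of such an instrument, from the 100-channel analyser Tarter programmed to today's 128-million-channel successors, is splitting the telescope's output into many narrow frequency channels and flagging any channel which stands far above the noise. Here is a minimal sketch of the idea (my own illustration, using a plain FFT as the channelizer; the numbers are arbitrary and this is not SERENDIP's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated receiver output: Gaussian noise plus one weak narrowband "beacon".
    n_samples = 2**20                 # about a million samples of complex baseband data
    beacon_bin = 123_456              # which FFT channel hides the signal (my choice)
    t = np.arange(n_samples)
    noise = rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)
    beacon = 0.05 * np.exp(2j * np.pi * beacon_bin * t / n_samples)
    voltage = noise + beacon          # the beacon is invisible in the raw time series

    # Channelize: an n-point FFT gives n simultaneous frequency channels.
    spectrum = np.abs(np.fft.fft(voltage))**2
    mean, std = spectrum.mean(), spectrum.std()

    # Flag channels far above the noise floor as candidate signals.
    candidates = np.nonzero(spectrum > mean + 10 * std)[0]
    print(candidates)                 # should recover [123456]

The beacon is hopelessly buried in the raw voltage samples, but after channelization all of its power lands in a single bin and stands out by orders of magnitude above the noise.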

From this humble start, Tarter's career direction was set. All of her subsequent work was in SETI. It would be a roller-coaster ride all the way. In 1975, NASA had started a modest study to research (but not build) technologies for microwave SETI searches. In 1978, the program came into the sights of Senator William Proxmire, who bestowed upon it his “Golden Fleece” award. The program initially survived his ridicule, but in 1982 its funding was zeroed out of NASA's budget. Carl Sagan personally intervened with Proxmire, and in 1983 the funding was reinstated, continuing work on a more capable spectral analyser which could be used with existing radio telescopes.

Buffeted by the start-stop support from NASA and encouraged by Hewlett-Packard executive Bernard Oliver, a supporter of SETI from its inception, Tarter decided that SETI needed its own institutional home, one dedicated to the mission and able to seek its own funding independent of the whims of congressmen and bureaucrats. In 1984, the SETI Institute was incorporated in California. Initially funded by Oliver, over the years major contributions have been made by technology moguls including William Hewlett, David Packard, Paul Allen, Gordon Moore, and Nathan Myhrvold. The SETI Institute receives no government funding whatsoever, although some researchers in its employ, mostly those working on astrobiology, exoplanets, and other topics not directly related to SETI, are supported by research grants from NASA and the National Science Foundation. Fund raising was a skill which did not come naturally to Tarter, but it was mission critical, and so she mastered the art. Today, the SETI Institute is considered one of the most savvy privately-funded research institutions, both in seeking large donations and in grass-roots fundraising.

By the early 1990s, it appeared the pendulum had swung once again, and NASA was back in the SETI game. In 1992, a program was funded to conduct a two-pronged effort: a targeted search of 800 nearby stars, and an all-sky survey looking for stronger beacons. Both would employ what were then state-of-the-art spectrum analysers able to monitor 15 million channels simultaneously. After just a year of observations, congress once again pulled the plug. The SETI Institute would have to go it alone.

Tarter launched Project Phoenix, to continue the NASA targeted search program using the orphaned NASA spectrometer hardware and whatever telescope time could be purchased from donations to the SETI Institute. In 1995, observations resumed at the Parkes radio telescope in Australia, and subsequently a telescope at the National Radio Astronomy Observatory in Green Bank, West Virginia, and the 300 metre dish at Arecibo Observatory in Puerto Rico. The project continued through 2004.

What should SETI look like in the 21st century? Much had changed since the early days in the 1960s and 1970s. Digital electronics and computers had increased in power a billionfold, not only making it possible to scan a billion channels simultaneously and automatically search for candidate signals, but to combine the signals from a large number of independent, inexpensive antennas (essentially, glorified satellite television dishes), synthesising the aperture of a huge, budget-busting radio telescope. With progress in electronics expected to continue in the coming decades, any capital investment in antenna hardware would yield an exponentially growing science harvest as the ability to analyse its output grew over time. But to take advantage of this technological revolution, SETI could no longer rely on piggyback observations, purchased telescope time, or allocations at the whim of research institutions: “SETI needs its own telescope”—one optimised for the mission and designed to benefit from advances in electronics over its lifetime.

In a series of meetings from 1998 to 2000, the specifications of such an instrument were drawn up: 350 small antennas, each 6 metres in diameter, independently steerable (and thus able to be used all together, or in segments to simultaneously observe different targets), with electronics to combine the signals, providing an effective aperture of 900 metres with all dishes operating. With initial funding from Microsoft co-founder Paul Allen (and with his name on the project, the Allen Telescope Array), the project began construction in 2004. In 2007, observations began with the first 42 dishes. By that time, Paul Allen had lost interest in the project, and construction of additional dishes was placed on hold until a new benefactor could be found. In 2011, a funding crisis caused the facility to be placed in hibernation, and the observatory was sold to SRI International for US$ 1. Following a crowdfunding effort led by the SETI Institute, the observatory was re-opened later that year, and continues operations to this date. No additional dishes have been installed: current work concentrates on upgrading the electronics of the existing antennas to increase sensitivity.
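A rough sketch of why an array of small dishes pays off (my own arithmetic, not from the book; I assume observations near the 21 cm hydrogen line and read the 900 metre figure as the overall extent of the array, which sets its angular resolution):

    import numpy as np

    # Figures from the review: 350 six-metre dishes, combined electronically into
    # an array whose overall extent acts like a 900 metre aperture for resolution.
    n_dishes, dish_d, array_extent = 350, 6.0, 900.0
    wavelength = 0.21                  # 21 cm hydrogen line, a traditional SETI band (my assumption)

    collecting_area = n_dishes * np.pi * (dish_d / 2)**2
    equivalent_single_dish = 2 * np.sqrt(collecting_area / np.pi)

    def resolution_arcsec(aperture_m):
        # Diffraction-limited beam width, 1.22 * lambda / D, in arcseconds.
        return np.degrees(1.22 * wavelength / aperture_m) * 3600

    print(f"collecting area: {collecting_area:,.0f} m^2 "
          f"(a single dish about {equivalent_single_dish:.0f} m across)")
    print(f"resolution of one 6 m dish:    {resolution_arcsec(dish_d):,.0f} arcsec")
    print(f"resolution of the 900 m array: {resolution_arcsec(array_extent):,.0f} arcsec")

The collecting area of 350 six-metre dishes equals that of a single dish roughly 112 metres across, while the resolution is set by the whole array rather than by any individual antenna, and the expensive intelligence lives in electronics which keep getting cheaper.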

Jill Tarter retired as co-director of the SETI Institute in 2012, but remains active in its scientific, fundraising, and outreach programs. There has never been more work in SETI underway than at the present. In addition to observations with the Allen Telescope Array, the Breakthrough Listen project, funded at US$ 100 million over ten years by Russian billionaire Yuri Milner, is using thousands of hours of time on large radio telescopes, with a goal of observing a million nearby stars and the centres of a hundred galaxies. All data are available to the public for analysis. A new frontier, unimagined in the early days of SETI, is optical SETI. A pulsed laser, focused through a telescope of modest aperture, is able to easily outshine the Sun in a detector sensitive to its wavelength and pulse duration. In the optical spectrum, there's no need for fancy electronics to monitor a wide variety of wavelengths: all you need is a prism or diffraction grating. The SETI Institute has just successfully completed a US$ 100,000 Indiegogo campaign to crowdfund the first phase of the Laser SETI project, which has as its ultimate goal an all-sky, all-the-time search for short pulses of light which may be signals from extraterrestrials or new natural phenomena to which no existing astronomical instrument is sensitive.

People often ask Jill Tarter what it's like to spend your entire career looking for something and not finding it. But she, and everybody involved in SETI, always knew the search would not be easy, nor likely to succeed in the short term. The reward for engaging in it is being involved in founding a new field of scientific inquiry and inventing and building the tools which allow exploring this new domain. The search is vast, and to date we have barely scratched the surface. About all we can rule out, after more than half a century, is a Star Trek-like universe where almost every star system is populated by aliens chattering away on the radio. Today, the SETI enterprise, entirely privately funded and minuscule by the standards of “big science”, is strongly coupled to the exponential growth in computing power and hence, roughly doubles its ability to search around every two years.

The question “are we alone?” is one which has profound implications either way it is answered. If we discover one or more advanced technological civilisations (and they will almost certainly be more advanced than we—we've only had radio for a little more than a century, and there are stars and planets in the galaxy billions of years older than ours), it will mean it's possible to grow out of the daunting problems we face in the adolescence of our species and look forward to an exciting and potentially unbounded future. If, after exhaustive searches (which will take at least another fifty years of continued progress in expanding the search space), it looks like we're alone, then intelligent life is so rare that we may be its only exemplar in the galaxy and, perhaps, the universe. Then, it's up to us. Our destiny, and duty, is to ensure that this spark, lit within us, will never be extinguished.

September 2017 Permalink

Scott, David and Alexei Leonov with Christine Toomey. Two Sides of the Moon. London: Simon & Schuster, 2004. ISBN 0-7432-3162-7.
Astronaut David Scott flew on the Gemini 8 mission which performed the first docking in space, Apollo 9, the first manned test of the Lunar Module, and commanded the Apollo 15 lunar landing, the first serious scientific exploration of the Moon (earlier Apollo landing missions had far less stay time and, with no lunar rover, limited mobility, and hence were much more “land, grab some rocks, and scoot” exercises). Cosmonaut Alexei Leonov was the first to walk in space on Voskhod 2, led the training of cosmonauts for lunar missions and later the Salyut space station program, and commanded the Soviet side of the Apollo Soyuz Test Project in 1975. Had the Soviet Union won the Moon race, Leonov might well have been first to walk on the Moon. This book recounts the history of the space race as interleaved autobiographies of two participants from contending sides, from their training as fighter pilots ready to kill one another in the skies over Europe in the 1950s to Leonov's handshake in space with an Apollo crew in 1975. This juxtaposition works very well, and writer Christine Toomey (you're not a “ghostwriter” when your name appears on the title page and the principals effusively praise your efforts) does a marvelous job in preserving the engaging conversational style of a one-on-one interview, which is even more an achievement when one considers that she interviewed Leonov through an interpreter, then wrote his contributions in English which was translated to Russian for Leonov's review, with his comments in Russian translated back to English for incorporation in the text. A U.S. edition is scheduled for publication in October 2004.

August 2004 Permalink

Scurr, Ruth. Fatal Purity. London: Vintage Books, 2006. ISBN 0-09-945898-5.
In May 1791, Maximilien Robespierre, until not long before an obscure provincial lawyer from Arras in northern France, elected to the Estates General convened by Louis XVI in 1789, spoke before what had by then reconstituted itself as the National Assembly, engaged in debating the penal code for the new Constitution of France. Before the Assembly were a number of proposals by a certain Dr. Guillotin, among which the second was, “In all cases of capital punishment (whatever the crime), it shall be of the same kind—i.e. beheading—and it shall be executed by means of a machine.” Robespierre argued passionately against all forms of capital punishment: “A conqueror that butchers his captives is called barbaric. Someone who butchers a perverse child that he could disarm and punish seems monstrous.” (pp. 133–136)

Just two years later, Robespierre had become synonymous not only with the French Revolution but with the Terror it had spawned. Either at his direction, with his sanction, or under the summary arrest and execution without trial or appeal which he advocated, the guillotine claimed more than 2200 lives in Paris alone, 1376 between June 10th and July 27th of 1794, when Robespierre's power abruptly ended, along with the Terror, with his own date with the guillotine.

How did a mild-mannered provincial lawyer who defended the indigent and disadvantaged, amused himself by writing poetry, studied philosophy, and was universally deemed, even by his sworn enemies, to merit his sobriquet, “The Incorruptible”, become an archetypal monster of the modern age, a symbol of the darkness beneath the Enlightenment?

This lucidly written, well-argued, and meticulously documented book traces Robespierre's life from birth through downfall and execution at just age 36, and places his life in the context of the upheavals which shook France and to which, in his last few years, he contributed mightily. The author shows the direct link between Rousseau's philosophy, Robespierre's inflexible, whatever-the-cost commitment to implementing it, and its horrific consequences for France. Too many people forget that it was Rousseau who wrote in The Social Contract, “Now, as citizen, no man is judge any longer of the danger to which the law requires him to expose himself, and when the prince says to him: ‘It is expedient for the state that you should die’, then he should die…”. Seen in this light, the madness of Robespierre's reign is not the work of a madman, but of a rigorously rational application of a profoundly anti-human system of beliefs which some people persist in taking seriously even today.

A U.S. edition is available.

May 2007 Permalink

Segrè, Gino and Bettina Hoerlin. The Pope of Physics. New York: Henry Holt, 2016. ISBN 978-1-62779-005-5.
By the start of the 20th century, the field of physics had bifurcated into theoretical and experimental specialties. While theorists and experimenters were acquainted with the same fundamentals and collaborated, with theorists suggesting phenomena to be explored in experiments and experimenters providing hard data upon which theorists could build their models, rarely did one individual do breakthrough work in both theory and experiment. One outstanding exception was Enrico Fermi, whose numerous achievements seemed to jump effortlessly between theory and experiment.

Fermi was born in 1901 to a middle class family in Rome, the youngest of three children born in consecutive years. As was common at the time, Enrico and his brother Giulio were sent to be wet-nursed and raised by a farm family outside Rome and only returned to live with their parents when two and a half years old. His father was a division head in the state railway and his mother taught elementary school. Neither parent had attended university, but hoped all of their children would have the opportunity. All were enrolled in schools which concentrated on the traditional curriculum of Latin, Greek, and literature in those languages and Italian. Fermi was attracted to mathematics and science, but little instruction was available to him in those fields.

At age thirteen, the young Fermi made the acquaintance of Adolfo Amidei, an engineer who worked with his father. Amidei began to loan the lad mathematics and science books, which Fermi devoured—often working out solutions to problems which Amidei was unable to solve. Within a year, studying entirely on his own, he had mastered geometry and calculus. In 1915, Fermi bought a used book, Elementorum Physicæ Mathematica, at a flea market in Rome. Published in 1830 and written entirely in Latin, it was a 900 page compendium covering mathematical physics of that era. By that time, he was completely fluent in the language and the mathematics used in the abundant equations, and worked his way through the entire text. As the authors note, “Not only was Fermi the only twentieth-century physics genius to be entirely self-taught, he surely must be the only one whose first acquaintance with the subject was through a book in Latin.”

At sixteen, Fermi skipped the final year of high school, concluding it had nothing more to teach him, and with Amidei's encouragement, sat for a competitive examination for a place at the elite Scuola Normale Superiore, which provided a complete scholarship including room and board to the winners. He ranked first in all of the examinations and left home to study in Pisa. Despite his talent for and knowledge of mathematics, he chose physics as his major—he had always been fascinated by mechanisms and experiments, and looked forward to working with them in his career. Italy, at the time a leader in mathematics, was a backwater in physics. The university in Pisa had only one physics professor who, besides having already retired from research, had knowledge in the field not much greater than Fermi's own. Once again, this time within the walls of a university, Fermi would teach himself, taking advantage of the university's well-equipped library. He taught himself German and English in addition to Italian and French (in which he was already fluent) in order to read scientific publications. The library subscribed to the German journal Zeitschrift für Physik, one of the most prestigious sources for contemporary research, and Fermi was probably the only person to read it there. In 1922, after completing a thesis on X-rays and having already published three scientific papers, two on X-rays and one on general relativity (introducing what are now called Fermi coordinates, the first of many topics in physics which would bear his name), he received his doctorate in physics, magna cum laude. Just twenty-one, he had his academic credential, published work to his name, and the attention of prominent researchers aware of his talent. What he lacked was the prospect of a job in his chosen field.

Returning to Rome, Fermi came to the attention of Orso Mario Corbino, a physics professor and politician who had become a Senator of the Kingdom and had been appointed minister of public education. Corbino's ambition was to see Italy enter the top rank of physics research, and he saw in Fermi the kind of talent needed to achieve this goal. He arranged a scholarship so Fermi could study physics in one of the centres of research in northern Europe. Fermi chose Göttingen, Germany, a hotbed of work in the emerging field of quantum mechanics. Fermi was neither particularly happy nor notably productive during his eight months there, but was impressed with the German style of research and the intellectual ferment of the large community of German physicists. Henceforth, he published almost all of his research in either German or English, with a parallel paper submitted to an Italian journal. A second fellowship allowed him to spend 1924 in the Netherlands, working with Paul Ehrenfest's group at Leiden, deepening his knowledge of statistical and quantum mechanics.

Finally, when Fermi returned to Italy, Corbino and his colleague Antonio Garbasso found him a post as a lecturer in physics in Florence. The position paid poorly and had little prestige, but at least it was a step onto the academic ladder, and Fermi was happy to accept it. There, Fermi and his colleague Franco Rasetti did experimental work measuring the spectra of atoms under the influence of radio frequency fields. Their work was published in prestigious journals such as Nature and Zeitschrift für Physik.

In 1925, Fermi took up the problem of reconciling the field of statistical mechanics with the discovery by Wolfgang Pauli of the exclusion principle, a purely quantum mechanical phenomenon which restricts certain kinds of identical particles from occupying the same state at the same time. Fermi's paper, published in 1926, resolved the problem, creating what is now called Fermi-Dirac statistics (British physicist Paul Dirac independently discovered the phenomenon, but Fermi published first) for the particles now called fermions, which include all of the fundamental particles that make up matter. (Forces are carried by other particles called bosons, which go beyond the scope of this discussion.)
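The content of that 1926 paper can be summarised in a single formula, the Fermi-Dirac distribution for the mean occupation of a single-particle state of energy E at temperature T (the standard textbook form, not a quotation from the book):

    \bar{n}(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1}

Here μ is the chemical potential (the Fermi level) and k_B is Boltzmann's constant. The “+1” in the denominator, where Bose-Einstein statistics has “−1”, is Pauli's exclusion principle at work: the occupation of a state can never exceed one.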

This paper immediately elevated the twenty-five year old Fermi to the top tier of theoretical physicists. It provided the foundation for understanding of the behaviour of electrons in solids, and thus the semiconductor technology upon which all our modern computing and communications equipment is based. Finally, Fermi won what he had aspired to: a physics professorship in Rome. In 1928, he married Laura Capon, whom he had first met in 1924. The daughter of an admiral in the World War I Italian navy, she was a member of one of the many secular and assimilated Jewish families in Rome. She was less than impressed on first encountering Fermi:

He shook hands and gave me a friendly grin. You could call it nothing but a grin, for his lips were exceedingly thin and fleshless, and among his upper teeth a baby tooth too lingered on, conspicuous in its incongruity. But his eyes were cheerful and amused.

Both Laura and Enrico shared the ability to see things precisely as they were, then see beyond that to what they could become.

In Rome, Fermi became head of the mathematical physics department at the Sapienza University of Rome, which his mentor, Corbino, saw as Italy's best hope to become a world leader in the field. He helped Fermi recruit promising physicists, all young and ambitious. They gave each other nicknames: ecclesiastical in nature, befitting their location in Rome. Fermi was dubbed Il Papa (The Pope), not only due to his leadership and seniority, but because he had already developed a reputation for infallibility: when he made a calculation or expressed his opinion on a technical topic, he was rarely if ever wrong. Meanwhile, Mussolini was increasing his grip on the country. In 1929, he announced the appointment of the first thirty members of the Royal Italian Academy, with Fermi among the laureates. In return for a lifetime stipend which would put an end to his financial worries, he would have to join the Fascist party. He joined. He did not take the Academy seriously and thought its comic opera uniforms absurd, but appreciated the money.

By the 1930s, one of the major mysteries in physics was beta decay. When a radioactive nucleus decayed, it could emit one or more kinds of radiation: alpha, beta, or gamma. Alpha particles had been identified as the nuclei of helium, beta particles as electrons, and gamma rays as photons: like light, but with a much shorter wavelength and correspondingly higher energy. When a given nucleus decayed by alpha or gamma, the emission always had the same energy: you could calculate the energy carried off by the particle emitted and compare it to the nucleus before and after, and everything added up according to Einstein's equation E=mc². But something appeared to be seriously wrong with beta (electron) decay. Given a large collection of identical nuclei, the electrons emitted flew out with energies all over the map: from very low to an upper limit. This appeared to violate one of the most fundamental principles of physics: the conservation of energy. If the nucleus after plus the electron (including its kinetic energy) didn't add up to the energy of the nucleus before, where did the energy go? Few physicists were ready to abandon conservation of energy, but, after all, theory must ultimately conform to experiment, and if a multitude of precision measurements said that energy wasn't conserved in beta decay, maybe it really wasn't.

Fermi thought otherwise. In 1933, he proposed a theory of beta decay in which the emission of a beta particle (electron) from a nucleus was accompanied by emission of a particle he called a neutrino, which had been proposed earlier by Pauli. In one leap, Fermi introduced a third force, alongside gravity and electromagnetism, which could transform one particle into another, plus a new particle: without mass or charge, and hence extraordinarily difficult to detect, which nonetheless was responsible for carrying away the missing energy in beta decay. But Fermi did not just propose this mechanism in words: he presented a detailed mathematical theory of beta decay which made predictions for experiments which had yet to be performed. He submitted the theory in a paper to Nature in 1934. The editors rejected it, saying “it contained abstract speculations too remote from physical reality to be of interest to the reader.” This rejection was quickly recognised as a blunder, and is now acknowledged as one of the most epic face-plants of peer review in theoretical physics. Fermi's theory rapidly became accepted as the correct model for beta decay. In 1956, the neutrino (actually, antineutrino) was detected with precisely the properties predicted by Fermi. This theory remained the standard explanation for beta decay until it was extended in the 1970s by the theory of the electroweak interaction, which is valid at higher energies than were available to experimenters in Fermi's lifetime.
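In modern notation, Fermi's resolution of the energy puzzle is simply that beta decay is a three-body process (a standard textbook statement, not a quotation from the book): the decay energy Q is shared between the electron and the antineutrino,

    n \to p + e^{-} + \bar{\nu}_{e}, \qquad Q = \left[ m(A,Z) - m(A,Z+1) \right] c^{2} = T_{e} + T_{\bar{\nu}}

where atomic masses are used (so the electron mass is already accounted for) and the tiny recoil of the daughter nucleus is neglected. Because T_e can take any value from zero up to Q, the electron spectrum is continuous, exactly as observed, and energy is conserved in every decay.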

Perhaps soured on theoretical work by the initial rejection of his paper on beta decay, Fermi turned to experimental exploration of the nucleus, using the newly-discovered particle, the neutron. Unlike alpha particles emitted by the decay of heavy elements like uranium and radium, neutrons had no electrical charge and could penetrate the nucleus of an atom without being repelled. Fermi saw this as the ideal probe to examine the nucleus, and began to use neutron sources to bombard a variety of elements to observe the results. One experiment directed neutrons at a target of silver and observed the creation of isotopes of silver when the neutrons were absorbed by the silver nuclei. But something very odd was happening: the results of the experiment seemed to differ when it was run on a laboratory bench with a marble top compared to one of wood. What was going on? Many people might have dismissed the anomaly, but Fermi had to know. He hypothesised that the probability a neutron would interact with a nucleus depended upon its speed (or, equivalently, energy): a slower neutron would effectively have more time to interact than one which whizzed through more rapidly. Neutrons which were reflected by the wood table top were “moderated” and had a greater probability of interacting with the silver target.

Fermi quickly tested this supposition by using paraffin wax and water as neutron moderators and measuring the dramatically increased probability of interaction (or as we would say today, neutron capture cross section) when neutrons were slowed down. This is fundamental to the design of nuclear reactors today. It was for this work that Fermi won the Nobel Prize in Physics for 1938.
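A rough way to see why moderation matters so much: for many nuclei the absorption cross section at low energy scales roughly as 1/v, the inverse of the neutron's speed (the slower the neutron, the longer it spends near each nucleus). Under that idealised assumption, slowing a fission-energy neutron to thermal speed raises its capture probability by a factor in the thousands; a quick sketch (my own illustration, not a calculation from the book):

    import numpy as np

    # Idealised 1/v law: absorption cross section scales as 1/v, i.e. as 1/sqrt(E),
    # since (non-relativistically) speed is proportional to sqrt(energy).
    E_fast_eV = 2.0e6      # typical energy of a neutron fresh from fission, ~2 MeV
    E_thermal_eV = 0.025   # neutron in equilibrium with a room-temperature moderator

    enhancement = np.sqrt(E_fast_eV / E_thermal_eV)
    print(f"1/v enhancement from moderation: about {enhancement:,.0f}x")   # roughly 9,000-fold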

By 1938, conditions for Italy's Jewish population had seriously deteriorated. Laura Fermi, despite her father's distinguished service as an admiral in the Italian navy, was now classified as a Jew, and therefore subject to travel restrictions, as were their two children. The Fermis went to their local Catholic parish, where they were (re-)married in a Catholic ceremony and their children baptised. With that paperwork done, the Fermi family could apply for passports and permits to travel to Stockholm to receive the Nobel prize. The Fermis locked their apartment, took a taxi, and boarded the train. Unbeknownst to the fascist authorities, they had no intention of returning.

Fermi had arranged an appointment at Columbia University in New York. His Nobel Prize award was US$45,000 (US$789,000 today). Had he returned to Italy with the sum, he would have been forced to convert it to lire and would then have been able to take only the equivalent of US$50 out of the country on subsequent trips. Professor Fermi may not have been much interested in politics, but he could do arithmetic. The family went from Stockholm to Southampton, and then on an ocean liner to New York, with nothing other than their luggage, prize money, and, most importantly, freedom.

In his neutron experiments back in Rome, there had been curious results he and his colleagues never explained. When bombarding nuclei of uranium, the heaviest element then known, with neutrons moderated by paraffin wax, they had observed radioactive results which didn't make any sense. They expected to create new elements, heavier than uranium, but what they saw didn't agree with the expectations for such elements. Another mystery…in those heady days of nuclear physics, there was one wherever you looked. At just about the time Fermi's ship was arriving in New York, news arrived from Germany about what his group had observed, but not understood, four years before. Slow neutrons, which Fermi's group had pioneered, were able to split, or fission, the nucleus of uranium into two lighter elements, releasing not only a large amount of energy, but additional neutrons which might be able to propagate the process into a “chain reaction”, producing either a large amount of energy or, perhaps, an enormous explosion.
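
The arithmetic behind a chain reaction is simple enough to sketch (my illustration, not the book's): if each fission causes, on average, k further fissions, the number of fissions per generation follows a geometric progression. Holding k at exactly 1 gives a steady, controlled reaction; letting it exceed 1 gives exponential growth.

    def fissions_per_generation(k, generations=8, start=1.0):
        # Each fission triggers k further fissions on average, so the count per
        # generation is just a geometric progression.
        counts = [start]
        for _ in range(generations):
            counts.append(counts[-1] * k)
        return counts

    for k in (0.9, 1.0, 1.2):
        print(f"k = {k}: {[round(c, 2) for c in fissions_per_generation(k)]}")
    # k < 1 dies out, k = 1 is a steady reactor, k > 1 runs away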

As one of the foremost researchers in neutron physics, it was immediately apparent to Fermi that his new life in America was about to take a direction he'd never anticipated. By 1941, he was conducting experiments at Columbia with the goal of evaluating the feasibility of creating a self-sustaining nuclear reaction with natural uranium, using graphite as a moderator. In 1942, he was leading a project at the University of Chicago to build the first nuclear reactor. On December 2nd, 1942, Chicago Pile-1 went critical, producing all of half a watt of power. But the experiment proved that a nuclear chain reaction could be initiated and controlled, and it paved the way for both civil nuclear power and plutonium production for nuclear weapons. At the time he achieved one of the first major milestones of the Manhattan Project, Fermi's classification as an “enemy alien” had been removed only two months before. He and Laura Fermi did not become naturalised U.S. citizens until July of 1944.

Such was the breakneck pace of the Manhattan Project that even before the critical test of the Chicago pile, the DuPont company was already at work planning for the industrial scale production of plutonium at a facility which would eventually be built at the Hanford site near Richland, Washington. Fermi played a part in the design and commissioning of the X-10 Graphite Reactor in Oak Ridge, Tennessee, which served as a pathfinder and began operation in November, 1943, operating at a power level which was increased over time to 4 megawatts. This reactor produced the first substantial quantities of plutonium for experimental use, revealing the plutonium-240 contamination problem which necessitated the use of implosion for the plutonium bomb. Concurrently, he contributed to the design of the B Reactor at Hanford, which went critical in September 1944 and, running at 250 megawatts, produced the plutonium for the Trinity test and the Fat Man bomb dropped on Nagasaki.

During the war years, Fermi divided his time among the Chicago research group, Oak Ridge, Hanford, and the bomb design and production group at Los Alamos. As General Leslie Groves, head of the Manhattan Project, had forbidden the top atomic scientists from travelling by air, Fermi, travelling under his wartime alias “Henry Farmer”, spent much of his time riding the rails, accompanied by a bodyguard. As plutonium production ramped up, he increasingly spent his time with the weapon designers at Los Alamos, where Oppenheimer appointed him associate director and put him in charge of “Division F” (for Fermi), which acted as a consultant to all of the other divisions of the laboratory.

Fermi believed that while scientists could make major contributions to the war effort, how their work and the weapons they created were used were decisions which should be made by statesmen and military leaders. When appointed in May 1945 to the Interim Committee charged with determining how the fission bomb was to be employed, he largely confined his contributions to technical issues such as weapons effects. He joined Oppenheimer, Compton, and Lawrence in the final recommendation that “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use.”

On July 16, 1945, Fermi witnessed the Trinity test explosion in New Mexico at a distance of ten miles from the shot tower. A few seconds after the blast, he began to tear little pieces of paper from a sheet and drop them toward the ground. When the shock wave arrived, he paced out the distance they had been blown and rapidly computed the yield of the bomb as around ten kilotons of TNT. Nobody familiar with Fermi's reputation for making off-the-cuff estimates of physical phenomena was surprised that his calculation, done within a minute of the explosion, agreed within the margin of error with the actual yield of 20 kilotons, determined much later.

After the war, Fermi wanted nothing more than to return to his research. He opposed the continuation of wartime secrecy to postwar nuclear research, but, unlike some other prominent atomic scientists, did not involve himself in public debates over nuclear weapons and energy policy. When he returned to Chicago, he was asked by a funding agency simply how much money he needed. From his experience at Los Alamos he wanted both a particle accelerator and a big computer. By 1952, he had both, and began to produce results in scattering experiments which hinted at the new physics which would be uncovered throughout the 1950s and '60s. He continued to spend time at Los Alamos, and between 1951 and 1953 worked two months a year there, contributing to the hydrogen bomb project and analysis of Soviet atomic tests.

Everybody who encountered Fermi remarked upon his talents as an explainer and teacher. Seven of his students (six from Chicago and one from Rome) would go on to win Nobel Prizes in physics, in both theory and experiment. He became famous for posing “Fermi problems”, often at lunch, exercising the ability to make and justify order of magnitude estimates of difficult questions. When Freeman Dyson met with Fermi to present a theory he and his graduate students had developed to explain the scattering results Fermi had published, Fermi asked him how many free parameters Dyson had used in his model. Upon being told the number was four, he said, “I remember my old friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Chastened, Dyson soon concluded his model was a blind alley.
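
The canonical classroom example of a Fermi problem, not one from this book, is estimating the number of piano tuners in Chicago from nothing but rough guesses; every input below is an assumption, which is the whole point of the exercise:

    population          = 3_000_000   # people in Chicago (rough guess)
    people_per_family   = 4
    families_with_piano = 1 / 5       # assume one family in five owns a piano
    tunings_per_year    = 1           # assume each piano is tuned about once a year
    tunings_per_tuner   = 4 * 5 * 50  # 4 a day, 5 days a week, 50 weeks a year

    pianos = population / people_per_family * families_with_piano
    tuners = pianos * tunings_per_year / tunings_per_tuner
    print(round(tuners))   # on the order of 100: good to a factor of a few, which is the goal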

After returning from a trip to Europe in the fall of 1954, Fermi, who had enjoyed robust good health all his life, began to suffer from problems with digestion. Exploratory surgery found metastatic stomach cancer, for which no treatment was possible at the time. He died at home on November 28, 1954, two months past his fifty-third birthday. He had made a Fermi calculation of how long to rent the hospital bed in which he died: the rental expired two days after he did.

There was speculation that Fermi's life may have been shortened by his work with radiation, but there is no evidence of this. He was never exposed to unusual amounts of radiation in his work, and none of his colleagues, who did the same work at his side, experienced any medical problems.

This is a masterful biography of one of the singular figures in twentieth century science. The breadth of his interests and achievements is reflected in the list of things named after Enrico Fermi. Given the hyper-specialisation of modern science, it is improbable we will ever again see his like.

July 2017 Permalink

Shlaes, Amity. Coolidge. New York: Harper Perennial, [2013] 2014. ISBN 978-0-06-196759-7.
John Calvin Coolidge, Jr. was born in 1872 in Plymouth Notch, Vermont. His family were among the branch of the Coolidge clan who stayed in Vermont while others left its steep, rocky, and often bleak land for opportunity in the Wild West of Ohio and beyond when the Erie canal opened up these new territories to settlement. His father and namesake made his living by cutting wood, tapping trees for sugar, and small-scale farming on his modest plot of land. He diversified his income by operating a general store in town and selling insurance. There was a long tradition of public service in the family. Young Coolidge's great-grandfather was an officer in the American Revolution and his grandfather was elected to the Vermont House of Representatives. His father was justice of the peace and tax collector in Plymouth Notch, and would later serve in the Vermont House of Representatives and Senate.

Although many in the cities would consider their rural life far from the nearest railroad terminal hard-scrabble, the family was sufficiently prosperous to pay for young Calvin (the name he went by from boyhood) to attend private schools, boarding with families in the towns where they were located and infrequently returning home. He followed a general college preparatory curriculum and, after failing the entrance examination the first time, was admitted on his second attempt to Amherst College as a freshman in 1891. A loner, and already with a reputation for being taciturn, he joined none of the fraternities to which his classmates belonged, nor did he participate in the athletics which were a part of college life. He quickly perceived that Amherst had a class system, where the scions of old money families from Boston who had supported the college were elevated above nobodies from the boonies like himself. He concentrated on his studies, mastering Greek and Latin, and immersing himself in the works of the great orators of those cultures.

As his college years passed, Coolidge became increasingly interested in politics, joined the college Republican Club, and worked on the 1892 re-election campaign of Benjamin Harrison, whose Democrat opponent, Grover Cleveland, was seeking to regain the presidency he had lost to Harrison in 1888. Writing to his father after Harrison's defeat, his analysis was that “the reason seems to be in the never satisfied mind of the American and in the ever desire to shift in hope of something better and in the vague idea of the working and farming classes that somebody is getting all the money while they get all the work.”

His confidence growing, Coolidge began to participate in formal debates, finally, in his senior year, joined a fraternity, and ran for and won the honour of being an orator at his class's graduation. He worked hard on the speech, which was a great success, keeping his audience engaged and frequently laughing at his wit. While still quiet in one-on-one settings, he enjoyed public speaking and connecting with an audience.

After graduation, Coolidge decided to pursue a career in the law and considered attending law school at Harvard or Columbia University, but decided he could not afford the tuition, as he was still being supported by his father and had no prospects for earning sufficient money while studying the law. In that era, most states did not require a law school education; an aspiring lawyer could, instead, become an apprentice at an established law firm and study on his own, a practice called reading the law. Coolidge became an apprentice at a firm in Northampton, Massachusetts run by two Amherst graduates and, after two years, in 1897, passed the Massachusetts bar examination and was admitted to the bar. In 1898, he set out on his own and opened a small law office in Northampton; he had embarked on the career of a country lawyer.

While developing his law practice, Coolidge followed in the footsteps of his father and grandfather and entered public life as a Republican, winning election to the Northampton City Council in 1898. In the following years, he held the offices of City Solicitor and county clerk of courts. In 1903 he married Grace Anna Goodhue, a teacher at the Clarke School for the Deaf in Northampton. The next year, running for the local school board, he suffered the only defeat of his political career, in part because his opponents pointed out he had no children in the schools. Coolidge said, “Might give me time.” (The Coolidges went on to have two sons, John, born in 1906, and Calvin Jr., in 1908.)

In 1906, Coolidge sought state office for the first time, running for the Massachusetts House of Representatives and narrowly defeating the Democrat incumbent. He was re-elected the following year, but declined to run for a third term, returning to Northampton where he ran for mayor, won, and served two one-year terms. In 1912 he ran for the State Senate seat of the retiring Republican incumbent and won. In the presidential election of that year, when the Republican party split between the traditional wing favouring William Howard Taft and progressives backing Theodore Roosevelt, Coolidge, although identified as a progressive, having supported women's suffrage and the direct election of federal senators, among other causes, stayed with the Taft Republicans and won re-election. Coolidge sought a third term in 1914 and won, being named President of the State Senate with substantial influence on legislation in the body.

In 1915, Coolidge moved further up the ladder by running for the office of Lieutenant Governor of Massachusetts, balancing the Republican ticket led by a gubernatorial candidate from the east of the state with his own base of support in the rural west. In Massachusetts, the Lieutenant Governor does not preside over the State Senate, but rather fulfils an administrative role, chairing executive committees. Coolidge presided over the finance committee, which provided him experience in managing a budget and dealing with competing demands from departments that was to prove useful later in his career. After being re-elected to the office in 1916 and 1917 (statewide offices in Massachusetts at the time had a term of only one year), and with the incumbent governor announcing his retirement, Coolidge was unopposed for the Republican nomination for governor and narrowly defeated the Democrat in the 1918 election.

Coolidge took office at a time of great unrest between industry and labour. Prices in 1918 had doubled from their 1913 level; nothing of the kind had happened since the paper money inflation during the Civil War and its aftermath. Nobody seemed to know why: it was usually attributed to the war, but nobody understood the cause and effect. There doesn't seem to have been a single mainstream voice who observed that the rapid rise in prices (which was really a depreciation of the dollar) began precisely at the moment the Creature from Jekyll Island was unleashed upon the U.S. economy and banking system. What was obvious, however, was that in most cases industrial wages had not kept pace with the rise in the cost of living, and that large companies which had raised their prices had not correspondingly increased what they paid their workers. This gave a powerful boost to the growing union movement. In early 1919 an ugly general strike in Seattle idled workers across the city, and the United Mine Workers threatened a nationwide coal strike for November 1919, just as the maximum demand for coal in winter would arrive. In Boston, police officers voted to unionise and affiliate with the American Federation of Labor, ignoring an order from the Police Commissioner forbidding officers to join a union. On September 9th, a majority of policemen defied the order and walked off the job.

Those who question the need for a police presence on the street in big cities should consider the Boston police strike as a cautionary tale, at least as things were in the city of Boston in the year 1919. As the Sun went down, the city erupted in chaos, mayhem, looting, and violence. A streetcar conductor was shot for no apparent reason. There were reports of rapes, murders, and serious injuries. The next day, more than a thousand residents applied for gun permits. Downtown stores were boarding up their display windows and hiring private security forces. Telephone operators and employees at the electric power plant threatened to walk out in sympathy with the police. From Montana, where he was campaigning in favour of ratification of the League of Nations treaty, President Woodrow Wilson issued a mealy-mouthed statement saying, “There is no use in talking about political democracy unless you have also industrial democracy”.

Governor Coolidge acted swiftly and decisively. He called up the Guard and deployed them throughout the city, fired all of the striking policemen, and issued a statement saying “The action of the police in leaving their posts of duty is not a strike. It is a desertion. … There is nothing to arbitrate, nothing to compromise. In my personal opinion there are no conditions under which the men can return to the force.” He directed the police commissioner to hire a new force to replace the fired men. He publicly rebuked American Federation of Labor chief Samuel Gompers in a telegram released to the press which concluded, “There is no right to strike against the public safety by anybody, anywhere, any time.”

When the dust settled, the union was broken, peace was restored to the streets of Boston, and Coolidge had emerged onto the national stage as a decisive leader and champion of what he called the “reign of law.” Later in 1919, he was re-elected governor with seven times the margin of his first election. He began to be spoken of as a potential candidate for the Republican presidential nomination in 1920.

Coolidge's name was placed in nomination for president at the 1920 Republican convention, but he never came in above sixth in the balloting, in the middle of the pack of regional and favourite son candidates. On the tenth ballot, Warren G. Harding of Ohio was chosen, and party bosses announced their choice for Vice President, a senator from Wisconsin. But when the time came for the delegates to vote, a Coolidge wave among rank and file delegates tired of the bosses ordering them around gave him the nod. Coolidge did not attend the convention in Chicago; he got the news of his nomination by telephone. After he hung up, Grace asked him what it was all about. He said, “Nominated for vice president.” She responded, “You don't mean it.” “Indeed I do”, he answered. “You are not going to accept it, are you?” “I suppose I shall have to.”

Harding ran on a platform of “normalcy” after the turbulence of the war and Wilson's helter-skelter progressive agenda. He expressed his philosophy in a speech several months earlier,

America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.

The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, with 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them.

The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920.
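
A quick check of the ratios quoted above, using only the figures in the text:

    debt_1913,  debt_1921  = 2.9, 24.0      # public debt, US$ billions
    spend_1913, spend_1920 = 715, 6358      # federal spending, US$ millions
    money_1913, money_1920 = 22.09, 48.73   # money supply, US$ billions

    print(f"debt grew by a factor of {debt_1921 / debt_1913:.1f}")            # about 8.3
    print(f"spending grew by a factor of {spend_1920 / spend_1913:.1f}")      # about 8.9
    print(f"money supply grew by a factor of {money_1920 / money_1913:.1f}")  # about 2.2, tracking the doubled cost of living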

At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (nor anything like what existed in private companies a fraction the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, enacted a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years.

Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco. Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public. As he left Vermont for Washington, he said, “I believe I can swing it.”

As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it. He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort.
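
Mellon's argument is easy to illustrate with a toy model (my sketch, not his actual analysis): suppose the income people bother to earn and report shrinks as the marginal rate rises; revenue is then the rate times reported income, and it peaks well below a 100% rate. The elasticity below is an invented number chosen purely for illustration.

    def revenue(rate, base_income=100.0, elasticity=1.5):
        # Reported income is assumed to shrink as the net-of-tax share falls;
        # the elasticity is a made-up figure used only to illustrate the shape.
        income = base_income * (1.0 - rate) ** elasticity
        return rate * income

    for rate in (0.25, 0.31, 0.50, 0.70, 0.90):
        print(f"top rate {rate:.0%}: revenue {revenue(rate):5.1f}")
    # In this toy model, cutting the top rate from 70% toward 31% raises revenue,
    # which is the effect Mellon predicted and the 1920s rate cuts appeared to show.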

Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid as when they were applied in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try. It worked. The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved from 70% to 31%.

Achieving these goals required constant vigilance and an unceasing struggle with the congress, where politicians of both parties regarded any budget surplus or increase in revenue generated by lower tax rates and a booming economy as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defense aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest and that, “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states. The New York Times approved, “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.”

When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day. Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes.

Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics, but in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood and sixteen year old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss of his son and never forgave himself for bringing his son to Washington where the injury occurred.

In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo. In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) considered ten years in office (which would have been longer than any previous president had served) too long for any individual to serve. Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, would in any case have had few tools with which to intervene. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water. But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to.

Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt and raised the top marginal personal income tax rate to 63% and corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed re-election campaign in 1932. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace.

Calvin Coolidge was arguably the last U.S. president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to an executive in a business, seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis.

This is a superb history of Coolidge and his time, full of lessons for our age which has veered so far from the constitutional framework he so respected.

August 2019 Permalink

Shute, Nevil [Nevil Shute Norway]. Slide Rule. Kelly Bray, UK: House of Stratus, [1954] 2000. ISBN 978-1-84232-291-8.
The author is best known for his novels, several of which were made into Hollywood movies, including No Highway and On the Beach. In this book, he chronicles his “day job” as an aeronautical engineer and aviation entrepreneur in what he describes as the golden age of aviation: an epoch where a small team of people could design and manufacture innovative aircraft without the huge budgets, enormous bureaucratic organisations, or intrusive regulation which overcame the spirit of individual invention and enterprise as aviation matured. (The author, fearing that being known as a fictioneer might make him seem disreputable as an engineer, published his books under the name “Nevil Shute”, while using his full name, “Nevil Shute Norway” in his technical and business career. He explains that decision in this book, published after he had become a full-time writer.)

This is a slim volume, but there is as much wisdom here as in a dozen ordinary books this size, and the writing is simultaneously straightforward and breathtakingly beautiful. A substantial part of the book recounts the history of the U.K. airship project, which pitted a private industry team in which Shute played a major rôle building the R.100 in competition with a government-designed and -built ship, the R.101, designed to the same specifications. Seldom in the modern history of technology has there been such a clear-cut illustration of the difference between private enterprise designing toward a specification under a deadline and fixed budget and a government project with unlimited funds, no oversight, and with specifications and schedules at the whim of politicians with no technical knowledge whatsoever. The messy triumph of the R.100 and the tragedy of the R.101, recounted here by an insider, explains the entire sordid history of NASA, the Concorde, and innumerable other politically-driven technological boondoggles.

Had Shute brought the book to a close at the end of the airship saga, it would be regarded as a masterpiece of reportage of a now-forgotten episode in aviation history. But then he goes on to describe his experience in founding, funding, and operating a start-up aircraft manufacturer, Airspeed Ltd., in the middle of the Great Depression. This is simply the best first-person account of entrepreneurship and the difficult decisions one must make in bringing a business into being and keeping it going “whatever it takes”, and of the true motivation of the entrepreneur (hint: money is way down the list) that I have ever read, and I speak as somebody who has written one of my own. Then, if that weren't enough, Shute sprinkles the narrative with gems of insight aspiring writers may struggle years trying to painfully figure out on their own, which are handed to those seeking to master the craft almost in passing.

I could quote dozens of lengthy passages from this book which almost made me shiver when I read them from the sheer life-tested insight distilled into so few words. But I'm not going to, because what you need to do is go and get this book, right now (see below for an electronic edition), and drop whatever you're doing and read it cover to cover. I have had several wise people counsel me to do the same over the years and, for whatever reason, never seemed to find the time. How I wish I had read this book before I embarked upon my career in business, and how much comfort and confidence it would have given me upon reaching the difficult point where a business has outgrown the capabilities and interests of its founders.

An excellent Kindle edition is available.

July 2011 Permalink

Simon, Roger L. Blacklisting Myself. New York: Encounter Books, 2008. ISBN 978-1-59403-247-9.
The author arrived in Hollywood in the tumultuous year of 1968, fired by his allegiance to the New Left and experience in the civil rights struggle in the South to bring his activism to the screen and, at the same time, driven by his ambition to make it big in the movie business. Unlike the multitudes who arrive starry-eyed in tinseltown only to be frustrated trying to “break in”, Simon succeeded, both as a screenwriter (he was nominated for an Oscar for his screen adaptation of Enemies: A Love Story) and as a novelist, best known for his Moses Wine detective fiction. One of the Moses Wine novels, The Big Fix, made it to the screen, with Simon also writing the screenplay. Such has been his tangible success that the author today lives in the Hollywood Hills house once shared by Joe DiMaggio and Marilyn Monroe.

This is in large part a memoir of a life in Hollywood, with pull-no-punches anecdotes about the celebrities and players in the industry, and the often poisonous culture of the movie business. But it is also the story of the author's political evolution from the New Left through Hollywood radical chic (he used to hang with the Black Panthers) and eventual conversion to neo-conservatism which has made him a “Hollywood apostate” and which he describes on the first page of the book as “the ideological equivalent of a sex change operation”. He describes how two key events—the O. J. Simpson trial and the terrorist attacks of 2001—caused him to question assumptions he'd always taken as received wisdom and how, once he did start to think for himself instead of nodding in agreement with the monolithic leftist consensus in Hollywood, he began to perceive and be appalled by the hypocrisy not only in the beliefs of his colleagues but between their lifestyles and the values they purported to champion. (While Simon has become a staunch supporter of efforts, military and other, to meet the threat of Islamic aggression and considers himself a fiscal conservative, he remains as much on the left as ever when it comes to social issues. But, as he describes, any dissent whatsoever from the Hollywood leftist consensus is enough to put one beyond the pale among the smart set, and possibly injure the career of even somebody as well-established as he.)

While never suggesting that he or anybody else has been the victim of a formal blacklist like that of suspected Communist sympathisers in the 1940s and 1950s, he does describe how those who dissent often feign support for leftist causes or simply avoid politically charged discussions to protect their careers. Simon was one of the first Hollywood figures to jump in as a blogger, and has since reinvented himself as a New Media entrepreneur, founding Pajamas Media and its associated ventures; he continues to actively blog. An early adopter of technology since the days of the Osborne 1 and CompuServe forums, he believes that new technology provides the means for an end-run around Hollywood groupthink, but by itself is insufficient (p. 177):

The answer to the problem of Hollywood for those of a more conservative or centrist bent is to go make movies of their own. Of course, to do so means finding financing and distribution. Today's technologies are making that simpler. Cameras and editing equipment cost a pittance. Distribution is at hand for the price of a URL. All that's left is the creativity. Unfortunately, that's the difficult part.

A video interview with the author is available.

February 2009 Permalink

Snowden, Edward. Permanent Record. New York: Metropolitan Books, 2019. ISBN 978-1-250-23723-1.
The revolution in communication and computing technologies which has continually accelerated since the introduction of integrated circuits in the 1960s and has since given rise to the Internet, ubiquitous mobile telephony, vast data centres with formidable processing and storage capacity, and technologies such as natural language text processing, voice recognition, and image analysis, has created the potential, for the first time in human history, of mass surveillance to a degree unimagined even in dystopian fiction such as George Orwell's 1984 or attempted by the secret police of totalitarian regimes like the Soviet Union, Nazi Germany, or North Korea. But, residents of enlightened developed countries such as the United States thought, they were protected, by legal safeguards such as the Fourth Amendment to the U.S. Constitution, from having their government deploy such forbidding tools against its own citizens. Certainly, there was awareness, from disclosures such as those in James Bamford's 1982 book The Puzzle Palace, that agencies such as the National Security Agency (NSA) were employing advanced and highly secret technologies to spy upon foreign governments and their agents who might attempt to harm the United States and its citizens, but their activities were circumscribed by a legal framework which strictly limited the scope of their domestic activities.

Well, that's what most people believed until the courageous acts by Edward Snowden, a senior technical contractor working for the NSA, revealed, in 2013, multiple programs of indiscriminate mass surveillance directed against, well, everybody in the world, U.S. citizens most definitely included. The NSA had developed and deployed a large array of hardware and software tools whose mission was essentially to capture all the communications and personal data of everybody in the world, scan it for items of interest, and store it forever where it could be accessed in future investigations. Data were collected through a multitude of means: monitoring traffic across the Internet, collecting mobile phone call and location data (estimated at five billion records per day in 2013), spidering data from Web sites, breaking vulnerable encryption technologies, working with “corporate partners” to snoop data passing through their facilities, and fusing this vast and varied data with query tools such as XKEYSCORE, which might be thought of as a Google search engine built by people who from the outset proclaimed, “Heck yes, we're evil!”

How did Edward Snowden, over his career a contractor employee for companies including BAE Systems, Dell Computer, and Booz Allen Hamilton, and a government employee of the CIA, obtain access to such carefully guarded secrets? What motivated him to disclose this information to the media? How did he spirit the information out of the famously security-obsessed NSA and get it into the hands of the media? And what were the consequences of his actions? All of these questions are answered in this beautifully written, relentlessly candid, passionately argued, and technologically insightful book by the person who, more than anyone else, is responsible for revealing the malignant ambition of the government of the United States and its accomplices in the Five Eyes (Australia, Canada, New Zealand, and the United Kingdom) to implement and deploy a global panopticon which would shrink the scope of privacy of individuals to essentially zero—in the words of an NSA PowerPoint (of course) presentation from 2011, “Sniff It All, Know It All, Collect It All, Process It All, Exploit It All, Partner It All”. They didn't mention “Store It All Forever”, but with the construction of the US$1.5 billion Utah Data Center which consumes 65 megawatts of electricity, it's pretty clear that's what they're doing.
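
To get a feel for why storing it all forever is entirely feasible, here is a back-of-the-envelope estimate of my own; the per-record size is an assumption, since call and location metadata is tiny compared with content:

    records_per_day  = 5_000_000_000   # phone call/location records per day, per the 2013 estimate
    bytes_per_record = 200             # assumed: metadata only, no audio or content

    tb_per_day  = records_per_day * bytes_per_record / 1e12
    tb_per_year = tb_per_day * 365
    print(f"{tb_per_day:.1f} TB/day, about {tb_per_year:.0f} TB/year")
    # About a terabyte a day for this one collection stream: trivial for a
    # facility on the scale of the Utah Data Center.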

Edward Snowden was born in 1983 and grew up along with the personal computer revolution. His first contact with computers was when his father brought home a Commodore 64, on which father and son would play many games. Later, when the boy was just seven years old, his father introduced him to programming on a computer at the Coast Guard base where he worked, and, a few years later, when the family had moved to the Maryland suburbs of Washington DC after his father had been transferred to Coast Guard Headquarters, the family got a Compaq 486 PC clone which opened the world of programming and exploration of online groups and the nascent World Wide Web via the narrow pipe of a dial-up connection to America Online. In those golden days of the 1990s, the Internet was mostly created by individuals for individuals, and you could have any identity, or as many identities as you wished, inventing and discarding them as you explored the world and yourself. This was ideal for a youth who wasn't interested in sports and tended to be reserved in the presence of others. He explored the many corners of the Internet and, like so many with the talent for understanding complex systems, learned to deduce the rules governing systems and explore ways of using them to his own ends. Bob Bickford defines a hacker as “Any person who derives joy from discovering ways to circumvent limitations.” Hacking is not criminal, and it need have nothing to do with computers. As his life progressed, Snowden would learn how to hack school, the job market, and eventually the oppressive surveillance state.

By September 2001, Snowden was working for an independent Web site developer operating out of her house on the grounds of Fort Meade, Maryland, the home of the NSA (for whom, coincidentally, his mother worked in a support capacity). After the attacks on the World Trade Center and Pentagon, he decided, in his family's long tradition of service to their country (his grandfather is a Rear Admiral in the Coast Guard, and ancestors fought in the Revolution, Civil War, and both world wars), that his talents would be better put to use in the intelligence community. His lack of a four year college degree would usually be a bar to such employment, but the terrorist attacks changed all the rules, and military veterans were being given a fast track into such jobs, so, after exploring his options, Snowden enlisted in the Army, under a special program called 18 X-Ray, which would send qualifying recruits directly into Special Forces training after completing their basic training.

His military career was to prove short. During a training exercise, he took a fall in the forest which fractured the tibia bone in both legs and was advised he would never be able to qualify for Special Forces. Given the option of serving out his time in a desk job or taking immediate “administrative separation” (in which he would waive the government's liability for the injury), he opted for the latter. Finally, after a circuitous process, he was hired by a government contractor and received the exclusive Top Secret/Sensitive Compartmented Information security clearance which qualified him to work at the CIA.

A few words are in order about contractors at government agencies. In some media accounts of the Snowden disclosures, he has been dismissed as “just a contractor”, but in the present-day U.S. government where nothing is as it seems and much of everything is a scam, in fact many of the people working in the most sensitive capacities in the intelligence community are contractors supplied by the big “beltway bandit” firms which have sprung up like mushrooms around the federal swamp. You see, agencies operate under strict limits on the number of pure government (civil service) employees they can hire and, of course, government employment is almost always forever. But, if they pay a contractor to supply a body to do precisely the same job, on site, they can pay the contractor from operating funds and bypass the entire civil service mechanism and limits and, further, they're free to cut jobs any time they wish and to get rid of people and request a replacement from the contractor without going through the arduous process of laying off or firing a “govvy”. In all of Snowden's jobs, the blue badged civil servants worked alongside the green badge contractors without distinction in job function. Contractors would rarely ever visit the premises of their nominal “employers” except for formalities of hiring and employee benefits. One of Snowden's co-workers said “contracting was the third biggest scam in Washington after the income tax and Congress.”

His work at the CIA was in system administration, and he rapidly learned that regardless of classification levels, compartmentalisation, and need to know, the person in a modern organisation who knows everything, or at least has the ability to find out if interested, is the system administrator. In order to keep a system running, ensure the integrity of the data stored on it, restore backups when hardware, software, or user errors cause things to be lost, and the myriad other tasks that comprise the work of a “sysadmin”, you have to have privileges to access pretty much everything in the system. You might not be able to see things on other systems, but the ones under your control are an open book. The only safeguard employers have over rogue administrators is monitoring of their actions, and this is often laughably poor, especially as bosses often lack the computer savvy of the administrators who work for them.

After nine months on the job, an opening came up for a CIA civil servant job in overseas technical support. Attracted to travel and exotic postings abroad, Snowden turned in his green badge for a blue one and, after a training program, was sent to exotic…Geneva as a computer security technician, under diplomatic cover. As placid as it may seem, Geneva was on the cutting edge of CIA spying technology, with the United Nations, numerous international agencies, and private banks all prime targets for snooping.

Two years later Snowden was a contractor once again, this time with Dell Computer, who placed him with the NSA, first in Japan, then back in Maryland, and eventually in Hawaii as lead technologist of the Office of Information Sharing, where he developed a system called “Heartbeat” which allowed all of NSA's sites around the world to share their local information with others. It can be thought of as an automated blog aggregator for Top Secret information. This provided him personal access to just about everything the NSA was up to, world-wide. And he found what he read profoundly disturbing and dismaying.

Once he became aware of the scope of mass surveillance, he transferred to another job in Hawaii which would allow him to personally verify its power by gaining access to XKEYSCORE. His worst fears were confirmed, and he began to patiently, with great caution, and using all of his insider's knowledge, prepare to bring the archives he had spirited out from the Heartbeat system to the attention of the public via respected media who would understand the need to redact any material which might, for example, put agents in the field at risk. He discusses why, based upon his personal experience and that of others, he decided the whistleblower approach within the chain of command was not feasible: the unconstitutional surveillance he had discovered had been approved at the highest levels of government—there was nobody who could stop it who had not already approved it.

The narrative then follows preparing for departure, securing the data for travel, taking a leave of absence from work, travelling to Hong Kong, and arranging to meet the journalists he had chosen for the disclosure. There is a good deal of useful tradecraft information in this narrative for anybody with secrets to guard. Then, after the stories began to break in June, 2013, the tale of his harrowing escape from the long reach of Uncle Sam is recounted. Popular media accounts of Snowden “defecting to Russia” are untrue. He had planned to seek asylum in Ecuador, and had obtained a laissez-passer from the Ecuadoran consul and arranged to travel to Quito from Hong Kong via Moscow, Havana, and Caracas, as that was the only routing which did not pass through U.S. airspace or involve stops in countries with extradition treaties with the U.S. Upon arrival in Moscow, he discovered that his U.S. passport had been revoked while en route from Hong Kong, and without a valid passport he could neither board an onward flight nor leave the airport. He ended up trapped in the Moscow airport for forty days while twenty-seven countries folded to U.S. pressure and denied him political asylum, spending so long there that he even became tired of eating at the Burger King. On August 1st, 2013, Russia granted him temporary asylum. At this writing, he is still in Moscow, having been joined in 2017 by Lindsay Mills, the love of his life whom he left behind in Hawaii in 2013 and who is now his wife.

This is very much a personal narrative, and you will get an excellent sense for who Edward Snowden is and why he chose to do what he did. The first thing that struck me is that he really knows his stuff. Some of the press coverage presented him as a kind of low-level contractor systems nerd, but he was principal architect of EPICSHELTER, NSA's worldwide backup and archiving system, and sole developer of the Heartbeat aggregation system for reports from sites around the globe. At the time he left to make his disclosures, his salary was US$120,000 per year, hardly the pay of a humble programmer. His descriptions of technologies and systems in the book are comprehensive and flawless. He comes across as motivated entirely by outrage at the NSA's flouting of the constitutional protections supposed to be afforded U.S. citizens and its abuses in implementing mass surveillance, sanctioned at the highest levels of government across two administrations from different political parties. He did not seek money for his disclosures, and did not offer them to foreign governments. He took care to erase all media containing the documents he removed from the NSA before embarking on his trip from Hong Kong, and when approached upon landing in Moscow by agents from the Russian FSB (intelligence service) with what was obviously a recruitment pitch, he immediately cut it off, saying,

Listen, I understand who you are, and what this is. Please let me be clear that I have no intention to cooperate with you. I'm not going to cooperate with any intelligence service. I mean no disrespect, but this isn't going to be that kind of meeting. If you want to search my bag, it's right here. But I promise you, there's nothing in it that can help you.

And that was that.

Edward Snowden could have kept quiet, done his job, collected his handsome salary, continued to live in a Hawaiian paradise, and share his life with Lindsay, but he threw it all away on a matter of principle and duty to his fellow citizens and the Constitution he had sworn to defend when taking the oath upon joining the Army and the CIA. On the basis of the law, he is doubtless guilty of the three federal crimes with which he has been charged, sufficient to lock him up for as many as thirty years should the U.S. lay its hands on him. But he believes he did the correct thing in an attempt to right wrongs which were intolerable. I agree, and can only admire his courage. If anybody is deserving of a Presidential pardon, it is Edward Snowden.

There is relatively little discussion here of the actual content of the documents which were disclosed and the surveillance programs they revealed. For full details, visit the Snowden Surveillance Archive, which has copies of all of the documents which have been disclosed by the media to date. U.S. government employees and contractors should read the warning on the site before viewing this material.

September 2019 Permalink

Stafford, Thomas P. with Michael Cassutt. We Have Capture. Washington: Smithsonian Institution Press, 2002. ISBN 1-58834-070-8.

October 2002 Permalink

Waugh, Auberon. Will This Do? New York: Carroll & Graf, 1991. ISBN 0-7867-0639-2.
This is about the coolest title for an autobiography I've yet to encounter.

April 2003 Permalink

Wendt, Guenter and Russell Still. The Unbroken Chain. Burlington, Canada: Apogee Books, 2001. ISBN 1-896522-84-X.

December 2001 Permalink

Wolfram, Stephen. Idea Makers. Champaign, IL: Wolfram Media, 2016. ISBN 978-1-57955-003-5.
I first met Stephen Wolfram in 1988. Within minutes, I knew I was in the presence of an extraordinary mind, combined with intellectual ambition the likes of which I had never before encountered. He explained that he was working on a system to automate much of the tedious work of mathematics—both pure and applied—with the goal of changing how science and mathematics were done forever. I not only thought that was ambitious; I thought it was crazy. But then Stephen went and launched Mathematica and, twenty-eight years and eleven major releases later, his goal has largely been achieved. At the centre of a vast ecosystem of add-ons developed by his company, Wolfram Research, and third parties, it has become one of the tools of choice for scientists, mathematicians, and engineers in numerous fields.

Unlike many people who founded software companies, Wolfram never took his company public nor sold an interest in it to a larger company. This has allowed him to maintain complete control over the architecture, strategy, and goals of the company and its products. After the success of Mathematica, many other people, and I, learned to listen when Stephen, in his soft-spoken way, proclaims what seems initially to be an outrageously ambitious goal. In the 1990s, he set to work to invent A New Kind of Science: the book was published in 2002, and shows how simple computational systems can produce the kind of complexity observed in nature, and how experimental exploration of computational spaces provides a new path to discovery unlike that of traditional mathematics and science. Then he said he was going to integrate all of the knowledge of science and technology into a “big data” language which would enable knowledge-based computing and the discovery of new facts and relationships by simple queries short enough to tweet. Wolfram Alpha was launched in 2009, and Wolfram Language in 2013. So when Stephen speaks of goals such as curating all of pure mathematics or discovering a simple computational model for fundamental physics, I take him seriously.

Here we have a less ambitious but very interesting Wolfram project. Collected from essays posted on his blog and elsewhere, these profiles examine the work of innovators in science, mathematics, and industry. Their subjects include many people the author met in his career, as well as historical figures he tries to get to know through their work. As always, he brings his own unique perspective to the project and often has insights you'll not see elsewhere. The people profiled are:

Many of these names are well known, while others may elicit a “who?” Solomon Golomb, among other achievements, was a pioneer in the development of linear-feedback shift registers, essential to technologies such as GPS, mobile phones, and error detection in digital communications. Wolfram argues that Golomb's innovation may be the most-used mathematical algorithm in history. It's a delight to meet the pioneer.
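
For readers curious about what such a register actually does, here is a minimal sketch of a Fibonacci-style linear-feedback shift register in Python. This is my own illustration, not anything from the book; the 16-bit length and tap positions are one standard maximal-length choice, so the output bit stream repeats only after 65,535 steps.

    def lfsr16(seed=0xACE1):
        """Fibonacci linear-feedback shift register: 16 bits, taps 16/14/13/11."""
        state = seed
        while True:
            # XOR the four tapped bits to form the feedback bit.
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            # Shift the register and feed the new bit back in at the top.
            state = (state >> 1) | (bit << 15)
            yield bit

    # Example: the first sixteen pseudorandom output bits.
    gen = lfsr16()
    print([next(gen) for _ in range(16)])

Sequences built from nothing more than shifts and exclusive-ors like this are cheap to generate in hardware, which helps explain why the construction turns up everywhere from spread-spectrum radio to error detection.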

This short (250-page) book provides personal perspectives on people whose ideas have contributed to the intellectual landscape we share. You may find the author's views unusual, but they're always interesting, enlightening, and well worth reading.

September 2016 Permalink

Worden, Al with Francis French. Falling to Earth. Washington: Smithsonian Books, 2011. ISBN 978-1-58834-309-3.
Al Worden (his given name is Alfred, but he has gone by “Al” his whole life) was chosen as a NASA astronaut in April 1966, served as backup command module pilot for the Apollo 12 mission, the second Moon landing, and then flew to the Moon as command module pilot of Apollo 15, the first serious geological exploration mission. As command module pilot, Worden did not land on the Moon but, while tending the ship in orbit awaiting the return of his crewmates, operated a series of scientific experiments, some derived from spy satellite technology, which provided detailed maps of the Moon and a survey of its composition. To retrieve the film from the mapping cameras in the service module, Worden performed the first deep-space EVA during the return to Earth.

Growing up on a farm in rural Michigan during the first great depression and the second World War, Worden found his inclination toward being a loner reinforced by the self-reliance his circumstances forced upon him. He remarks on several occasions how he found satisfaction in working by himself and in what he achieved on his own, and that, while not disliking the company of others, he felt no need to validate himself through their opinions of him. This inner-directed drive led him to West Point, which he viewed as the only escape, given his family's financial circumstances, from a career on the farm; then to an Air Force commission; and on to graduation from the Empire Test Pilots' School in Farnborough, England, under a US/UK exchange program.

For one inclined to be a loner, it would be difficult to imagine a more ideal mission than Worden's on Apollo 15. Orbiting the Moon in the command module Endeavour for almost three days by himself, he was, at maximum distance on the far side of the Moon, more isolated from his two crewmates on the surface than any human has been from any other humans before or since (subsequent Apollo missions placed the command module in a lower lunar orbit, reducing this distance slightly). He candidly admits how much he enjoyed being on his own in the capacious command module, half the time entirely his own man while out of radio contact behind the Moon, and how his joy at the successful return of his comrades from the surface was tempered by how crowded and messy the ship became with them, the Moon rocks they had collected, and all the grubby Moon dust clinging to their spacesuits.

Some Apollo astronauts found it difficult to adapt to life on Earth after their missions. Travelling to the Moon before you turn forty is a particularly extreme case of “peaking early”, and the question of “What next?” can be formidable, especially when the entire enterprise of lunar exploration was being dismantled at its moment of triumph. Still, one should not overstate this point: of the twenty-four astronauts who flew to the Moon, most went on to subsequent careers you'd expect for the kind of overachievers who become astronauts in the first place—in space exploration, the military, business, politics, education, and even fine arts. Few, however, fell to Earth so hard as the crew of Apollo 15. The collapse of one of their three landing parachutes before splashdown, its canopy eroded by a dump of reaction control propellant, might have been seen as a premonition of this, but after the triumphal conclusion of a perfect mission, a White House reception, an address to a joint session of Congress, and adulatory celebrations on a round-the-world tour, it all came undone in an ugly scandal involving, of all things, postage stamps.

The Apollo 15 crew, like those of earlier NASA missions, had carried on board, as part of their “personal preference kits”, postage stamp covers commemorating the flight. According to Worden's account in this book, the Apollo 15 covers were arranged by mission commander Dave Scott and agreed to by Worden and lunar module pilot Jim Irwin on Scott's assurance that this was a routine matter which would not affect their careers, and that any sales of the covers would occur only after their retirement from NASA and the Air Force (in which all three were officers). When, after the flight, the covers began to come onto the market, an ugly scandal erupted, leading to the Apollo 15 crew being removed from flight status and to Worden and Irwin being fired from NASA, with reprimands placed in their Air Force records which would block further promotion. Worden found himself divorced (the marriage had ended before the Moon mission), out of a job at NASA, and with no future in the Air Force.

Reading this book, you get the impression that this was something like the end of Worden's life. And yet it wasn't—he went on to complete his career in the flight division at NASA's Ames Research Center and retire with the rank and pension of a Colonel in the U.S. Air Force. He then served in various capacities in private sector aerospace ventures and as chairman of the Astronaut Scholarship Foundation. Honestly, one comes away with the sense that everybody has forgotten the stupid postage stamps except the author. If there is some kind of redemption to be had by recounting the episode here (indeed, “Redemption” is the title of chapter 13 of this work), then fine, but whilst reading this account, I found myself inclined to shout, “Dude—you flew to the Moon! Yes, you messed up and got fired—who hasn't? But you landed on your feet and have had a wonderful life since, including thirty years of marriage. Get over the shaggy brown ugliness of the 1970s and enjoy the present and all the years to come!”

October 2011 Permalink