2019  

February 2019

Dutton, Edward and Michael A. Woodley of Menie. At Our Wits' End. Exeter, UK: Imprint Academic, 2018. ISBN 978-1-84540-985-2.
During the Great Depression, the Empire State Building was built, from the beginning of foundation excavation to official opening, in 410 days (less than 14 months). After the destruction of the World Trade Center in New York on September 11, 2001, design and construction of its replacement, the new One World Trade Center, was completed on November 3, 2014, 4801 days (160 months) later.

In the 1960s, from U.S. president Kennedy's proposal of a manned lunar mission to the landing of Apollo 11 on the Moon, 2978 days (almost 100 months) elapsed. In January 2004, U.S. president Bush announced the “Vision for Space Exploration”, aimed at a human return to the lunar surface by 2020. After a comical series of studies, revisions, cancellations, de-scopings, redesigns, schedule slips, and cost overruns, its successor program now plans to launch a lunar flyby mission (not even a lunar orbit like Apollo 8) in June 2022, 224 months later. A lunar landing is planned for no sooner than 2028, almost 300 months after the “vision”, and almost nobody believes that date (design of the landing craft has not yet begun, and there is no funding for it in the budget).

Wherever you look: junk science, universities corrupted with bogus “studies” departments, politicians peddling discredited nostrums a moment's critical thinking reveals to be folly, an economy built upon an ever-increasing tower of debt that nobody really believes will ever be paid off, and a dearth of major, genuine innovations (as opposed to the incremental refinement of existing technologies which has driven the computing, communications, and information technology industries) in every field: science, technology, public policy, and the arts. It often seems like the world is getting dumber. What if it really is?

That is the thesis explored by this insightful book, which is packed with enough “hate facts” to detonate the head of any bien pensant academic or politician. I define a “hate fact” as something which is indisputably true and well-documented by evidence in the literature, which has not been contradicted, but the citation of which is considered “hateful”, can unleash outrage mobs upon anyone so foolish as to utter the fact in public, and is a career-limiting move for those employed in Social Justice Warrior-converged organisations. (An example of a hate fact, unrelated to the topic of this book, is the FBI violent crime statistics broken down by the race of the criminal and victim. Nobody disputes the accuracy of this information or the methodology by which it is collected, but woe betide anyone so foolish as to cite the data or draw the obvious conclusions from it.)

In April 2004 I made my own foray into the question of declining intelligence in “Global IQ: 1950–2050” in which I combined estimates of the mean IQ of countries with census data and forecasts of population growth to estimate global mean IQ for a century starting at 1950. Assuming the mean IQ of countries remains constant (which is optimistic, since part of the population growth in high IQ countries with low fertility rates is due to migration from countries with lower IQ), I found that global mean IQ, which was 91.64 for a population of 2.55 billion in 1950, declined to 89.20 for the 6.07 billion alive in 2000, and was expected to fall to 86.32 for the 9.06 billion population forecast for 2050. This is mostly due to the explosive population growth forecast for Sub-Saharan Africa, where many of the populations with low IQ reside.

U.N. World Population Prospects: 2017 Revision
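
The arithmetic behind such an estimate is just a population-weighted mean of national IQs. Here is a minimal sketch in Python; the country figures in the example are hypothetical placeholders for illustration, not values from the original study.

    def global_mean_iq(countries):
        """Population-weighted mean IQ over (mean_iq, population) pairs."""
        total_population = sum(pop for _, pop in countries)
        return sum(iq * pop for iq, pop in countries) / total_population

    # Hypothetical example values, for illustration only:
    countries = [(98.0, 60e6), (85.0, 120e6), (105.0, 50e6)]
    print(round(global_mean_iq(countries), 2))   # prints 92.74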

This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90.

But while I was examining global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today's developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”, although non-controversial among those who study these matters in detail.

  1. There is a factor, “general intelligence” or g, which measures the ability to solve a wide variety of mental problems, and this factor, measured by IQ tests, is largely stable across an individual's life.
  2. Intelligence, as measured by IQ tests, is, like height, in part heritable. The heritability of IQ is estimated at around 80%, which means that about 80% of the variation in IQ among individuals in a population is attributable to genetic differences, with the remaining 20% due to environment and other factors.
  3. IQ correlates positively with factors contributing to success in society. The correlation with performance in education is 0.7, with highest educational level completed 0.5, and with salary 0.3.
  4. In Europe, between 1400 and around 1850, the wealthier half of the population had more children who survived to adulthood than the poorer half.
  5. Because IQ correlates with social success, that portion of the population which was more intelligent produced more offspring.
  6. Just as in selective breeding of animals by selecting those with a desired trait for mating, this resulted in a population whose average IQ increased (slowly) from generation to generation over this half-millennium.

The gradually rising IQ of the population resulted in a growing standard of living as knowledge and inventions accumulated due to the efforts of those with greater intelligence over time. In particular, even a relatively small increase in the mean IQ of a population makes an enormous difference in the tiny fraction of people with “genius level” IQ who are responsible for many of the significant breakthroughs in all forms of human intellectual endeavour. If we consider an IQ of 145 as genius level, in a population of a million with a mean IQ of 100, one in 741 people will have an IQ of 145 or above, so there will be around 1350 people with such an IQ. But if the population's mean IQ is 95, just five points lower, only one in 2331 people will have a genius level IQ, and there will be just 429 potential geniuses in the population of a million. In a population of a million with a mean IQ of 90, there will be just 123 potential geniuses.

(Some technical details are in order. A high IQ [generally 125 or above] appears to be a necessary condition for genius-level achievement, but it is insufficient by itself. Those who produce feats of genius usually combine high intelligence with persistence, ambition, often a single-minded focus on a task, and usually require an environment which allows them to acquire the knowledge and intellectual tools required to apply their talent. But since a high IQ is a requirement, the mean IQ determines what fraction of the population are potential geniuses; other factors such as the society's educational institutions, resources such as libraries, and wealth which allows some people to concentrate on intellectual endeavours instead of manual labour, contribute to how many actual works of genius will be produced. The mean IQ of most Western industrial nations is around 100, and the standard deviation of IQ is normalised to be 15. Using this information you can perform calculations such as those in the previous paragraph using Fourmilab's z Score Calculator, as explained in my Introduction to Probability and Statistics.)
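
For readers who prefer code to a calculator, here is a minimal sketch of the tail calculation behind the figures above, using SciPy's normal survival function in place of the z Score Calculator (the choice of SciPy is mine; any normal-distribution routine will do):

    from scipy.stats import norm

    GENIUS_IQ = 145.0
    SD = 15.0        # IQ standard deviation is normalised to 15

    for mean_iq in (100.0, 95.0, 90.0):
        z = (GENIUS_IQ - mean_iq) / SD    # z-score of the genius threshold
        tail = norm.sf(z)                 # fraction of the population above it
        print(f"mean {mean_iq}: 1 in {1 / tail:.0f}, "
              f"about {tail * 1e6:.0f} per million")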

Of the pillars of the argument listed above, items 1 through 3 are noncontroversial except by those who deny the existence of general intelligence entirely or the ability of IQ tests to measure it. The authors present the large body of highly persuasive evidence in favour of those items in a form accessible to the non-specialist. If you reject that evidence, then you needn't consider the rest of the argument.

Item 4, the assertion that wealthier families had more children survive to adulthood, is substantiated by a variety of research, much of it done in England, where recorded wills and church records of baptisms and deaths provide centuries of demographic data. One study, for example, examining wills filed between 1585 and 1638 in Suffolk and Essex found that the richer half of estates (determined by the bequests in the wills) had almost twice as many children named in wills compared to the poorer half. An investigation of records in Norfolk covering the years 1500 to 1630 found an average of four children for middle class families as opposed to two for the lower class. Another, covering Saxony in Germany between 1547 and 1671, found the middle class had an average of 3.4 children who survived to become married, while the working class had just 1.6. This differential fertility, in conjunction with item 5, the known correlation between intelligence and social success, makes it plausible that a process of selection for intelligence was going on, and probably had been for centuries. (Records are sparse before the 17th century, so detailed research for that period is difficult.)

Another form of selection got underway as the Middle Ages gave way to the early modern period around the year 1500 in Europe. While in medieval times criminals were rarely executed due to opposition by the Church, by the early modern era almost all felonies received the death penalty. This had the effect of “culling the herd” of its most violent members who, being predominantly young, male, and of low intelligence, would often be removed from the breeding population before fathering any children. To the extent that the propensity to violent crime is heritable (which seems plausible, as almost all human characteristics are heritable to one degree or another), this would have “domesticated” the European human population and contributed to the well-documented dramatic drop in the murder rate in this period. It would also have selected out those of low intelligence, who are prone to violent crime. Further, in England, there was a provision called “Benefit of Clergy” under which those who could demonstrate literacy could escape the hangman: another selection pressure favouring intelligence.

If intelligence was gradually increasing in Europe from the Middle Ages through the time of the Industrial Revolution, can we find evidence of this in history? Obviously, we don't have IQ tests from that period, but there are other suggestive indications. Intelligent people have lower time preference: they are willing to defer immediate gratification for a reward in the future. The rate of interest on borrowed money is a measure of a society's overall time preference. Data covering the period from 1150 through 1950 show that interest rates declined over the entire span, from over 10% around the year 1200 to about 5% in the 1800s. This is consistent with an increase in intelligence.

Literacy correlates with intelligence, and records from marriage registers and court documents show continually growing literacy from 1580 through 1920. In the latter part of this period, the introduction of government schools contributed to much of the increase, but in early years it may reflect growing intelligence.

A population with growing intelligence should produce more geniuses who make contributions which are recorded in history. In a 2005 study, American physicist Jonathan Huebner compiled a list of 8,583 significant events in the history of science and technology from the Stone Age through 2004. He found that, after adjusting for the total population of the time, the rate of innovation per capita had quadrupled between 1450 and 1870. Independently, Charles Murray's 2003 book Human Accomplishment found that the rate of significant innovations and the appearance of the figures who produced them increased from the Middle Ages through the 1870s.

The authors contend that a growing population with increasing mean intelligence eventually reached a critical mass which led to the industrial revolution, due to a sufficiently large number of genius intellects alive at the same time and an intelligent workforce who could perform the jobs needed to build and operate the new machines. This created unprecedented prosperity and dramatically increased the standard of living throughout the society.

And then an interesting thing happened. It's called the “demographic transition”, and it's been observed in country after country as it develops from a rural, agrarian economy to an urban, industrial society. Pre-industrial societies are characterised by a high birth rate, a high rate of infant and childhood mortality, and a stable or very slowly growing population. Families have many children in the hope of having a few survive to adulthood to care for them in old age and pass on their parents' genes. It is in this phase that the intense selection pressure obtains: the better-off and presumably more intelligent parents will have more children survive to adulthood.

Once industrialisation begins, it is usually accompanied by public health measures, better sanitation, improved access to medical care, and the introduction of innovations such as vaccination, antiseptics, and surgery with anæsthesia. This results in a dramatic fall in the mortality rate for the young, larger families, and an immediate bulge in the population. As social welfare benefits are extended to reach the poor through benefits from employers, charity, or government services, this occurs more broadly across social classes, reducing the disparity in family sizes among the rich and poor.

Eventually, parents begin to see the advantage of smaller families now that they can be confident their offspring have a high probability of surviving to adulthood. This is particularly the case for the better-off, as they realise their progeny will gain an advantage by splitting their inheritance fewer ways and in receiving the better education a family can afford for fewer children. This results in a decline in the birth rate, which eventually reaches the replacement rate (or below), where it comes into line with the death rate.

But what does this do to the selection for intelligence from which humans have been benefitting for centuries? It ends it, and eventually puts it into reverse. In country after country, the better educated and well-off (both correlates of intelligence) have fewer children than the less intelligent. This is easy to understand: in the prime child-bearing years they tend to be occupied with their education and starting a career. They marry later, have children (if at all) at an older age, and due to the female biological clock, have fewer kids even if they desire more. They also use contraception to plan their families and tend to defer having children until the “right time”, which sometimes never comes.

Meanwhile, the less intelligent have more children. In the modern welfare state they are often clients on the public dole, have less impulse control and higher time preference, and when they use contraception often do so improperly, resulting in unplanned pregnancies. They start earlier, don't bother with getting married (as the stigma of single motherhood has largely been eliminated), and rely upon the state to feed, house, educate, and eventually imprison their progeny. This sad reality was hilariously mocked in the introduction to the 2006 film Idiocracy.

While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I'm talking about you; the rising competitors are China, Korea, and [to a lesser extent] Japan.)

If you've followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, when measured by standardised tests which were not re-normed, IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is.

What's going on, then? The authors join James Flynn in concluding that what we're seeing is an increase in the population's proficiency in taking IQ tests, not an actual increase in general intelligence (g). Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn't make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we're seeing may be specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening.

Norway, Sweden, Denmark, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database of men in these countries, starting in 1950 and updated yearly. What is seen is an increase in IQ, as expected from the Flynn effect, from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.)

There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests. They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since.

If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China.

And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this runs afoul of the “ick factor”? What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn't make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we've learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans.

Or, maybe we won't. In which case, “Winter has come and it's only going to get colder. Wrap up warm.”

Here is a James Delingpole interview of the authors and discussion of the book.


April 2019

Nelson, Roger D. Connected: The Emergence of Global Consciousness. Princeton: ICRL Press, 2019. ISBN 978-1-936033-35-5.
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing and destined to eventually reach an Omega Point in which, just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, eventually individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.

Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact through channels other than the physical senses (which are ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.

Such beliefs have been common over much of human history and in a wide variety of very diverse cultures around the globe, but in recent centuries these beliefs have been displaced by the view of mechanistic, reductionist science, which argues that the brain is just a kind of (phenomenally complicated) biological computer and that consciousness can be thought of as an emergent phenomenon which arises when the brain computer's software becomes sufficiently complex to be able to examine its own operation. From this perspective, consciousness is confined within the brain, cannot affect the outside world or the consciousness of others except by physical interactions initiated by motor neurons, and perceives the world only through sensory neurons. There is no “consciousness field”, and individual consciousness dies when the brain does.

But while this view is more in tune with the scientific outlook which spawned the technological revolution that has transformed the world and continues to accelerate, it has, so far, made essentially zero progress in understanding consciousness. Although we have built electronic computers which can perform mathematical calculations trillions of times faster than the human brain, and are on track to equal the storage capacity of that brain some time in the next decade or so, we still don't have the slightest idea how to program a computer to be conscious: to be self-aware and act out of a sense of free will (if free will, however defined, actually exists). So, if we adopt a properly scientific and sceptical view, we must conclude that the jury is still out on the question of consciousness. If we don't understand enough about it to program it into a computer, then we can't be entirely confident that it is something we could program into a computer, or that it is just some kind of software running on our brain-computer.

It looks like humans are, dare I say, programmed to believe in consciousness as a force not confined to the brain. Many cultures have developed shamanism, religions, philosophies, and practices which presume the existence of the following kinds of what Dean Radin calls Real Magic, and which I quote from my review of his book with that title.

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.

Starting in the 19th century, a small number of scientists undertook to investigate whether these phenomena could possibly be real, whether they could be demonstrated under controlled conditions, and what mechanism might explain these kinds of links between consciousness and will and the physical world. In 1882 the Society for Psychical Research was founded in London and continues to operate today, publishing three journals. Psychic research, now more commonly called parapsychology, continues to investigate the interaction of consciousness with the outside world through (unspecified) means other than the known senses, usually in laboratory settings where great care is taken to ensure no conventional transfer of information occurs and with elaborate safeguards against fraud, either by experimenters or test subjects. For a recent review of the state of parapsychology research, I recommend Dean Radin's excellent 2006 book, Entangled Minds.

Parapsychologists such as Radin argue that while phenomena such as telepathy, precognition, and psychokinesis are very weak effects, elusive, and impossible to produce reliably on demand, the statistical evidence for their existence from large numbers of laboratory experiments is overwhelming, with a vanishingly small probability that the observed results are due to chance. Indeed, the measured confidence levels and effect sizes of some categories of parapsychological experiments exceed those of medical clinical trials such as those which resulted in the recommendation of routine aspirin administration to reduce the risk of heart disease in older males.

For more than a quarter of a century, an important centre of parapsychology research was the Princeton Engineering Anomalies Research (PEAR) laboratory, established in 1979 by Princeton University's Dean of Engineering, Robert G. Jahn. (The lab closed in 2007 with Prof. Jahn's retirement, and has now been incorporated into the International Consciousness Research Laboratories, which is the publisher of the present book.) An important part of PEAR's research was with electronic random event generators (REGs) connected to computers in experiments where a subject (or “operator”, in PEAR terminology) would try to influence the generator to produce an excess of one or zero bits. In a large series of experiments [PDF] run over a period of twelve years with multiple operators, it was reported that an influence in the direction of the operator's intention was seen, with a probability of only one in a trillion that the result was due to chance. The effect size was minuscule, with around one bit in ten thousand flipping in the direction of the operator's stated goal.

If one operator can produce a tiny effect on the random data, what if many people were acting together, not necessarily with active intention, but with their consciousnesses focused on a single thing, for example at a sporting event, musical concert, or religious ceremony? The miniaturisation of electronics and computers eventually made it possible to build a portable REG and computer which could be taken into the field. This led to the FieldREG experiments in which this portable unit was taken to a variety of places and events to monitor its behaviour. The results were suggestive of an effect, but the data set was far too small to be conclusive.

Mindsong random event generator

In 1998, Roger D. Nelson, the author of this book, realised that the rapid development and worldwide deployment of the Internet made it possible to expand the FieldREG concept to a global scale. Random event generators based upon quantum effects (usually shot noise from tunnelling across a back-biased Zener diode or thermal noise in a resistor) had been scaled down to small, inexpensive devices which could be attached to personal computers via an RS-232 serial port. With more and more people gaining access to the Internet (originally mostly via dial-up to commercial Internet Service Providers, then increasingly via persistent broadband connections such as ADSL service over telephone wires or a cable television connection), it might be possible to deploy a network of random event generators at locations all around the world, each of which would constantly collect timestamped data to be transmitted to a central server, collected there, and made available to researchers for analysis by whatever means they chose to apply.

As Roger Nelson discussed the project with his son Greg (who would go on to be the principal software developer for the project), Greg suggested that what was proposed was essentially an electroencephalogram (EEG) for the hypothetical emerging global mind, an “ElectroGaiaGram” or EGG. Thus was born the “EGG Project” or, as it is now formally called, the Global Consciousness Project. Just as the many probes of an EEG provide a (crude) view into the operation of a single brain, perhaps the wide-flung, always-on network of REGs would pick up evidence of coherence when a large number of the world's minds were focused on a single event or idea. Once the EGG project was named, terminology followed naturally: the individual hosts running the random event generators would be “eggs” and the central data archiving server the “basket”.

In April 1998, Roger Nelson released the original proposal for the project and shortly thereafter Greg Nelson began development of the egg and basket software. I became involved in the project in mid-summer 1998 and contributed code to the egg and basket software, principally to allow it to be portable to other variants of Unix systems (it was originally developed on Linux) and machines with different byte order than the Intel processors on which it ran, and also to reduce the resource requirements on the egg host, making it easier to run on a non-dedicated machine. I also contributed programs for the basket server to assemble daily data summaries from the raw data collected by the basket and to produce a real-time network status report. Evolved versions of these programs remain in use today, more than two decades later. On August 2nd, 1998, I began to run the second egg in the network, originally on a Sun workstation running Solaris; this was the first non-Linux, non-Intel, big-endian egg host in the network. A few days later, I brought up the fourth egg, running on a Sun server in the Hall of the Servers one floor below the second egg; this used a different kind of REG, but was otherwise identical. Both of these eggs have been in continuous operation from 1998 to the present (albeit with brief outages due to power failures, machine crashes, and other assorted disasters over the years), and have migrated from machine to machine over time. The second egg is now connected to a Raspberry Pi running Linux, while the fourth is now hosted on a Dell Intel-based server also running Linux, which was the first egg host to run on a 64-bit machine in native mode.

Here is precisely how the network measures deviation from the expectation for genuinely random data. The egg hosts all run a Network Time Protocol (NTP) client to provide accurate synchronisation with Internet time server hosts which are ultimately synchronised to atomic clocks or GPS. At the start of every second a total of 200 bits are read from the random event generator. Since all the existing generators provide eight bits of random data transmitted as bytes on a 9600 baud serial port, this involves waiting until the start of the second, reading 25 bytes from the serial port (first flushing any potentially buffered data), then breaking the eight bits out of each byte of data. A precision timing loop guarantees that the sampling starts at the beginning of the second-long interval to the accuracy of the computer's clock.

This process produces 200 random bits. These bits, one or zero, are summed to produce a “sample” which counts the number of one bits for that second. This sample is stored in a buffer on the egg host, along with a timestamp (in Unix time() format), which indicates when it was taken.
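
In outline, each egg's per-second loop looks something like the following sketch (Python used for illustration, not the project's actual C code; read_reg_bytes() is a stand-in for the real serial-port driver and here substitutes a software randomness source so the sketch runs):

    import os
    import time

    SAMPLE_BITS = 200
    BYTES_PER_SAMPLE = SAMPLE_BITS // 8    # 25 bytes of 8 bits each

    def read_reg_bytes(n):
        # Stand-in for reading n bytes from the hardware REG on the
        # serial port (after flushing any buffered data).
        return os.urandom(n)

    def collect_sample():
        # Wait for the top of the next second; the real egg uses a
        # precision timing loop with the clock disciplined by NTP.
        time.sleep(1.0 - (time.time() % 1.0))
        timestamp = int(time.time())       # Unix time() of this sample
        data = read_reg_bytes(BYTES_PER_SAMPLE)
        ones = sum(bin(byte).count("1") for byte in data)  # sum the 200 bits
        return timestamp, ones             # sample = count of one bits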

Buffers of completed samples are archived in files on the egg host's file system. Periodically, the basket host will contact the egg host over the Internet and request any samples collected after the last packet it received from the egg host. The egg will then transmit any newer buffers it has filled to the basket. All communications are performed over the stateless UDP Internet protocol, and the design of the basket request and egg reply protocol is robust against loss of packets or packets being received out of order.

(This data transfer protocol may seem odd, but recall that the network was designed more than twenty years ago when many people, especially those outside large universities and companies, had dial-up Internet access. The architecture would allow a dial-up egg to collect data continuously and then, when it happened to be connected to the Internet, respond to a poll from the basket and transmit its accumulated data during the time it was connected. It also makes the network immune to random outages in Internet connectivity. Over two decades of operation, we have had exactly zero problems with Internet outages causing loss of data.)

When a buffer from an egg host is received by the basket, it is stored in a database directory for that egg. The buffer contains a time stamp identifying the second at which each sample within it was collected. All times are stored in Universal Time (UTC), so no correction for time zones or summer and winter time is required.

This is the entire collection process of the network. The basket host, which was originally located at Princeton University and now is on a server at global-mind.org, only stores buffers in the database. Buffers, once stored, are never modified by any other program. Bad data, usually long strings of zeroes or ones produced when a hardware random event generator fails electrically, are identified by a “sanity check” program and then manually added to a “rotten egg” database which causes these sequences to be ignored by analysis programs. The random event generators are very simple and rarely fail, so this is a very unusual circumstance.

The raw database format is difficult for analysis programs to process, so every day an automated program (which I wrote) is run which reads the basket database, extracts every sample collected for the previous 24 hour period (or any desired 24 hour window in the history of the project), and creates a day summary file with a record for every second in the day and a column for the samples from each egg which reported that day. Missing data (eggs which did not report for that second) are indicated by a blank in that column. The data are encoded in CSV format which is easy to load into a spreadsheet or read with a program. Because some eggs may not report immediately due to Internet outages or other problems, the summary data report is re-generated two days later to capture late-arriving data. You can request custom data reports for your own analysis from the Custom Data Request page. If you are interested in doing your own exploratory analysis of the Global Consciousness Project data set, you may find my EGGSHELL C++ libraries useful.

The analysis performed by the Project proceeds from these summary files as follows.

First, we observe that each sample (xi) from egg i consists of 200 bits with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).

Then, for each sample, we can compute its Z-score as Zi = (xi − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.

It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

Z = (Z1 + Z2 + … + Zk) / √k

over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance.

Squaring this composite Z-score over all k eggs gives a chi-squared distributed value, V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and the number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds.
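
Applied to one of the day summary files described above, the whole pipeline is only a few lines. Here is a minimal sketch in Python; the column layout (timestamp first, then one sample column per egg, blank where an egg is missing) is assumed from the description above, so check it against a real file before relying on it:

    import csv
    import math
    from scipy.stats import chi2

    MU = 100.0                  # expected mean of a 200-bit sample
    SIGMA = math.sqrt(50.0)     # 7.071, standard deviation of a sample

    def day_significance(path):
        chisq_sum, dof = 0.0, 0
        with open(path, newline="") as f:
            for row in csv.reader(f):
                try:
                    # Columns: timestamp, then one sample per egg; blank
                    # fields are eggs which did not report that second.
                    samples = [float(v) for v in row[1:] if v.strip()]
                except ValueError:
                    continue                  # skip header or malformed rows
                if not samples:
                    continue
                k = len(samples)
                # Stouffer's formula over the k eggs reporting this second:
                z = sum((x - MU) / SIGMA for x in samples) / math.sqrt(k)
                chisq_sum += z * z            # Z² is chi-squared, 1 degree of freedom
                dof += 1
        # Probability that a chi-squared deviation this large is due to chance:
        return chisq_sum, dof, chi2.sf(chisq_sum, dof)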

The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.

Cumulative deviation: State of the Union 2012

What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)?

Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.

Cumulative deviation: Gadaffi killed, 2011-10-20

Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111.

What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I've shown you here, but in the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes).

The significance of the entire ensemble of tests can be computed from the network results for the periods specified in the 500 formal predictions in the Hypothesis Registry. To compute the overall effect, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.

Cumulative deviation: GCP 1998 through 2015

Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the probability the observed results were due to chance is less than one in a trillion. This is far beyond the five standard deviation criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. Could it be expected in truly random data? To check, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.

Cumulative deviation: GCP results versus pseudorandom simulations

Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.

Cumulative deviation of chi-square: terrorist attacks 2001-09-11

The final probability was 0.028, which is equivalent to odds of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.

Cumulative deviation of Z-score: terrorist attacks 2001-09-11

Had this form of analysis and time window been chosen a priori, it would have been calculated to have a chance probability of 0.000075, or less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.

Cumulative deviation of chi-square: seven day window around 2001-09-11

Another analysis looks at a 20 hour period centred on the attacks and smooths the Z-scores by averaging them within a one hour sliding window, then squares the average and converts to odds against chance.

Odds: twenty hour window around 2001-09-11, one hour smoothing

Dean Radin performed an independent analysis of the data, binning Z-scores into five minute intervals over the period from September 6 to 13, then calculating the odds against the result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of each day.

Binned odds: 2001-09-06 to 2001-09-13

The following is the result when the actual GCP data from September 2001 are replaced with pseudorandom data for the same period.

Binned odds: pseudorandom data 2001-09-06 to 2001-09-13

So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.

If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere.

The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.

A Kindle edition is available.

All graphs in this article are as posted on the Global Consciousness Project Web site.


Corcoran, Travis J. I. The Powers of the Earth. New Hampshire: Morlock Publishing, 2017. ISBN 978-1-9733-1114-0.
Corcoran, Travis J. I. Causes of Separation. New Hampshire: Morlock Publishing, 2018. ISBN 978-1-9804-3744-4.
(Note: This novel is the first of an envisioned four volume series titled Aristillus. It and the second book, Causes of Separation, published in May 2018, together tell a single story which reaches a decisive moment just as the first book ends. Unusually, this will be a review of both novels, taken as a whole. If you like this kind of story at all, there's no way you'll not immediately plunge into the second book after setting down the first.)

Around the year 2050, collectivists were firmly in power everywhere on Earth. Nations were subordinated to the United Nations, whose force of Peace Keepers (PKs) had absorbed all but elite special forces, and were known for being simultaneously brutal, corrupt, and incompetent. (Due to the equality laws, military units had to contain a quota of “Alternatively Abled Soldiers” whom other troops had to wheel into combat.) The United States still existed as a country but, after decades of rule by the two factions of the Democrat party, Populist and Internationalist, was mired in stagnation, bureaucracy, crumbling infrastructure, and on the verge of bankruptcy. The U.S. President, Themba Johnson, a former talk show host who combined cluelessness, a volatile temper, and vulpine cunning when it came to manipulating public opinion, is confronted with all of these problems and looking for a masterstroke to get beyond the next election.

Around 2050, when the collectivists entered the inevitable end game their policies lead to everywhere they are tried, with the Bureau of Sustainable Research (BuSuR) suppressing new technologies in every field and the Construction Jobs Preservation Act and Bureau of Industrial Planning banning anything which might increase productivity, a final grab to loot the remaining seed corn resulted in the CEO Trials, aimed at the few remaining successful companies, with expropriation of their assets and imprisonment of their leaders. CEO Mike Martin manages to escape from prison and link up with renegade physicist Ponnala (“Ponzie”) Srinivas, inventor of an anti-gravity drive he doesn't want the slavers to control. Mike buys a rustbucket oceangoing cargo ship, equips it with the drive, an airtight compartment, and life support, and flees Earth with a cargo of tunnel boring machines and water to exile on the Moon, in the crater Aristillus in Mare Imbrium on the lunar near side where, fortuitously, the impact of a metal-rich asteroid millions of years ago enriched the sub-surface with metals rare in the Moon's crust.

Let me say a few words about the anti-gravity drive, which is very unusual and original, and whose properties play a significant role in the story. The drive works by coupling to the gravitational field of a massive body and then pushing against it, expending energy as it rises and gains gravitational potential energy. Momentum is conserved, as an equal and opposite force is exerted on the massive body against which it is pushing. The force vector is always along the line connecting the centre of mass of the massive body and the drive unit, directed away from the centre of mass. The force is proportional to the strength of the gravitational field in which the drive is operating, and hence stronger when pushing against a body like Earth as opposed to a less massive one like the Moon. The drive's force diminishes with distance from the massive body as its gravitational field falls off with the inverse square law, and hence the drive generates essentially no force when in empty space far from a gravitating body. When used to brake a descent toward a massive body, the drive converts gravitational potential energy into electricity like the regenerative braking system of an electric vehicle: energy which can be stored for use when later leaving the body.

Because the drive can only push outward radially, when used to, say, launch from the Earth to the Moon, it is much like Jules Verne's giant cannon—the launch must occur at the latitude and longitude on Earth where the Moon will be directly overhead at the time the ship arrives at the Moon. In practice, the converted ships also carried auxiliary chemical rockets and reaction control thrusters for trajectory corrections and precision maneuvering which could not be accomplished with the anti-gravity drive.

By 2064, the lunar settlement, called Aristillus by its inhabitants, was thriving, with more than a hundred thousand residents, and growing at almost twenty percent a year. (Well, nobody knew for sure, because from the start the outlook shared by the settlers was aligned with Mike Martin's anarcho-capitalist worldview. There was no government, no taxes, no ID cards, no business licenses, no regulations, no zoning [except covenants imposed by property owners on those who sub-leased property from them], no central bank, no paper money [an entrepreneur had found a vein of gold left by the ancient impactor and gone into business providing hard currency], no elections, no politicians, no forms to fill out, no police, and no army.) Some of these “features” of life on grey, regimented Earth were provided by private firms, while many of the others were found to be unnecessary altogether.

The community prospered as it grew. Like many frontier settlements, labour was in chronic short supply, and even augmented by robot rovers and machines (free of the yoke of BuSuR), there was work for anybody who wanted it and job offers awaiting new arrivals. A fleet of privately operated ships maintained a clandestine trade with Earth, bringing goods which couldn't yet be produced on the Moon, atmosphere, water from the oceans (in converted tanker ships), and new immigrants who had sold their Earthly goods and quit the slave planet. Waves of immigrants from blood-soaked Nigeria and chaotic China established their own communities and neighbourhoods in the ever-growing network of tunnels beneath Aristillus.

The Moon has become a refuge not just for humans. When BuSuR put its boot on the neck of technology, it ordered the shutdown of a project to genetically “uplift” dogs to human intelligence and beyond, creating “Dogs” (the capital letter denoting the uplift), and ordered all existing Dogs to be euthanised. Many were, but John (we never learn his last name), a former U.S. Special Forces operator, manages to rescue a colony of Dogs from one of the labs before the killers arrive and escape with them to Aristillus, where they have set up the Den and pursue their own priorities, including role-playing games, software development, and trading on the betting markets. Also rescued by John was Gamma, the first Artificial General Intelligence to be created, whose intelligence is above the human level but not (yet, anyway) at the transcendent level of a runaway intelligence singularity. Gamma has established itself in its own facility in Sinus Lunicus on the other side of Mare Imbrium, and has little contact with the human or Dog settlers.

Inevitably, liberty produces prosperity, and prosperity eventually causes slavers to regard the free with envious eyes, and slowly and surely draw their plans against them.

This is the story of the first interplanetary conflict, and a rousing tale of liberty versus tyranny, frontier innovation against collectivised incompetence, and principles (there is even the intervention of a Vatican diplomat) confronting brutal expedience. There are delicious side-stories about the creation of fake news, scheming politicians, would-be politicians in a libertarian paradise, open source technology, treachery, redemption, and heroism. How do three distinct species: human, Dog, and AI work together without a top-down structure or subordinating one to another? Can the lunar colony protect itself without becoming what its settlers left Earth to escape?

Woven into the story is a look at how a libertarian society works (and sometimes doesn't work) in practice. Aristillus is in no sense a utopia: it has plenty of rough edges and things to criticise. But people there are free, and they prefer it to the prison planet they escaped.

This is a wonderful, sprawling, action-packed story with interesting characters, complicated conflicts, and realistic treatment of what a small colony faces when confronted by a hostile planet of nine billion slaves. Think of this as Heinlein's The Moon is a Harsh Mistress done better. There are generous tips of the hat to Heinlein and other science fiction in the book, but this is a very different story with an entirely different outcome, and truer to the principles of individualism and liberty. I devoured these books and give them my highest recommendation. The Powers of the Earth won the 2018 Prometheus Award for best libertarian science fiction novel.

Coppley, Jackson. The Code Hunters. Chevy Chase, MD: Contour Press, 2019. ISBN 978-1-09-107011-0.
A team of expert cavers exploring a challenging cave in New Mexico in search of a possible connection to Carlsbad Caverns tumbles into a chamber deep underground containing something which just shouldn't be there: a huge slab of metal, like titanium, twenty-four feet square and eight inches thick, set into the rock of the cave, bearing markings which resemble the pits and lands on an optical storage disc. No evidence of human presence in the cave prior to the discoverers is found, and dating confirms that the slab is at least ten thousand years old. There is no way an object that large could have been brought through the cramped and twisting passages of the cave to the chamber where it was found.

Wealthy adventurer Nicholas Foxe, with degrees in archaeology and cryptography, gets wind of the discovery and pulls strings to get access to the cave, putting together a research program to try to understand the origin of the slab and decode its enigmatic inscription. But as news of the discovery reaches others, they begin to pursue their own priorities. A New Mexico senator sends his on-the-make assistant to find out what is going on and see how it might be exploited to his advantage. An ex-Army special forces operator makes stealthy plans. An MIT string theorist with a wide range of interests begins exploring unorthodox ideas about how the inscriptions might be encoded. A televangelist facing hard times sees the Tablet as the way back to the top of the heap. A wealthy Texan sees the potential in the slab for wealth beyond his abundant dreams of avarice. As the adventure unfolds, we encounter a panoply of fascinating characters: a World Health Organization scientist, an Italian violin maker with an eccentric theory of language and his autistic daughter, and a “just the facts” police inspector. As clues are teased from the enigma, we visit exotic locations and experience harrowing adventure, finally grasping the significance of a discovery that bears on the very origin of modern humans.

About now, you might be thinking “This sounds like a Dan Brown novel”, and in a sense you'd be right. But this is the kind of story Dan Brown would craft if he were a lot better author than he is: whereas Dan Brown books have become stereotypes of cardboard characters and fill-in-the-blanks plots with pseudo-scientific bafflegab stirred into the mix (see my review of Origin [May 2018]), this is a gripping tale filled with complex, quirky characters, unexpected plot twists, beautifully sketched locales, and a growing sense of wonder as the significance of the discovery is grasped. If anybody in Hollywood had any sense (yes, I know…) they would make this into a movie instead of doing another tedious Dan Brown sequel. This is subtitled “A Nicholas Foxe Adventure”: I sincerely hope there will be more to come.

The author kindly let me read a pre-publication manuscript of this novel. The Kindle edition is free to Kindle Unlimited subscribers.

May 2019

Smolin, Lee. Einstein's Unfinished Revolution. New York: Penguin Press, 2019. ISBN 978-1-59420-619-1.
In the closing years of the nineteenth century, one of those nagging little discrepancies vexing physicists was the behaviour of the photoelectric effect. Originally discovered in 1887, the phenomenon causes certain metals, when illuminated by light, to absorb the light and emit electrons. The perplexing point was that there was a cut-off wavelength (colour of light) for electron emission: for wavelengths longer than the cut-off, no electrons would be emitted at all, regardless of the intensity of the beam of light. For example, a certain metal might emit electrons when illuminated by green, blue, violet, and ultraviolet light, with the intensity of electron emission proportional to the light intensity, but red or yellow light, regardless of how intense, would not result in a single electron being emitted.

This didn't make any sense. According to Maxwell's wave theory of light, which was almost universally accepted and had passed stringent experimental tests, the energy of light depended upon the amplitude of the wave (its intensity), not the wavelength (or, reciprocally, its frequency). And yet the photoelectric effect didn't behave that way—it appeared that whatever was causing the electrons to be emitted depended on the wavelength of the light, and what's more, there was a sharp cut-off below which no electrons would be emitted at all.

In 1905, in one of his “miracle year” papers, “On a Heuristic Viewpoint Concerning the Production and Transformation of Light”, Albert Einstein suggested a solution to the puzzle. He argued that light did not propagate as a wave at all, but rather in discrete particles, or “quanta”, later named “photons”, whose energy was proportional to the frequency (and hence inversely proportional to the wavelength) of the light. This neatly explained the behaviour of the photoelectric effect. Light with a wavelength longer than the cut-off point was transmitted by photons whose energy was too low to knock electrons out of the metal they illuminated, while light of shorter wavelengths was carried by photons energetic enough to liberate electrons. The intensity of the light was a measure of the number of photons in the beam, unrelated to the energy of the individual photons.
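
In modern notation (standard physics, not a quotation from the book), the energy of a photon and the maximum kinetic energy of an electron it ejects are

\[ E = h\nu = \frac{hc}{\lambda}, \qquad K_{\max} = h\nu - \phi \]

where h is Planck's constant, ν the frequency and λ the wavelength of the light, and φ the “work function” of the metal. If hν is less than φ, no electrons are emitted, however intense the beam.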

This paper became one of the cornerstones of the revolutionary theory of quantum mechanics, the complete working out of which occupied much of the twentieth century. Quantum mechanics underlies the standard model of particle physics, which is arguably the most thoroughly tested theory in the history of physics, with no experiment showing results which contradict its predictions since it was formulated in the 1970s. Quantum mechanics is necessary to explain the operation of the electronic and optoelectronic devices upon which our modern computing and communication infrastructure is built, and describes every aspect of physical chemistry.

But quantum mechanics is weird. Consider: if light consists of little particles, like bullets, then why, when you shine a beam of light on a barrier with two slits, do you get an interference pattern with bright and dark bands, precisely as you get with, say, water waves? And if you send a single photon at a time and try to measure which slit it went through, you find it always went through one or the other, but then the interference pattern goes away. It seems that whether the photon behaves as a wave or a particle depends upon how you look at it. If you have an hour, here is grand master explainer Richard Feynman (who won his own Nobel Prize in 1965 for reconciling the quantum mechanical theory of light and the electron with Einstein's special relativity) exploring how profoundly weird the double slit experiment is.
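
To get a feel for where the fringes come from, here is a tiny numerical sketch (my illustration, with arbitrary parameters; nothing like it appears in the book) of the idealised two-slit intensity pattern in the far-field approximation:

    import math

    # Idealised two-slit interference: relative intensity on a distant
    # screen is cos^2(pi * d * x / (lambda * L)) in the far-field
    # (Fraunhofer) approximation, ignoring the single-slit envelope.
    wavelength = 500e-9     # 500 nm green light, in metres
    slit_sep = 50e-6        # 50 micrometre slit separation
    screen_dist = 1.0       # distance to the screen, in metres

    for i in range(-10, 11):
        x = i * 2e-3                        # position on the screen, metres
        phase = math.pi * slit_sep * x / (wavelength * screen_dist)
        intensity = math.cos(phase) ** 2    # bright and dark fringes
        print(f"{x * 1000:6.1f} mm  {'#' * int(20 * intensity)}")

Bright fringes appear wherever the path difference between the two slits is a whole number of wavelengths; send the photons through one at a time and the same pattern builds up dot by dot.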

Fundamentally, quantum mechanics seems to violate the principle of realism, which the author defines as follows.

The belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world.

This has been part of the scientific worldview since antiquity and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn't possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists.
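
For the position and momentum pair, that limit is quantitative (standard notation, not a quotation from the book):

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} \]

No refinement of instruments can push the product of the two uncertainties below half the reduced Planck constant ħ.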

From the formulation of the modern quantum theory in the 1920s, often called the Copenhagen interpretation after the location of the institute where one of its architects, Niels Bohr, worked, a number of eminent physicists including Einstein and Louis de Broglie were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a physical position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when its modulus was squared) gave the probability of finding it at any given location.

In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and is defined by the medium through which it propagates. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”. But that wasn't what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant.
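
In the modern single-particle formulation (standard notation; my addition, not the book's), the wave function ψ obeys the ordinary Schrödinger equation while the particle's actual position Q is carried along by it according to the guiding equation

\[ \frac{dQ}{dt} = \frac{\hbar}{m} \, \operatorname{Im}\!\left( \frac{\nabla\psi}{\psi} \right) \bigg|_{x = Q(t)} \]

Given the exact initial position, the trajectory is fully deterministic; the usual quantum probabilities enter only through our ignorance of that initial position.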

In the 1950s, David Bohm independently invented a pilot wave theory which he developed into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continue to explore the implications of his theory, working on extending it to be compatible with special relativity. From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton's third law of motion holds that every action has an equal and opposite reaction, and in Einstein's general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune to the influence of the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm's theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers.

Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wavefunction due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations.

The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein's geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole. In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don't fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn't so: a simple experiment shows that a clock runs slower when it's in a gravitational field. Quantum mechanics says that it isn't possible to determine the position of a particle without its interacting with another particle, but general relativity requires the knowledge of precise positions of particles to determine how spacetime curves and governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”. In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution, while general relativity is nonlinear, where trying to disentangle the relationships of parts of the systems quickly goes pear-shaped and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces.
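
The linearity point can be stated compactly (my gloss, in standard notation). The Schrödinger equation,

\[ i\hbar \, \frac{\partial\psi}{\partial t} = \hat{H}\psi, \]

is linear in ψ: if ψ₁ and ψ₂ are solutions, so is any combination aψ₁ + bψ₂. Einstein's field equations,

\[ G_{\mu\nu} = \frac{8\pi G}{c^{4}} \, T_{\mu\nu}, \]

have no such property: the Einstein tensor G_{μν} depends nonlinearly on the metric, so two solutions cannot simply be added to yield a third.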

Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light). Many people have tried and failed to “quantise general relativity”. Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realistic one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He suggests that candidate theories must embody a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object).

On this view, what we perceive at the human scale, and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances. Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today's near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today.

This needn't forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (an example of a combinatorial explosion). You might try to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth. I suggested this almost twenty years ago as a test of Rupert Sheldrake's theory of morphic resonance, and (although doubtless Smolin would shun me for associating his theory with that one) it might produce interesting results.
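
To get a sense of the scale of that combinatorial explosion, a back-of-the-envelope computation (my numbers, not the book's):

    # Count the possible proteins of modest length built from the 20
    # standard amino acids, and compare with a rough estimate of the
    # number of atoms in the observable universe (~10^80).
    chain_length = 100
    num_sequences = 20 ** chain_length       # 20 choices per residue
    magnitude = len(str(num_sequences)) - 1  # order of magnitude

    print(f"Possible {chain_length}-residue proteins: about 10^{magnitude}")
    print(f"Excess over atoms in the universe: about 10^{magnitude - 80}")

A chain of just one hundred amino acids has about 10¹³⁰ possible sequences, so the overwhelming majority of modest-sized proteins have never existed anywhere in the universe.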

The book concludes with a very personal look at the challenges facing a working scientist who has concluded the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:

There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn't puzzle me. Ever since I learned classical economics, I have looked at the incentives in a system. When you regard academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905 with no academic career or reputation at risk you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today's institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today.

Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.

Kotkin, Stephen. Stalin, Vol. 2: Waiting for Hitler, 1929–1941. New York: Penguin Press, 2017. ISBN 978-1-59420-380-0.
This is the second volume in the author's monumental projected three-volume biography of Joseph Stalin. The first volume, Stalin: Paradoxes of Power, 1878–1928 (December 2018), covers the period from Stalin's birth through the consolidation of his sole power atop the Soviet state after the death of Lenin. The third volume, which will cover the period from the Nazi invasion of the Soviet Union in 1941 through the death of Stalin in 1953, has yet to be published.

As this volume begins in 1928, Stalin is securely in the supreme position of the Communist Party of the Soviet Union and, having over the years staffed the senior ranks of the party and the Soviet state (which the party operated like the puppet it was) with loyalists who owed their positions to him, has no serious rivals who might challenge him. (It is often claimed that Stalin was paranoid and feared a coup, but would a despot fearing for his position regularly take summer holidays, months in length, in Sochi, far from the capital?)

By 1928, the Soviet Union had largely recovered from the damage inflicted by the Great War, Bolshevik revolution, and subsequent civil war. Industrial and agricultural production were back to around their 1914 levels, and most measures of well-being had similarly recovered. To be sure, compared to the developed industrial economies of countries such as Germany, France, or Britain, Russia remained a backward economy largely based upon primitive agriculture, but at least it had undone the damage inflicted by years of turbulence and conflict.

But in the eyes of Stalin and his close associates, who were ardent Marxists, there was a dangerous and potentially deadly internal contradiction in the Soviet system as it then stood. In 1921, in response to the chaos and famine following the 1917 revolution and years-long civil war, Lenin had proclaimed the New Economic Policy (NEP), which tempered the pure collectivism of original Bolshevik doctrine by introducing a mixed economy, where large enterprises would continue to be owned and managed by the state, but small-scale businesses could be privately owned and run for profit. More importantly, in agriculture the previous top-down system of coercive requisitioning of grain and other products by the state was replaced by a market system in which farmers could sell their products freely, subject to a tax, payable in kind, proportional to their production (thus creating an incentive to increase production).

The NEP was a great success, and shortages of agricultural products were largely eliminated. There was grousing about the growing prosperity of the so-called NEPmen, but the results of freeing the economy from the shackles of state control were evident to all. According to Marxist doctrine, however, the NEP was a dagger pointed at the heart of the socialist state.

By 1928, the Soviet economy could be described, in Marxist terms, as socialism in the industrial cities and capitalism in the agrarian countryside. But, according to Marx, the form of politics was determined by the organisation of the means of production—paraphrasing Breitbart, politics is downstream of economics. This meant that preserving capitalism in a large sector of the country, one employing a large majority of its population and necessary to feed the cities, was an existential risk. In such a situation it would only be natural for the capitalist peasants to eventually prevail over the less numerous urbanised workers and destroy socialism.

Stalin was a Marxist. He was not an opportunist who used Marxism-Leninism to further his own ambitions. He really believed this stuff. And so, in 1928, he proclaimed an end to the NEP and began the forced collectivisation of Soviet agriculture. Private ownership of land would be abolished, and the 120 million peasants essentially enslaved as “workers” on collective or state farms, with planting, quotas to be delivered, and management effectively controlled by the party. After an initial lucky year, the inevitable catastrophe ensued. Between 1931 and 1933, famine and the epidemics which followed it killed between five and seven million people. The country lost around half of its cattle and two thirds of its sheep. In 1929, the average family in Kazakhstan owned 22.6 cattle; in 1933, 3.7. This was a calamity on the same order as the Jewish Holocaust in Germany, and just as man-made: during this period there was a global glut of food, but Stalin refused to admit the magnitude of the disaster for fear of inciting enemies to attack and because doing so would concede the failure of his collectivisation project. In addition to the famine, the process of collectivisation resulted in between four and five million people being arrested, executed, deported to other regions, or jailed.

Many in the starving countryside said, “If only Stalin knew, he would do something.” But the evidence is overwhelming: Stalin knew, and did nothing. Marxist theory said that agriculture must be collectivised, and by pure force of will he pushed through the project, whatever the cost. Many in the senior Soviet leadership questioned this single-minded pursuit of a theoretical goal at horrendous human cost, but they did not act to stop it. Stalin, however, remembered their opposition and would settle scores with them later.

By 1936, it appeared that the worst of the period of collectivisation was over. The peasants, preferring slavery to starvation, had acquiesced to their fate and resumed production, and the weather co-operated in producing good harvests. And then, in 1937, a new horror was unleashed upon the Soviet people, also completely man-made and driven by the will of Stalin: the Great Terror. Starting slowly in the aftermath of the assassination of Sergey Kirov in 1934, by 1937 the absurd devouring of those most loyal to the Soviet regime, all over Stalin's signature, reached a crescendo. In 1937 and 1938, 1,557,259 people would be arrested and 681,692 executed, the overwhelming majority for political offences, this in a country with a working-age population of 100 million. Counting deaths from other causes as a result of the secret police, the overall death toll was probably around 830,000. This was so bizarre, and so unprecedented in human history, that it is difficult to find any comparable situation, even in Nazi Germany. As the author remarks,

To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, 8 of the 9 admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies.

And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”. (In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.)

But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. Since it was the vanguard of the proletarian revolution, by definition the states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to or actively bent upon the destruction of the peoples' state.

With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of Soviet territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which would leave Britain supreme on the seas and secure in its far-flung colonies, while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union.

Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine. Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated to eventually fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course.

Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the czar's empire which had been lost after the 1917 revolution.

Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.

On September 1st, 1939, Nazi Germany invaded Poland, just over a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside, continue to trade with Nazi Germany, while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border.

After the start of hostilities, a state of “phoney war” existed until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared to be precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Tsars. Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere.

But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East. After consolidating his position in the west and indefinitely deferring an invasion of Britain due to the inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier. Disinformation, planted where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, suggested that the troop concentrations were in preparation for an assault on British positions in the Near East, or were intended to blackmail the Soviet Union into granting, for example, a long-term lease on its breadbasket, the Ukraine.

Hitler, acutely aware that it was a two-front war which spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but its only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, which took just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it had previously paid market prices, but which would now be available for the cost of extraction and shipping.

The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs that this was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important. Indeed, in 1940, 40 percent of the nickel, 55 percent of the manganese, 65 percent of the chromium, 67 percent of the asbestos, and 34 percent of the petroleum supporting the Nazi war machine, plus a million tonnes of grain and timber, were delivered by the Soviet Union. Hours before the Nazi onslaught began, well after the order for it was given, a Soviet train delivering grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with dawn.

This is a magisterial work. It is unlikely it will ever be equalled. There is abundant rich detail on every page. Want to know what the telephone number for the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses.

Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you have to ultimately understand Stalin, who succeeded at it for more than thirty years until ultimately felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower. Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people. He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being, whatever the cost might be. So it is today, and let us hope those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia.

Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.

Wood, Fenton. Pirates of the Electromagnetic Waves. Seattle: Amazon Digital Services, 2018. ASIN B07H2RJK8J.
This is an utterly charming short novel (or novella: it is just 123 pages) which, on the surface, reads like a young adult adventure from the golden age, along the lines of the original Tom Swift or Hardy Boys series. But as you get deeper into the story, you discover clues that there is much more going on than you first suspected, and that this may be the beginning of a wonderful exploration of an alternative reality which is a delight to visit and which you may wish were your home.

Philo Hergenschmidt, Randall Quinn, and their young friends live in Porterville, deep in the mountain country of the Yankee Republic. The mountains that surround it stopped the glaciers when they came down from the North a hundred thousand years ago, and provided a refuge for the peace-loving, self-sufficient, resourceful, and ornery people who fled the wars. Many years later, they retain those properties, and most young people are members of the Survival Scouts, whose eight hundred page Handbook contains everything a mountain man needs to know to survive and prosper under any circumstances.

Porterville is just five hundred miles from the capital of Iburakon, but might as well be on a different planet. Although the Yankee Republic's technology is in many ways comparable to our own, the mountains shield Porterville from television and FM radio broadcasts and, although many families own cars with radios installed by default, all they can pick up is a few scratchy AM stations from far away when the skywave opens up at night. Every summer, Randall spends two weeks with his grandparents in Iburakon and comes back with tales of wonders which enthrall his friends like an explorer of yore returned from Shangri-La. (Randall is celebrated as a raconteur—and some of his tales may be true.) This year he told of the marvel of television and a science fiction series called Xenotopia, and for weeks the boys re-enacted battles from his descriptions. Broadcasting: that got Philo thinking….

One day Philo calls up Randall and asks him to dig out an old radio he remembered Randall having and tune it to the usually dead FM band. Randall does, and is astonished to hear Philo broadcasting on “Station X” with amusing patter. It turns out Philo had found a book in the attic, 101 Radio Projects for Boys, written by a creative and somewhat subversive author, and, following the directions, had put together a half watt FM transmitter from scrounged spare parts. Philo briefs Randall on pirate radio stations: although the penalties for operating without a license appear severe, in fact, unless you deliberately interfere with a licensed broadcaster, you just get a warning the first time and a wrist-slap ticket thereafter unless you persist too long.

This gets them both thinking…. With the help of adults willing to encourage youth in their (undisclosed) projects, or just to look the other way (the kids of Porterville live free-range lives, as I did in my childhood, as their elders have not seen fit to import the vibrant diversity into their community which causes present-day youth to live under security lock-down), and a series of adventures, radio station 9X9 goes on the air, announced with great fanfare in handbills posted around the town. Suddenly, there is something to listen to, and people start tuning in. Local talent tries their hands at being a DJ, and favourites emerge. Merchants start to sign up for advertisements. Church services are broadcast for shut-ins. Even though no telephone line runs anywhere near the remote and secret studio, ingenuity and some nineteenth-century technology allow them to stage a hit call-in show. And before long, live talent gets into the act. A big baseball game provides both a huge opportunity and a seemingly insurmountable challenge until the boys invent an art which, in our universe, was once masterfully performed by a young Ronald Reagan.

Along the way, we learn of the Yankee Republic in brief, sometimes jarring, strokes of the pen, as the author masterfully follows the science fiction principle of “show, don't tell”.

Just imagine if William the Bastard had succeeded in conquering England. We'd probably be speaking some unholy crossbreed of French and English….

The Republic is the only country in the world that recognizes allodial title,….

When Congress declares war, they have to elect one of their own to be a sacrificial victim,….

“There was a man from the state capitol who wanted to give us government funding to build what he called a ‘proper’ school, but he was run out of town, the poor dear.”

Pirates, of course, must always keenly scan the horizon for those who might want to put an end to the fun. And so it is for buccaneers sailing the Hertzian waves. You'll enjoy every minute getting to the point where you find out how it ends. And then, when you think it's all over, another door opens into a wider, and weirder, world in which we may expect further adventures. The second volume in the series, Five Million Watts, was published in April, 2019.

At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is very inexpensive.

Roberts, Andrew. Churchill: Walking with Destiny. New York: Viking, 2018. ISBN 978-1-101-98099-6.
At the point that Andrew Roberts sat down to write a new biography of Winston Churchill, there were a total of 1009 biographies of the man in print, examining every aspect of his life from a multitude of viewpoints. Works include the encyclopedic three-volume The Last Lion (January 2013) by William Manchester and Paul Reid, and Roy Jenkins' single-volume Churchill: A Biography (February 2004), which concentrates on Churchill's political career. Such books may seem to many readers to say just about everything about Churchill there is to be said from the abundant documentation available for his life. What could a new biography possibly add to the story?

As the author demonstrates in this magnificent and weighty book (1152 pages, 982 of main text), a great deal. Earlier Churchill biographers laboured under the constraint that many of Churchill's papers from World War II and the postwar era remained under the seal of official secrecy. These included the extensive notes taken by King George VI during his weekly meetings with the Prime Minister during the war and recorded in his personal diary. The classified documents were made public only fifty years after the end of the war, and the King's wartime diaries were made available to the author by special permission granted by the King's daughter, Queen Elizabeth II.

The royal diaries are an invaluable source on Churchill's candid thinking as the war progressed. As a firm believer in constitutional monarchy, Churchill withheld nothing in his discussions with the King. Even the deepest secrets, such as the breaking of the German codes, the information obtained from decrypted messages, and atomic secrets, which were shared with only a few of the most senior and trusted government officials, were discussed in detail with the King. Further, while Churchill was constantly on stage trying to hold the Grand Alliance together, encourage Britons to stay in the fight, and advance geopolitical goals often at variance with even those of the Americans, with the King he was brutally honest about Britain's situation and what he was trying to accomplish. Oddly, perhaps the best insight into Churchill's mind as the war progressed comes not from his own six-volume history of the war, but rather from the pen of the King, writing only to himself. In addition, verbatim notes of the war cabinet, the diaries of the Soviet ambassador to the U.K. from the 1930s through the war, and other recently-disclosed sources mean that, as the author describes it, there is something new on almost every page.

The biography is written in an entirely conventional manner: the author eschews fancy stylistic tricks in favour of an almost purely chronological recounting of Churchill's life, flipping back and forth from personal life, British politics, the world stage and Churchill's part in the events of both the Great War and World War II, and his career as an author and shaper of opinion.

Winston Churchill was an English aristocrat, but not a member of the nobility. He was a direct descendant of John Churchill, the 1st Duke of Marlborough; his father, Lord Randolph Churchill, was the third son of the 7th Duke of Marlborough. As only the first son inherits the title, although Randolph bore the honorific “Lord”, he was a commoner and his children, including first-born Winston, received no title. Lord Randolph was elected to the House of Commons in 1874, the year of Winston's birth, and would serve until his death in 1895, having been Chancellor of the Exchequer, Leader of the House of Commons, and Secretary of State for India. His death at the age of just forty-five (rumoured at the time to be from syphilis, but now attributed to a brain tumour, as his other symptoms were inconsistent with syphilis), along with the premature deaths of three aunts and uncles, convinced the young Winston that his own life might be short and that if he wanted to accomplish great things, he had no time to waste.

In terms of his subsequent career, his father's early death might have been an unappreciated turning point in Winston Churchill's life. Had his father retired from the House of Commons prior to his death, he would almost certainly have been granted a peerage in return for his long service. When he subsequently died, Winston, as eldest son, would have inherited the title and hence not been entitled to serve in the House of Commons. It is thus likely that had his father not died while still an MP, the son would never have had the political career he did nor have become prime minister in 1940.

Young, from a distinguished family, wealthy (by the standards of the average Briton, but not compared to the landed aristocracy or titans of industry and finance), ambitious, and seeking novelty and adventures to the point of recklessness, Churchill believed he was meant to accomplish great things in however many years Providence might grant him on Earth. In 1891, at the age of just 16, he confided to a friend,

I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was, thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to that he imagined in boyhood.

Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file reports as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures.

He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described along with the rest of the Mahdist War in his book, The River War. In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. Covering the conflict, he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book.

Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,

Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat.

Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,

A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure.

Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said, after Stalin forced Latvia, Lithuania, and Estonia into the Soviet sphere in 1939, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.”

Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,

You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.

And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,

No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated.

Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,

Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity. The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together), delivered another five million words of public speeches, and won the Nobel Prize in Literature for 1953. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any other historian who ever lived.

Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom.

Here is an Uncommon Knowledge interview with the author about Churchill and this biography.

This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.


Kroese, Robert. The Dawn of the Iron Dragon. Seattle: CreateSpace, 2018. ISBN 978-1-7220-2331-7.
This is the second volume in the Iron Dragon trilogy which began with The Dream of the Iron Dragon (August 2018). At the end of the first book, the crew of the Andrea Luhman, stranded on Earth in the Middle Ages, faced a seemingly impossible challenge. They, and their Viking allies, could save humanity from extinction in a war in the distant future only by building a space program capable of launching a craft into Earth orbit, starting with an infrastructure based upon wooden ships and edged weapons. Further, given what these accidental time travellers, the first in history, had learned about the nature of travel to the past in their adventures to date, all of this must be done in the deepest secrecy and without altering the history to be written in the future. Recorded history, they discovered, cannot be changed, and hence any attempt to do something which would leave evidence of a medieval space program or intervention of advanced technology in the affairs of the time would be doomed to failure. These constraints placed almost impossible demands upon what was already a formidable challenge.

From their ship's computer, the exiled spacemen had a close approximation to all of human knowledge, so they were rich in bits. But when it came to atoms (materials, infrastructure, tools, sources of energy and motive power, and everything else), they had almost nothing. Even the simplest rocket capable of achieving Earth orbit has tens to hundreds of thousands of parts, most requiring precision manufacture, stringent control of material quality, and rigorous testing. Consider a humble machine screw. In the 9th century A.D. there weren't any hardware stores. If you needed a screw, or ten thousand of them, to hold your rocket components together, you needed first to locate and mine the iron ore, then smelt the iron from the ore, refine it with high temperature and forced air (both of which require their own technologies, including machine screws) to achieve the desired carbon content, adding alloying metals such as nickel, chromium, cobalt, tungsten, and manganese, all of which have to be mined and refined first. Then the steel must be formed into the desired shape (requiring additional technologies), heat-treated, and finally the threads must be cut into the blank, requiring machine tools made to sufficient precision that the screws will be interchangeable, with something to power the tools (all of which, of course, contain screws). And that's just a screw. Thinking about a turbopump, regeneratively cooled combustion chamber, hydraulically-actuated gimbal mechanism, gyroscopes and accelerometers, or any of the myriad other components of even the simplest launcher is apt to induce despair.

But the spacemen were survivors, and they knew that the entire future of the human species, driven to near-extinction by the relentless Cho-ta'an in the future they had come from, depended upon their getting off the Earth and delivering the planet-busting weapon which might turn the tide for their descendants centuries hence. While they needed just about everything, what they needed most was minds: human brainpower and the skills flowing from it to find and process the materials to build the machines to build the machines to build the machines which, after a decades-long process of recapitulating centuries of human technological progress, would enable them to accomplish their ambitious yet utterly essential mission.

People in the 9th century were just as intelligent as those today, but in most of the world literacy was rare, and scarcer still were the acquired intellectual skills of thinking logically, breaking down a problem into its constituent parts, and the mental flexibility to learn and apply mind tools, such as algebra, trigonometry, calculus, Newton's and Kepler's laws, and a host of others which had yet to be discovered. These rare people were to be found in the emerging cities, where learning and the embryos of what would become the great universities of the later Middle Ages were developing. And so missions were dispatched to Constantinople, the greatest of these cities, and other centres of learning and innovation, to recruit not the famous figures recorded in history (whose disappearance into a secret project was inconsistent with that history, and hence impossible), but their promising young followers. These cities were cosmopolitan crossroads, dangerous but also sufficiently diverse that a Viking longboat showing up with people who barely spoke any known language would not attract undue attention. But the rulers of these cities appreciated the value of their learned people, and trying to lure them away was perilous, leading more than once to misadventure.

On top of all of these challenges, a Cho-ta'an ship had followed the Andrea Luhman through the hyperspace gate and whatever had caused them to be thrown back in time, and a small contingent of the aliens had made it to Earth, bent on stopping the spacemen from getting off the planet at any cost. The situation was highly asymmetrical: while the spacemen had to accomplish a near-impossible task, the Cho-ta'an needed only to prevent them, by any means possible. And being Cho-ta'an, if those means included loosing a doomsday plague to depopulate Europe, well, so be it. And the presence of the Cho-ta'an, wherever they might be hiding, redoubled the need for secrecy in every aspect of the Iron Dragon project.

Another contingent of the recruiting project finds itself in the much smaller West Frankish city of Paris, just as Viking forces are massing for what history would record as the Siege of Paris in A.D. 885–886. In this epic raid, a force of tens of thousands (today estimated around 20,000, around half that claimed in the account by the monk Abbo Cernuus, who has been called “in a class of his own as an exaggerator”) of Vikings in hundreds (300, probably, 700 according to Abbo) of ships laid siege to a city defended by just two hundred Parisian men-at-arms. In this account, the spacemen, with foreknowledge of how it was going to come out, provide invaluable advice to Count Odo of Paris and Gozlin, the “fighting Bishop” of Paris, in defending their city as it was simultaneously ravaged by a plague (wonder where that came from?), and in persuading King Charles (“the Fat”) to come to the relief of the city. The epic battle for Paris, which ended not in triumph but rather a shameful deal, was a turning point in the history of France. The efforts of the spacemen, while critical and perhaps decisive, remained consistent with written history, at least that written by Abbo, whom they encouraged in his proclivity for exaggeration.

Meanwhile, back at the secret base in Iceland, chosen to stay out of the tangles of European politics and out of the way of their nemesis Harald Fairhair, the first King of Norway, local rivalries intrude upon the desired isolation. It appears another, perhaps disastrous, siege may be in the offing, putting the entire project at risk. And with all of this, one of those knock-you-off-your-feet calamities the author is so fond of throwing at his characters befalls them, forcing yet another redefinition of their project and a breathtaking increase in its ambition and complexity, just as they have to contemplate making new and perilous alliances simply to survive.

The second volume of a trilogy is often the most challenging to write. In the first, everything is new, and the reader gets to meet the characters, the setting, and the challenges to be faced in the story. In the conclusion, everything is pulled together into a satisfying resolution. But in that one in the middle, it's mostly developing characters and plots, introducing new (often subordinate) players, and generally moving things along—one risks readers' regarding it as “filler”. In this book, the author artfully avoids that risk by making a little-known but epic battle the centrepiece of the story, along with intrigue, a thorny ethical dilemma, and multiple plot threads playing out from Iceland to North Africa to the Dardanelles. You absolutely should read the first volume, The Dream of the Iron Dragon, before starting this one—although there is a one-page summary of that book at the start, it isn't remotely adequate to bring you up to speed and avoid your repeatedly exclaiming “Who?”, “What?”, and “How?” as you enjoy this story.

When you finish this volume, the biggest question in your mind will probably be “How in the world is he going to wrap all of this up in just one more book?” The only way to find out is to pick up The Voyage of the Iron Dragon, which I will be reviewing here in due course. This saga (what else can you call an epic with Vikings and spaceships?) will be ranked among the very best of alternative history science fiction, and continues to demonstrate why independent science fiction is creating a new Golden Age for readers and rendering the legacy publishers of tedious “diversity” propaganda impotent and obsolete.

The Kindle edition is free for Kindle Unlimited subscribers.


June 2019

Zubrin, Robert. The Case for Space. Amherst, NY: Prometheus Books, 2019. ISBN 978-1-63388-534-9.
Fifty years ago, with the successful landing of Apollo 11 on the Moon, it appeared that the road to the expansion of human activity from its cradle on Earth into the immensely larger arena of the solar system was open. The infrastructure built for Project Apollo, including that in the original 1963 development plan for the Merritt Island area, could support Saturn V launches every two weeks. Equipped with nuclear-powered upper stages (under active development by Project NERVA, and accommodated in plans for a Nuclear Assembly Building near the Vehicle Assembly Building), the launchers and support facilities were more than adequate to support construction of a large space station in Earth orbit, a permanently-occupied base on the Moon, exploration of near-Earth asteroids, and manned landings on Mars in the 1980s.

But this was not to be. Those envisioning this optimistic future fundamentally misunderstood the motivation for Project Apollo. It was not about, and never was about, opening the space frontier. Instead, it was a battle for prestige in the Cold War and, once won (indeed, well before the Moon landing), the budget necessary to support such an extravagant program (which threw away skyscraper-sized rockets with every launch) began to evaporate. NASA was ready to do the Buck Rogers stuff, but Washington wasn't about to come up with the bucks to pay for it. In 1965 and 1966, the NASA budget peaked at over 4% of all federal government spending. By calendar year 1969, when Apollo 11 landed on the Moon, it had already fallen to 2.31% of the federal budget, and, with relatively small year-to-year variations, it has settled at around one half of one percent in recent years. Apart from a small band of space enthusiasts, there is no public clamour for increasing NASA's budget (which is consistently over-estimated by the public as a much larger fraction of federal spending than it actually receives), and there is no prospect of a political consensus emerging to fund an increase.

Further, there is no evidence that dramatically increasing NASA's budget would actually accomplish anything toward the goal of expanding the human presence in space. While NASA has accomplished great things in its robotic exploration of the solar system and building space-based astronomical observatories, its human space flight operations have been sclerotic, risk-averse, loath to embrace new technologies, and seemingly more oriented toward spending vast sums of money in the districts and states of powerful representatives and senators than actually flying missions.

Fortunately, NASA is no longer the only game in town (if it can even be considered to still be in the human spaceflight game, having been unable to launch its own astronauts into space without buying seats from Russia since the retirement of the Space Shuttle in 2011). In 2009, the commission headed by Norman Augustine recommended cancellation of NASA's Constellation Program, which aimed at a crewed Moon landing in 2020, because they estimated that the heavy-lift booster it envisioned (although based largely on decades-old Space Shuttle technology) would take twelve years and US$36 billion to develop under NASA's business-as-usual policies; Constellation was cancelled in 2010 (although its heavy-lift booster, renamed, de-scoped, re-scoped, schedule-slipped, and cost-overrun, stumbles along, zombie-like, in the guise of the Space Launch System [SLS] which has, to date, consumed around US$14 billion in development costs without producing a single flight-ready rocket, and will probably cost between one and two billion dollars for each flight, every year or two—this farce will probably continue as long as Richard Shelby, the Alabama Senator who seems to believe NASA stands for “North Alabama Spending Agency”, remains in the World's Greatest Deliberative Body).

In February 2018, SpaceX launched its Falcon Heavy booster, which has a payload capacity to low Earth orbit comparable to the initial version of the SLS, and was developed with private funds in half the time at one thirtieth the cost (so far) of NASA's Big Rocket to Nowhere. Further, unlike the SLS, which on each flight will consign Space Shuttle Main Engines and Solid Rocket Boosters (which were designed to be reusable and re-flown many times on the Space Shuttle) to a watery grave in the Atlantic, three of the four components of the Falcon Heavy (excluding only its upper stage, with a single engine) are reusable and can be re-flown as many as ten times. Falcon Heavy customers will pay around US$90 million for a launch on the reusable version of the rocket, less than a tenth of what NASA estimates for an SLS flight, even after writing off its enormous development costs.

On the heels of SpaceX, Jeff Bezos's Blue Origin is developing its New Glenn orbital launcher, which will have comparable payload capacity and a fully reusable first stage. With competition on the horizon, SpaceX is developing the Super Heavy/Starship completely-reusable launcher with a payload of around 150 tonnes to low Earth orbit: more than any past or present rocket. A fully-reusable launcher with this capacity would also be capable of delivering cargo or passengers between any two points on Earth in less than an hour, at a price to passengers no more than that of a first-class ticket on a present-day subsonic airliner. The emergence of such a market could increase the demand for rocket flights from its current hundred or so per year to hundreds or thousands a day, like airline operations, with consequent price reductions due to economies of scale and moving all components of the transportation system down the technological learning curve.

Competition, compounded by partially- or fully-reusable launchers, is already dramatically decreasing the cost of getting to space. A common metric of launch cost is the price to launch one kilogram into low Earth orbit. This remained stubbornly close to US$10,000/kg from the 1960s until the entry of SpaceX's Falcon 9 into the market in 2010. Purely by the more efficient design and operations of a profit-driven private firm as opposed to a cost-plus government contractor, the first version of the Falcon 9 cut launch costs to around US$6,000/kg. By reusing the first stage of the Falcon 9 (which costs around three times as much as the expendable second stage), this was cut by another factor of two, to US$3,000/kg. The much larger fully reusable Super Heavy/Starship is projected to reduce launch cost (if its entire payload capacity can be used on every flight, which probably isn't the way to bet) to the vicinity of US$250/kg, and if the craft can be flown frequently, say once a day, as somebody or other envisioned more than a quarter century ago, amortising fixed costs over a much larger number of launches could reduce cost per kilogram by another factor of ten, to something like US$25/kg.
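
Spelled out, the arithmetic behind these figures is simple amortisation. Here is a minimal back-of-the-envelope sketch in Python; every dollar figure, payload mass, and flight rate in it is an illustrative assumption loosely tracking the numbers cited above, not data from any launch provider.

    # Illustrative launch-cost model: per-flight hardware cost plus a
    # fixed annual programme cost amortised over the flight rate.
    # All numbers are assumptions for the sake of the arithmetic.

    def cost_per_kg(hardware_per_flight, annual_fixed, flights_per_year,
                    payload_kg):
        """Amortised cost (US$) to place one kilogram in low Earth orbit."""
        per_flight = hardware_per_flight + annual_fixed / flights_per_year
        return per_flight / payload_kg

    # Expendable medium-lift rocket: the whole vehicle is discarded.
    print(cost_per_kg(55e6, 50e6, 10, 10_000))        # ~US$6,000/kg

    # Re-fly a booster (assumed two thirds of vehicle cost) ten times,
    # paying an assumed US$5 million per flight for refurbishment.
    print(cost_per_kg(37e6 / 10 + 18e6 + 5e6, 50e6, 10, 10_000))  # ~US$3,200/kg

    # Fully reusable heavy lifter: 150 tonne payload, weekly flights.
    print(cost_per_kg(1e6, 1.8e9, 52, 150_000))       # ~US$240/kg

    # The same vehicle and fixed costs, but flown daily.
    print(cost_per_kg(1e6, 1.8e9, 365, 150_000))      # ~US$40/kg

Note how the last two lines differ only in flight rate: amortising the same fixed costs over daily rather than weekly flights cuts the price by a further factor of six under these assumptions, and pushing marginal and fixed costs lower still is what would bring the US$25/kg figure into view.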

Such cost reductions are an epochal change in the space business. Ever since the first Earth satellites, launch costs have dominated the industry and driven all other aspects of spacecraft design. If you're paying US$10,000 per kilogram to put your satellite in orbit, it makes sense to spend large sums of money not only on reducing its mass, but also on making it extremely reliable, since launching a replacement would be so hideously expensive (and, with flight rates so low, could result in a delay of a year or more before a launch opportunity became available). But with a hundred-fold or more reduction in launch cost and flights to orbit operating weekly or daily, satellites need no longer be built like precision watches, but rather like the industrial gear installed in telecom facilities on the ground. The entire cost structure is slashed across the board, and space becomes an arena accessible for a wide variety of commercial and industrial activities where its unique characteristics, such as access to free, uninterrupted solar power, high vacuum, and weightlessness, are an advantage.

But if humanity is truly to expand beyond the Earth, launching satellites that go around and around the Earth providing services to those on its surface is just the start. People must begin to homestead in space: first hundreds, then thousands, and eventually millions and more living, working, building, raising families, with no more connection to the Earth than immigrants to the New World in the 1800s had to the old country in Europe or Asia. Where will they be living, and what will they be doing?

In order to think about the human future in the solar system, the first thing you need to do is recalibrate how you think about the Earth and its neighbours orbiting the Sun. Many people think of space as something like Antarctica: barren, difficult and expensive to reach, unforgiving, and while useful for some forms of scientific research, no place you'd want to set up industry or build communities where humans would spend their entire lives. But space is nothing like that. Ninety-nine percent or more of the matter and energy resources of the solar system—the raw material for human prosperity—are found not on the Earth, but rather elsewhere in the solar system, and they are free for the taking by whoever gets there first and figures out how to exploit them. Energy costs are a major input to most economic activity on the Earth, and wars are regularly fought over access to scarce energy resources on the home planet. But in space, at the distance Earth orbits the Sun, 1.36 kilowatts of free solar power are available for every square metre of collector you set up. And, unlike on the Earth's surface, that power is available 24 hours a day, every day of the year, and will continue to flow for billions of years into the future.
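
To put numbers on that advantage, here is a small sketch comparing the annual energy collected by one square metre in free space at Earth's distance from the Sun with a ground-based collector; the 1 kW/m² surface peak and 20% capacity factor are assumed round numbers for a good terrestrial site, not figures from the book.

    # Annual solar energy per square metre of collector: free space at
    # 1 AU versus the ground. Terrestrial figures are assumptions.

    SOLAR_CONSTANT_KW = 1.36      # kW/m² in space at Earth's orbit
    HOURS_PER_YEAR = 8766         # 365.25 days

    # In space: full flux, 24 hours a day, every day of the year.
    space_kwh = SOLAR_CONSTANT_KW * HOURS_PER_YEAR    # ~11,900 kWh/m²

    # On the ground: ~1 kW/m² peak after atmospheric absorption, times
    # an assumed 20% capacity factor for night, weather, and Sun angle.
    ground_kwh = 1.0 * HOURS_PER_YEAR * 0.20          # ~1,750 kWh/m²

    print(f"space advantage: {space_kwh / ground_kwh:.1f}x")   # ~6.8x

And a collector in space keeps delivering that nearly sevenfold advantage without batteries, backup plants, or winter.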

Settling space will require using the resources available in space, not just energy but material. Trying to make a space-based economy work by launching everything from Earth is futile and foredoomed. Regardless of how much you reduce launch costs (even with exotic technologies which may not even be possible given the properties of materials, such as space elevators or launch loops), the vast majority of the mass needed by a space-based civilisation will be dumb bulk materials, not high-tech products such as microchips. Water; hydrogen and oxygen for rocket fuel (which are easily made from water using electricity from solar power); aluminium, titanium, and steel for structural components; glass and silicon; rocks and minerals for agriculture and bulk mass for radiation shielding; these will account for the overwhelming majority of the mass of any settlement in space, whether in Earth orbit, on the Moon or Mars, asteroid mining camps, or habitats in orbit around the Sun. People and low-mass, high-value added material such as electronics, scientific instruments, and the like will launch from the Earth, but their destinations will be built in space from materials found there.

Why? As with most things in space, it comes down to delta-v (pronounced delta-vee), the change in velocity needed to get from one location to another. This, not distance, determines the cost of transportation in space. The Earth's mass creates a deep gravity well which requires around 9.8 km/sec of delta-v to get from the surface to low Earth orbit. It is providing this boost which makes launching payloads from the Earth so expensive. If you want to get to geostationary Earth orbit, where most communication satellites operate, you need another 3.8 km/sec, for a total of 13.6 km/sec launching from the Earth. By comparison, delivering a payload from the surface of the Moon to geostationary Earth orbit requires only 4 km/sec, which can be provided by a simple single-stage rocket. Delivering material from lunar orbit (placed there, for example, by a solar powered electromagnetic mass driver on the lunar surface) to geostationary orbit needs just 2.4 km/sec. Given that just about all of the materials from which geostationary satellites are built are available on the Moon (if you exploit free solar power to extract and refine them), it's clear a mature spacefaring economy will not be launching them from the Earth, and will create large numbers of jobs on the Moon, in lunar orbit, and in ferrying cargos among various destinations in Earth-Moon space.
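
The reason delta-v, rather than distance, drives cost is the exponential tyranny of the rocket equation: propellant requirements grow exponentially with the velocity change. Here is a minimal sketch, assuming a single stage and an assumed specific impulse of 350 seconds; real Earth launchers must stage precisely because the single-stage numbers below are so punishing.

    # Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / mf).
    # Inverting it gives the required ratio of initial to final mass.
    from math import exp

    G0 = 9.80665                  # standard gravity, m/s²

    def mass_ratio(delta_v_m_s, isp_s=350.0):
        """Initial/final mass for a given delta-v (single stage, assumed Isp)."""
        return exp(delta_v_m_s / (isp_s * G0))

    for label, dv in [("Earth surface to LEO",  9_800),
                      ("Earth surface to GEO", 13_600),
                      ("Moon surface to GEO",   4_000),
                      ("Lunar orbit to GEO",    2_400)]:
        r = mass_ratio(dv)
        print(f"{label}: mass ratio {r:5.1f}, propellant fraction {1 - 1/r:.0%}")

Under these assumptions, a freighter lifting off the Moon for geostationary orbit is about two-thirds propellant, comfortably within reach of a single stage, while a single-stage vehicle from Earth's surface to geostationary orbit would have to be 98% propellant: that exponential gap is why sourcing bulk material on the Moon wins.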

The author surveys the resources available on the Moon, Mars, near-Earth and main belt asteroids, and, looking farther into the future, the outer solar system where, once humans have mastered controlled nuclear fusion, sufficient Helium-3 is available for the taking to power a solar-system-wide human civilisation of trillions of people for billions of years and, eventually, the interstellar ships they will use to expand out into the galaxy. Detailed plans are presented for near-term human missions to the Moon and Mars, both achievable within the decade of the 2020s, which will begin the process of surveying the resources available there and building the infrastructure for permanent settlement. These mission plans, unlike those of NASA, do not rely on paper rockets which have yet to fly, costly expendable boosters, or detours to “gateways” and other diversions which seem a prime example of (to paraphrase the author in chapter 14) “doing things in order to spend money as opposed to spending money in order to do things.”

This is an optimistic and hopeful view of the future, one in which the human adventure which began when our ancestors left Africa to explore and settle the far reaches of their home planet continues outward into its neighbourhood around the Sun and eventually to the stars. In contrast to the grim Malthusian vision of mountebanks selling nostrums like a “Green New Deal”, which would have humans huddled on an increasingly crowded planet, shivering in the cold and dark when the Sun and wind did not cooperate, docile and bowed to their enlightened betters who instruct them how to reduce their expectations and hopes for the future again and again as they wait for the asteroid impact to put an end to their misery, Zubrin sketches millions of diverse human (and eventually post-human, evolving in different directions) societies, exploring and filling niches on a grand scale that dwarfs that of the Earth, inventing, building, experimenting, stumbling, and then creating ever greater things just as humans have for millennia. This is a future not just worth dreaming of, but working to make a reality. We have the enormous privilege of living in the time when, with imagination, courage, the willingness to take risks and to discard the poisonous doctrines of those who preach “sustainability” but whose policies always end in resource wars and genocide, we can actually make it happen and see the first steps taken in our lifetimes.

Here is an interview with the author about the topics discussed in the book.

This is a one hour and forty-two minute interview (audio only) from “The Space Show” which goes into the book in detail.


Witzke, Dawn, ed. Planetary: Earth. Narara, NSW, Australia: Superversive Press, 2018. ISBN 978-1-925645-24-8.
This is the fourth book in the publisher's Planetary Anthology series. Each volume contains stories set on, or with plots involving, the planet of its title. Previous collections have featured Mercury, Venus, and Mars. This instalment contains stories related in some way to Earth, although in several none of the action occurs on that planet.

Back in the day (1930s through 1980s), monthly science fiction magazines were a major venue for the genre and the primary path for aspiring authors to break into print. Sold on newsstands for the price of a few comic books, they were the way generations of young readers (including this one) discovered the limitless universe of science fiction. A typical issue might contain five or six short stories, a longer piece (novella or novelette), and a multi-month serialisation of a novel, usually by an established author known to the readers. For example, Frank Herbert's Dune was serialised in two long runs in Analog in 1963 and 1965 before its hardcover publication in 1965. In addition, there were often book reviews, a column about science fact (Fantasy and Science Fiction published a monthly science column by Isaac Asimov which ran from 1958 until shortly before his death in 1992—399 columns in all), a lively letters to the editor section, and an editorial. All of the major science fiction monthlies welcomed unsolicited manuscripts from unpublished authors, and each issue was likely to contain one or two stories from the “slush pile” which the editor decided made the cut for the magazine. Most of the outstanding authors of the era broke into the field this way, and some editors, such as John W. Campbell of Astounding (later Analog), invested much time and effort in mentoring promising talents and developing them into a reliable stable of writers to fill the pages of their magazines.

By the 1990s, monthly science fiction magazines were in decline, and the explosion of science fiction novel publication had reduced the market for short fiction. By the year 2000, only three remained in the U.S., and their circulations continued to erode. Various attempts to revive a medium for short fiction have been tried, including Web magazines. This collection is an example of another genre: the original anthology. While most anthologies published in book form in the heyday of the magazines collected stories which had previously appeared in them (authors usually sold a magazine only “first North American serial rights” and retained the right to subsequently sell the story to the publisher of an anthology), original anthologies contain never-before-published stories, usually collected around a theme, such as the planet Earth here.

I got this book (I say “got” as opposed to “bought” because the Kindle edition is free to Kindle Unlimited subscribers and I “borrowed” it as one of the ten titles I can check out for reading at a given time) because it contained the short story, “The Hidden Conquest”, by Hans G. Schantz, author of the superb Hidden Truth series of novels (1, 2, 3), which was said to be a revealing prequel to the story in the books. It is, and it is excellent, although you probably won't appreciate how much of a reveal it is unless you've read the books, especially 2018's The Brave and the Bold.

The rest of the stories are…uneven: about what you'd expect from a science fiction magazine in the 1950s or '60s. Some are gimmick stories, others shoot-em-up action tales, while still others are just disappointing and probably should have remained in the slush pile or been returned to their authors with a note attached to the rejection slip offering a few suggestions and encouragement to try again. Copy editing is sloppy, complete with a sprinkling of idiot “its/it's” errors, plus the obligatory “pulled hard on the reigns”, “miniscule”, and take your “breathe” away.

But hey, if you got it from Kindle Unlimited, you can hardly say you didn't get your money's worth, and you're perfectly free to borrow it, read the Hans Schantz story, and return it same day. I would not pay the US$4 to buy the Kindle edition outright, and fifteen bucks for a paperback is right out.


Hanson, Victor Davis. The Case for Trump. New York: Basic Books, 2019. ISBN 978-1-5416-7354-0.
The election of Donald Trump as U.S. president in November 2016 was a singular event in the history of the country. Never before had anybody been elected to that office without any prior experience in either public office or the military. Trump, although running as a Republican, had no long-term affiliation with the party and had cultivated no support within its establishment, elected officials, or the traditional donors who support its candidates. He turned his back on the insider consultants and “experts” who had advised GOP candidate after candidate in their “defeat with dignity” at the hands of a ruthless Democrat party willing to burn any bridge to win. From well before he declared his candidacy he established a direct channel to a mass audience, bypassing media gatekeepers via Twitter and frequent appearances in all forms of media, who found him a reliable boost to their audience and clicks. He was willing to jettison the mumbling points of the cultured Beltway club and grab “third rail” issues of which they dared not speak such as mass immigration, predatory trade practices, futile foreign wars, and the exporting of jobs from the U.S. heartland to low-wage sweatshops overseas.

He entered a free-for-all primary campaign as one of seventeen major candidates, including present and former governors, senators, and other well-spoken and distinguished rivals and, one by one, knocked them out, despite resolute and sometimes dishonest bias by the media hosting debates, often through “verbal kill shots” which made his opponents the target of mockery and pinned sobriquets on them (“low energy Jeb”, “little Marco”, “lyin' Ted”) they couldn't shake. His campaign organisation, if one can dignify it with the term, was completely chaotic, and his fund raising was nothing like the finely-honed machines of establishment favourites like Jeb Bush, and yet his antics resulted in his getting billions of dollars' worth of free media coverage even on outlets which detested and mocked him.

One by one, he picked off his primary opponents and handily won the Republican presidential nomination. This unleashed a phenomenon the likes of which had not been seen since the Goldwater insurgency of 1964, but far more virulent. Pillars of the Republican establishment and Conservatism, Inc. were on the verge of cardiac arrest, advancing fantasy scenarios to deny the nomination to its winner, publishing issues of their money-losing and subscription-shedding little magazines dedicated to opposing the choice of the party's voters, and promoting insurgencies such as the candidacy of Egg McMuffin, whose bona fides as a man of the people were evidenced by his earlier stints with the CIA and Goldman Sachs.

Predictions that, post-nomination, Trump would become “more presidential” were quickly falsified as the chaos compounded, the tweets came faster and funnier, and the mass rallies became ever more frequent and raucous. One thing that was obvious to anybody looking dispassionately at what was going on, without the boiling blood of hatred and disdain of the New York-Washington establishment, was that the candidate was having the time of his life, and so were the people who attended the rallies. But still, all of the wise men of the coastal corridor knew what must happen. On the eve of the general election, polls put the probability of a Trump victory somewhere between 1 and 15 percent. The outlier was Nate Silver, who went out on a limb, putting the chance of a Trump victory at 29%, to the scorn of his fellow “progressives” and pollsters.

And yet, Trump won, and handily. Yes, he lost the popular vote, but that was simply due to the urban coastal vote, which he could not hope to win and wisely made no attempt to attract, knowing such an effort would be futile and a waste of his scarce resources (estimates are that his campaign spent around half as much as Clinton's). This book by classicist, military historian, professor, and fifth-generation California farmer Victor Davis Hanson is an in-depth examination of, in the words of the defeated candidate, “what happened”. There is a great deal of wisdom here.

First of all, a warning to the prospective reader. If you read Dr Hanson's columns regularly, you probably won't find a lot here that's new. This book is not one of those that's obviously Frankenstitched together from previously published columns, but in assembling their content into chapters focussing on various themes, there's been a lot of cut and paste, if not literally at the level of words, at least in terms of ideas. There is value in seeing it all presented in one package, but be prepared to say, from time to time, “Haven't I read this before?”

That caveat lector aside, this is a brilliant analysis of the Trump phenomenon. Hanson argues persuasively that it is very unlikely any of the other Republican contenders for the nomination could have won the general election. None of them were talking about the issues which resonated with the erstwhile “Reagan Democrat” voters who put Trump over the top in the so-called “blue wall” states, and it is doubtful any of them would have ignored their Beltway consultants and campaigned vigorously in states such as Michigan, Wisconsin, and Pennsylvania which were key to Trump's victory. Given that the Republican defeat which would likely have been the result of a Bush (again?), Rubio, or Cruz candidacy would have put the Clinton crime family back in power and likely tipped the Supreme Court toward the slaver agenda for a generation, that alone should give pause to “never Trump” Republicans.

How will it all end? Nobody knows, but Hanson provides a variety of perspectives drawn from everything from the Byzantine emperor Justinian's battle against the deep state to the archetype of the rough-edged outsider brought in to do what the more civilised can't or won't—the tragic hero from Greek drama to Hollywood westerns. What is certain is that none of what Trump is attempting, whether it ends in success or failure, would be happening if any of his primary opponents or the Democrat in the general election had prevailed.

I believe that Victor Davis Hanson is one of those rare people who have what I call the “Orwell gift”. Like George Orwell, he has the ability to look at the facts, evaluate them, and draw conclusions without any preconceived notions or filtering through an ideology. What is certain is that with the election of Donald Trump in 2016 the U.S. dodged a bullet. Whether that election will be seen as a turning point which reversed the decades-long slide toward tyranny by the administrative state, destruction of the middle class, replacement of the electorate by imported voters dependent upon the state, erosion of political and economic sovereignty in favour of undemocratic global governance, and the eventual financial and moral bankruptcy which are the inevitable result of all of these, or just a pause before the deluge, is yet to be seen. Hanson's book is an excellent, dispassionate, well-reasoned, and thoroughly documented view of where things stand today.


Wood, Fenton. Five Million Watts. Seattle: Amazon Digital Services, 2019. ASIN B07R6X973N.
This is the second short novel/novella (123 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and this sequel turns it all the way up to “enchanting”. As with the first book, you're reading along thinking this is a somewhat nerdy young adult story, then something happens or is mentioned in passing and suddenly, “Whoa—I didn't see that coming!”, and you realise the Yankee Republic is a strange and enchanted place, and that, as in the work of Philip K. Dick, there is a lot more going on than you suspected, and much more to be discovered in future adventures.

This tale begins several years after the events of the first book. Philo Hergenschmidt (the only character from Pirates to appear here) has grown up, graduated from Virginia Tech, and after a series of jobs keeping antiquated equipment at rural radio stations on the air, arrives in the Republic's storied metropolis of Iburakon to seek opportunity, adventure, and who knows what else. (If you're curious where the name of the city came from, here's a hint, but be aware it may be a minor spoiler.) Things get weird from the very start when he stops at an information kiosk and encounters a disembodied mechanical head who says it has a message for him. The message is just an address, and when he goes there he meets a very curious character who goes by a variety of names ranging from Viridios to Mr Green, surrounded by a collection of keyboard instruments including electronic synthesisers with strange designs.

Viridios suggests Philo aim for the very top and seek employment at legendary AM station 2XG, a broadcasting pioneer that went on the air in 1921, before broadcasting was regulated, and which in 1936 increased its power to five million watts. When other stations' maximum power was restricted to 50,000 watts, 2XG was grandfathered and allowed to continue to operate at 100 times more, enough to cover the continent far beyond the borders of the Yankee Republic into the mysterious lands of the West.

Not only does 2XG broadcast with enormous power, it was also permitted to retain its original 15 kHz bandwidth, allowing high-fidelity broadcasting and even, since the 1950s, stereo (for compatible receivers). However, in order to retain its rights to the frequency and power, the station was required to stay on the air continuously, with any outage longer than 24 hours forfeiting its rights to hungry competitors.

The engineers who maintained this unique equipment were a breed apart, the pinnacle of broadcast engineering. Philo manages to secure a job as a junior technician, which means he'll never get near the high power RF gear or antenna (all of it one-off custom hardware), but sets to work on routine maintenance of studio gear and patching up ancient tube equipment when it breaks down. Meanwhile, he continues to visit Viridios and imbibe his tales of 2XG and the legendary Zaros the Electromage, who designed its transmitter, the operation of which nobody completely understands today.

As he hears tales of the Old Religion, the gods of the spring and grain, and the time of the last ice age, Philo concludes Viridios is either the most magnificent liar he has ever encountered or—something else again.

Climate change is inexorably closing in on Iburakon. Each year is colder than the last, the growing season is shrinking, and it seems inevitable that before long the glaciers will resume their march from the north. Viridios is convinced that the only hope lies in music: performing a work rooted in that (very) Old Time Religion which caused a riot at its only public performance decades before, broadcast with the power of 2XG and performed on breakthrough electronic music instruments of his own devising.

Viridios is very odd, but also persuasive, and he has a history with 2XG. The concert is scheduled, and Philo sets to work restoring long-forgotten equipment from the station's basement and building new instruments to Viridios' specifications. It is a race against time, as the worst winter storm in memory threatens 2XG and forces Philo to confront one of his deepest fears.

Working on a project on the side, Philo discovers what may be the salvation of 2XG, but also as he looks deeper, possibly the door to a new universe. Once again, we have a satisfying, heroic, and imaginative story, suitable for readers of all ages, that leaves you hungry for more.

At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is inexpensive to buy. Those eagerly awaiting the next opportunity to visit the Yankee Republic will look forward to the publication of volume 3, The Tower of the Bear, in October, 2019.


Manto, Cindy Donze. Michoud Assembly Facility. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-5316-6969-0.
In March, 1763, King Louis XV of France made a land grant of 140 square kilometres to Gilbert Antoine St Maxent, the richest man in Louisiana Territory and commander of the militia. The grant required St Maxent to build a road across the swampy property, develop a plantation, and reserve all the trees in forested areas for the use of the French navy. When the Spanish took over the territory five years later, St Maxent changed his first names to “Gilberto Antonio” and retained title to the sprawling estate. In the decades that followed, the property changed hands and nations several times, eventually, now part of the United States, being purchased by another French immigrant, Antoine Michoud, who had left France after the fall of Napoleon, whom his father had served as an official.

Michoud rapidly established himself as a prosperous businessman in bustling New Orleans, and after purchasing the large tract of land set about buying pieces which had been sold off by previous owners, re-assembling most of the original French land grant into one of the largest private land holdings in the United States. The property was mostly used as a sugar plantation, although territory and rights were ceded over the years for construction of a lighthouse, railroads, and telegraph and telephone lines. Much of the land remained undeveloped, and like other parts of southern Louisiana was a swamp or, as they now say, “wetlands”.

The land remained in the Michoud family until 1910, when it was sold in its entirety for US$410,000 in cash (around US$11 million today) to a developer who promptly defaulted, leading to another series of changes of ownership and dodgy plans for the land, which most people continued to refer to as the Michoud Tract. At the start of World War II, the U.S. government bought a large parcel, initially intended for construction of Liberty ships. Those plans quickly fell through, but eventually a huge plant was erected on the site which, starting in 1943, began to manufacture components for cargo aircraft and lifeboats, as well as parts used in the Manhattan Project's isotope separation plants in Oak Ridge, Tennessee.

At the end of the war, the plant was declared surplus but, a few years later, with the outbreak of the Korean War, it was re-purposed to manufacture engines for Army tanks. It continued in that role until 1954, when it was placed on standby and, in 1958, once again declared surplus. There things stood until mid-1961 when NASA, charged by the new Kennedy administration to “put a man on the Moon”, was faced with the need to build rockets in sizes and quantities never before imagined, and to do so on a tight schedule, racing against the Soviet Union.

In June, 1961, Wernher von Braun, director of the NASA Marshall Space Flight Center in Huntsville, Alabama, responsible for designing and building those giant boosters, visited the then-idle Michoud Ordnance Plant and declared it ideal for NASA's requirements. It had 43 acres (17 hectares) under one roof, the air conditioning required for precision work in the Louisiana climate, and was ready to occupy. Most critically, it was located adjacent to navigable waters which would allow the enormous rocket stages, far too big to be shipped by road, rail, or air, to be transported on barges to and from Huntsville for testing and Cape Canaveral in Florida to be launched.

In September 1961 NASA officially took over the facility, renaming it “Michoud Operations”, to be managed by NASA Marshall as the manufacturing site for the rockets they designed. Work quickly got underway to set up manufacturing of the first stage of the Saturn I and 1B rockets and prepare to build the much larger first stage of the Saturn V Moon rocket. Before long, new buildings dedicated to assembly and test of the new rockets, occupied both by NASA and its contractors, began to spring up around the original plant. In 1965, the installation was renamed the Michoud Assembly Facility, which name it bears to this day.

With the end of the Apollo program, it looked like Michoud might once again be headed for white elephant status, but the design selected for the Space Shuttle included a very large External Tank comparable in size to the first stage of the Saturn V which would be discarded on every flight. Michoud's fabrication and assembly facilities, and its access to shipping by barge were ideal for this component of the Shuttle, and a total of 135 tanks built at Michoud were launched on Shuttle missions between 1981 and 2011.

The retirement of the Space Shuttle once again put the future of Michoud in doubt. It was originally tapped to build the core stage of the Constellation program's Ares V booster, which was similar in size and construction to the Shuttle External Tank. The cancellation of Constellation in 2010 brought that to a halt, but then Congress and NASA rode to the rescue with the absurd-as-a-rocket but excellent-as-a-jobs-program Space Launch System (SLS), whose centre core stage also resembles the External Tank and Ares V. SLS first stage fabrication is presently underway at Michoud. Perhaps when the schedule-slipping, budget-busting SLS is retired after a few flights (if, in fact, it ever flies at all), bringing to a close the era of giant taxpayer-funded throwaway rockets, the Michoud facility can be repurposed to more productive endeavours.

This book is largely a history of Michoud in photos and captions, with text introducing chapters on each phase of the facility's history. All of the photos are in black and white, and are well-reproduced. In the Kindle edition many can be expanded to show more detail. There are a number of copy-editing and factual errors in the text and captions, but not so many as to distract or mislead the reader. The unidentified “visitors” shown touring the Michoud facility in July 1967 (chapter 3, Kindle location 392) are actually the Apollo 7 crew: Walter Schirra, Donn Eisele, and Walter Cunningham, who would fly on a Michoud-built Saturn 1B in October 1968.

For a book of just 130 pages, most of which are black and white photographs, the hardcover is hideously expensive (US$29 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.


Wright, Tom and Bradley Hope. Billion Dollar Whale. New York: Hachette Books, 2018. ISBN 978-0-316-43650-2.
Low Taek Jho, who westernised his name to “Jho Low”, which I will use henceforth, was the son of a wealthy family in Penang, Malaysia. The family's fortune had been founded by Low's grandfather, who had immigrated to the then British colony of Malaya from China and founded a garment manufacturing company which Low's father had continued to build and had recently sold for around US$15 million. The Low family were among the wealthiest in Malaysia and wanted the best for their son. For the last two years of his high school education, Jho was sent to the Harrow School, a prestigious private British boarding school whose alumni include seven British Prime Ministers, among them Winston Churchill and Robert Peel, as well as foreign students such as Jawaharlal Nehru and King Hussein of Jordan. At Harrow, he would meet classmates whose families' wealth was in the billions, and his ambition to join their ranks was fired.

After graduating from Harrow, Low decided the career he wished to pursue would be better served by a U.S. business education than the traditional Cambridge or Oxford path chosen by many Harrovians and enrolled in the University of Pennsylvania's Wharton School undergraduate program. Previous Wharton graduates include Warren Buffett, Walter Annenberg, Elon Musk, and Donald Trump. Low majored in finance, but mostly saw Wharton as a way to make connections. Wharton was a school of choice for the sons of Gulf princes and billionaires, and Low leveraged his connections, while still an undergraduate, into meetings in the Gulf with figures such as Yousef Al Otaiba, foreign policy adviser to the sheikhs running the United Arab Emirates. Otaiba, in turn, introduced him to Khaldoon Khalifa Al Mubarak, who ran a fund called Mubadala Development, which was on the cutting edge of the sovereign wealth fund business.

Since the 1950s resource-rich countries, in particular the petro-states of the Gulf, had set up sovereign wealth funds to invest the surplus earnings from sales of their oil. The idea was to replace the natural wealth which was being extracted and sold with financial assets that would generate income, appreciate over time, and serve as the basis of their economies when the oil finally ran out. By the early 2000s, the total funds under management by sovereign wealth funds were US$3.5 trillion, comparable to the annual gross domestic product of Germany. Sovereign wealth funds were originally run in a very conservative manner, taking few risks—“gentlemen prefer bonds”—but since the inflation and currency crises of the 1970s had turned to more aggressive strategies to protect their assets from the ravages of Western money printing and financial shenanigans.

While some sovereign wealth funds, for example Norway's (with around US$1 trillion in assets the largest in the world), are models of transparency and prudent (albeit often politically correct) investing, others, including some in the Gulf states, are accountable only to autocratic rulers and have been suspected of acting as personal slush funds. On the other hand, managers of Gulf funds must be aware that bad investment decisions may cost them not only their jobs but their heads.

Mubadala was a new kind of sovereign wealth fund. Rather than a conservative steward of assets for future generations, it was run more like a leveraged Wall Street hedge fund: borrowing on global markets, investing in complex transactions, and aiming to develop the industries which would sustain the local economy when the oil inevitably ran out. Jho Low saw Al Mubarak, not yet thirty years old, making billion-dollar deals almost entirely at his own discretion, playing a role on the global stage, driving the development of Abu Dhabi's economy, and being handsomely compensated for his efforts. That's the game Low wanted to be in, and he started working toward it.

Before graduating from Wharton, he set up a British Virgin Islands company he named the “Wynton Group”, which stood for his goal to “win tons” of money. After graduation in 2005 he began to pitch the contacts he'd made through students at Harrow and Wharton on deals he'd identified in Malaysia, acting as an independent development agency. He put together a series of real estate deals, bringing money from his Gulf contacts and persuading other investors that large sovereign funds were on-board by making token investments from offshore companies he'd created whose names mimicked those of well-known funds. This is a trick he would continue to use in the years to come.

Still, he kept his eye on the goal: a sovereign wealth fund, based in Malaysia, that he could use for his own ends. In April 2009 Najib Razak became Malaysia's prime minister. Low had been cultivating a relationship with Najib since meeting him through Najib's stepson years before in London. Now it was time to cash in. Najib needed money to shore up his fragile political position, and Low was ready to pitch him on how to get it.

Shortly after taking office, Najib announced the formation of the 1Malaysia Development Berhad, or 1MDB, a sovereign wealth fund aimed at promoting foreign direct investment in projects to develop the economy of Malaysia and benefit all of its ethnic communities: those of Malay, Chinese, and Indian ancestry (hence “1Malaysia”). Although Jho Low had no official position with the fund, he was the one who promoted it, sold Najib on it, and took the lead in raising its capital, both from his contacts in the Gulf and, leveraging that money, in the international debt markets with the assistance of the flexible ethics and unquenchable greed of Goldman Sachs and its ambitious go-getters in Asia.

Low's pitch to the prime minister, whether explicit or nod-nod, wink-wink, went well beyond high-minded goals such as developing the economy, bringing all ethnic groups together, and creating opportunity. In short, what all the talk of “corporate social responsibility” really meant was using the fund as Najib's personal piggy bank, funded by naïve foreign investors, to reward his political allies and buy votes, shutting out the opposition. Low told Najib that, at the price of aligning his policies with those of his benefactors in the Gulf, he could keep the gravy train running and ensure his tenure in office for the foreseeable future.

But what was in it for Low, apart from commissions, finder's fees, and the satisfaction of benefitting his native land? Well, rather more, actually. No sooner did the money hit the accounts of 1MDB than Low set up a series of sham transactions with deceptively-named companies to spirit the money out of the fund and put it into his own pockets. And now it gets a little bit weird for this scribbler. At the centre of all of this skulduggery was a private Swiss bank named BSI. This was my bank. I mean, I didn't own the bank (thank Bob!), but I'd been doing business there (or with its predecessors, before various mergers and acquisitions) since before Jho Low was born. In my dealings with them they were the soul of probity and beyond reproach, but you never know what's going on on the other side of the office, or especially in its branch office in the Wild East of Singapore. Part of the continuo to this financial farce are the battles between BSI's compliance people, who kept saying, “Wait, this doesn't make any sense”, and the transaction-side people, eyeing the commissions to be earned for moving the money from who-knows-where to who-knows-whom. But, back to the main story.

Ultimately, Low's looting pipeline worked, and he spirited away most of the proceeds of the initial funding of 1MDB into his own accounts or those he controlled. There is a powerful lesson here, as applicable to the security of computer systems or physical infrastructure as to financial assets. Try to chisel a few pennies from your credit card company and you'll be nailed. Fudge a little on your tax return, and it's hard time, serf. But when you play at the billion dollar level, the system proved almost completely undefended against an amoral grifter who was bent not on a subtle and creative form of looting in the Bernie Madoff or Enron mould, but simply brazenly picking the pockets of a massive fund through childishly obvious means such as deceptively named offshore shell corporations, shuffling money among accounts in a modern-day version of check kiting, and appealing to banks' hunger for transaction fees over their ethical obligations to their owners and other customers.

Nobody knows how much Jho Low looted from 1MDB in this and subsequent transactions. Estimates of the total money spirited out of 1MDB range as high as US$4.5 billion, and Low's profligate spending alone as he was riding high may account for a substantial fraction of that.

Much of the book is an account of Low's lifestyle when he was riding high. He was not only utterly amoral when it came to bilking investors, leaving the poor of Malaysia on the hook, but seemingly incapable of looking beyond the next party, gambling spree, or debt repayment. It's like he always thought there'd be a greater fool to fleece, and that there was no degree of wretched excess in his spending which would invite the question “How did he earn this money?” I'm not going to dwell upon this. It's boring. Stylish criminals whose lifestyles are as suave as their crimes are elegant can be interesting; grifters who blow money on down-market parties with gutter rappers and supermarket tabloid celebrities aren't. In a marvelous example of meta-irony, Low funded a Hollywood movie production company which made the film The Wolf of Wall Street, about a cynical grifter like Low himself.

And now comes the part where I tell you how it all came undone, everybody got their just deserts, and the egregious perpetrators are languishing behind bars. Sorry, not this time, or at least not yet.

Jho Low escaped pursuit on his luxury super-yacht and now is reputed to be living in China, travelling freely and living off his ill-gotten gains. The “People's Republic” seems quite hospitable to those who loot the people of its neighbours (assuming they adequately grease the palms of its rulers).

Goldman Sachs suffered no sanctions as a result of its complicity in the 1MDB funding and the appropriation of funds.

BSI lost its Swiss banking licence, but was acquired by another bank and most of its employees, except for a few involved in dealing with Low, kept their jobs. (My account was transferred to the successor bank with no problems. They never disclosed the reason for the acquisition.)

This book, by the two Wall Street Journal reporters who untangled what may be the largest one-man financial heist in human history, provides a look inside the deeply corrupt world of paper money finance at its highest levels, and is an illustration of the extent to which people are disinclined to ask obvious questions like “Where is the money coming from?” while the good times are rolling. What is striking is how banal the whole affair is. Jho Low's talents would have made him a great success in legitimate development finance, but instead he managed to steal billions, ultimately from mostly poor people in his native land, and blow the money on wild parties, shallow celebrities, ostentatious real estate, cars, and yachts, and binges of high-stakes gambling in skeevy casinos. The collapse of the whole tawdry business reflects poorly on institutions like multinational investment banks, large accounting and auditing firms, financial regulators, Swiss banks, and the whole “sustainable development” racket in the third world. Jho Low, a crook through and through, looked at these supposedly august institutions and recognised them as kindred spirits and then figured out transparently simple ways to use them to steal billions. He got away with it, and they are still telling governments, corporations, and investors how to manage their affairs and, inexplicably, being taken seriously and handsomely compensated for their “expertise”.

 Permalink

Kroese, Robert. The Voyage of the Iron Dragon. Grand Rapids MI: St. Culain Press, 2019. ISBN 978-1-7982-3431-0.
This is the third and final volume in the Iron Dragon trilogy which began with The Dream of the Iron Dragon (August 2018) and continued in The Dawn of the Iron Dragon (May 2019). When I discover a series of books, I usually space the volumes out to enjoy them over time, but the second book of this trilogy left its characters in such a dire pickle that I just couldn't wait to see how the author managed to wrap up the story in just one more book, and dove right in to the concluding volume. It is a satisfying end to the saga, albeit in some places seeming rushed compared to the more deliberate development of the story and characters in the first two books.

First, a note. Despite being published in three books, this is one huge, sprawling story which stretches over more than a thousand pages, decades of time, and locations as far-flung as Constantinople, Iceland, the Caribbean, and North America; in addition to the cultures of those places, we have human spacefarers from the future, Vikings, and an alien race called the Cho-ta'an bent on exterminating humans from the galaxy. You should read the three books in order: Dream, Dawn, and Voyage. If you start in the middle, despite the second and third volumes' having a brief summary of the story so far, you'll be completely lost as to who the characters are, what they're trying to do, and how they ended up pursuing the desperate and seemingly impossible task in which they are engaged (building an Earth-orbital manned spacecraft in the middle ages while leaving no historical traces of their activity which later generations of humans might find). “Read the whole thing,” in order. It's worth it.

With the devastating events which concluded the second volume, the spacemen are faced with an even more daunting challenge than that in which they were previously engaged, and with far less confidence of success in their mission of saving humanity in its war for survival against the Cho-ta'an more than 1500 years in their future. As this book begins, more than two decades have passed since the spacemen crashed on Earth. They have patiently been building up the infrastructure required to build their rocket, establishing mining, logging, materials processing, and manufacturing at a far-flung series of camps all linked together by Viking-built and -crewed oceangoing ships. Just as important as tools and materials is human capital: the spacemen have had to set up an ongoing programme to recruit, educate, and train the scientists, engineers, technicians, drafters, managers, and tradespeople of all kinds needed for a 20th century aerospace project, all in a time when only a tiny fraction of the population is literate, and they have reluctantly made peace with the Viking way of “recruiting” the people they need.

The difficulty of all of this is compounded by the need to operate in absolute secrecy. Experience has taught the spacemen that, having inadvertently travelled into Earth's past, history cannot be changed. Consequently, nothing they do can interfere in any way with the course of recorded human history because that would conflict with what actually happened and would therefore be doomed to failure. And in addition, some Cho-ta'an who landed on Earth may still be alive and bent on stopping their project. While they must work technological miracles to have a slim chance of saving humanity, the Cho-ta'an need only thwart them in any one of a multitude of ways to win. Their only hope is to disappear.

The story is one of dogged persistence, ingenuity in the face of formidable obstacles everywhere; dealing with adversaries as varied as Viking chieftains, the Vatican, Cho-ta'an aliens, and native American tribes; epic battles; disheartening setbacks; and inspiring triumphs. It is a heroic story on a grand scale, worthy of inclusion among the great epics of science fiction's earlier golden ages.

When it comes to twentieth century rocket engineering, there are a number of goofs and misconceptions in the story, almost all of which could have been remedied without any impact on the plot. Although they aren't precisely plot spoilers, I'll take them behind the curtain for space-nerd readers who wish to spot them for themselves without foreknowledge.

Spoiler warning: Plot and/or ending details follow.  
  • In chapter 7, Alma says, “The Titan II rockets used liquid hydrogen for the upper stages, but they used kerosene for the first stage.” This is completely wrong. The Titan II was a two stage rocket and used the same hypergolic propellants (hydrazine fuel and dinitrogen tetroxide oxidiser) in both the first and second stages.
  • In chapter 30 it is claimed “While the first stage of a Titan II rocket could be powered by kerosene, the second and third stages needed a fuel with a higher specific impulse in order to reach escape velocity of 25,000 miles per hour.” Oh dear—let's take this point by point. First of all, the first stage of the Titan II was not and could not be powered by kerosene. It was designed for hypergolic fuels, and its turbopumps and lack of an igniter would not work with kerosene. As described below, the earlier Titan I used kerosene, but the Titan II was a major re-design which could not be adapted for kerosene. Second, the second stage of the Titan II used the same hypergolic propellant as the first stage, and this propellant had around the same specific impulse as kerosene and liquid oxygen. Third, the Titan II did not have a third stage at all. It delivered the Gemini spacecraft into orbit using the same two stage configuration as the ballistic missile. The Titan II was later adapted to use a third stage for unmanned space launch missions, but a third stage was never used in Project Gemini. Finally, the mission of the Iron Dragon, like that of the Titan II launching Gemini, was to place its payload in low Earth orbit with a velocity of around 17,500 miles per hour, not escape velocity of 25,000 miles per hour. Escape velocity would fling the payload into orbit around the Sun, not on an intercept course with the target in Earth orbit. (A quick calculation after this list makes the two velocities concrete.)
  • In chapter 45, it is stated that “Later versions of the Titan II rockets had used hypergolic fuels, simplifying their design.” This is incorrect: the Titan I rocket used liquid oxygen and kerosene (not liquid hydrogen), while the Titan II, a substantially different missile, used hypergolic propellants from inception. Basing the Iron Dragon's design upon the Titan II and then using liquid hydrogen and oxygen makes no sense at all and wouldn't work. Liquid hydrogen is much less dense than the hypergolic fuel used in the Titan II and would require a much larger fuel tank of entirely different design, incorporating insulation which was unnecessary on the Titan II. These changes would ripple all through the design, resulting in an entirely different rocket. In addition, the low density of liquid hydrogen would require an entirely different turbopump design and, not being hypergolic with liquid oxygen, would require a different pre-burner to drive the turbopumps.
  • A few sentences later, it is said that “Another difficult but relatively straightforward problem was making the propellant tanks strong enough to be pressurized to 5,000 psi but not so heavy they impeded the rocket's journey to space.” This isn't how tank pressurisation works in liquid fuelled rockets. Tanks are pressurised to increase structural rigidity and provide positive flow into the turbopumps, but pressures are modest. The pressure needed to force propellants into the combustion chamber comes from the boost imparted by the turbopumps, not propellant tank pressurisation. For example, in the Space Shuttle's External Tank, the flight pressure of the liquid hydrogen tank was between 32 and 34 psia, and the liquid oxygen tank 20 to 22 psig, vastly less than “5,000 psi”. A fuel tank capable of withstanding 5,000 psi would be far too heavy to ever get off the ground.
  • In chapter 46 we are told, “The Titan II had been adapted from the Atlas intercontinental ballistic missile….” This is completely incorrect. In fact, the Titan I was developed as a backup to the Atlas in case the latter missile's innovative technologies, such as the pressure-stabilised “balloon tanks”, could not be made to work. The Atlas and Titan I were developed in parallel and, when the Atlas went into service first, the Titan I was quickly retired and replaced by the hypergolic fuelled Titan II, which provided more secure basing and more rapid response to a launch order than the Atlas.
  • In chapter 50, when the Iron Dragon takes off, those viewing it “squinted against the blinding glare”. But liquid oxygen and liquid hydrogen (as well as the hypergolic fuels used by the original Titan II) burn with a nearly invisible flame. Liquid oxygen and kerosene produce a brilliant flame, but these propellants were not used in this rocket.
  • And finally, it's not a matter of the text, but what's with that cover illustration, anyway? The rocket ascending in the background is clearly modelled on a Soviet/Russian R-7/Soyuz rocket, which is nothing like what the Iron Dragon is supposed to be. While Iron Dragon is described as a two stage rocket burning liquid hydrogen and oxygen, Soyuz is a LOX/kerosene rocket (and the illustration has the characteristic bright flame of those propellants), has four side boosters (clearly visible), and the spacecraft has a visible launch escape tower, which Gemini did not have and was never mentioned in connection with the Iron Dragon.
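Since orbital and escape velocity are confused more than once in the text, here is a minimal Python sketch of where the two figures come from, using the standard circular-orbit and escape velocity formulas. The 160 km altitude is my own assumption, chosen as representative of a Gemini-class low Earth orbit; the √2 ratio between the two velocities is exact.

```python
# Circular orbital velocity vs. escape velocity at low Earth orbit.
# Assumed altitude: ~160 km, typical of a Gemini-class orbit.
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
r = 6_371_000 + 160_000  # mean Earth radius + assumed altitude, m

v_circular = math.sqrt(GM / r)    # velocity of a circular orbit
v_escape = math.sqrt(2 * GM / r)  # escape velocity: sqrt(2) times higher

MPH_PER_MS = 2.23694  # miles per hour per metre/second
print(f"Circular orbit: {v_circular * MPH_PER_MS:,.0f} mph")  # ~17,500
print(f"Escape:         {v_escape * MPH_PER_MS:,.0f} mph")    # ~24,700 (the oft-quoted 25,000)
```

A payload bound for low Earth orbit needs only the first figure; accelerating to the second would, as noted above, fling it into orbit around the Sun.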

Fixing all of these results in the Iron Dragon's being a two stage (see the start of chapter 51) rocket burning liquid hydrogen fuel and liquid oxygen oxidiser, of essentially novel design, sharing little with the Titan II. The present-day rocket which most resembles it is the Delta IV, which in its baseline (“Medium”) configuration is a two stage LOX/hydrogen rocket with more than adequate payload capacity to place a Gemini capsule in low Earth orbit. Its RS-68 first stage engine was designed to reduce complexity and cost, and would be a suitable choice for a project having to start from scratch. Presumably the database which provided the specifications of the Titan II would also include the Delta IV, and adapting it to their requirements (which would be largely a matter of simplifying and derating the design in the interest of reliability and ease of manufacture) would be much easier than trying to transform the Titan II into a LOX/hydrogen launcher.

Spoilers end here.  

Despite the minor quibbles in the spoiler section (which do not detract in any way from enjoyment of the tale), this is a rollicking good adventure and satisfying conclusion to the Iron Dragon saga. It seemed to me that the last part of the story was somewhat rushed and could have easily occupied another full book, but the author promised us a trilogy and that's what he delivered, so fair enough. In terms of accomplishing the mission upon which the spacemen and their allies had laboured for half a century, essentially all of the action occurs in the last quarter of this final volume, starting in chapter 44. As usual nothing comes easy, and the project must face a harrowing challenge which might undo everything at the last moment, then confront the cold equations of orbital mechanics. The conclusion is surprising and, while definitively ending this tale, leaves the door open to further adventures set in this universe.

This series has been a pure delight from start to finish. It wasn't obvious to this reader at the outset that it would be possible to pull time travel, Vikings, and spaceships together into a story that worked, but the author has managed to do so, while maintaining historical authenticity about a neglected period in European history. It is particularly difficult to craft a time travel yarn in which it is impossible for the characters to change the recorded history of our world, but this is another challenge the author rises to and almost makes it look easy. Independent science fiction is where readers will find the heroes, interesting ideas, and adventure which brought them to science fiction in the first place, and Robert Kroese is establishing himself as a prolific grandmaster of this exciting new golden age.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Yiannopoulos, Milo. Diabolical. New York: Bombardier Books, 2018. ISBN 978-1-64293-163-1.
Milo Yiannopoulos has a well-deserved and hard-earned reputation as a controversialist, inciter of outrage, and offender of all the right people. His acid wit and his mockery of those amply deserving it cause some to dismiss what he says when he's deadly serious about something, as he is in this impassioned book about the deep corruption in the Roman Catholic church and its seeming abandonment of its historic mission as a bastion of the Christian values which made the West the West. It is an earnest plea for a new religious revival, from the bottom up, to rid the Church of its ageing, social justice indoctrinated hierarchy which, if not entirely homosexual, has tolerated widespread infiltration of the priesthood by sexually active homosexual men who have indulged their attraction to underage (but almost always post-pubescent) boys, and has been complicit in covering up these scandals and allowing egregious offenders to escape discipline and continue their predatory behaviour for many years.

Ever since emerging as a public figure, Yiannopoulos has had a target on his back. A young, handsome (he may prefer “fabulous”), literate, well-spoken, quick-witted, funny, flaming homosexual, Roman Catholic, libertarian-conservative, pro-Brexit, pro-Trump, prolific author and speaker who can fill auditoriums on college campuses and simultaneously entertain and educate his audiences, willing to debate the most vociferous of opponents, and who has the slaver Left's number and is aware of their vulnerability just at what they imagined was the moment of triumph, is the stuff of nightmares to those who count on ignorant legions of dim followers capable of little more than chanting rhyming slogans and littering. He had to be silenced, and to a large extent, he has been. But, like the Terminator, he's back, and he's aiming higher: for the Vatican.

It was a remarkable judo throw the slavers and their media accomplices on the left and “respectable right” used to rid themselves of this turbulent pest. The virtuosos of victimology managed to use the author's having been a victim of clerical sexual abuse, and spoken candidly about it, to effectively de-platform, de-monetise, disemploy, and silence him in the public sphere by proclaiming him a defender of pædophilia (which has nothing to do with the phenomenon he was discussing and of which he was a victim: homosexual exploitation of post-pubescent boys).

The author devotes a chapter to his personal experience and how it paralleled that of others. At the same time, he draws a distinction between what happened to him and the rampant homosexuality in some seminaries and serial abuse by prelates in positions of authority and its being condoned and covered up by the hierarchy. He traces the blame all the way to the current Pope, whose collectivist and social justice credentials were apparent to everybody before his selection. Regrettably, he concludes, Catholics must simply wait for the Pope to die or retire, while laying the ground for a revival and restoration of the faith which will drive the choice of his successor.

Other chapters discuss the corrosive influence of so-called “feminism” on the Church and how it has corrupted what was once a manly warrior creed that rolled back the scourge of Islam when it threatened civilisation in Europe and is needed now more than ever after politicians seemingly bent on societal suicide have opened the gates to the invaders; how utterly useless and clueless the legacy media are in covering anything relating to religion (a New York Times reporter asked First Things editor Fr Richard John Neuhaus what he made of the fact that the newly elected pope was “also” going to be named the bishop of Rome); and how the rejection and collapse of Christianity as a pillar of the West risks its replacement with race as the central identity of the culture.

The final chapter quotes Chesterton (from Heretics, 1905),

Everything else in the modern world is of Christian origin, even everything that seems most anti-Christian. The French Revolution is of Christian origin. The newspaper is of Christian origin. The anarchists are of Christian origin. Physical science is of Christian origin. The attack on Christianity is of Christian origin. There is one thing, and one thing only, in existence at the present day which can in any sense accurately be said to be of pagan origin, and that is Christianity.

Much more is at stake than one sect (albeit the largest) of Christianity. The infiltration, subversion, and overt attacks on the Roman Catholic church are an assault upon an institution which has been central to Western civilisation for two millennia. If it falls, and it is falling, in large part due to self-inflicted wounds, the forces of darkness will be coming for the smaller targets next. Whatever your religion, or whether you have one or not, collapse of one of the three pillars of our cultural identity is something to worry about and work to prevent. In the author's words, “What few on the political Right have grasped is that the most important component in this trifecta isn't capitalism, or even democracy, but Christianity.” With all three under assault from all sides, this book makes an eloquent argument to secular free marketeers and champions of consensual government not to ignore the cultural substrate which allowed both to emerge and flourish.

 Permalink

July 2019

Suarez, Daniel. Delta-v. New York: Dutton, 2019. ISBN 978-1-5247-4241-6.
James Tighe is an extreme cave diver, pushing the limits of human endurance and his equipment to go deeper, farther, and into unexplored regions of underwater caves around the world. While he is exploring the depths of a cavern in China, an earthquake triggers disastrous rockfalls in the cave, killing several members of his expedition. Tighe narrowly escapes with his life, leading the survivors to safety, and the video he recorded with his helmet camera makes him an instant celebrity. He is surprised and puzzled when invited by billionaire and serial entrepreneur Nathan Joyce to a party on Joyce's private island in the Caribbean. Joyce meets privately with Tighe and explains that his theory of economics predicts a catastrophic collapse of the global debt bubble in the near future, with the potential to destroy modern civilisation.

Joyce believes that the only way to avert this calamity is to jump start the human expansion into the solar system, thus creating an economic expansion into a much larger sphere of activity than one planet and allowing humans to “grow out” of the crushing debt their profligate governments have run up. In particular, he believes that asteroid mining is the key to opening the space frontier, as it will provide a source of raw materials which do not have to be lifted at prohibitive cost out of Earth's deep gravity well. Joyce intends to use part of his fortune to bootstrap such a venture, and invites Tighe to join a training program to select a team of individuals ready to face the challenges of long-term industrial operations in deep space.

Tighe is puzzled, “Why me?” Joyce explains that much more important than a background in aerospace or mining is the ability to make the right decisions under great pressure and uncertainty. Tighe's leadership in rescuing his dive companions demonstrated that ability and qualified him to try out for Joyce's team.

By the year 2033, the NewSpace companies founded in the early years of the 21st century have matured and, although taking different approaches, have come to dominate the market for space operations, mostly involving constellations of Earth satellites. The so-called “NewSpace Titans” (names have been changed, but you'll recognise them from their styles) have made their billions developing this industry, and some have expressed interest in asteroid mining, but mostly via robotic spacecraft and on a long-term time scale. Nathan Joyce wants to join their ranks and advance the schedule by sending humans to do the job. Besides, he argues, if the human destiny is to expand into space, why not get on with it, deploying their versatility and ability to improvise on this difficult challenge?

The whole thing sounds rather dodgy to Tighe, but cave diving does not pay well, and the signing bonus and promised progress payments if he meets various milestones in the training programme sound very attractive, so he signs on the dotted line. Further off-putting are a draconian non-disclosure agreement and an “Indemnity for Accidental Death and Dismemberment” sprung on candidates only after they arrive at the remote island training facility. There are surveillance cameras and microphones everywhere, and Tighe and others speculate that they may be part of an elaborate reality TV show staged by Joyce, not a genuine space project.

The other candidates are from all kinds of backgrounds: ex-military, former astronauts, BASE jumpers, mountaineers, scientists, and engineers. They are almost all on the older side for adventurers: mid-thirties to mid-forties—something about cosmic rays. And most of them have the hallmarks of DRD4-7R adventurers.

As the programme gets underway, the candidates discover it resembles Special Forces training more than astronaut candidate instruction, with a series of rigorous tests evaluating personal courage, endurance, psychological stability, problem-solving skills, tolerance for stress, and the ability to form and work as a team. Predictably, their numbers are winnowed as they approach the milestone where a few will be selected for orbital training and qualification for the deep space mission.

Tighe and the others discover that their employer is anything but straightforward, and they begin to twig to the fact that the kind of people who actually open the road to human settlement of the solar system may resemble the ruthless railroad barons of the 19th century more than the starry-eyed dreamers of science fiction. These revelations continue as the story unfolds.

After gut-wrenching twists and turns, Tighe finds himself part of a crew selected to fly to and refine resources from a near-Earth asteroid first reconnoitred by the Japanese Hayabusa2 mission in the 2010s. Risks are everywhere, and not just in space: corporate maneuvering back on Earth can kill the crew just as surely as radiation, vacuum, explosions, and collisions in space. Their only hope may be a desperate option recalling one of the greatest feats of seamanship in Earth's history.

This is a gripping yarn in which the author confronts his characters with one seemingly insurmountable obstacle and disheartening setback after another, then describes how these carefully selected and honed survivors deal with it. There are no magical technologies: all of the technical foundations exist today, at least at the scale of laboratory demonstrations, and could plausibly be scaled up to those in the story by the mid-2030s. The intricate plot is a salutary reminder that deception, greed, dodgy finances, corporate hijinks, bureaucracy, and destructively hypertrophied egos do not stop at the Kármán line. The conclusion is hopeful and a testament to the place for humans in the development of space.

A question and answer document about the details underlying the story is available on the author's Web site.

 Permalink

Murray, Charles and Catherine Bly Cox. Apollo. Burkittsville, MD: South Mountain Books, [1989, 2004] 2010. ISBN 978-0-9760008-0-8.
On November 5, 1958, NASA, only four months old at the time, created the Space Task Group (STG) to manage its manned spaceflight programs. Although there had been earlier military studies of manned space concepts, and many saw eventual manned orbital flights growing out of the rocket plane projects conducted by NASA's predecessor, the National Advisory Committee for Aeronautics (NACA), and the U.S. Air Force, at the time of the STG's formation the U.S. had no formal manned space program. The initial group, staffed largely with people from the NACA's Langley Research Center and initially headquartered there, numbered 45 in all, including eight secretaries and “computers”—operators of electromechanical desk calculators. There were no firm plans for manned spaceflight, no budget approved to pay for it, no spacecraft, no boosters, no launch facilities, no mission control centre, no astronauts, no plans to select and train them, and no experience either with human flight above the Earth's atmosphere or with more than a few seconds of weightlessness. And yet this team, the core of an effort which would grow to include around 400,000 people at NASA and its 20,000 industry and academic contractors, would, just ten years and nine months later, on July 20th, 1969, land two people on the surface of the Moon and then return them safely to the Earth.

Ten years is not a long time when it comes to accomplishing a complicated technological project. Development of the Boeing 787, a mid-sized commercial airliner which flew no farther, faster, or higher than its predecessors, and which was designed and built using computer-aided design and manufacturing technologies, took eight years from project launch to entry into service, and the F-35 fighter plane entered service, and then only in small numbers of a single model, a full twenty-three years after the start of its development.

In November, 1958, nobody in the Space Task Group was thinking about landing on the Moon. Certainly, trips to the Moon had been discussed in fables from antiquity to Jules Verne's classic De la terre à la lune of 1865, and in 1938 members of the British Interplanetary Society published a (totally impractical) design for a Moon rocket powered by more than two thousand solid rocket motors bundled together, to be discarded as they burned out, but only a year after the launch of the first Earth satellite, and with nothing yet successfully returned from Earth orbit to the Earth, talk of manned Moon ships sounded like—lunacy.

The small band of stalwarts at the STG undertook the already daunting challenge of manned space flight with an incremental program they called Project Mercury, whose goal was to launch a single man into Earth orbit in a capsule (unable to change its orbit once released from the booster rocket, it barely deserved the term “spacecraft”) atop a converted Atlas intercontinental ballistic missile. In essence, the idea was to remove the warhead, replace it with a tiny cone-shaped can with a man in it, and shoot him into orbit. At the time the project began, the reliability of the Atlas rocket was around 75%, so NASA could expect around one in four launches to fail, with the Atlas known for spectacular explosions on the ground or on the way to space. When, in early 1960, the newly-chosen Mercury astronauts watched a test launch of the rocket they were to ride, it exploded less than a minute after launch. This was the fifth consecutive failure of an Atlas booster (although not all were so spectacular).

Doing things which were inherently risky on tight schedules with a shoestring budget (compared to military projects), and achieving an acceptable degree of safety by fanatic attention to detail and mountains of paperwork (NASA engineers quipped that no spacecraft could fly until the mass of paper documenting its construction and test equalled that of the flight hardware), became an integral part of the NASA culture. NASA was proceeding on its deliberate, step-by-step development of Project Mercury, and in 1961 was preparing for the first space flight by a U.S. astronaut—not into orbit on an Atlas, just a 15 minute suborbital hop on a version of the reliable Redstone rocket that launched the first U.S. satellite in 1958—when, on April 12, 1961, it was sorely disappointed: the Soviet Union launched Yuri Gagarin into orbit on Vostok 1. Not only was the first man in space a Soviet, they had accomplished an orbital mission, which NASA hadn't planned to attempt until at least the following year.

On May 5, 1961, NASA got back into the game, or at least the minor league, when Alan Shepard was launched on Mercury-Redstone 3. Sure, it was just a 15 minute up and down, but at least an American had been in space, if only briefly, and it was enough to persuade a recently-elected, young U.S. president smarting from being scooped by the Soviets to “take longer strides”. On May 25, less than three weeks after Shepard's flight, before a joint session of Congress, President Kennedy said, “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth.” Kennedy had asked his vice president, Lyndon Johnson, what goal the U.S. could realistically hope to achieve before the Soviets, and after consulting with the NASA administrator, James Webb (an oil man and lawyer), and no NASA technical people other than Wernher von Braun, he reported that a manned Moon landing was the only milestone the Soviets, with their heavy boosters and lead in manned space flight, were unlikely to achieve first. So, to the Moon it was.

The Space Task Group people who were ultimately going to be charged with accomplishing this goal—and who had no advance warning until they heard Kennedy's speech or got urgent telephone calls from colleagues who had also heard the broadcast—were, in the words of their leader, Robert Gilruth (who had no more warning than his staff), “aghast”. He and his team had, like von Braun in the 1950s, envisioned a deliberate, step-by-step development of space flight capability: manned orbital flight, then a more capable spacecraft with a larger crew able to maneuver in space, a space station to explore the biomedical issues of long-term space flight and serve as a base to assemble craft bound farther into space, perhaps a reusable shuttle craft to ferry crew and cargo to space without (wastefully and at great cost) throwing away rockets designed as long-range military artillery on every mission, followed by careful reconnaissance of the Moon by both unmanned and manned craft to map its surface, find safe landing zones, and then demonstrate the technologies that would be required to get people there and back safely.

All that was now clearly out the window. If Congress came through with the massive funds it would require, going to the Moon would be a crash project like the Manhattan Project to build the atomic bomb in World War II, or the massive industrial mobilisation to build Liberty Ships or the B-17 and B-29 bombers. The clock was ticking: when Kennedy spoke, there were just 3142 days until December 31, 1969 (yes, I know the decade actually ends at the end of 1970, since there was no year 0 in the Gregorian calendar, but explaining this to clueless Americans is a lost cause), around eight years and seven months. What needed to be done? Everything. How much time was there to do it? Not remotely enough. Well, at least the economy was booming, politicians seemed willing to pay the huge bills for what needed to be done, and there were plenty of twenty-something newly-minted engineering graduates ready and willing to work around the clock without a break to make real what they'd dreamed of since reading science fiction in their youth.
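That figure of 3142 days is simple calendar arithmetic, easily confirmed with a two-line check in Python:

```python
# Days from Kennedy's speech (May 25, 1961) to the end-of-decade
# deadline (December 31, 1969).
from datetime import date
print((date(1969, 12, 31) - date(1961, 5, 25)).days)  # => 3142
```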

The Apollo Project was simultaneously one of the most epochal and inspiring accomplishments of the human species, far more likely to be remembered a thousand years hence than anything else that happened in the twentieth century, and at the same time a politically-motivated blunder which retarded human expansion into the space frontier. Kennedy's speech was at the end of May 1961. Perhaps because the Space Task Group was so small, it and NASA were able to react with a speed which is stunning to those accustomed to twenty year development projects for hardware far less complicated than Apollo.

In June and July [1961], detailed specifications for the spacecraft hardware were completed. By the end of July, the Requests for Proposals were on the street.

In August, the first hardware contract was awarded to M.I.T.'s Instrumentation Laboratory for the Apollo guidance system. NASA selected Merritt Island, Florida, as the site for a new spaceport and acquired 125 square miles of land.

In September, NASA selected Michoud, Louisiana, as the production facility for the Saturn rockets, acquired a site for the Manned Spacecraft Center—the Space Task Group grown up—south of Houston, and awarded the contract for the second stage of the Saturn [V] to North American Aviation.

In October, NASA acquired 34 square miles for a Saturn test facility in Mississippi.

In November, the Saturn C-1 was successfully launched with a cluster of eight engines, developing 1.3 million pounds of thrust. The contract for the command and service module was awarded to North American Aviation.

In December, the contract for the first stage of the Saturn [V] was awarded to Boeing and the contract for the third stage was awarded to Douglas Aircraft.

By January of 1962, construction had begun at all of the acquired sites and development was under way at all of the contractors.

Such was the urgency with which NASA was responding to Kennedy's challenge and deadline that all of these decisions and work were done before deciding on how to get to the Moon—the so-called “mission mode”. There were three candidates: direct-ascent, Earth orbit rendezvous (EOR), and lunar orbit rendezvous (LOR). Direct ascent was the simplest, and much like the idea of a Moon ship in golden age science fiction. One launch from Earth would send a ship to the Moon which would land there, then take off and return directly to Earth. There would be no need for rendezvous and docking in space (which had never been attempted, and nobody was sure was even possible), and no need for multiple launches per mission, which was seen as an advantage at a time when rockets were only marginally reliable and notorious for long delays from their scheduled launch time. The downside of direct-ascent was that it would require an enormous rocket: planners envisioned a monster called Nova which would have dwarfed the Saturn V eventually used for Apollo and required new manufacturing, test, and launch facilities to accommodate its size. Also, it is impossible to design a ship which is optimised both for landing under rocket power on the Moon and for re-entering Earth's atmosphere at high speed. Still, direct-ascent seemed to involve the fewest technological unknowns. Ever wonder why the Apollo service module had that enormous Service Propulsion System engine? When it was specified, the mission mode had not been chosen, and it was made powerful enough to lift the entire command and service module off the lunar surface and return them to the Earth after a landing in direct-ascent mode.

Earth orbit rendezvous was similar to what Wernher von Braun envisioned in his 1950s popular writings about the conquest of space. Multiple launches would be used to assemble a Moon ship in low Earth orbit, and then, when it was complete, it would fly to the Moon, land, and then return to Earth. Such a plan would not necessarily even require a booster as large as the Saturn V. One might, for example, launch the lunar landing and return vehicle on one Saturn I, the stage which would propel it to the Moon on a second, and finally the crew on a third, who would board the ship only after it was assembled and ready to go. This was attractive in not requiring the development of a giant rocket, but required on-time launches of multiple rockets in quick succession, orbital rendezvous and docking (and in some schemes, refuelling), and still had the problem of designing a craft suitable both for landing on the Moon and returning to Earth.

Lunar orbit rendezvous was originally considered a distant third in the running. A single large rocket (but smaller than Nova) would launch two craft toward the Moon. One ship would be optimised for flight through the Earth's atmosphere and return to Earth, while the other would be designed solely for landing on the Moon. The Moon lander, operating only in vacuum and the Moon's weak gravity, need not be streamlined or structurally strong, and could be potentially much lighter than a ship able to both land on the Moon and return to Earth. Finally, once its mission was complete and the landing crew safely back in the Earth return ship, it could be discarded, meaning that all of the hardware needed solely for landing on the Moon need not be taken back to the Earth. This option was attractive, requiring only a single launch and no gargantuan rocket, and allowed optimising the lander for its mission (for example, providing better visibility to its pilots of the landing site), but it not only required rendezvous and docking, but doing it in lunar orbit which, if they failed, would strand the lander crew in orbit around the Moon with no hope of rescue.
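To see just how severe the mass penalty of the other modes was, consider a back-of-the-envelope sketch using the Tsiolkovsky rocket equation. All of the numbers below are my own rounded assumptions (a hypergolic-class specific impulse and approximate lunar descent and ascent delta-v values), not figures from the book; the point is only the exponential cost of hauling Earth-return hardware down to the lunar surface and back up.

```python
# Why lunar orbit rendezvous wins: every kilogram carried down to the
# lunar surface and back up must be paid for in propellant twice over,
# per the Tsiolkovsky rocket equation. All values are illustrative.
import math

g0 = 9.80665         # standard gravity, m/s^2
isp = 311.0          # assumed specific impulse of a hypergolic engine, s
dv_descent = 2000.0  # assumed delta-v, lunar orbit to surface, m/s
dv_ascent = 1900.0   # assumed delta-v, surface back to lunar orbit, m/s

def mass_ratio(dv):
    """Initial-to-final mass ratio for a given delta-v (Tsiolkovsky)."""
    return math.exp(dv / (isp * g0))

round_trip = mass_ratio(dv_descent) * mass_ratio(dv_ascent)
print(f"Each kg landed and returned costs {round_trip:.1f} kg in lunar orbit")
```

With these assumptions, every kilogram of heat shield and re-entry structure taken to the surface and back starts as roughly 3.6 kilograms in lunar orbit, before the propellant for the trip home is even counted; hence the appeal of a lightweight lander which is simply thrown away.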

After a high-stakes technical struggle, in the latter part of 1962, NASA selected lunar orbit rendezvous as the mission mode, with each landing mission to be launched on a single Saturn V booster, making the decision final with the selection of Grumman as contractor for the Lunar Module in November of that year. Had another mission mode been chosen, it is improbable in the extreme that the landing would have been accomplished in the 1960s.

The Apollo architecture was now in place. All that remained was building machines which had never been imagined before, learning to do things (on-time launches, rendezvous and docking in space, leaving spacecraft and working in the vacuum, precise navigation over distances no human had ever travelled before, and assessing all of the “unknown unknowns” [radiation risks, effects of long-term weightlessness, properties of the lunar surface, ability to land on lunar terrain, possible chemical or biological threats on the Moon, etc.]) and developing plans to cope with them.

This masterful book is the story of how what is possibly the largest collection of geeks and nerds ever assembled and directed at a single goal, funded with the abundant revenue from an economic boom, spurred by a geopolitical competition against the sworn enemy of liberty, took on these daunting challenges and, one by one, overcame them, found a way around, or simply accepted the risk because it was worth it. They learned how to tame giant rocket engines that randomly blew up by setting off bombs inside them. They abandoned the careful step-by-step development of complex rockets in favour of “all-up testing” (stack all of the untested pieces the first time, push the button, and see what happens) because “there wasn't enough time to do it any other way”. People were working 16–18–20 hours a day, seven days a week. Flight surgeons in Mission Control handed out “go and whoa pills”—amphetamines and barbiturates—to keep the kids on the console awake at work and asleep those few hours they were at home—hey, it was the Sixties!

This is not a tale of heroic astronauts and their exploits. The astronauts, as they have been the first to say, were literally at the “tip of the spear” and would not have been able to complete their missions without the work of almost half a million uncelebrated people who made them possible, not to mention the hundred million or so U.S. taxpayers who footed the bill.

This was not a straight march to victory. Three astronauts died in a launch pad fire the investigation of which revealed shockingly slapdash quality control in the assembly of their spacecraft and NASA's ignoring the lethal risk of fire in a pure oxygen atmosphere at sea level pressure. The second flight of the Saturn V was a near calamity due to multiple problems, some entirely avoidable (and yet the decision was made to man the next flight of the booster and send the crew to the Moon). Neil Armstrong narrowly escaped death in May 1968 when the Lunar Landing Research Vehicle he was flying ran out of fuel and crashed. And the division of responsibility between the crew in the spacecraft and mission controllers on the ground had to be worked out before it would be tested in flight where getting things right could mean the difference between life and death.

What can we learn from Apollo, fifty years on? Other than standing in awe at what was accomplished given the technology and state of the art of the time, and on a breathtakingly short schedule, little or nothing that is relevant to the development of space in the present and future. Apollo was the product of a set of circumstances which happened to come together at one point in history and are unlikely to ever recur. Although some of those who worked on making it a reality were dreamers and visionaries who saw it as the first step into expanding the human presence beyond the home planet, to those who voted to pay the forbidding bills (at its peak, NASA's budget, mostly devoted to Apollo, was more than 4% of all Federal spending; in recent years, it has settled at around one half of one percent: a national commitment to space eight times smaller as a fraction of total spending) Apollo was seen as a key battle in the Cold War. Allowing the Soviet Union to continue to achieve milestones in space while the U.S. played catch-up or forfeited the game would reinforce the Soviet message to the developing world that their economic and political system was the wave of the future, leaving decadent capitalism in the dust.

A young, ambitious, forward-looking president, smarting from being scooped once again by Yuri Gagarin's orbital flight and the humiliation of the débâcle at the Bay of Pigs in Cuba, seized on a bold stroke that would show the world the superiority of the U.S. by deploying its economic, industrial, and research resources toward a highly visible goal. And, after being assassinated two and a half years later, his successor, a space enthusiast who had directed a substantial part of NASA's spending to his home state and those of his political allies, presented the program as the legacy of the martyred president and vigorously defended it against those who tried to kill it or reduce its priority. The U.S. was in an economic boom which would last through most of the Apollo program until after the first Moon landing, and was the world's unchallenged economic powerhouse. And finally, the federal budget had not yet been devoured by uncontrollable “entitlement” spending and national debt was modest and manageable: if the national will was there, Apollo was affordable.

This confluence of circumstances was unique to its time and has not been repeated in the half century thereafter, nor is it likely to recur in the foreseeable future. Space enthusiasts who look at Apollo and what it accomplished in such a short time often err in assuming a similar program: government funded, on a massive scale with lavish budgets, focussed on a single goal, and based on special-purpose disposable hardware suited only for its specific mission, is the only way to open the space frontier. They are not only wrong in this assumption, but they are dreaming if they think there is the public support and political will to do anything like Apollo today. In fact, Apollo was not even particularly popular in the 1960s: only at one point in 1965 did public support for funding of human trips to the Moon poll higher than 50% and only around the time of the Apollo 11 landing did 50% of the U.S. population believe Apollo was worth what was being spent on it.

In fact, despite being motivated as a demonstration of the superiority of free people and free markets, Project Apollo was a quintessentially socialist space program. It was funded by money extracted by taxation, its priorities set by politicians, and its operations centrally planned and managed in a top-down fashion of which the Soviet functionaries at Gosplan could only dream. Its goals were set by politics, not economic benefits, science, or building a valuable infrastructure. This was not lost on the Soviets. Here is Soviet Minister of Defence Dmitriy Ustinov speaking at a Central Committee meeting in 1968, quoted by Boris Chertok in volume 4 of Rockets and People.

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

This kind of socialist operation works fine in a wartime crash program driven by time pressure, where unlimited funds and manpower are available, and where there is plenty of capital which can be consumed or borrowed to pay for it. But it does not create sustainable enterprises. Once the goal is achieved, the war won (or lost), or it runs out of other people's money to spend, the whole thing grinds to a halt or stumbles along, continuing to consume resources while accomplishing little. This was the predictable trajectory of Apollo.

Apollo was one of the noblest achievements of the human species and we should celebrate it as a milestone in the human adventure, but trying to repeat it is pure poison to the human destiny in the solar system and beyond.

This book is a superb recounting of the Apollo experience, told mostly about the largely unknown people who confronted the daunting technical problems and, one by one, found solutions which, if not perfect, were good enough to land on the Moon in 1969. Later chapters describe key missions, again concentrating on the problem solving which went on behind the scenes to achieve their goals or, in the case of Apollo 13, get home alive. Looking back on something that happened fifty years ago, especially if you were born afterward, it may be difficult to appreciate just how daunting the idea of flying to the Moon was in May 1961. This book is the story of the people who faced that challenge, pulled it off, and are largely forgotten today.

Both the 1989 first edition and 2004 paperback revised edition are out of print and available only at absurd collectors' prices. The Kindle edition, which is based upon the 2004 edition with small revisions to adapt it to digital reader devices, is available at a reasonable price, as is an unabridged audio book, which is a reading of the 2004 edition. You'd think there would have been a paperback reprint of this valuable book in time for the fiftieth anniversary of the landing of Apollo 11 (and the thirtieth anniversary of its original publication), but there wasn't.

Project Apollo is such a huge, sprawling subject that no book can possibly cover every aspect of it. For those who wish to delve deeper, here is a reading list of excellent sources. I have read all of these books and recommend every one. For those I have reviewed, I link to my review; for others, I link to a source where you can obtain the book.

If you wish to commemorate the landing of Apollo 11 in a moving ceremony with friends, consider hosting an Evoloterra celebration.

 Permalink

Egan, Greg. Schild's Ladder. New York: Night Shade Books, [2002, 2004, 2013] 2015. ISBN 978-1-59780-544-5.
Greg Egan is one of the most eminent contemporary authors in the genre of “hard” science fiction. By “hard”, one means not that it is necessarily difficult to read, but that the author has taken care to either follow the laws of known science or, if the story involves alternative laws (for example, a faster than light drive, anti-gravity, or time travel) to define those laws and then remain completely consistent with them. This needn't involve tedious lectures—masters of science fiction, like Greg Egan, “show, don't tell”—but the reader should be able to figure out the rules and the characters be constrained by them as the story unfolds. Egan is also a skilled practitioner of “world building” which takes hard science fiction to the next level by constructing entire worlds or universes in which an alternative set of conditions are worked out in a logical and consistent way.

Whenever a new large particle collider is proposed, fear-mongers prattle on about the risk of its unleashing some new physical phenomenon which might destroy the Earth or, for those who think big, the universe by, for example, causing it to collapse into a black hole or causing the quantum vacuum to tunnel to a lower energy state where the laws of physics are incompatible with the existence of condensed matter and life. This is, of course, completely absurd. We have observed cosmic rays, for example the Oh-My-God particle detected by an instrument in Utah in 1991, with energies more than twenty million times greater than those produced by the Large Hadron Collider, the most powerful particle accelerator in existence today. These natural cosmic rays strike the Earth, the Moon, the Sun, and everything else in the universe all the time and have been doing so for billions of years and, if you look around, you'll see that the universe is still here. If a high energy particle was going to destroy it, it would have been gone long ago.
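The “twenty million times” figure is a simple ratio of commonly quoted round numbers, and worth a sanity check (an order-of-magnitude comparison, nothing more):

```python
# Energy of the Oh-My-God cosmic ray vs. an LHC proton-proton collision.
omg_ev = 3.2e20  # Oh-My-God particle, ~3.2e20 eV (Fly's Eye, Utah, 1991)
lhc_ev = 1.3e13  # LHC collisions at 13 TeV = 1.3e13 eV
print(f"{omg_ev / lhc_ev:.1e}")  # ~2.5e7: some twenty-five million times
```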

No, if somebody's going to destroy the universe, I'd worry about some quiet lab in the physics building where somebody is exploring very low temperatures, trying to beat the record which stands at, depending upon how you define it, between 0.006 degrees Kelvin (for a large block of metal) and 100 picokelvin (for nuclear spins). These temperatures, and the physical conditions they may create, are deeply unnatural and, unless there are similar laboratories and apparatus created by alien scientists on other worlds, colder than have ever existed anywhere in our universe ever since the Big Bang.

The cosmic microwave background radiation pervades the universe, and has an energy at the present epoch which corresponds to a temperature of about 2.73 degrees Kelvin. Every natural object in the universe is bathed in this radiation so, even in the absence of other energy sources such as starlight, anything colder than that will be heated by the background radiation until it reaches that temperature and comes into equilibrium. (There are a few natural processes in the universe which can temporarily create lower temperatures, but nothing below 1° K has ever been observed.) The temperature of the universe has been falling ever since the Big Bang, so no lower temperature has ever existed in the past. The only way to create a lower temperature is to expend energy in what amounts to a super-refrigerator that heats up something else in return for artificially cooling its contents. In doing so, it creates a region like none other in the known natural universe.

Whenever you explore some physical circumstance which is completely new, you never know what you're going to find, and researchers have been surprised many times in the past. Prior to 1911, nobody imagined that it was possible for an electrical current to flow with no resistance at all, and yet in early experiments with liquid helium, the phenomenon of superconductivity was discovered. In 1937, it was discovered that liquid helium could flow with zero viscosity: superfluidity. What might be discovered at temperatures a tiny fraction of those where these phenomena became manifest? Answering that question is why researchers strive to approach ever closer to the (unattainable) absolute zero. Might one of those phenomena destroy the universe? Could be: you'll never know until you try.

This is the premise of this book, which is hard science fiction but also difficult. For twenty thousand years the field of fundamental physics has found nothing new beyond the unification of quantum mechanics and general relativity called “Sarumpaet's rules” or Quantum Graph Theory (QGT). The theory explained the fabric of space and time and all of the particles and forces within it as coarse-grained manifestations of transformations of a graph at the Planck scale. Researchers at Mimosa Station, 370 light years from Earth, have built an experimental apparatus, the Quietener, to explore conditions which have never existed before in the universe and test Sarumpaet's Rules at the limits. Perhaps the currently-observed laws of physics were simply a random choice made by the universe an unimaginably short time after the Big Bang and frozen into place by decoherence due to interactions with the environment, analogous to the quantum Zeno effect. The Quietener attempts to null out every possible external influence, even gravitational waves, by carefully positioned local cancelling sources, in the hope of reproducing the conditions in which the early universe made its random choice and creating, for a fleeting instant, just trillionths of a second, a region of space with entirely different laws of physics. Sarumpaet's Rules guaranteed that this so-called novo-vacuum would quickly collapse, as it would have a higher energy and decay into the vacuum we inhabit.

Oops.

Six hundred and five years after the unfortunate event at Mimosa, the Mimosa novo-vacuum, not just stable but expanding at half the speed of light, has swallowed more than two thousand inhabited star systems and is inexorably expanding through the galaxy, transforming everything in its path to—nobody knows what. The boundary emits only an unstructured “borderlight” which provides no clue as to what lies within. Because interstellar society long ago developed the ability to create backups of individuals, run them as computer emulations, transmit them at light speed from star to star, and re-instantiate them in new bodies for fuddy-duddies demanding corporeal existence, loss of life has been minimal, but one understands how an inexorably growing sphere devouring everything in its path might be disturbing. The Rindler is a research ship racing just ahead of the advancing novo-vacuum front, providing close-up access to it for investigators trying to figure out what it conceals.

Humans (who, with their divergently-evolved descendants, biological and digitally emulated, are the only intelligent species discovered so far in the galaxy) have divided, as they remain wont to do, into two factions: Preservationists, who view the novo-vacuum as an existential threat to the universe and seek ways to stop its expansion and, ideally, recover the space it has occupied; and Yielders, who believe the novo-vacuum to be a phenomenon so unique and potentially important that destroying it before understanding its nature and what is on the other side of the horizon would be unthinkable. Also, being (post-)human, the factions are willing to resort to violence to have their way.

This leads to an adventure spanning time and space, and eventually a mission into a region where the universe is making it up as it goes along. This is one of the most breathtakingly ambitious feats of world (indeed, universe) building ever attempted in science fiction. But for this reader, it didn't work. First of all, when all of the principal characters have backups stored in safe locations and can reset, like a character in a video game with an infinite-lives cheat, whenever anything bad happens, it's difficult to create dramatic tension. Humans have transcended biological substrates, yet those still choosing them remain curiously fascinated with bumping their adaptive uglies. When we finally go and explore the unknown, it's mediated through several levels of sensors, translation, interpretation, and abstraction, so what is described comes across as something like a hundred pages of the acid trip scene at the end of 2001.

In the distance, glistening partitions, reminiscent of the algal membranes that formed the cages in some aquatic zoos, swayed back and forth gently, as if in time to mysterious currents. Behind each barrier the sea changed color abruptly, the green giving way to other bright hues, like a fastidiously segregated display of bioluminescent plankton.

Oh, wow.

And then, it stops. I don't mean ends, as that would imply that everything that's been thrown up in the air is somehow resolved. There is an attempt to close the circle with the start of the story, but a whole universe of questions is left unanswered. The human perspective is inadequate to describe a place where Planck length objects interact in Planck time intervals and the laws of physics are made up on the fly. Ultimately, the story failed for me since it never engaged me with the characters—I didn't care what happened to them. I'm a fan of hard science fiction, but this was just too adamantine to be interesting.

The title, Schild's Ladder, is taken from a method in differential geometry which is used to approximate the parallel transport of a vector along a curve.
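
For the curious, the construction itself is simple enough to sketch in a few lines of Python. This is my own flat-space illustration, not anything from the novel: on a curved manifold the three straight-line steps below would be replaced by geodesics (exponential and log maps), giving a second-order approximation to parallel transport; in flat space the transported vector comes back unchanged, a handy sanity check.

    import numpy as np

    def schilds_ladder(curve, v):
        # Transport vector v along a polyline of points by repeatedly
        # building a "ladder rung": carry the vector's tip forward, take
        # the midpoint to the next base point, and extend back through it.
        for a, b in zip(curve[:-1], curve[1:]):
            p = a + v               # tip of the vector based at a
            m = (p + b) / 2.0       # midpoint of the segment from p to b
            q = a + 2.0 * (m - a)   # extend the segment from a through m
            v = q - b               # the transported vector, now based at b
        return v

    curve = [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([2.0, 0.0])]
    print(schilds_ladder(curve, np.array([0.0, 1.0])))   # flat space: [0. 1.]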

 Permalink

Thor, Brad. Backlash. New York: Atria Books, 2019. ISBN 978-1-9821-0403-0.
This is the nineteenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). It is a very different kind of story from the last several Harvath outings, which involved high-stakes international brinkmanship, uncertain loyalties, and threats of mass terror attacks. This time it's up close and personal. Harvath, paying what may be his last visit to Reed Carlton, his dying ex-CIA mentor and employer, is the object of a violent kidnapping attack which kills those to whom he is closest and spirits him off, drugged and severely beaten, to Russia, where he is to be subjected to the hospitality of the rulers whose nemesis he has been for many years (and books) until he spills the deepest secrets of the U.S. intelligence community.

After Harvath is spirited out of the U.S., the Russian cargo plane transporting him to the rendition resort where he is to be “de-briefed” crashes, leaving him…somewhere. About all he knows is that it's cold, that nobody knows where he is or that he is alive, and that he has no way to contact anybody, anywhere, who might help.

This is a spare, stark tale of survival. Starting only with what he can salvage from the wreck of the plane and the bodies of its crew (some of whom he had to assist in becoming casualties), he must overcome the elements, predators (quadrupedal and bipedal), terrain, and uncertainty about his whereabouts and the knowledge and intentions of his adversaries, to survive and escape.

Based upon what has been done to him, it is also a tale of revenge. To Harvath, revenge was not a low state: it was a necessity,

In his world, you didn't let wrongs go unanswered—not wrongs like this, and especially when you had the ability to do something. Vengeance was a necessary function of a civilized world, particularly at its margins, in its most remote and wild regions. Evildoers, unwilling to submit to the rule of law, needed to lie awake in their beds at night worried about when justice would eventually come for them. If laws and standards were not worth enforcing, then they certainly couldn't be worth following.

Harvath forms tenuous alliances with those he encounters, and then must confront an all-out assault by élite mercenaries who, apparently unsatisfied with the fear induced by fanatic Russian operatives, model themselves on the Nazi SS.

Then, after survival, it's time for revenge. Harvath has done his biochemistry homework and learned well the off-label applications of suxamethonium chloride. Sux to be you, Boris.

This is a tightly-crafted thriller which is, in my opinion, one of the best of Brad Thor's novels. There is no political message or agenda, nor any of the Washington intrigue which has occupied recent books. Here it is a pure struggle between a resourceful individual, on his own against amoral forces of pure evil, in an environment as deadly as his human adversaries.

 Permalink

Dick, Philip K. The Man in the High Castle. New York: Mariner Books, [1962] 2011. ISBN 978-0-547-57248-2.
The year is 1962. Following the victory of Nazi Germany and Imperial Japan in World War II, North America is divided into spheres of influence by the victors, with the west coast Pacific States of America controlled by Japan, the territory east of the Mississippi split north and south between what is still called the United States of America and the South, where slavery has been re-instituted, both puppet states of Germany. In between are the Rocky Mountain states, a buffer zone between the Japanese and German sectors with somewhat more freedom from domination by them.

The point of departure where this alternative history diverges from our timeline is in 1934, when Franklin D. Roosevelt is assassinated in Miami, Florida. (In our history, Roosevelt was uninjured in an assassination attempt in Miami in 1933 that killed the mayor of Chicago, Anton Cermak.) Roosevelt's vice president, John Nance Garner, succeeds to the presidency and is re-elected in 1936. In 1940, the Republican party retakes the White House, with John W. Bricker elected president. Garner and Bricker pursue a policy of strict neutrality and isolation, which allows Germany, Japan, and Italy to divide up most of the world and coerce other nations into becoming satellites or client states. Then Japan and Germany mount simultaneous invasions of the east and west coasts of the U.S., resulting in a surrender in 1947 and the present division of the continent.

By 1962, the victors are secure in their domination of the territories they have subdued. Germany has raced ahead economically and in technology, draining the Mediterranean to create new farmland, landing on the Moon and Mars, and establishing high-speed suborbital rocket transportation service throughout their far-flung territories. There is no serious resistance to the occupation in the former United States: its residents seem to be more or less resigned to second-class status under their German or Japanese overlords.

In the Pacific States the Japanese occupiers have settled into a comfortable superiority over the vanquished, and many have become collectors of artefacts of the vanished authentic America. Robert Childan runs a shop in San Francisco catering to this clientèle, and is contacted by an official of the Japanese Trade Mission seeking a gift to impress a visiting Swedish industrialist. This leads into a maze of complexity where nothing is as it seems, of the sort only Philip K. Dick (PKD) can craft. Is the Swede really a Swede or a German, and is he a Nazi agent or something else? Who is the mysterious Japanese visitor he has come to San Francisco to meet? Is Childan a supplier of rare artefacts or a swindler exploiting gullible Japanese rubes with fakes?

Many characters in the book are reading a novel called The Grasshopper Lies Heavy, banned in areas under German occupation but available in the Pacific States and other territories, which is an alternative history tale written by an elusive author named Hawthorne Abendsen, about a world in which the Allies defeated Germany and Japan in World War II and ushered in a golden age of peace, prosperity, and freedom. Abendsen is said to have retreated to a survivalist compound called the High Castle in the Rocky Mountain states. Characters we meet become obsessed with tracking down and meeting Abendsen. Who are they, and what are their motives? Keep reminding yourself, this is a PKD novel! We're already dealing with a fictional mysterious author of an alternative history of World War II within an alternative history novel of World War II by an author who is himself a grand illusionist.

It seems like everybody in the Pacific States, regardless of ethnicity or nationality, is obsessed with the I Ching. They are constantly consulting “the oracle” and basing their decisions upon it. Not just the westerners but even the Japanese are a little embarrassed by this, as the latter are aware that it is an invention of the Chinese, whom they view as inferior, yet they rely upon it none the less. Again, the PKD shimmering reality distortion field comes into play as the author says that he consulted the I Ching to make decisions while plotting the novel, as does Hawthorne Abendsen in writing the novel within the novel.

This is quintessential PKD: the story is not so much about what happens (indeed, there is little resolution of any of the obvious conflicts in the circumstances of the plot) but rather instilling in the reader a sense that nothing is what it appears to be and, at the meta (or meta meta) level, that our history and destiny are ruled as much by chance (exemplified here by the I Ching) as by our intentions, will, and actions. At the end of the story, little or nothing has been resolved, and we are left only with questions and uncertainty. (PKD said that he intended a sequel, but despite efforts in that direction, never completed one.)

I understand that some kind of television adaptation loosely based upon the novel has been produced by one of those streaming services which are only available to people who live in continental-scale, railroad-era, legacy empires. I have not seen it, and have no interest in doing so. PKD is notoriously difficult to adapt to visual media, and today's Hollywood is, shall we say, not strong on nuance and ambiguity, which is what his fiction is all about.

Nuance and ambiguity…. Here's the funny thing. When I finished this novel, I was unimpressed and disappointed. I expected it to be great: I have enjoyed the fiction of PKD since I started to read his stories in the 1960s, and this novel won the Hugo Award for Best Novel in 1963, then the highest honour in science fiction. But the story struck me as only an exploration of a tiny corner of this rich alternative history. Little of what happens affects events in the large and, if it did, only long after the story ends. It was only while writing this that I appreciated that this may have been precisely what PKD was trying to achieve: that this is all about the contingency of history—that random chance matters much more than what we, or “great figures”, do, and that the best we can hope for is to try to do what we believe is right when presented with the circumstances and events that confront us as we live our lives. I have no idea if you'll like this. I thought I would, and then I didn't, and now, in retrospect, I do. Welcome to the fiction of Philip K. Dick.

 Permalink

Rothbard, Murray. What Has Government Done to Our Money? Auburn, AL: Ludwig von Mises Institute, [1963, 1985, 1990, 2010] 2015. ISBN 978-1-61016-645-4.
This slim book (just 119 pages of main text in this edition) was originally published in 1963, when the almighty gold-backed United States dollar was beginning to crack up under the pressure of relentless deficit spending and money printing by the Federal Reserve. Two years later, as the crumbling of the edifice accelerated, amidst a miasma of bafflegab about fantasies such as a “silver shortage” by Keynesian economists and other charlatans, the Coinage Act of 1965 would eliminate silver from most U.S. coins, replacing them with counterfeit slugs craftily designed to fool vending machines into accepting them. (The little-used half dollar had its silver content reduced from 90% to 40%, and would be silverless after 1970.) In 1968, the U.S. Treasury would default upon its obligation to redeem paper silver certificates in silver coin or bullion, breaking the link between the U.S. currency and precious metal entirely.

All of this was precisely foreseen in this clear-as-light exposition of monetary theory and forty centuries of government folly by libertarian thinker and Austrian School economist Murray Rothbard. He explains the origin of money as societies progress from barter to indirect exchange, why most (but not all) cultures have settled on precious metals such as gold and silver as a medium of intermediate exchange (they do not deteriorate over time, can be subdivided into arbitrarily small units, and are relatively easy to check for authenticity). He then describes the sorry progression by which those in authority seize control over this free money and use it to fleece their subjects. First, they establish a monopoly over the ability to coin money, banning private mints and the use of any money other than their own coins (usually adorned with a graven image of some tyrant or another). They give this coin and its subdivisions a name, such as “dollar”, “franc”, “mark” or some such, which is originally defined as a unit of mass of some precious metal (for example, the U.S. dollar, prior to its debasement, was defined as 23.2 grains [1.5033 grams, or about 1/20 troy ounce] of pure gold). (Rothbard, as an economist rather than a physicist, and one working in English customary units, confuses mass with weight throughout the book. They aren't the same thing, and the quantity of gold in a coin doesn't vary depending on whether you weigh it at the North Pole or the summit of Chimborazo.)
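
As a quick check of those figures (my own arithmetic, not Rothbard's), the grain-to-gram and grain-to-troy-ounce conversions work out as stated:

    GRAMS_PER_GRAIN = 0.06479891      # exact definition: 1 grain = 64.79891 mg
    GRAINS_PER_TROY_OUNCE = 480

    grains_of_gold = 23.2             # pure gold content of the pre-debasement dollar
    print(grains_of_gold * GRAMS_PER_GRAIN)         # 1.5033... grams
    print(grains_of_gold / GRAINS_PER_TROY_OUNCE)   # 0.0483..., about 1/20 troy ounce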

Next, the rulers separate the concept of the unit of money from the mass of precious metal which it originally defined. Key tools in this are legal tender laws, which require all debts to be settled in the state-defined monetary unit. This opens the door to debasement of the currency: replacing coins bearing the same unit of money with replacements containing less precious metal. In ancient Rome, the denarius originally contained around 4.5 grams of pure silver. By the third century A.D., its silver content had been reduced to about 2%, and the coin was intrinsically almost worthless. Of course, people aren't stupid, and when the new debased coins show up, they will save the old, more valuable ones, and spend the new phoney money. This phenomenon is called “Gresham's law”, by which bad money drives out good. But it is entirely the result of a coercive government requiring its subjects to honour a monetary unit which it has arbitrarily reduced in intrinsic value.

This racket has been going on since antiquity, but as the centuries have passed, it has become ever more sophisticated and effective. Rothbard explains the origin of paper money, first as what were essentially warehouse receipts for real money (precious metal coins or bullion stored by its issuer and payable on demand), then increasingly abstract assets “backed” by only a fraction of the total value in circulation, and finally, with the advent of central banking, a fiction totally under the control of those who print the paper and their political masters. The whole grand racket of fractional reserve banking and the government inflationary engine it enables is explained in detail.

In the 1985 expanded edition, Rothbard adds a final twenty-page chapter chronicling “The Monetary Breakdown of the West”, a tragedy in nine acts beginning with the classical gold standard of 1815–1914 and ending with the total severing of world currencies from any anchor to gold in March, 1973, ushering in the monetary chaos of endlessly fluctuating exchange rates, predatory currency manipulation, and a towering (and tottering) pyramid of completely unproductive financial speculation. He then explores the monetary utopia envisioned by the economic slavers: a world paper currency managed by a World Central Bank. There would no longer be any constraint upon the ability of those in power to pick the pockets of their subjects by depreciating the unit of account of the only financial assets they were permitted to own. Of course, this would lead to a slow-motion catastrophe, destroying enterprise, innovation, and investment, pauperising the population, and leading inevitably to civil unrest and demagogic political movements. Rothbard saw all of this coming, and those of us who understood his message knew exactly what was going to happen when the Euro and a European Central Bank, agreed at Maastricht in 1991, were rolled out: just a regional version of the same Big Con.

This book remains, if I dare say, the gold standard when it comes to a short, lucid, and timeless explanation of monetary theory, history, the folly of governments, and its sad consequences. Is there any hope of restoring sanity in this age of universal funny money? Perhaps—the same technology which permits the establishment of cryptocurrencies such as Bitcoin radically reduces the transaction costs of using any number of competing currencies in a free market. While Gresham's Law holds that in a coercive un-free market bad money will drive out good, in a totally free market, where participants are able to use any store of value, unit of account, and medium of exchange they wish (free of government coercion through legal tender laws or taxation of currency exchanges), the best money will drive out its inferior competitors, and the quality of a given money will be evaluated based upon the transparency of its issuer and its performance for those who use it.

This book may be purchased from Amazon in either a print or Kindle edition, and is also available for free from the publisher, the Ludwig von Mises Institute, in HTML, PDF, and EPUB formats or as an audio book. The PDF edition is available in the English, Spanish, Danish, and Hungarian languages. The book is published under the Creative Commons Attribution License 3.0 and may be redistributed pursuant to the terms of that license.

 Permalink

Brennan, Gerald. Island of Clouds. Chicago: Tortoise Books, 2017. ISBN 978-0-9860922-9-9.
This is the third book, and the first full-length novel, in the author's “Altered Space” series of alternative histories of the cold war space race. Each stand-alone story explores a space mission which did not take place, but could have, given the technology and political circumstances at the time. The first, Zero Phase (October 2016), asks what might have happened had Apollo 13's service module oxygen tank waited to explode until after the lunar module had landed on the Moon. The present book describes a manned Venus fly-by mission performed in 1972 using modified Apollo hardware launched by a single Saturn V.

“But, wait…”, you exclaim, “that's crazy!” Why would you put a crew of three at risk for a mission lasting a full year for just a few minutes of close-range fly-by of a planet whose surface is completely obscured by thick clouds? Far from Earth, any failure of their life support systems, spacecraft systems, a medical emergency, or any number of other mishaps could kill them; they'd be racking up a radiation dose from cosmic rays and solar particle emissions every day of the mission; and the inexorable laws of orbital mechanics would provide them no option to come home early if something went wrong.

Well, crazy it may have been, but in the mid-1960s, precisely such a mission was the subject of serious study by NASA and its contractors as a part of the Apollo Applications Program planned to follow the Apollo lunar landings. Here is a detailed study of a manned Venus flyby [PDF] by NASA contractor Bellcomm, Inc. from February 1967. In addition to observing Venus during the brief fly-by, the astronauts would deploy multiple robotic probes which would explore the atmosphere and surface of Venus and relay their findings either via the manned spacecraft or directly to Earth.

It was still crazy. For a tiny fraction of the cost of a Saturn V, Apollo spacecraft, and all the modifications and new development to support such a long-term mission, and at no risk to humans, an armada of robotic probes could have been launched on smaller, far less expensive rockets such as Delta, Atlas, and Titan, which would have returned all of the science proposed for the manned fly-by and more. But in the mid-sixties, with NASA's budget reaching 4% of all federal spending, a level by that metric eight times higher than in recent years, NASA was “feeling its oats” and planning as if the good times were just going to roll on forever.

In this novel, they did. After his re-election in 1968, in which Richard Nixon and George Wallace split the opposition vote, and the triumphant Moon landing by Ed White and Buzz Aldrin, President Johnson opts to keep the momentum of Apollo going and uses his legendary skills in getting what he wants from Congress to secure the funds for a Venus fly-by in 1972. Deke Slayton chooses his best friend, just back from the Moon, Alan Shepard, to command the mission, with the second man on the Moon, Buzz Aldrin, and astronaut-medical doctor Joe Kerwin filling out the crew. Aldrin is sorely disappointed at not being given command, but accepts the assignment for the adventure and the opportunity to get back into the game after the post-flight let-down of returning from the Moon to a desk job.

The mission in the novel is largely based upon the NASA plans from the 1960s with a few modifications to simplify the story (for example, the plan to re-fit the empty third stage of the Saturn V booster as living quarters for the journey, as was also considered in planning for Skylab, is replaced here by a newly-developed habitation module launched by the Saturn V in place of the lunar module). There are lots of other little departures from the timeline in our reality, many just to remind the reader that this is a parallel universe.

After the mission gets underway, a number of challenges confront the crew: the mission hardware, the space environment, one another, and the folks back on Earth. The growing communication delay as the distance from Earth increases poses difficulties no manned spaceflight crew has had to deal with before. And then, one of those things that can happen in space (and could have occurred on any of the Apollo lunar missions) happens, and the crew is confronted by existential problems on multiple fronts, must make difficult and unpleasant decisions, and draw on their own resources, ingenuity, and courage to survive.

This is a completely plausible story which, had a few things gone the other way, could have happened in the 1970s. The story is narrated by Buzz Aldrin, which kind of lets you know at least he got back from the mission. The characters are believable, consistent with what we know of their counterparts in our reality, and behave as you'd expect from such consummate professionals under stress. I have to say, however, as somebody who has occasionally committed science fiction, that I would be uncomfortable writing a story in which characters based upon, and bearing the names of, people in the real world, two of whom are alive at this writing, have their characters and personal lives bared to the extent they are in this fiction. In the first book in the series, Zero Phase, Apollo 13 commander James Lovell, whose fictional incarnation narrates the story, read and endorsed the manuscript before publication. I was hoping to find a similar note in this novel, but it wasn't there. These are public figures, and there's nothing unethical or improper about portraying figures based upon them in an alternative history narrative behaving as the author wishes, and the story works very well. I'm just saying I wouldn't have done it that way without clearing it with the individuals involved.

The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

August 2019

Taleb, Nassim Nicholas. Skin in the Game. New York: Random House, 2018. ISBN 978-0-425-28462-9.
This book is volume four in the author's Incerto series, following Fooled by Randomness (February 2011), The Black Swan (January 2009), and Antifragile (April 2018). In it, he continues to explore the topics of uncertainty, risk, decision making under such circumstances, and how both individuals and societies winnow out what works from what doesn't in order to choose wisely among the myriad alternatives available.

The title, “Skin in the Game”, is an aphorism which refers to an individual's sharing the risks and rewards of an undertaking in which they are involved. This is often applied to business and finance, but it is, as the author demonstrates, a very general and powerful concept. An airline pilot has skin in the game along with the passengers. If the plane crashes and kills everybody on board, the pilot will die along with them. This ensures that the pilot shares the passengers' desire for a safe, uneventful trip and inspires confidence among them. A government “expert” putting together a “food pyramid” to be vigorously promoted among the citizenry and enforced upon captive populations such as school children or members of the armed forces has no skin in the game. If his or her recommendations create an epidemic of obesity, type 2 diabetes, and cardiovascular disease, that probably won't happen until after the “expert” has retired and, in any case, civil servants are not fired or demoted based upon the consequences of their recommendations.

Ancestral human society was all about skin in the game. In a small band of hunter/gatherers, everybody can see and is aware of the actions of everybody else. Slackers who do not contribute to the food supply are likely to be cut loose to fend for themselves. When the hunt fails, nobody eats until the next kill. If a conflict develops with a neighbouring band, those who decide to fight instead of running away or surrendering are in the front line of the battle and will be the first to suffer in case of defeat.

Nowadays we are far more “advanced”. As the author notes, “Bureaucracy is a construction by which a person is conveniently separated from the consequences of his or her actions.” As populations have exploded, layers and layers of complexity have been erected, removing authority ever farther from those under its power. We have built mechanisms which have immunised a ruling class of decision makers from the consequences of their decisions: they have little or no skin in the game.

Less than a third of all Roman emperors died in their beds. Even though they were at the pinnacle of the largest and most complicated empire in the West, they regularly paid the ultimate price for their errors either in battle or through palace intrigue by those dissatisfied with their performance. Today the geniuses responsible for the 2008 financial crisis, which destroyed the savings of hundreds of millions of innocent people and picked the pockets of blameless taxpayers to bail out the institutions they wrecked, not only suffered no punishment of any kind, but in many cases walked away with large bonuses or golden parachute payments and today are listened to when they pontificate on the current scene, rather than being laughed at or scorned as they would be in a rational world. We have developed institutions which shift the consequences of bad decisions from those who make them to others, breaking the vital feedback loop by which we converge upon solutions which, if not perfect, at least work well enough to get the job done without the repeated catastrophes that result from ivory tower theories being implemented on a grand scale in the real world.

Learning and Evolution

Being creatures who have evolved large brains, we're inclined to think that learning is something that individuals do, by observing the world, drawing inferences, testing hypotheses, and taking on knowledge accumulated by others. But the overwhelming majority of creatures who have ever lived, and of those alive today, do not have large brains—indeed, many do not have brains at all. How have they learned to survive and proliferate, filling every niche on the planet where environmental conditions are compatible with biochemistry based upon carbon atoms and water? How have they, over the billions of years since life arose on Earth, inexorably increased in complexity, most recently producing a species with a big brain able to ponder such questions?

The answer is massive parallelism, exhaustive search, selection for survivors, and skin in the game, or, putting it all together, evolution. Every living creature has skin in the ultimate game of whether it will produce offspring that inherit its characteristics. Every individual is different, and the process of reproduction introduces small variations in progeny. Change the environment, and the characteristics of those best adapted to reproduce in it will shift and, eventually, the population will consist of organisms adapted to the new circumstances. The critical thing to note is that while each organism has skin in the game, many may, and indeed must, lose the game and die before reproducing. The individual organism does not learn, but the species does and, stepping back another level, the ecosystem as a whole learns and adapts as species appear, compete, die out, or succeed and proliferate. This simple process has produced all of the complexity we observe in the natural world, and it works because every organism and species has skin in the game: its adaptation to its environment has immediate consequences for its survival.

None of this is controversial or new. What the author has done in this book is to apply this evolutionary epistemology to domains far beyond its origins in biology—in fact, to almost everything in the human experience—and demonstrate that both success and wisdom are generated when this process is allowed to work, but failure and folly result when it is thwarted by institutions which take the skin out of the game.

How does this apply in present-day human society? Consider one small example of a free market in action. The restaurant business is notoriously risky. Restaurants come and go all the time, and most innovations in the business fall flat on their face and quickly disappear. And yet most cities have, at any given time, a broad selection of restaurants with a wide variety of menus, price points, ambiance, and service to appeal to almost any taste. Each restaurant has skin in the game: those which do not attract sufficient customers (or, having once been successful, fail to adapt when customers' tastes change) go out of business and are replaced by new entrants. And yet for all the churning and risk to individual restaurants, the restaurant “ecosystem” is remarkably stable, providing customers options closely aligned with their current desires.

To a certain kind of “expert” endowed with a big brain (often crammed into a pointy head), found in abundance around élite universities and government agencies, all of this seems messy, chaotic, and (the horror!) inefficient. Consider the money lost when a restaurant fails, the cooks and waiters who lose their jobs, having to find a new restaurant to employ them, the vacant building earning nothing for its owner until a new tenant is found—certainly there must be a better way. Why, suppose instead we design a standardised set of restaurants based upon a careful study of public preferences, then roll out this highly-optimised solution to the problem. They might be called “public feeding centres”. And they would work about as well as the name implies.

Survival and Extinction

Evolution ultimately works through extinction. Individuals who are poorly adapted to their environment (or, in a free market, companies which poorly serve their customers) fail to reproduce (or, in the case of a company, to survive and expand). This leaves a population better adapted to its environment. When the environment changes, or a new innovation appears (for example, electricity in an age dominated by steam power), a new sorting out occurs which may see the disappearance of long-established companies that failed to adapt to the new circumstances. It is a tautology that the current population consists entirely of survivors, but there is a deep truth within this observation which is at the heart of evolution. As long as there is a direct link between performance in the real world and survival—skin in the game—evolution will work to continually optimise and refine the population as circumstances change.

This evolutionary process works just as powerfully in the realm of ideas as in biology and commerce. Ideas have consequences, and for the process of selection to function, those consequences, good or ill, must be borne by those who promulgate the idea. Consider inventions: an inventor who creates something genuinely useful and brings it to market (recognising that there are many possible missteps and opportunities for bad luck or timing to disrupt this process) may reap great rewards which, in turn, will fund elaboration of the original invention and development of related innovations. The new invention may displace existing technologies and cause them, and those who produce them, to become obsolete and disappear (or be relegated to a minor position in the market). Both the winner and loser in this process have skin in the game, and the outcome of the game is decided by the evaluation of the customers expressed in the most tangible way possible: what they choose to buy.

Now consider an academic theorist who comes up with some intellectual “innovation” such as “Modern Monetary Theory” (which basically says that a government can print as much paper money as it wishes to pay for what it wants without collecting taxes or issuing debt as long as full employment has not been achieved). The theory and the reputation of those who advocate it are evaluated by their peers: other academics and theorists employed by institutions such as national treasuries and central banks. Such a theory is not launched into a market to fend for itself among competing theories: it is “sold” to those in positions of authority and imposed from the top down upon an economy, regardless of the opinions of those participating in it. Now, suppose the brilliant new idea is implemented and results in, say, total collapse of the economy and civil society? What price do those who promulgated the theory and implemented it pay? Little or nothing, compared to the misery of those who lost their savings, jobs, houses, and assets in the calamity. Many of the academics will have tenure and suffer no consequences whatsoever: they will refine the theory, or else publish erudite analyses of how the implementation was flawed and argue that the theory “has never been tried”. Some senior officials may be replaced, but will doubtless land on their feet and continue to pull down large salaries as lobbyists, consultants, or pundits. The bureaucrats who patiently implemented the disastrous policies are civil servants: their jobs and pensions are as eternal as anything in this mortal sphere. And, before long, another bright, new idea will bubble forth from the groves of academe.

(If you think this hypothetical example is unrealistic, see the career of one Robert Rubin. “Bob”, during his association with Citigroup between 1999 and 2009, received total compensation of US$126 million for his “services” as a director, advisor, and temporary chairman of the bank, during which time he advocated the policies which eventually brought it to the brink of collapse in 2008 and vigorously fought attempts to regulate the financial derivatives which eventually triggered the global catastrophe. During his tenure at Citigroup, shareholders of its stock lost 70% of their investment, and eventually the bank was bailed out by the federal government using money taken by coercive taxation from cab drivers and hairdressers who had no culpability in creating the problems. Rubin walked away with his “winnings” and paid no price, financial, civil, or criminal, for his actions. He is one of the many poster boys and girls for the “no skin in the game club”. And lest you think that, chastened, the academics and pointy-heads in government would regain their grounding in reality, I have just one phrase for you, “trillion dollar coin”, which “Nobel Prize” winner Paul Krugman declared to be “the most important fiscal policy debate of our lifetimes”.)

Intellectual Yet Idiot

A cornerstone of civilised society, dating from at least the Code of Hammurabi (c. 1754 B.C.), is that those who create risks must bear those risks: an architect whose building collapses and kills its owner is put to death. This is the fundamental feedback loop which enables learning. When it is broken, when those who create risks (academics, government policy makers, managers of large corporations, etc.) are able to transfer those risks to others (taxpayers, those subject to laws and regulations, customers, or the public at large), the system does not learn; evolution breaks down; and folly runs rampant. This phenomenon is manifested most obviously in the modern proliferation of the affliction the author calls the “intellectual yet idiot” (IYI). These are people who are evaluated by their peers (other IYIs), not tested against the real world. They are the equivalent of a list of movies chosen based upon the opinions of high-falutin' snobbish critics as opposed to box office receipts. They strive for the approval of others like themselves and, inevitably, spiral into ever more abstract theories disconnected from ground truth, ascending ever higher into the sky.

Many IYIs achieve distinction in one narrow field and then assume that qualifies them to pronounce authoritatively on any topic whatsoever. As was said by biographer Roy Harrod of John Maynard Keynes,

He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,

Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains.

The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilister”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses dare to push back, having a limited tolerance for fatuous nonsense, or for being scolded by those who have been consistently wrong about, well, everything, and dare to vote for candidates and causes which make sense to them and seem better aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few.

One of the reasons we seem to have many more IYIs around than we used to, and that they have more influence over our lives, is related to scaling. As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them against the blandishments of IYIs.

The Lindy Effect

How can you sift the good and the enduring from the mass of ephemeral fads and bad ideas that swirl around us every day? The Lindy effect is a powerful tool. Lindy's delicatessen in New York City was a favoured hangout for actors, who observed that the amount of time a show had been running on Broadway was the best predictor of how long it would continue to run. A show that has run for three months will probably last for at least three months more. A show that has made it to the one year mark probably has another year or more to go. In other words, the best test for whether something will stand the test of time is whether it has already withstood the test of time. This may, at first, seem counterintuitive: a sixty-year-old person has a shorter expected lifespan remaining than a twenty-year-old. The Lindy effect applies only to nonperishable things such as “ideas, books, technologies, procedures, institutions, and political systems”.

Thus, a book which has been in print continuously for a hundred years is likely to be in print a hundred years from now, while this season's hot best-seller may be forgotten a few years hence. The latest political or economic theory filling up pages in the academic journals and coming onto the radar of the IYIs in the think tanks, media punditry, and (shudder) government agencies is likely to be forgotten and/or discredited in a few years, while those with a pedigree of centuries or millennia continue to work for those more interested in results than trendiness.

Religion is Lindy. If you disregard all of the spiritual components to religion, long-established religions are powerful mechanisms to transmit accumulated wisdom, gained through trial-and-error experimentation and experience over many generations, in a ready-to-use package for people today. One disregards or scorns this distilled experience at one's own great risk. Conversely, one should be as sceptical about “innovation” in ancient religious traditions and brand-new religions as one is of shiny new ideas in any other field.

(A few more technical notes…. As I keep saying, “Once Pareto gets into your head, you'll never get him out.” It's no surprise to find that the Lindy effect is deeply related to the power-law distribution of many things in human experience. It's simply another way to say that the lifetime of nonperishable goods is distributed according to a power law just like incomes, sales of books, music, and movie tickets, use of health care services, and commission of crimes. Further, the Lindy effect is similar to J. Richard Gott's Copernican statement of the Doomsday argument, with the difference that Gott provides lower and upper bounds on survival time for a given confidence level predicted solely from a random observation that something has existed for a known time.)
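
To make Gott's formulation concrete, here is a minimal sketch (my own, not from the book): if we assume we are observing at a random moment in a thing's total lifetime, the fraction already elapsed is uniformly distributed, and bounds on the remaining lifetime follow directly. At 95% confidence this gives the familiar interval from 1/39 to 39 times the observed age.

    def gott_bounds(age, confidence=0.95):
        # With probability `confidence` we are observing neither within the
        # first nor the last `tail` fraction of the total lifetime, so the
        # remaining time t satisfies:
        #     age * tail/(1 - tail)  <  t  <  age * (1 - tail)/tail
        tail = (1.0 - confidence) / 2.0
        return age * tail / (1.0 - tail), age * (1.0 - tail) / tail

    # A book continuously in print for a hundred years: between about 2.6
    # and 3900 more years, with 95% confidence.
    print(gott_bounds(100))   # (2.564..., 3900.0)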

Uncertainty, Risk, and Decision Making

All of these observations inform how we deal with risk and make decisions based upon uncertain information. The key insight is that in order to succeed, you must first survive. This may seem so obvious as to not be worth stating, but many investors, including those responsible for blow-ups which make the headlines and take many others down with them, forget this simple maxim. It is deceptively easy to craft an investment strategy which will yield modest, reliable returns year in and year out—until it doesn't. Such strategies tend to be vulnerable to “tail risks”, in which an infrequently-occurring event (such as 2008) can bring down the whole house of cards and wipe out the investor and the fund. Once you're wiped out, you're out of the game: you're like the loser in a Russian roulette tournament who, after the gun goes off, has no further worries about the probability of that event. Once you accept that you will never have complete information about a situation, you can begin to build a strategy which will prevent your blowing up under any set of circumstances, and may even allow you to profit from volatility. This is discussed in more detail in the author's earlier Antifragile.

The Silver Rule

People and institutions who have skin in the game are likely to act according to the Silver Rule: “Do not do to others what you would not like them to do to you.” This rule, combined with putting the skin of those “defence intellectuals” sitting in air-conditioned offices into the games they launch in far-off lands around the world, would do much to spare the lives and suffering of the young men and women they send to do their bidding.

 Permalink

Shlaes, Amity. Coolidge. New York: Harper Perennial, [2013] 2014. ISBN 978-0-06-196759-7.
John Calvin Coolidge, Jr. was born in 1872 in Plymouth Notch, Vermont. His family were among the branch of the Coolidge clan who stayed in Vermont while others left its steep, rocky, and often bleak land for opportunity in the Wild West of Ohio and beyond when the Erie Canal opened up these new territories to settlement. His father and namesake made his living by cutting wood, tapping trees for sugar, and small-scale farming on his modest plot of land. He diversified his income by operating a general store in town and selling insurance. There was a long tradition of public service in the family. Young Coolidge's great-grandfather was an officer in the American Revolution and his grandfather was elected to the Vermont House of Representatives. His father was justice of the peace and tax collector in Plymouth Notch, and would later serve in the Vermont House of Representatives and Senate.

Although many in the cities would have considered their rural life, far from the nearest railroad terminal, hard-scrabble, the family was sufficiently prosperous to pay for young Calvin (the name he went by from boyhood) to attend private schools, boarding with families in the towns where they were located and infrequently returning home. He followed a general college preparatory curriculum and, after failing the entrance examination the first time, was admitted on his second attempt to Amherst College as a freshman in 1891. A loner, already with a reputation for being taciturn, he joined none of the fraternities to which his classmates belonged, nor did he participate in the athletics which were a part of college life. He quickly perceived that Amherst had a class system, where the scions of old-money families from Boston who had supported the college were elevated above nobodies from the boonies like himself. He concentrated on his studies, mastering Greek and Latin, and immersing himself in the works of the great orators of those cultures.

As his college years passed, Coolidge became increasingly interested in politics, joined the college Republican Club, and worked on the 1892 re-election campaign of Benjamin Harrison, whose Democrat opponent, Grover Cleveland, was seeking to regain the presidency he had lost to Harrison in 1888. Writing to his father after Harrison's defeat, his analysis was that “the reason seems to be in the never satisfied mind of the American and in the ever desire to shift in hope of something better and in the vague idea of the working and farming classes that somebody is getting all the money while they get all the work.”

His confidence growing, Coolidge began to participate in formal debates and finally, in his senior year, joined a fraternity and ran for, and won, the honour of being an orator at his class's graduation. He worked hard on the speech, which was a great success, keeping his audience engaged and frequently laughing at his wit. While still quiet in one-on-one settings, he enjoyed public speaking and connecting with an audience.

After graduation, Coolidge decided to pursue a career in the law and considered attending law school at Harvard or Columbia University, but decided he could not afford the tuition, as he was still being supported by his father and had no prospects for earning sufficient money while studying the law. In that era, most states did not require a law school education; an aspiring lawyer could, instead, become an apprentice at an established law firm and study on his own, a practice called reading the law. Coolidge became an apprentice at a firm in Northampton, Massachusetts run by two Amherst graduates and, after two years, in 1897, passed the Massachusetts bar examination and was admitted to the bar. In 1898, he set out on his own and opened a small law office in Northampton; he had embarked on the career of a country lawyer.

While developing his law practice, Coolidge followed in the footsteps of his father and grandfather and entered public life as a Republican, winning election to the Northampton City Council in 1898. In the following years, he held the offices of City Solicitor and county clerk of courts. In 1903 he married Grace Anna Goodhue, a teacher at the Clarke School for the Deaf in Northampton. The next year, running for the local school board, he suffered the only defeat of his political career, in part because his opponents pointed out he had no children in the schools. Coolidge said, “Might give me time.” (The Coolidges went on to have two sons, John, born in 1906, and Calvin Jr., in 1908.)

In 1906, Coolidge sought state office for the first time, running for the Massachusetts House of Representatives and narrowly defeating the Democrat incumbent. He was re-elected the following year, but declined to run for a third term, returning to Northampton, where he ran for mayor, won, and served two one-year terms. In 1912 he ran for the State Senate seat of the retiring Republican incumbent and won. In the presidential election of that year, when the Republican party split between the traditional wing favouring William Howard Taft and progressives backing Theodore Roosevelt, Coolidge, although identified as a progressive, having supported women's suffrage and the direct election of federal senators, among other causes, stayed with the Taft Republicans and won re-election. Coolidge sought a third term in 1914 and won, and was named President of the State Senate, with substantial influence on legislation in that body.

In 1915, Coolidge moved further up the ladder by running for the office of Lieutenant Governor of Massachusetts, balancing the Republican ticket, led by a gubernatorial candidate from the east of the state, with his own base of support in the rural west. In Massachusetts, the Lieutenant Governor does not preside over the State Senate, but rather fulfils an administrative role, chairing executive committees. Coolidge presided over the finance committee, which provided him experience in managing a budget and dealing with competing demands from departments that was to prove useful later in his career. After being re-elected to the office in 1916 and 1917 (statewide offices in Massachusetts at the time had a term of only one year), with the governor announcing his retirement, Coolidge was unopposed for the Republican nomination for governor and narrowly defeated the Democrat in the 1918 election.

Coolidge took office at a time of great unrest between industry and labour. Prices in 1918 had doubled from their 1913 level; nothing of the kind had happened since the paper money inflation during the Civil War and its aftermath. Nobody seemed to know why: the rise was usually attributed to the war, but nobody traced the cause and effect. There doesn't seem to have been a single mainstream voice who observed that the rapid rise in prices (which was really a depreciation of the dollar) began precisely at the moment the Creature from Jekyll Island was unleashed upon the U.S. economy and banking system. What was obvious, however, was that in most cases industrial wages had not kept pace with the rise in the cost of living, and that large companies which had raised their prices had not correspondingly increased what they paid their workers. This gave a powerful boost to the growing union movement. In early 1919 an ugly general strike in Seattle idled workers across the city, and the United Mine Workers threatened a nationwide coal strike for November 1919, just as the maximum winter demand for coal would arrive. In Boston, police officers voted to unionise and affiliate with the American Federation of Labor, ignoring an order from the Police Commissioner forbidding officers to join a union. On September 9th, a majority of policemen defied the order and walked off the job.

Those who question the need for a police presence on the street in big cities should consider the Boston police strike as a cautionary tale, at least as things were in the city of Boston in the year 1919. As the Sun went down, the city erupted in chaos, mayhem, looting, and violence. A streetcar conductor was shot for no apparent reason. There were reports of rapes, murders, and serious injuries. The next day, more than a thousand residents applied for gun permits. Downtown stores were boarding up their display windows and hiring private security forces. Telephone operators and employees at the electric power plant threatened to walk out in sympathy with the police. From Montana, where he was campaigning in favour of ratification of the League of Nations treaty, President Woodrow Wilson issued a mealy-mouthed statement saying, “There is no use in talking about political democracy unless you have also industrial democracy”.

Governor Coolidge acted swiftly and decisively. He called up the Guard and deployed them throughout the city, fired all of the striking policemen, and issued a statement saying “The action of the police in leaving their posts of duty is not a strike. It is a desertion. … There is nothing to arbitrate, nothing to compromise. In my personal opinion there are no conditions under which the men can return to the force.” He directed the police commissioner to hire a new force to replace the fired men. He publicly rebuked American Federation of Labor chief Samuel Gompers in a telegram released to the press which concluded, “There is no right to strike against the public safety by anybody, anywhere, any time.”

When the dust settled, the union was broken, peace was restored to the streets of Boston, and Coolidge had emerged onto the national stage as a decisive leader and champion of what he called the “reign of law.” Later in 1919, he was re-elected governor with seven times the margin of his first election. He began to be spoken of as a potential candidate for the Republican presidential nomination in 1920.

Coolidge's name was placed in nomination for the presidency at the 1920 Republican convention, but he never finished higher than sixth in the balloting, in the middle of the pack of regional and favourite son candidates. On the tenth ballot, Warren G. Harding of Ohio was chosen, and party bosses announced their choice for Vice President, a senator from Wisconsin. But when the time came for delegates to vote, a Coolidge wave among rank and file tired of the bosses ordering them around gave him the nod. Coolidge did not attend the convention in Chicago; he got the news of his nomination by telephone. After he hung up, Grace asked him what it was all about. He said, “Nominated for vice president.” She responded, “You don't mean it.” “Indeed I do”, he answered. “You are not going to accept it, are you?” “I suppose I shall have to.”

Harding ran on a platform of “normalcy” after the turbulence of the war and Wilson's helter-skelter progressive agenda. He had expressed his philosophy in a speech several months earlier,

America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.

The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, with 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them.

The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920.
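
As a back-of-the-envelope check (my own arithmetic, not the book's), the figures cited above are mutually consistent, as a few lines of Python show:

    # Ratios implied by the figures cited above (my arithmetic, not the book's)
    debt_1913, debt_1921 = 2.9, 24.0        # public debt, US$ billions
    spend_1913, spend_1920 = 715, 6358      # federal spending, US$ millions
    money_1913, money_1920 = 22.09, 48.73   # money supply, US$ billions

    print(f"debt grew {debt_1921 / debt_1913:.1f}x")             # 8.3x: "more than a factor of eight"
    print(f"spending grew {spend_1920 / spend_1913:.1f}x")       # 8.9x: "almost nine times"
    print(f"money supply grew {money_1920 / money_1913:.1f}x")   # 2.2x: consistent with a doubled cost of living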

At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (nor anything like what existed in private companies a fraction the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, won passage of a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years.

Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco. Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public. As he left Vermont for Washington, he said, “I believe I can swing it.”

As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it. He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort.
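
Mellon's argument is, in essence, what would later be sketched as the Laffer curve. Here is a toy model of the idea (entirely my own illustration, not anything from the book; the tax base and the behavioural elasticity are arbitrary assumptions chosen only to show the shape of the effect):

    # Toy model of Mellon's "scientific taxation" argument (all parameters
    # are illustrative assumptions, not historical data).
    def revenue(rate, base=100.0, elasticity=2.0):
        # Assume reported taxable income shrinks as the marginal rate rises.
        reported_income = base * (1.0 - rate) ** elasticity
        return rate * reported_income

    for rate in (0.25, 0.50, 0.70, 0.90):
        print(f"top rate {rate:.0%} -> revenue {revenue(rate):5.1f}")
    # With these assumptions, a 25% rate collects about twice the revenue
    # of a 70% rate: the qualitative effect Mellon predicted.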

Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid in the 1920s as when they were applied again in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try. It worked. The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved from 70% to 31%.

Achieving these goals required constant vigilance and an unceasing struggle with Congress, where politicians of both parties regarded any budget surplus or increase in revenue generated by lower tax rates and a booming economy as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defence aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest, and that “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states. The New York Times approved, “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.”

When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day. Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes.

Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics, but in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood and sixteen-year-old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss and never forgave himself for bringing his son to Washington, where the injury occurred.

In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo. In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) he considered ten years in office, which would have been longer than any previous president had served, too long for any individual. Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, had few tools with which to intervene even had he wished to. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water. But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to.

Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt and raised the top marginal personal income tax rate to 63% and corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed re-election campaign in 1932. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace.

Calvin Coolidge was arguably the last U.S. president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to an executive in a business, seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis.

This is a superb history of Coolidge and his time, full of lessons for our age which has veered so far from the constitutional framework he so respected.

Carr, Jack. True Believer. New York: Atria Books, 2019. ISBN 978-1-5011-8084-2.
Jack Carr, a former U.S. Navy SEAL, burst into the world of thriller authors with 2018's stunning success, The Terminal List (September 2018). In it, he introduced James Reece, a SEAL whose team was destroyed by a conspiracy reaching into the highest levels of the U.S. government. Afflicted with a brain tumour, the result of a drug tested on him and his team without their knowledge or consent, and expecting it to kill him, Reece set out for revenge upon those responsible. As that novel concluded, Reece, a hunted man, took to the sea in a sailboat, fully expecting to die before he reached whatever destination he might choose.

This sequel begins right where the last book ended. James Reece is aboard the forty-eight-foot sailboat Bitter Harvest, braving the rough November seas of the North Atlantic and musing that, as a Lieutenant Commander in the U.S. Navy, he knew very little about sailing a boat in the open ocean. With supplies adequate to go almost anywhere he desires, and not necessarily expecting to live until his next landfall anyway, he decides on an ambitious voyage to see an old friend far from the reach of the U.S. government.

While Reece is at sea, a series of brazen and bloody terrorist attacks in Europe against civilian and military targets sends analysts on both sides of the Atlantic digging through their resources to find common threads which might point back to whoever is responsible, as the public becomes increasingly afraid of congregating in public places.

Reece eventually arrives at a hunting concession in Mozambique, in southeast Africa, and signs on as an apprentice professional hunter, helping out in tracking and chasing off poachers who plague the land during the off-season. This suits him just fine: he's about as far off the grid as one can get in this over-connected world, among escapees from Rhodesia who understand what it's like to lose their country, surrounded by magnificent scenery and wildlife, and actively engaged in putting his skills to work defending them from human predators. He concludes he could get used to this life, for however long he has to live.

This idyll comes to an end when he is tracked down by another former SEAL, now in the employ of the CIA, who tells Reece that a man he trained in Iraq is suspected of being involved in the terrorist attacks and that if Reece will join in an effort to track him down and get him to flip on his terrorist masters, the charges pending against Reece will be dropped and he can stop running and forever looking over his shoulder. After what the U.S. government has done to him, his SEAL team, and his family, Reece's inclination is to tell them to pound sand. Then, as always, the eagle flashes its talons and Reece is told that if he fails to co-operate the Imperium will go after all of those who helped him avenge the wrongs it inflicted upon him and escape its grasp. With that bit of Soviet-style recruiting out of the way, Reece is off to a CIA black site in the REDACTED region of REDACTED to train with REDACTED for his upcoming mission. (In this book, like the last, passages which reportedly had to be struck during review of the manuscript by the Department of Defense Office of Prepublication and Security Review are blacked out in the text. This imparted a kind of frisson and authenticity the first time out, but now it's getting somewhat tedious—just change the details, Jack, and get on with it!)

As Reece prepares for his mission, events lead him to believe he is not just confronting an external terrorist threat but, once again, forces within the U.S. government willing to kill indiscriminately to get their way. Finally, the time comes to approach his former trainee and get to the bottom of what is going on. From this point on, the story is what you'd expect of a thriller, with tradecraft, intrigue, betrayal, and discovery of a dire threat with extreme measures taken under an imminent deadline to avoid catastrophe.

The pacing of the story is…odd. The entire first third of the book is largely occupied by Reece sailing his boat and working at the game reserve. Now, single-handedly sailing a sailboat almost halfway around the globe is challenging and an adventure, to be sure, and a look inside the world of an African hunting reserve is intriguing, but these are not what thriller readers pay for, nor do they particularly develop the character of James Reece, employ his unique skills, or reveal things about him we don't already know. We're halfway through the book before Reece achieves his first goal of making contact with his former trainee, and it's only there that the real mission gets underway. And as the story ends, although a number of villains have been dispatched in satisfying ways, two of those involved in the terrorist plot (but not its masterminds) remain at large for Reece to hunt down, presumably in the next book, in a year or so. Why not finish it here, then do something completely different next time?

I hope international agents don't take their tax advice from this novel. The CIA agent who “recruits” Reece tells him “It's a contracted position. You won't pay taxes on most of it as long as you're working overseas.” Wrong! U.S. citizens (which Reece, more fool him, remains) owe U.S. taxes on all of their worldwide income, regardless of the source. There is an exclusion for salary income from employment overseas, but this would not apply for payments by the CIA to an independent contractor. Later in the book, Reece receives a large cash award from a foreign government for dispatching a terrorist, which he donates to support the family of a comrade killed in the operation. He would owe around 50% of the award as federal and California state income taxes (since his last U.S. domicile was the once-golden state) off the top, and unless he was extraordinarily careful (which there is no evidence he was), he'd get whacked again with gift tax as punishment for his charity. Watch out, Reece, if you think having the FBI, CIA, and Naval Criminal Investigative Service on your tail is bad, be glad you haven't yet crossed the IRS or the California Franchise Tax Board!
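
For what it's worth, the “around 50%” figure checks out under 2019 top marginal rates (a rough sketch of my own; the award amount is hypothetical, and deductions, the alternative minimum tax, and the gift tax are all ignored):

    # Rough check of the "around 50% off the top" claim (my arithmetic;
    # the award amount is hypothetical, and 37% federal / 13.3% California
    # are the 2019 top marginal rates, assumed to apply to the whole award).
    award = 1_000_000                      # US$, hypothetical
    federal_rate, california_rate = 0.37, 0.133
    tax = award * (federal_rate + california_rate)
    print(f"taxes: US${tax:,.0f} ({tax / award:.1%} of the award)")  # ~50.3%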

The Kindle edition does not have the attention to detail you'd expect from a Big Five New York publisher (Simon and Schuster) in a Kindle book selling for US$13. In five places in the text, HTML character entity codes like “&#8201;” (the code for the thin space used between adjacent single and double quote marks) appear verbatim in the text. What this says to me is that nobody at this professional publishing house did a page-by-page proof of the Kindle edition before putting it on sale. I don't know of a single independently-published science fiction author selling works for a fraction of this price who would fail to do this.

This is a perfectly competent thriller, but to this reader it does not come up to the high standard set by the debut novel. You should not read this book without reading The Terminal List first; if you don't, you'll miss most of the story of what made James Reece who he is here.

Griffin, G. Edward. The Creature from Jekyll Island. Westlake Village, CA: American Media, [1994, 1995, 1998, 2002] 2010. ISBN 978-0-912986-45-6.
Almost every time I review a book about or discuss the U.S. Federal Reserve System in a conversation or Internet post, somebody recommends this book. I'd never gotten around to reading it until recently, when a couple more mentions of it pushed me over the edge. And what an edge that turned out to be. I cannot recommend this book to anybody; there are far more coherent, focussed, and persuasive analyses of the Federal Reserve in print, for example Ron Paul's excellent book End the Fed (October 2009). The present book goes well beyond a discussion of the Federal Reserve and rambles over millennia of history in a chaotic manner prone to induce temporal vertigo in the reader, discussing the history of money, banking, political manipulation of currency, inflation, fractional reserve banking, fiat money, central banking, cartels, war profiteering, monetary panics and bailouts, nonperforming loans to “developing” nations, the Rothschilds and Rockefellers, booms and busts, and more.

The author is inordinately fond of conspiracy theories. As we pursue our random walk through history and around the world, we encounter:

  • The sinking of the Lusitania
  • The assassination of Abraham Lincoln
  • The Order of the Knights of the Golden Circle, the Masons, and the Ku Klux Klan
  • The Bavarian Illuminati
  • Russian Navy intervention in the American Civil War
  • Cecil Rhodes and the Round Table Groups
  • The Council on Foreign Relations
  • The Fabian Society
  • The assassination of John F. Kennedy
  • Theodore Roosevelt's “Bull Moose” run for the U.S. presidency in 1912
  • The Report from Iron Mountain
  • The attempted assassination of Andrew Jackson in 1835
  • The Bolshevik Revolution in Russia

I've jumped around in history to give a sense of the chaotic, achronological narrative here. “What does this have to do with the Federal Reserve?”, you might ask. Well, not very much, except as part of a worldview in which almost everything is explained by the machinations of bankers assisted by the crooked politicians they manipulate.

Now, I agree with the author, on those occasions he actually gets around to discussing the Federal Reserve, that it was fraudulently sold to Congress and the U.S. population and has acted, from the very start, as a self-serving cartel of big New York banks enriching themselves at the expense of anybody who holds assets denominated in the paper currency they have been inflating away ever since 1913. But you don't need to invoke conspiracies stretching across the centuries and around the globe to explain this. The Federal Reserve is (despite how it was deceptively structured and promoted) a central bank, just like the Bank of England and the central banks of other European countries upon which it was modelled, and creating funny money out of thin air and looting the population by the hidden tax of inflation is what central banks do, always have done, and always will, as long as they are permitted to exist. Twice in the history of the U.S. prior to the establishment of the Federal Reserve, central banks were created, the first in 1791 by Alexander Hamilton, and the second in 1816. Each time, after the abuses of such an institution became apparent, the bank was abolished, the first in 1811, and the second in 1836. Perhaps, after the inevitable crack-up which always results from towering debt and depreciating funny money, the Federal Reserve will follow the first two central banks into oblivion, but so deeply is it embedded in the status quo it is difficult to see how that might happen today.

In addition to the rambling narrative, the production values of the book are shoddy. For a book which has gone through five editions and 33 printings, nobody appears to have spent the time giving the text even the most cursory proofreading. Without examining it with the critical eye I apply when proofing my own work or that of others, I noted 137 errors of spelling, punctuation, and formatting in the text. Paragraph breaks are inserted seemingly at random, right in the middle of sentences, and elsewhere words are run together. Words which are misspelled include “from”, “great”, “fourth”, and “is”. This is not a freebie or dollar special, but a paperback which sells for US$20 at Amazon, or US$18 for the Kindle edition. And as I always note, if the author and publisher cannot be bothered to get simple things like these correct, how likely is it that facts and arguments in the text can be trusted?

Don't waste your money or your time. Ron Paul's End the Fed is much better, only a third the length, and concentrates on the subject without all of the whack-a-doodle digressions. For a broader perspective on the history of money, banking, and political manipulation of currency, see Murray Rothbard's classic What Has Government Done to Our Money? (July 2019).

Butler, Smedley D. War Is a Racket. San Diego, CA: Dauphin Publications, [1935] 2018. ISBN 978-1-939438-58-4.
Smedley Butler knew a thing or two about war. In 1898, a little over a month before his seventeenth birthday, he lied about his age to join the U.S. Marine Corps, which directly commissioned him a second lieutenant. After completing training, he was sent to Cuba, arriving shortly after the end of the Spanish-American War. Upon returning home, he was promoted to first lieutenant and sent to the Philippines as part of the American garrison. There, he led Marines in combat against Filipino rebels. In 1900 he was deployed to China during the Boxer Rebellion, where he was wounded in the Gaselee Expedition and promoted to captain for his bravery.

He then served in the “Banana Wars” in Central America and the Caribbean. In 1914, during a conflict in Mexico, he carried out an undercover mission in support of a planned U.S. intervention. For his command in the Battle of Veracruz, he was awarded the Medal of Honor. Next, he was sent to Haiti, where he commanded Marines and Navy troops in an attack on Fort Rivière in November 1915. For this action, he was awarded a second Medal of Honor. To this day, he is one of only nineteen people to have won the Medal of Honor twice.

In World War I he did not receive a combat command, but for his work in commanding the debarkation camp in France for American troops, he was awarded both the Army and Navy Distinguished Service Medals. Returning to the U.S. after the armistice, he became commanding general of the Marine training base at Quantico, Virginia. Between 1927 and 1929 he commanded the Marine Expeditionary Force in China, and returning to Quantico in 1929, he was promoted to Major General, then the highest rank available in the Marine Corps (which was subordinate to the Navy), becoming the youngest person in the Corps to attain that rank. He retired from the Marine Corps in 1931.

In this slim pamphlet (just 21 pages in the Kindle edition I read), Butler demolishes the argument that the U.S. military actions in which he took part in his 33 years as a Marine had anything whatsoever to do with the defence of the United States. Instead, he saw lives and fortune squandered on foreign adventures largely in the interest of U.S. business interests, with those funding and supplying the military banking large profits from the operation. With the introduction of conscription in World War I, the cynical exploitation of young men reached a zenith with draftees paid US$30 a month, with half taken out to support dependants, and another bite for mandatory insurance, leaving less than US$9 per month for putting their lives on the line. And then, in a final insult, there was powerful coercion to “invest” this paltry sum in “Liberty Bonds” which, after the war, were repaid well below the price of purchase and/or in dollars which had lost half their purchasing power.
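
Spelled out, the draftee's arithmetic runs as follows (a sketch of my own; the war-risk insurance premium is an assumed value, backed into from the “less than US$9” remaining after both deductions):

    # A World War I draftee's US$30 monthly pay, as Butler describes it.
    # The insurance premium is an assumption, chosen to be consistent with
    # the "less than US$9" left over.
    pay = 30.00
    dependants = pay / 2       # half withheld to support dependants
    insurance = 6.30           # assumed mandatory war-risk insurance bite
    remaining = pay - dependants - insurance
    print(f"left for the soldier: US${remaining:.2f} per month")   # US$8.70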

Want to put an end to endless, futile, and tragic wars? Forget disarmament conferences and idealistic initiatives, Butler says,

The only way to smash this racket is to conscript capital and industry and labor before the nations [sic] manhood can be conscripted. One month before the Government can conscript the young men of the nation—it must conscript capital and industry. Let the officers and the directors and the high-powered executives of our armament factories and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted—to get $30 a month, the same wage as the lads in the trenches get.

Let the workers in these plants get the same wages—all the workers, all presidents, all directors, all managers, all bankers—yes, and all generals and all admirals and all officers and all politicians and all government office holders—everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches!

Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and majors [I think “mayors” was intended —JW] pay half their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds.

Why shouldn't they?

Butler goes on to recommend that any declaration of war require approval by a national plebiscite in which voting would be restricted to those subject to conscription in a military conflict. (Writing in 1935, he never foresaw that young men and women would be sent into combat without so much as a declaration of war being voted by Congress.) Further, he would restrict all use of military force to genuine defence of the nation, in particular, limiting the Navy to operating no more than 200 miles (320 km) from the coastline.

This is an impassioned plea against the folly of foreign wars by a man whose career was as a warrior. One can argue that there is a legitimate interest in, say, assuring freedom of navigation in international waters, but looking back on the results of U.S. foreign wars in the 21st century, it is difficult to argue they can be justified any more than the “Banana Wars” Butler fought in his time.

September 2019

Chittum, Thomas. Civil War Two. Seattle: Amazon Digital Services, [1993, 1996] 2018. ASIN B07FCWD7C4.
This book was originally published in 1993 with a revised edition in 1996. This Kindle edition, released in 2018, and available for free to Kindle Unlimited subscribers, appears to be identical to the last print edition, although the number of typographical, punctuation, grammatical, and formatting errors (I counted 78 in 176 pages of text, and I wasn't reading with a particularly critical eye) makes me wonder if the Kindle edition was made by optical character recognition of a print copy and never properly copy-edited before publication. The errors are so frequent and egregious that readers will get the impression the publisher couldn't be bothered to read over the text before it reached their eyes.

Sometimes, a book with mediocre production values can be rescued by its content, but that is not the case here. The author, who served two tours as a rifleman with the U.S. Army in Vietnam (1965 and 1966), then fought with the Rhodesian Territorials in the early 1970s and the Croatian Army in 1991–1992, argues that the U.S. has been transformed from a largely homogeneous republic, in which minorities and newcomers were encouraged and provided a path to assimilate, into a multi-ethnic empire in which each group (principally, whites and those who, like most East Asians, have assimilated to the present majority's culture; blacks; and Hispanics) sees itself engaged in a zero-sum contest against the others for power and the wealth of the empire.

So far, this is a relatively common and non-controversial observation, at least among those on the dissident right who have been observing the deliberate fracturing of the society into rival interest groups along ethnic lines by cynical politicians aiming to assemble a “coalition of the aggrieved” into a majority. But from this starting point the author goes on to forecast increasingly violent riots along ethnic lines, initially in the large cities and then, as people flee areas in which they are an ethnic minority and flock together with others of their tribe, at borders between the emerging territories.

He then sees a progression toward large-scale conventional warfare proceeding in four steps: an initial Foundational Phase where the present Cold Civil War heats up as street gangs align on ethnic lines, new irregular forces spring up to defend against the others, and the police either divide among the factions or align themselves with whichever faction is dominant in their territory. Next, in a protracted Terrorist Phase, the rival forces will increasingly attack one another and carry out strikes against the forces of the empire who try to suppress them. This will lead to increasing flight and concentration of each group in a territory where it is the majority, and then to demands for more autonomy for that territory. He estimates (writing in the first half of the 1990s) that this was the present phase, which could be expected to last for another five to twenty-five years (which would put its conclusion no later than 2020).

The Terrorist Phase will then give way to Guerrilla Warfare, with street gangs and militia groups evolving into full-time paramilitary forces like the Viet Cong and Irish Republican Army. The empire will respond with an internal security force similar to that of the Soviet Union, and, as chaos escalates, most remaining civil liberties will be suspended “for the duration of the emergency”. He forecasts this phase as lasting between ten and twenty years. Finally, the situation will progress to All-Out, Continuous Warfare, where groups will unite and align along ethnic lines, bringing into play heavy weapons (artillery, rocket-propelled grenades, armour, etc.) seized from military depots or provided by military personnel defecting to the factional forces. The economy will collapse, and insurgent forces will fund their operations by running the black market that replaces it. For this phase, think ex-Yugoslavia in the 1990s.

When the dust settles, possibly involving the intervention of United Nations or other “peacekeeping” troops, the result will be a partition of the United States into three ethnically-defined nations. The upper U.S., from coast to coast, will have a larger white (plus East Asian and other assimilated groups) majority than today. The Old South extending through east Texas will be a black majority nation, and the Southwest, from central Texas through coastal California north to the San Francisco area, will be a Hispanic majority nation, possibly affiliated or united with Mexico. The borders will be sharp, defended, and prone to occasional violence.

My problem with this is that it's…ridiculous. Just because a country has rival ethnic groups doesn't mean you'll end up with pitched warfare and partition. Yes, that's what happened in ex-Yugoslavia, but that was a case where centuries-long ethnic tensions and hatred upon which the lid had been screwed down for fifty years by an authoritarian communist regime were released into the open when it collapsed. Countries including Canada, Ireland/Northern Ireland, and Belgium have long-standing ethnic disputes, tension, and occasional violence, and yet they have not progressed to tanks in the street and artillery duels across defended frontiers.

The divide in the U.S. does not seem to be so much across ethnic lines as between a coastal and urban élite and a heartland productive population which has been looted for the benefit of the ruling class. The ethnic groups, to the extent they have been organised as factions with a grievance agenda, seem mostly interested in vying over which can extract the most funds from the shrinking productive population for the benefit of their members. This divide, often called “blue/red” or “globalist/nationalist”, goes right down the middle of a number of highly controversial and divisive issues such as immigration, abortion, firearms rights, equality before the law vs. affirmative action, free trade vs. economic nationalism, individual enterprise vs. socialism and redistribution, and many others. (The polarisation can be seen clearly by observing that if you know where an individual comes down on one of these issues, you can predict, with high probability, their views on all the others.)

To my mind, a much more realistic (not to mention far better written) scenario for the U.S. coming apart at the seams is Kurt Schlichter's People's Republic (November 2018) which, although fiction, seems an entirely plausible extrapolation of present trends and the aftermath of two incompatible worldviews going their separate ways.

Brennan, Gerald. Public Loneliness. Chicago: Tortoise Books, [2014] 2017. ISBN 978-0-9986325-1-3.
This is the second book in the author's “Altered Space” series of alternative histories of the cold war space race. Each stand-alone story explores a space mission which did not take place, but could have, given the technology and political circumstances at the time. The first, Zero Phase (October 2016), asks what might have happened had Apollo 13's service module oxygen tank waited to explode until after the lunar module had landed on the Moon. The third, Island of Clouds (July 2019), tells the story of a Venus fly-by mission using Apollo-derived hardware in 1972.

The present short book (120 pages in the paperback edition) is the tale of a Soviet circumlunar mission piloted by Yuri Gagarin in October 1967, to celebrate the 50th anniversary of the Bolshevik revolution and the tenth anniversary of the launch of Sputnik. As with all of the Altered Space stories, this could have happened: in the 1960s, the Soviet Union had two manned lunar programmes, each using entirely different hardware. The lunar landing project was based on the N1 rocket, a modified Soyuz spacecraft called the 7K-LOK, and the LK one-man lunar lander. The Zond project aimed at a manned lunar fly-by mission (the spacecraft would loop around the Moon and return to Earth on a “free return trajectory” without entering lunar orbit). Zond missions would launch on the Proton booster with a crew of one or two cosmonauts flying around the Moon in a spacecraft designated Soyuz 7K-L1, which was stripped down by removal of the orbital module (forcing the crew to endure the entire trip in the cramped launch/descent module) and equipped for the lunar mission by the addition of a high-gain antenna, navigation system, and a heat shield capable of handling the velocity of entry from a lunar mission.

In our timeline, the Zond programme was plagued by problems. The first four unmanned lunar mission attempts, launched between April and November 1967, all failed due to problems with the Proton booster. Zond 4, in March of 1968, flew out to a lunar distance, but was deliberately launched 180° away from the Moon (perhaps to avoid the complexity of lunar gravity). It returned to Earth, but off-course, and was blown up by its self-destruct mechanism to avoid it falling into the hands of another country. Two more Zond launches in April and July 1968 failed from booster problems, with the second killing three people when its upper stage exploded on the launch pad. In September 1968 Zond 5 became the first spacecraft to circle the Moon and return to Earth, carrying a “crew” of two tortoises, fruit fly eggs, and plant seeds. The planned “double dip” re-entry failed, and the spacecraft made a ballistic re-entry with deceleration which might have killed a human cosmonaut, but didn't seem to faze the tortoises. Zond 6 performed a second circumlunar mission in November 1968, again with tortoises and other biological specimens. During the return to Earth, the capsule depressurised, killing all of the living occupants. After a successful re-entry, the parachute failed and the capsule crashed to Earth. This was followed by three more launch failures and then, finally, in August 1969, a completely successful unmanned flight which was the first in which a crew, if onboard, would have survived. By this time, of course, the U.S. had not only orbited the Moon (a much more ambitious mission than Zond's fly-by), but landed on the surface, so even a successful Zond mission would have been an embarrassing afterthought. After one more unmanned test in October 1970, the Zond programme was cancelled.

In this story, the Zond project encounters fewer troubles and, with the anniversary of the October revolution approaching in 1967, the go-ahead is given for a piloted flight around the Moon. Yuri Gagarin, who had been deeply unhappy at being removed from flight status and paraded around the world as a cultural ambassador, uses his celebrity status to win assignment to the lunar mission which, given weight constraints and the cramped Soyuz cabin, is to be flown by a single cosmonaut.

The tale is narrated by Gagarin himself. The spacecraft is highly automated, so there isn't much for him to do other than take pictures of the Earth and Moon, and so he has plenty of time to reflect upon his career and the experience of being transformed overnight from an unknown 27 year old fighter pilot into a global celebrity and icon of Soviet technological prowess. He seems to have a mild case of impostor syndrome, being acutely aware that he was entirely a passive passenger on his Vostok 1 flight, never once touching the controls, and that the credit he received for the accomplishment belonged to the engineers and technicians who built and operated the craft, who continued to work in obscurity. There are extensive flashbacks to the flight, his experiences afterward, and the frustration at seeing his flying career come to an end.

But this is Soviet hardware, and not long into the flight problems occur which pose increasing risks to the demanding mission profile. Although the planned trajectory will sling the spacecraft around the Moon and back to Earth, several small trajectory correction manoeuvres will be required to hit the narrow re-entry corridor in the Earth's atmosphere: too steep, and the capsule will burn up; too shallow, and it will skip off the atmosphere into a high elliptical orbit in which the cosmonaut's life support consumables may run out before it returns to Earth.

The compounding problems put these course corrections at risk, and mission control decides not to announce the flight to the public while it is in progress. As the book concludes, Gagarin does not know his ultimate fate, and neither does the reader.

This is a moving story, well told, and flawless in its description of the spacecraft and the Zond mission plan. One odd stylistic choice is that in Gagarin's narration, he renders the names of spacecraft as their English translations: “East” instead of “Vostok”, “Union” as opposed to “Soyuz”, etc. This might seem confusing, but think about it: that's how a Russian would have heard those words, so it's correct to translate them into English along with his other thoughts. There is a zinger on the last page that speaks to the nature of the Soviet propaganda machine—I'll not spoil it for you.

The Kindle edition is free to Kindle Unlimited subscribers.

Snowden, Edward. Permanent Record. New York: Metropolitan Books, 2019. ISBN 978-1-250-23723-1.
The revolution in communication and computing technologies which has continually accelerated since the introduction of integrated circuits in the 1960s, giving rise to the Internet, ubiquitous mobile telephony, vast data centres with formidable processing and storage capacity, and technologies such as natural language text processing, voice recognition, and image analysis, has created the potential, for the first time in human history, of mass surveillance to a degree unimagined even in dystopian fiction such as George Orwell's 1984 or attempted by the secret police of totalitarian regimes like the Soviet Union, Nazi Germany, or North Korea. But residents of enlightened developed countries such as the United States thought they were protected, by legal safeguards such as the Fourth Amendment to the U.S. Constitution, from having their government deploy such forbidding tools against its own citizens. Certainly, there was awareness, from disclosures such as those in James Bamford's 1982 book The Puzzle Palace, that agencies such as the National Security Agency (NSA) were employing advanced and highly secret technologies to spy upon foreign governments and their agents who might attempt to harm the United States and its citizens, but their activities were circumscribed by a legal framework which strictly limited the scope of their domestic activities.

Well, that's what most people believed until the courageous acts by Edward Snowden, a senior technical contractor working for the NSA, revealed, in 2013, multiple programs of indiscriminate mass surveillance directed against, well, everybody in the world, U.S. citizens most definitely included. The NSA had developed and deployed a large array of hardware and software tools whose mission was essentially to capture all the communications and personal data of everybody in the world, scan it for items of interest, and store it forever where it could be accessed in future investigations. Data were collected through a multitude of means: monitoring traffic across the Internet, collecting mobile phone call and location data (estimated at five billion records per day in 2013), spidering data from Web sites, breaking vulnerable encryption technologies, working with “corporate partners” to snoop data passing through their facilities, and fusing this vast and varied data with query tools such as XKEYSCORE, which might be thought of as a Google search engine built by people who from the outset proclaimed, “Heck yes, we're evil!”

How did Edward Snowden, over his career a contractor employee for companies including BAE Systems, Dell Computer, and Booz Allen Hamilton, and a government employee of the CIA, obtain access to such carefully guarded secrets? What motivated him to disclose this information to the media? How did he spirit the information out of the famously security-obsessed NSA and get it into the hands of the media? And what were the consequences of his actions? All of these questions are answered in this beautifully written, relentlessly candid, passionately argued, and technologically insightful book by the person who, more than anyone else, is responsible for revealing the malignant ambition of the government of the United States and its accomplices in the Five Eyes (Australia, Canada, New Zealand, and the United Kingdom) to implement and deploy a global panopticon which would shrink the scope of privacy of individuals to essentially zero—in the words of an NSA PowerPoint (of course) presentation from 2011, “Sniff It All, Know It All, Collect It All, Process It All, Exploit It All, Partner It All”. They didn't mention “Store It All Forever”, but with the construction of the US$1.5 billion Utah Data Center which consumes 65 megawatts of electricity, it's pretty clear that's what they're doing.

Edward Snowden was born in 1983 and grew up along with the personal computer revolution. His first contact with computers was when his father brought home a Commodore 64, on which father and son would play many games. Later, when he was just seven years old, his father introduced him to programming on a computer at the Coast Guard base where he worked, and, a few years later, when the family had moved to the Maryland suburbs of Washington DC after his father had been transferred to Coast Guard Headquarters, the family got a Compaq 486 PC clone which opened the world of programming and exploration of online groups and the nascent World Wide Web via the narrow pipe of a dial-up connection to America Online. In those golden days of the 1990s, the Internet was mostly created by individuals for individuals, and you could have any identity, or as many identities as you wished, inventing and discarding them as you explored the world and yourself. This was ideal for a youth who wasn't interested in sports and tended to be reserved in the presence of others. He explored the many corners of the Internet and, like so many with the talent for understanding complex systems, learned to deduce the rules governing systems and explore ways of using them to his own ends. Bob Bickford defines a hacker as “Any person who derives joy from discovering ways to circumvent limitations.” Hacking is not criminal, and it has nothing to do with computers. As his life progressed, Snowden would learn how to hack school, the job market, and eventually the oppressive surveillance state.

By September 2001, Snowden was working for an independent Web site developer operating out of her house on the grounds of Fort Meade, Maryland, the home of the NSA (for whom, coincidentally, his mother worked in a support capacity). After the attacks on the World Trade Center and Pentagon, he decided, in his family's long tradition of service to their country (his grandfather is a Rear Admiral in the Coast Guard, and ancestors fought in the Revolution, Civil War, and both world wars), that his talents would be better put to use in the intelligence community. His lack of a four year college degree would usually be a bar to such employment, but the terrorist attacks changed all the rules, and military veterans were being given a fast track into such jobs, so, after exploring his options, Snowden enlisted in the Army, under a special program called 18 X-Ray, which would send qualifying recruits directly into Special Forces training after completing their basic training.

His military career was to prove short. During a training exercise, he took a fall in the forest which fractured the tibia in each leg, and he was advised he would never be able to qualify for Special Forces. Given the option of serving out his time in a desk job or taking immediate “administrative separation” (in which he would waive the government's liability for the injury), he opted for the latter. Finally, after a circuitous process, he was hired by a government contractor and received the exclusive Top Secret/Sensitive Compartmented Information security clearance which qualified him to work at the CIA.

A few words are in order about contractors at government agencies. In some media accounts of the Snowden disclosures, he has been dismissed as “just a contractor”, but in the present-day U.S. government where nothing is as it seems and much of everything is a scam, in fact many of the people working in the most sensitive capacities in the intelligence community are contractors supplied by the big “beltway bandit” firms which have sprung up like mushrooms around the federal swamp. You see, agencies operate under strict limits on the number of pure government (civil service) employees they can hire and, of course, government employment is almost always forever. But, if they pay a contractor to supply a body to do precisely the same job, on site, they can pay the contractor from operating funds and bypass the entire civil service mechanism and limits and, further, they're free to cut jobs any time they wish and to get rid of people and request a replacement from the contractor without going through the arduous process of laying off or firing a “govvy”. In all of Snowden's jobs, the blue badged civil servants worked alongside the green badge contractors without distinction in job function. Contractors would rarely ever visit the premises of their nominal “employers” except for formalities of hiring and employee benefits. One of Snowden's co-workers said “contracting was the third biggest scam in Washington after the income tax and Congress.”

His work at the CIA was in system administration, and he rapidly learned that regardless of classification levels, compartmentalisation, and need to know, the person in a modern organisation who knows everything, or at least has the ability to find out if interested, is the system administrator. In order to keep a system running, ensure the integrity of the data stored on it, restore backups when hardware, software, or user errors cause things to be lost, and the myriad other tasks that comprise the work of a “sysadmin”, you have to have privileges to access pretty much everything in the system. You might not be able to see things on other systems, but the ones under your control are an open book. The only safeguard employers have over rogue administrators is monitoring of their actions, and this is often laughably poor, especially as bosses often lack the computer savvy of the administrators who work for them.

After nine months on the job, an opening came up for a CIA civil servant job in overseas technical support. Attracted to travel and exotic postings abroad, Snowden turned in his green badge for a blue one and, after a training program, was sent to exotic…Geneva as a computer security technician, under diplomatic cover. As placid as it may seem, Geneva was on the cutting edge of CIA spying technology, with the United Nations, numerous international agencies, and private banks all prime targets for snooping.

Two years later Snowden was a contractor once again, this time with Dell Computer, who placed him with the NSA, first in Japan, then back in Maryland, and eventually in Hawaii as lead technologist of the Office of Information Sharing, where he developed a system called “Heartbeat” which allowed all of NSA's sites around the world to share their local information with others. It can be thought of as an automated blog aggregator for Top Secret information. This provided him personal access to just about everything the NSA was up to, world-wide. And he found what he read profoundly disturbing and dismaying.

Once he became aware of the scope of mass surveillance, he transferred to another job in Hawaii which would allow him to personally verify its power by gaining access to XKEYSCORE. His worst fears were confirmed, and he began to patiently, with great caution, and using all of his insider's knowledge, prepare to bring the archives he had spirited out from the Heartbeat system to the attention of the public via respected media who would understand the need to redact any material which might, for example, put agents in the field at risk. He discusses why, based upon his personal experience and that of others, he decided the whistleblower approach within the chain of command was not feasible: the unconstitutional surveillance he had discovered had been approved at the highest levels of government—there was nobody who could stop it who had not already approved it.

The narrative then follows his preparations for departure, securing the data for travel, taking a leave of absence from work, travelling to Hong Kong, and arranging to meet the journalists he had chosen for the disclosure. There is a good deal of useful tradecraft information in this narrative for anybody with secrets to guard. Then, after the stories began to break in June, 2013, the tale of his harrowing escape from the long reach of Uncle Sam is recounted. Popular media accounts of Snowden “defecting to Russia” are untrue. He had planned to seek asylum in Ecuador, and had obtained a laissez-passer from the Ecuadoran consul and arranged to travel to Quito from Hong Kong via Moscow, Havana, and Caracas, as that was the only routing which did not pass through U.S. airspace or involve stops in countries with extradition treaties with the U.S. Upon arrival in Moscow, he discovered that his U.S. passport had been revoked while he was en route from Hong Kong, and without a valid passport he could neither board an onward flight nor leave the airport. He ended up trapped in the Moscow airport for forty days, long enough to tire even of eating at the Burger King there, while twenty-seven countries folded to U.S. pressure and denied him political asylum. On August 1st, 2013, Russia granted him temporary asylum. At this writing, he is still in Moscow, having been joined in 2017 by Lindsay Mills, the love of his life whom he left behind in Hawaii in 2013, and who is now his wife.

This is very much a personal narrative, and you will get an excellent sense for who Edward Snowden is and why he chose to do what he did. The first thing that struck me is that he really knows his stuff. Some of the press coverage presented him as a kind of low-level contractor systems nerd, but he was principal architect of EPICSHELTER, NSA's worldwide backup and archiving system, and sole developer of the Heartbeat aggregation system for reports from sites around the globe. At the time he left to make his disclosures, his salary was US$120,000 per year, hardly the pay of a humble programmer. His descriptions of technologies and systems in the book are comprehensive and flawless. He comes across as motivated entirely by outrage at the NSA's flouting of the constitutional protections supposed to be afforded U.S. citizens and its abuses in implementing mass surveillance, sanctioned at the highest levels of government across two administrations from different political parties. He did not seek money for his disclosures, and did not offer them to foreign governments. He took care to erase all media containing the documents he removed from the NSA before embarking on his trip from Hong Kong, and when approached upon landing in Moscow by agents from the Russian FSB (intelligence service) with what was obviously a recruitment pitch, he immediately cut it off, saying,

Listen, I understand who you are, and what this is. Please let me be clear that I have no intention to cooperate with you. I'm not going to cooperate with any intelligence service. I mean no disrespect, but this isn't going to be that kind of meeting. If you want to search my bag, it's right here. But I promise you, there's nothing in it that can help you.

And that was that.

Edward Snowden could have kept quiet, done his job, collected his handsome salary, continued to live in a Hawaiian paradise, and shared his life with Lindsay, but he threw it all away on a matter of principle and duty to his fellow citizens and the Constitution he had sworn to defend when taking the oath upon joining the Army and the CIA. On the basis of the law, he is doubtless guilty of the three federal crimes with which he has been charged, sufficient to lock him up for as many as thirty years should the U.S. lay its hands on him. But he believes he did the correct thing in an attempt to right wrongs which were intolerable. I agree, and can only admire his courage. If anybody is deserving of a Presidential pardon, it is Edward Snowden.

There is relatively little discussion here of the actual content of the documents which were disclosed and the surveillance programs they revealed. For full details, visit the Snowden Surveillance Archive, which has copies of all of the documents which have been disclosed by the media to date. U.S. government employees and contractors should read the warning on the site before viewing this material.

Yates, Raymond F. The Boys' Book of Model Railroading. New York: Harper & Row, 1951. ISBN 978-1-127-46606-1.
In the years before World War II, Lionel was the leading U.S. manufacturer of model railroad equipment, specialising in “tinplate” models which were often unrealistic in scale, painted in garish colours, and appealing to young children and the mothers who bought them as gifts. During the war, the company turned to production of items for the U.S. Navy. After the war, it returned to the model railroad market, remaking its product line with more realistic models. This coincided with the arrival of the baby boom generation which, as the boys grew up, had an unlimited appetite for ever more complicated and realistic model railroads, an appetite Lionel was eager to meet with simple, rugged, and affordable gear that set the standard for model railroading for a generation.

This book, published in 1951, just as Lionel was reaching the peak of its success, was written by Raymond F. Yates, author of earlier classics such as A Boy and a Battery and A Boy and a Motor, which were perennially wait-listed at the public library when I was a kid during the 1950s. The book starts with the basics of electricity, then moves on to a totally Lionel-based view of the model railroading hobby. There are numerous do-it-yourself projects, ranging from building simple scenery to complex remote-controlled projects with both mechanical and electrical actuation. There is even a section on replacing the unsightly centre third rail of Lionel O-gauge track with a subtle third rail located to the side of the track which the author notes “should be undertaken only if you are prepared to do a lot of work and if you know how to use a soldering iron.” Imagine what this requires for transmitting current across switches and crossovers! Although I read this book, back in the day, I'm glad I never went that deeply down the rabbit hole.

I learned a few things here I never stumbled across while running my Lionel oval layout during the Eisenhower administration or in engineering school many years later. For example: why did Lionel opt for AC power and a three rail system rather than the obvious approach of DC motors and two rails, which makes it easier, for example, to reverse trains, and looks more like the real thing? The answer is that a three rail system with AC power is symmetrical, and allows all kinds of complicated geometries in layouts without worrying about cross-polarity connections at junctions. AC power allows using inexpensive transformers to run the layout from mains power without rectifiers which, in the 1950s, would have meant messy and inefficient selenium stacks prone to blowing up into toxic garlic-smelling fumes if mistreated. But many of the Lionel remote control gizmos, such as the knuckle couplers, switches, semaphore signals, and that eternal favourite, the giraffe car, used solenoids as actuators. How could that work with AC power? Well, think about it—if you have a soft iron plunger within the coil, but not at its centre, when current is applied to the coil, the induced magnetic field will pull it into the centre of the coil. This force is independent of the direction of the current. So an alternating current will create a varying magnetic field which, averaged over the mechanical inertia of the plunger, will still pull it in as long as the solenoid is energised. In practice, running a solenoid on AC may result in a hum, buzz, or chatter, which can be avoided by including a shading coil, in which an induced current creates a magnetic field 90° out of phase with the alternating current in the main coil and smooths the magnetic field actuating the plunger. I never knew that; did you?
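The reason the direction of the current doesn't matter is that the pull on the plunger varies roughly as the square of the coil current, which is never negative. Here is a minimal numerical sketch of that idea in Python (the constants are arbitrary placeholders for illustration, not a model of any actual Lionel accessory):

    import math

    # Illustrative AC solenoid: the pull on the plunger is roughly
    # proportional to the square of the coil current, F = k * i(t)**2,
    # so its sign never reverses even though the current alternates.
    K = 1.0          # force constant (arbitrary units, assumed)
    I_PEAK = 2.0     # peak coil current in amperes (assumed)
    FREQ = 60.0      # line frequency, Hz

    def force(t):
        i = I_PEAK * math.sin(2 * math.pi * FREQ * t)
        return K * i * i   # always >= 0: the plunger is always pulled inward

    # Average the force over one full AC cycle.
    samples = 1000
    period = 1.0 / FREQ
    average = sum(force(n * period / samples) for n in range(samples)) / samples
    print(average)                # ~ K * I_PEAK**2 / 2, distinctly nonzero
    print(K * I_PEAK**2 / 2)      # analytic mean, for comparison

The average pull is half the peak value, but the instantaneous force pulsates at twice the line frequency (120 Hz), which is precisely the buzz and chatter the shading coil exists to smooth out.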

This is a book for boys. There is only a hint of the fanaticism to which the hobby of model railroading can be taken. We catch a whiff of it in the chapter about running the railroad on a published schedule, with telegraph connections between dispatchers and clocks modified to keep “scale time”. All in all, it was great fun then, and great fun to recall now. To see how far off the deep end O-gauge model railroading has gone since 1951, check out the Lionel Trains 2019 Catalogue.

This book is out of print, but used copies are readily available at a reasonable price.

October 2019

Mills, Kyle. Lethal Agent. New York: Atria Books, 2019. ISBN 978-1-5011-9062-9.
This is the fifth novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. On the cover, Vince Flynn still gets top billing (he is now the “brand”, not the author).

In the third Mitch Rapp novel by Kyle Mills, Enemy of the State (June 2018), Rapp decapitated the leadership of ISIS by detonating a grenade in a cave where they were meeting and barely escaped with his life when the cavern collapsed. As the story concluded, it was unknown whether the leader of ISIS, Mullah Sayid Halabi, was killed in the cave-in. Months later, evidence surfaces that Halabi survived, and may be operating in chaotic, war-torn Yemen. Rapp tracks him to a cave in the Yemeni desert but finds only medical equipment apparently used to treat his injuries: Halabi has escaped again.

A Doctors Without Borders team treating victims of a frighteningly contagious and virulent respiratory disease which has broken out in a remote village in Yemen is attacked and its high-profile microbiologist is kidnapped, perhaps by Halabi's people to work on bioweapons. Meanwhile, by what amounts to pure luck, a shipment of cocaine from Mexico is intercepted and found to contain, disguised among the packets of the drug, a brick of weaponised anthrax, leading authorities to suspect the nightmare scenario in which one or more Mexican drug cartels are cooperating with Islamic radicals to smuggle terrorists and weapons across the porous southern border of the U.S.

In Washington, a presidential election is approaching, and President Alexander, who will be leaving after two terms, seems likely to be replaced by the other party's leading contender, the ruthless and amoral Senator Christine Barnett, who is a sworn enemy of CIA director Irene Kennedy and operative Mitch Rapp, and, if elected, is likely to, at best, tie them up in endless congressional hearings and, at worst, see them both behind bars. Barnett places zero priority on national security or the safety of the population, and is willing to risk either to obtain political advantage.

Halabi's plans become evident when a slickly-produced video appears on the Internet, featuring a very much alive Halabi saying, “Now I have your biological weapons experts. Now I have the power to use your weapons against you.” The only way to track down Halabi, who has relocated to parts unknown, is by infiltrating the Mexican cartel behind the intercepted shipment. Rapp devises a plan to persuade the cartel boss he has gone rogue and is willing to sign on as an enforcer. Having no experience operating in Mexico or more than a few words of Spanish, and forced to operate completely on his own, he must somehow convince the cartel to let him inside its inner circle and then find the connection to Halabi and thwart his plans, which Rapp and others suspect may be far more sinister than sprinkling some anthrax around. (You don't need an expert microbiologist to weaponise anthrax, after all.)

This thriller brings back the old, rough-edged, and unrelenting Mitch Rapp of some of Vince Flynn's early novels. And this is a Rapp who has seen enough of the Washington swamp and the creatures who inhabit it to have outgrown any remaining dewy-eyed patriotism. In chapter 22, he says,

But what I do know is that the U.S. isn't ready. If Halabi's figured out a way to hit us with something big—something biological—what's our reaction going to be? The politicians will run for the hills and point fingers at each other. And the American people…. They faint if someone uses insensitive language in their presence and half of them couldn't run up a set of stairs if you put a gun to their head. What'll happen if the real s*** hits the fan? What are they going to do if they're faced with something that can't be fixed by a Facebook petition?

So Rapp is as ruthless with his superiors as with the enemy, and obtains the free hand he needs to get the job done. Eventually Rapp and his team identify what is a potentially catastrophic threat and must swing into action, despite the political and diplomatic repercussions, to avert disaster. And then it is time to settle some scores.

Kyle Mills has delivered another thriller which is both in the tradition of Mitch Rapp and also further develops his increasingly complex character in new ways.

Wood, Fenton. The Tower of the Bear. Seattle: Amazon Digital Services, 2019. ASIN B07XB8XWNF.
This is the third short novel/novella (145 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and the second, Five Million Watts (June 2019), as “enchanting”. In this volume, the protagonist, Philo Hergenschmidt, embarks upon a hero's journey to locate a treasure dating from the origin of the Earth which may be the salvation of radio station 2XG and the key to accomplishing the unrealised dream of the wizard who built it, Zaros the Electromage.

Philo's adventures take him into the frozen Arctic where he meets another Old One, to the depths of the Arctic Ocean in the fabulous submarine of the eccentric Captain Kolodziej, into the lair of a Really Old One where he almost seizes the prize he seeks, and then on an epic road trip. After the Partition of North America, the West, beyond the Mississippi, was ceded by the Republic to the various aboriginal tribes who lived there, and no Yankee dare enter this forbidden territory except to cross it on the Tyrant's Road, which remained Yankee territory with travellers given free passage by the tribes—in theory. In fact, no white man was known to have ventured West on the Road in a century.

Philo has come to believe that the “slow iron” he seeks may be found in the fabled City of the Future, said to be near the Pacific coast at the end of the Tyrant's Road. The only way to get there is to cross the continent, and the only practical means, there being no gas stations or convenience stores along the way, is by bicycle. Viridios helps Philo obtain a superb bicycle and trailer, and equip himself with supplies for the voyage. Taking leave of Viridios at the Mississippi and setting out alone, he soon discovers everything is not what it was said to be, and that the West is even more mysterious, dangerous, and yet enchanted than the stories he's heard since boyhood.

It is, if nothing else, diverse. In its vast emptiness there are nomadic bands pursuing the vast herds of bison on horseback with bows and arrows, sedentary tribes who prefer to ride the range in Japanese mini-pickup trucks, a Universal Library which is an extreme outlier even among the exotic literature of universal libraries, a hidden community that makes Galt's Gulch look like a cosmopolitan crossroads, and a strange people who not only time forgot, but who seem to have forgotten time. Philo's native mechanical and electrical knack gets him out of squeezes and allows him to trade know-how for information and assistance with those he encounters.

Finally, near the shore of the ocean, he comes to a great Tree, beyond imagining in its breadth and height. What is there to be learned here, and what challenges will he face as he continues his quest?

This is a magnificent continuation of one of the best young adult alternative history tales I've encountered in many years. Don't be put off by the “young adult” label—while you can hand this book to any youngster from age nine on up and be assured they'll be enthralled by the adventure and not distracted by the superfluous grunge some authors feel necessary to include when trying to appeal to a “mature” audience, the author never talks down to the reader, and even engineers and radio amateurs well versed in electronics will learn arcana such as the generation and propagation of extremely low frequency radio waves. This is a story which genuinely works for all ages.

This book is currently available only in a Kindle edition. Note that you don't need a physical electronic book reader, tablet, or mobile phone to read Kindle books. Free Kindle applications are available which let you read on Macintosh and Windows machines, and a Kindle Cloud Reader allows reading Kindle books on any machine with a modern Web browser, including all Linux platforms. The fourth volume, The City of Illusions, is scheduled to be published in December, 2019.

Crossfield, Albert Scott and Clay Blair. Always Another Dawn. Seattle: CreateSpace, [1960] 2018. ISBN 978-1-7219-0050-3.
The author was born in 1921 and grew up in Southern California. He was obsessed with aviation from an early age, wangling, at age six, a ride in an open cockpit biplane piloted by a friend of his father. He built and flew many model airplanes and helped build the first gasoline-powered model plane in Southern California, with a home-built engine. The enterprising lad's paper route included a local grass field airport, and he persuaded the owner to trade him a free daily newspaper (delivery boys always received a few extra) for informal flying lessons. By the time he turned thirteen, young Scott (he never went by his first name, “Albert”) had accumulated several hours of flying time.

In the midst of the Great Depression, his father's milk processing business failed, and he decided to sell out everything in California, buy a 120 acre run-down dairy farm in rural Washington state, and start over. Patiently taking an engineer's approach to the operation (recording everything, controlling costs, optimising operations), and with the entire family pitching in on the unceasing chores, the family built the ramshackle property into a going concern and then a showplace.

Crossfield never abandoned his interest in aviation, and soon began to spend some of his scarce free time at the local airport, another grass field operation, where he continued to take flight lessons from anybody who would give them for the meagre pocket change he could spare. Finally, with a total of seven or eight hours dual control time, one of the pilots invited him to “take her up and try a spin.” This was highly irregular and, in fact, illegal: he had no student pilot certificate, but things were a lot more informal in those days, so off he went. Taking the challenge at its word, he proceeded to perform three spins and spin recoveries during his maiden solo flight.

In 1940, at age eighteen, Scott left the farm. His interest in aviation had never flagged, and he was certain he didn't want to be a farmer. His initial goal was to pursue an engineering degree at the University of Washington and then seek employment in the aviation industry, perhaps as an engineering test pilot. But the world was entering a chaotic phase, and this chaos perturbed his well-drawn plans. “[B]y the time I was twenty I had entered the University, graduated from a civilian aviation school, officially soloed, and obtained my private pilot's license, withdrawn from the University, worked for Boeing Aircraft Company, quit to join the Air Force briefly, worked for Boeing again, and quit again to join the Navy.” After the U.S. entered World War II, the Navy was desperate for pilots and offered immediate entry to flight training to those with the kind of experience Crossfield had accumulated.

Despite having three hundred flight hours in his logbook, Crossfield, like many military aviators, had to re-learn flying the Navy way. He credits it for making him a “professional, disciplined aviator.” Like most cadets, he had hoped for assignment to the fleet as a fighter pilot, but upon completing training he was immediately designated an instructor and spent the balance of the war teaching basic and advanced flying, gunnery, and bombing to hundreds of student aviators. Toward the end of the war, he finally received his long-awaited orders for fighter duty, but while in training the war ended without his ever seeing combat.

Disappointed, he returned to his original career plan and spent the next four years at the University of Washington, obtaining Bachelor of Science and Master of Science degrees in Aeronautical Engineering. Maintaining his commission in the Naval Reserve, he organised a naval stunt flying team and used it to hone his precision formation flying skills. As a graduate student, he supported himself as chief operator of the university's wind tunnel, then one of the most advanced in the country, and his work brought him into frequent contact with engineers from aircraft companies who contracted time on the tunnel for tests on their designs.

Surveying his prospects in 1950, Crossfield decided he didn't want to become a professor, which would be the likely outcome if he continued his education toward a Ph.D. The aviation industry was still in the postwar lull, but everything changed with the outbreak of the Korean War in June 1950. Suddenly, demand for the next generation of military aircraft, which had been seen as years in the future, became immediate, and the need for engineers to design and test them was apparent. Crossfield decided the most promising opportunity for someone with his engineering background and flight experience was as an “aeronautical research pilot” with the National Advisory Committee for Aeronautics (NACA), a U.S. government civilian agency founded in 1915 and chartered with performing pure and applied research in aviation, which was placed in the public domain and made available to all U.S. aircraft manufacturers. Unlike returning to the military, where his flight assignments would be at the whim of the service, at NACA he would be assured of working on the cutting edge of aviation technology.

Through a series of personal contacts, he eventually managed to arrange an interview with the little-known NACA High Speed Flight Test Station at Edwards Air Force Base in the high desert of Southern California. Crossfield found himself at the very Mecca of high speed flight, where Chuck Yeager had broken the sound barrier in October 1947 and a series of “X-planes” were expanding the limits of flight in all directions.

Responsibility for flying the experimental research aircraft at Edwards was divided three ways. When a new plane was delivered, its first flights would usually be conducted by company test pilots from its manufacturer. These pilots would have been involved in the design process and worked closely with the engineers responsible for the plane. During this phase, the stability, maneuverability, and behaviour of the plane in various flight regimes would be tested, and all of its component systems would be checked out. This would lead to “acceptance” by the Air Force, at which point its test pilots would acquaint themselves with the new plane and then conduct flights aimed at expanding its “envelope”: pushing parameters such as speed and altitude to those which the experimental plane had been designed to explore. It was during this phase that records would be set, often trumpeted by the Air Force. Finally, NACA pilots would follow up, exploring the fine details of the performance of the plane in the new flight regimes it opened up. Often the plane would be instrumented with sensors to collect data as NACA pilots patiently explored its flight envelope. NACA's operation at Edwards was small, and it played second fiddle to the Air Force (and Navy, who also tested some of its research planes there). The requirements for the planes were developed by the military, who selected the manufacturer, approved the design, and paid for its construction. NACA took advantage of whatever was developed, when the military made it available to them.

However complicated the structure of operations was at Edwards, Crossfield arrived squarely in the middle of the heroic age of supersonic flight, as chronicled (perhaps a bit too exuberantly) by Tom Wolfe in The Right Stuff. The hangars were full of machines resembling those on the covers of the pulp science fiction magazines of Crossfield's youth, and before them were a series of challenges seemingly without end: Mach 2, 3, and beyond, and flight to the threshold of space.

It was a heroic time, and a dangerous business. Writing in 1960, Crossfield notes, “Death is the handmaiden of the pilot. Sometimes it comes by accident, sometimes by an act of God. … Twelve out of the sixteen members of my original class at Seattle were eventually killed in airplanes. … Indeed, come to think of it, three-quarters of all the pilots I ever knew are dead.” As an engineer, he has no illusions or superstitions about the risks he is undertaking: sometimes the machine breaks and there's nothing that can be done about it. But he distinguishes being startled from experiencing fear: “I have been startled in an airplane many times. This, I may say, is almost routine for the experimental test pilot. But I can honestly say I have never experienced real fear in the air. The reason is that I have never run out of things to do.”

Crossfield proceeded to fly almost all of the cutting-edge aircraft at Edwards, including the rocket powered X-1 and the Navy's D-558-2 Skyrocket. By 1955, he had performed 99 flights under rocket power, becoming the most experienced rocket pilot in the world (there is no evidence the Soviet Union had any comparable rocket powered research aircraft). Most of Crossfield's flights were of the patient, data-taking kind in which the NACA specialised, albeit with occasional drama when these finicky, on-the-edge machines malfunctioned. But sometimes, even at staid NACA, the blood would be up, and in 1953, NACA approved taking the D-558-2 to Mach 2, setting a new world speed record. This was more than 25% faster than the plane had been designed to fly, and all the stops were pulled out for the attempt. The run was planned for a cold day, when the speed of sound would be lower at the planned altitude and cold-soaking the airframe would allow loading slightly more fuel and oxidiser. The wings and fuselage were waxed and polished to a high sheen to reduce air friction. Every crack was covered by masking tape. The stainless steel tubes used to jettison propellant in an emergency before drop from the carrier aircraft were replaced by aluminium which would burn away instants after the rocket engine was fired, saving a little bit of weight. With all of these tweaks, on November 20, 1953, at an altitude of 72,000 feet (22 km), the Skyrocket punched through Mach 2, reaching a speed of Mach 2.005. Crossfield was the Fastest Man on Earth.
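Why schedule the record run for a cold day? The speed of sound in air depends essentially only on temperature, so colder air at the test altitude lowers the true airspeed corresponding to any given Mach number. A back-of-envelope sketch using standard dry-air constants (the temperatures here are assumptions for illustration, not Crossfield's actual flight conditions):

    import math

    GAMMA = 1.4     # ratio of specific heats for dry air
    R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

    def speed_of_sound(temp_kelvin):
        # Speed of sound in dry air, in metres per second.
        return math.sqrt(GAMMA * R_AIR * temp_kelvin)

    standard = speed_of_sound(216.65)   # standard stratosphere temperature
    cold_day = speed_of_sound(206.65)   # an assumed 10 K colder day

    print(standard, cold_day)           # ~295.1 vs. ~288.2 m/s

    # The true airspeed which is Mach 2.0 on the standard day reads a
    # higher Mach number in the colder, "slower sound" air:
    tas = 2.0 * standard
    print(tas / cold_day)               # ~2.05

Every bit of true airspeed the plane could deliver thus bought a little more Mach on the cold day, which is exactly what a record attempt wants.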

By 1955, Crossfield concluded that the original glory days of Edwards were coming to an end. The original rocket planes had reached the limits of their performance, and the next generation of research aircraft, the X-15, would be a project on an entirely different scale, involving years of development before it was ready for its first flight. Staying at NACA would, in all likelihood, mean a lengthy period of routine work, with nothing as challenging as his last five years pushing the frontiers of flight. He concluded that the right place for an engineering test pilot, one with such extensive experience in rocket flight, was on the engineering team developing the next generation rocket plane, not sitting around at Edwards waiting to see what they came up with. He resigned from NACA and took a job as chief engineering test pilot at North American Aviation, developer of the X-15. He would provide a pilot's perspective throughout the protracted gestation of the plane, including cockpit layout, control systems, life support and pressure suit design, simulator development, and riding herd on the problem-plagued engine.

Ever wonder why the space suits used in the X-15 and by the Project Mercury astronauts were silver coloured? They said it was something about thermal management, but in fact when Crossfield was visiting the manufacturer he saw a sample of aluminised fabric and persuaded them to replace the original khaki coverall outer layer with it because it “looked like a real space suit.” And they did.

When the X-15 finally made its first flight in 1959, Crossfield was at the controls. He would go on to make 14 X-15 flights before turning the ship over to Air Force and NASA (the successor agency to the NACA) pilots. This book, originally published in 1960, concludes before the record-breaking period of the X-15, conducted after Crossfield's involvement with it came to an end.

This is a personal account of a period in the history of aviation in which records fell almost as fast as they were set and rocket pilots went right to the edge and beyond, feeling out the treacherous boundaries of the frontier.

A Kindle edition is available, at this writing, for just US$0.99. The Kindle edition appears to have been prepared by optical character recognition with only a rudimentary and slapdash job of copy editing. There are numerous errors including many involving the humble apostrophe. But, hey, it's only a buck.

November 2019

Eyles, Don. Sunburst and Luminary. Boston: Fort Point Press, 2018. ISBN 978-0-9863859-3-3.
In 1966, the author graduated from Boston University with a bachelor's degree in mathematics. He had no immediate job prospects or career plans. He thought he might be interested in computer programming due to a love of solving puzzles, but he had never programmed a computer. When asked, in one of numerous job interviews, how he would go about writing a program to alphabetise a list of names, he admitted he had no idea. One day, walking home from yet another interview, he passed an unimpressive brick building with a sign identifying it as the “MIT Instrumentation Laboratory”. He'd heard a little about the place and, on a lark, walked in and asked if they were hiring. The receptionist handed him a long application form, which he filled out, and was then immediately sent to interview with a personnel officer. Eyles was amazed when the personnel man seemed bent on persuading him to come to work at the Lab. After reference checking, he was offered a choice of two jobs: one in the “analysis group” (whatever that was), and another on the team developing computer software for landing the Apollo Lunar Module (LM) on the Moon. That sounded interesting, and the job had another benefit attractive to a 21 year old just graduating from university: it came with deferment from the military draft, which was going into high gear as U.S. involvement in Vietnam deepened.

Near the start of the Apollo project, MIT's Instrumentation Laboratory, led by the legendary “Doc” Charles Stark Draper, won a sole source contract to design and program the guidance system for the Apollo spacecraft, which came to be known as the “Apollo Primary Guidance, Navigation, and Control System” (PGNCS, pronounced “pings”). Draper and his laboratory had pioneered inertial guidance systems for aircraft, guided missiles, and submarines, and had in-depth expertise in all aspects of the challenging problem of enabling the Apollo spacecraft to navigate from the Earth to the Moon, land on the Moon, and return to the Earth without any assistance from ground-based assets. In a normal mission, it was expected that ground-based tracking and computers would assist those on board the spacecraft, but in the interest of reliability and redundancy it was required that completely autonomous navigation would permit accomplishing the mission.

The Instrumentation Laboratory developed an integrated system composed of an inertial measurement unit consisting of gyroscopes and accelerometers that provided a stable reference from which the spacecraft's orientation and velocity could be determined, an optical telescope which allowed aligning the inertial platform by taking sightings on fixed stars, and an Apollo Guidance Computer (AGC), a general purpose digital computer which interfaced to the guidance system, thrusters and engines on the spacecraft, the astronauts' flight controls, and mission control, and was able to perform the complex calculations for en route maneuvers and the unforgiving lunar landing process in real time.

Every Apollo lunar landing mission carried two AGCs: one in the Command Module and another in the Lunar Module. The computer hardware, basic operating system, and navigation support software were identical, but the mission software was customised due to the different hardware and flight profiles of the Command and Lunar Modules. (The commonality of the two computers proved essential in getting the crew of Apollo 13 safely back to Earth after an explosion in the Service Module cut power to the Command Module and disabled its computer. The Lunar Module's AGC was able to perform the critical navigation and guidance operations to put the spacecraft back on course for an Earth landing.)

By the time Don Eyles was hired in 1966, the hardware design of the AGC was largely complete (although a revision, called Block II, was underway which would increase memory capacity and add some instructions found desirable during the initial software development), and the low-level operating system, the support libraries (implementing such functionality as fixed point arithmetic and vector and matrix computations), and a substantial part of the software for the Command Module had been written. But the software for actually landing on the Moon, which would run in the Lunar Module's AGC, was largely just a concept in the minds of its designers. Turning this into hard code would be the job of Don Eyles, who had never written a line of code in his life, and his colleagues. They seemed undaunted by the challenge: after all, nobody knew how to land on the Moon, so whoever attempted the task would have to make it up as they went along, and they had access, in the Instrumentation Laboratory, to the world's most experienced team in the area of inertial guidance.

Today's programmers may be amazed it was possible to get anything at all done on a machine with the capabilities of the Apollo Guidance Computer, much less fly to the Moon and land there. The AGC had a total of 36,864 15-bit words of read-only core rope memory, in which every bit was hand-woven to the specifications of the programmers. As read-only memory, the contents were completely fixed: if a change was required, the memory module in question (which was “potted” in a plastic compound) had to be discarded and a new one woven from scratch. There was no way to make “software patches”. Read-write storage was limited to 2048 15-bit words of magnetic core memory. The read-write memory was non-volatile: its contents were preserved across power loss and restoration. (Each memory word was actually 16 bits in length, but one bit was used for parity checking to detect errors and was not accessible to the programmer.) Memory cycle time was 11.72 microseconds. There was no external bulk storage of any kind (disc, tape, etc.): everything had to be done with the read-only and read-write memory built into the computer.
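To translate those figures into modern units, here is a quick back-of-envelope conversion (my arithmetic from the numbers above, not the book's):

    # Back-of-envelope sizing of the AGC's memory in modern units.
    WORD_BITS = 15                 # bits usable by the programmer (16th is parity)
    ROM_WORDS = 36_864             # fixed (core rope) memory
    RAM_WORDS = 2_048              # erasable (magnetic core) memory

    rom_bytes = ROM_WORDS * WORD_BITS / 8
    ram_bytes = RAM_WORDS * WORD_BITS / 8
    print(f"ROM: {rom_bytes:,.0f} bytes")        # 69,120 bytes of program
    print(f"RAM: {ram_bytes:,.0f} bytes")        # 3,840 bytes of variables

    # With an 11.72 microsecond memory cycle, the machine had roughly
    # this many memory cycles per second to work with:
    print(f"{1 / 11.72e-6:,.0f} cycles/second")  # ~85,000

About 69 kilobytes of program, under four kilobytes of variables, and some 85,000 memory cycles per second: that was the entire budget for navigating to and landing on the Moon.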

The AGC software was an example of “real-time programming”, a discipline with which few contemporary programmers are acquainted. As opposed to an “app” which interacts with a user and whose only constraint on how long it takes to respond to requests is the user's patience, a real-time program has to meet inflexible constraints in the real world set by the laws of physics, with failure often resulting in disaster just as surely as hardware malfunctions. For example, when the Lunar Module is descending toward the lunar surface, burning its descent engine to brake toward a smooth touchdown, the LM is perched atop the thrust vector of the engine just like a pencil balanced on the tip of your finger: it is inherently unstable, and only constant corrections will keep it from tumbling over and crashing into the surface, which would be bad. To prevent this, the Lunar Module's AGC runs a piece of software called the digital autopilot (DAP) which, every tenth of a second, issues commands to steer the descent engine's nozzle to keep the Lunar Module pointed flamy side down and adjusts the thrust to maintain the desired descent velocity (the thrust must be constantly adjusted because as propellant is burned, the mass of the LM decreases, and less thrust is needed to maintain the same rate of descent). The AGC/DAP absolutely must compute these steering and throttle commands and send them to the engine every tenth of a second. If it doesn't, the Lunar Module will crash. That's what real-time computing is all about: the computer has to deliver those results in real time, as the clock ticks, and if it doesn't (for example, it decides to give up and flash a Blue Screen of Death instead), then the consequences are not an irritated or enraged user, but actual death in the real world. Similarly, every two seconds the computer must read the spacecraft's position from the inertial measurement unit. If it fails to do so, it will hopelessly lose track of which way it's pointed and how fast it is going. Real-time programmers live under these demanding constraints and, especially given the limitations of a computer such as the AGC, must deploy all of their cleverness to meet them without fail, whatever happens, including transient power failures, flaky readings from instruments, user errors, and completely unanticipated “unknown unknowns”.
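The shape of such a hard real-time loop can be sketched in a few lines of Python. This is only a schematic of the discipline described above (a fixed-period deadline the work must never overrun), not the AGC's actual executive, and every function name and number in it is invented for the example:

    import time

    PERIOD = 0.1   # seconds: a DAP-style loop that must run ten times per second

    def read_sensors():
        # Placeholder (invented): return the current vehicle state.
        return {"attitude_error": 0.02, "descent_rate": 1.0}

    def compute_commands(state):
        # Placeholder (invented): a trivial stand-in for the control law.
        return {"gimbal": -0.5 * state["attitude_error"], "throttle": 0.6}

    def issue_commands(commands):
        # Placeholder (invented): deliver the commands to the engine.
        pass

    deadline = time.monotonic()
    for cycle in range(50):                 # five simulated seconds of flight
        issue_commands(compute_commands(read_sensors()))
        deadline += PERIOD
        slack = deadline - time.monotonic()
        if slack < 0:
            # In hard real time a missed deadline is a failure, not a slowdown.
            raise RuntimeError(f"cycle {cycle} overran its deadline")
        time.sleep(slack)                   # idle until the next fixed tick
    print("50 control cycles, every deadline met")

The essential point is the raise: in an ordinary program a late answer is merely slow, while in this regime it is treated as an outright failure.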

The software which ran in the Lunar Module AGCs for Apollo lunar landing missions was called LUMINARY, and in its final form (version 210) used on Apollo 15, 16, and 17, consisted of around 36,000 lines of code (a mix of assembly language and interpretive code which implemented high-level operations), of which Don Eyles wrote in excess of 2,200 lines, responsible for the lunar landing from the start of braking from lunar orbit through touchdown on the Moon. This was by far the most dynamic phase of an Apollo mission, and the most demanding on the limited resources of the AGC, which was pushed to around 90% of its capacity during the final landing phase where the astronauts were selecting the landing spot and guiding the Lunar Module toward a touchdown. The margin was razor-thin, and that's assuming everything went as planned. But this was not always the case.

It was when the unexpected happened that the genius of the AGC software and its ability to make the most of the severely limited resources at its disposal became apparent. As Apollo 11 approached the lunar surface, a series of five program alarms, codes 1201 and 1202, interrupted the display of altitude and vertical velocity being monitored by Buzz Aldrin and read off to guide Neil Armstrong in flying to the landing spot. These codes both indicated out-of-memory conditions in the AGC's scarce read-write memory. The 1201 alarm was issued when a program requested one of the five 44-word vector accumulator (VAC) areas and all were already in use, and 1202 signalled exhaustion of the eight 12-word core sets required by each running job. The computer had a single processor and could execute only one task at a time, but its operating system allowed lower priority tasks to be interrupted in order to service higher priority ones, such as the time-critical autopilot function and reading the inertial platform every two seconds. Each suspended lower-priority job used up a core set and, if it employed the interpretive mathematics library, a VAC, so exhaustion of these resources usually meant the computer was trying to do too many things at once. Task priorities were assigned so the most critical functions would be completed on time, but computer overload signalled something seriously wrong—a condition in which it was impossible to guarantee all essential work was getting done.
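The flavour of those two alarms can be captured with a toy allocator. In this sketch (my own illustration with invented names; the real AGC executive was far subtler) jobs draw “core sets” and “VAC areas” from fixed pools, and exhausting a pool raises the corresponding alarm:

    CORE_SETS = 8   # 12-word job contexts, as in the AGC
    VAC_AREAS = 5   # 44-word vector accumulator areas

    class Alarm(Exception):
        pass

    class ToyExecutive:
        def __init__(self):
            self.free_core_sets = CORE_SETS
            self.free_vacs = VAC_AREAS

        def spawn_job(self, needs_vac):
            # Every job needs a core set; math-heavy jobs also need a VAC.
            if needs_vac:
                if self.free_vacs == 0:
                    raise Alarm("1201: no VAC areas available")
                self.free_vacs -= 1
            if self.free_core_sets == 0:
                raise Alarm("1202: no core sets available")
            self.free_core_sets -= 1

    executive = ToyExecutive()
    try:
        for n in range(9):          # one job too many for eight core sets
            executive.spawn_job(needs_vac=(n < 5))
    except Alarm as alarm:
        print(alarm)                # -> 1202: no core sets available

Apollo 11's first alarm was a 1202, the “no core sets left” case modelled here.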

In this case, the computer would throw up its hands, issue a program alarm, and restart. But this couldn't be a lengthy reboot like customers of personal computers with millions of times the AGC's capacity tolerate half a century later. The critical tasks in the AGC's software incorporated restart protection, in which they would frequently checkpoint their current state, permitting them to resume almost instantaneously after a restart. Programmers estimated around 4% of the AGC's program memory was devoted to restart protection, and some questioned its worth. On Apollo 11, it would save the landing mission.
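The principle of restart protection is easy to sketch: each critical task periodically records a consistent snapshot of its progress, and after a software restart every task is rebuilt from its last snapshot rather than from scratch. A toy illustration (invented names and structure; the AGC's actual mechanism was considerably more intricate):

    # Toy restart protection: tasks checkpoint their state so a software
    # restart resumes them almost instantly instead of starting over.
    checkpoints = {}   # task name -> last consistent snapshot

    def checkpoint(task, state):
        checkpoints[task] = dict(state)       # record a consistent snapshot

    def restart():
        # Simulated software restart: rebuild every task from its checkpoint.
        return {task: dict(state) for task, state in checkpoints.items()}

    # A critical task makes progress, checkpointing at each consistent point.
    state = {"phase": "braking", "pct_complete": 0}
    for pct in (10, 20, 30):
        state["pct_complete"] = pct
        checkpoint("landing_guidance", state)

    # An overload forces a restart; the task resumes where it left off.
    resumed = restart()
    print(resumed["landing_guidance"])   # {'phase': 'braking', 'pct_complete': 30}

The cost is the memory and cycles spent snapshotting (the 4% mentioned above); the payoff is resumption in a fraction of a second rather than a full reinitialisation.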

Shortly after the Lunar Module's landing radar locked onto the lunar surface, Aldrin keyed in the code to monitor its readings and immediately received a 1202 alarm: no core sets to run a task; the AGC restarted. On the communications link Armstrong called out “It's a 1202.” and Aldrin confirmed “1202.”. This was followed by fifteen seconds of silence on the “air to ground” loop, after which Armstrong broke in with “Give us a reading on the 1202 Program alarm.” At this point, neither the astronauts nor the support team in Houston had any idea what a 1202 alarm was or what it might mean for the mission. But the nefarious simulation supervisors had cranked in such “impossible” alarms in earlier training sessions, and controllers had developed a rule that if an alarm was infrequent and the Lunar Module appeared to be flying normally, it was not a reason to abort the descent.

At the Instrumentation Laboratory in Cambridge, Massachusetts, Don Eyles and his colleagues knew precisely what a 1202 was and found it deeply disturbing. The AGC software had been carefully designed to maintain a 10% safety margin under the worst-case conditions of a lunar landing, and 1202 alarms had never occurred in any of their thousands of simulator runs using the same AGC hardware, software, and sensors as Apollo 11's Lunar Module. Don Eyles' analysis, in real time, just after a second 1202 alarm occurred thirty seconds later, was:

Again our computations have been flushed and the LM is still flying. In Cambridge someone says, “Something is stealing time.” … Some dreadful thing is active in our computer and we do not know what it is or what it will do next. Unlike Garman [AGC support engineer for Mission Control] in Houston I know too much. If it were in my hands, I would call an abort.

As the Lunar Module passed 3000 feet, another alarm, this time a 1201—VAC areas exhausted—flashed. This is another indication of overload, but of a different kind. Mission control immediately calls up “We're go. Same type. We're go.” Well, it wasn't the same type, but they decided to press on. Descending through 2000 feet, the DSKY (computer display and keyboard) goes blank and stays blank for ten agonising seconds. Seventeen seconds later another 1202 alarm, and a blank display for two seconds—Armstrong's heart rate reaches 150. A total of five program alarms and resets had occurred in the final minutes of landing. But why? And could the computer be trusted to fly the return from the Moon's surface to rendezvous with the Command Module?

While the Lunar Module was still on the lunar surface Instrumentation Laboratory engineer George Silver figured out what happened. During the landing, the Lunar Module's rendezvous radar (used only during return to the Command Module) was powered on and set to a position where its reference timing signal came from an internal clock rather than the AGC's master timing reference. If these clocks were in a worst-case out-of-phase condition, the rendezvous radar would flood the AGC with what we used to call “nonsense interrupts” back in the day, at a rate of 12,800 per second, each consuming one 11.72 microsecond memory cycle. This imposed an additional load of more than 13% on the AGC, which pushed it over the edge and caused tasks deemed non-critical (such as updating the DSKY) not to be completed on time, resulting in the program alarms and restarts. The fix was simple: don't enable the rendezvous radar until you need it, and when you do, put the switch in the position that synchronises it with the AGC's clock. But the AGC had proved its excellence as a real-time system: in the face of unexpected and unknown external perturbations it had completed the mission flawlessly, while alerting its developers to a problem which required their attention.
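The arithmetic behind that overload figure is simple; here is a back-of-envelope check of the numbers quoted above (my calculation, not from the book):

    # How much of the AGC did the spurious radar pulses steal?
    CYCLE_SECONDS = 11.72e-6      # one memory cycle
    PULSES_PER_SECOND = 12_800    # spurious counter increments from the radar

    stolen = PULSES_PER_SECOND * CYCLE_SECONDS
    print(f"{stolen:.1%} of all memory cycles lost")   # ~15.0%

A machine already running at around 90% of capacity during the final landing phase had no such margin to spare, and the lowest-priority work, such as refreshing the display, was the first casualty.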

The creativity of the AGC software developers and the merit of computer systems sufficiently simple that the small number of people who designed them completely understood every aspect of their operation was demonstrated on Apollo 14. As the Lunar Module was checked out prior to the landing, the astronauts in the spacecraft and Mission Control saw the abort signal come on, which was supposed to indicate the big Abort button on the control panel had been pushed. This button, if pressed during descent to the lunar surface, immediately aborted the landing attempt and initiated a return to lunar orbit. This was a “one and done” operation: no Microsoft-style “Do you really mean it?” tea ceremony before ending the mission. Tapping the switch made the signal come and go, and it was concluded the most likely cause was a piece of metal contamination floating around inside the switch and occasionally shorting the contacts. The abort signal caused no problems during lunar orbit, but if it should happen during descent, perhaps jostled by vibration from the descent engine, it would be disastrous: wrecking a mission costing hundreds of millions of dollars and, coming on the heels of Apollo 13's mission failure and narrow escape from disaster, possibly bringing an end to the Apollo lunar landing programme.

The Lunar Module AGC team, with Don Eyles as the lead, was faced with an immediate challenge: was there a way to patch the software to ignore the abort switch, protecting the landing, while still allowing an abort to be commanded, if necessary, from the computer keyboard (DSKY)? The answer was immediately apparent: no. The landing software, like all AGC programs, ran from read-only rope memory which had been woven on the ground months before the mission and could not be changed in flight. But perhaps there was another way. Eyles and his colleagues dug into the program listing, traced the path through the logic, cobbled together a procedure, and tested it in the simulator at the Instrumentation Laboratory. While the AGC's programming was fixed, the AGC operating system provided low-level commands which allowed the crew to examine and change bits in locations in the read-write memory. Eyles discovered that by setting the bit which indicated that an abort was already in progress, the abort switch would be ignored at the critical moments during the descent. As with all software hacks, this had other consequences requiring their own work-arounds, but by the time Apollo 14's Lunar Module emerged from behind the Moon on course for its landing, a complete procedure had been developed which was radioed up from Houston and worked perfectly, resulting in a flawless landing.
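The essence of the work-around can be shown with a toy model: the descent logic honours the abort switch only if an abort is not already in progress, so setting the “abort in progress” flag by hand through the DSKY renders the flaky switch harmless. This is purely illustrative pseudologic with invented names, not the actual LUMINARY code, and it ignores the knock-on effects the real procedure had to handle:

    # Toy model of the Apollo 14 work-around: make the descent software
    # believe an abort is already in progress so the flaky switch is ignored.
    erasable = {"ABORT_IN_PROGRESS": False}   # read-write memory (invented name)

    def abort_switch_closed():
        return True    # the contaminated switch randomly shorting its contacts

    def poll_abort_switch():
        # Descent-phase logic: honour the switch only if not already aborting.
        if erasable["ABORT_IN_PROGRESS"]:
            return "switch ignored: abort already (supposedly) under way"
        if abort_switch_closed():
            return "ABORT: climbing back to lunar orbit"
        return "descending normally"

    print(poll_abort_switch())           # the short would have ended the landing

    # Before powered descent the crew keys the flag on via the DSKY:
    erasable["ABORT_IN_PROGRESS"] = True
    print(poll_abort_switch())           # the short is now harmless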

These and many other stories of the development and flight experience of the AGC lunar landing software are related here by the person who wrote most of it and supported every lunar landing mission as it happened. Where technical detail is required to understand what is happening, no punches are pulled, even to the level of bit-twiddling and hideously clever programming tricks such as using an overflow condition to skip over an EXTEND instruction, converting the following instruction from double precision to single precision, all in order to save around forty words of precious non-bank-switched memory. In addition, this is a personal story, set in the context of the turbulent 1960s and early ’70s, of the author and other young people accomplishing things no humans had ever before attempted.

It was a time when everybody was making it up as they went along, learning from experience, and improvising on the fly; a time when a person who had never written a line of computer code would write, as his first program, the code that would land men on the Moon, and when the creativity and hard work of individuals made all the difference. Already, by the end of the Apollo project, the curtain was ringing down on this era. Even though a number of improvements had been developed for the LM AGC software which improved precision landing capability, reduced the workload on the astronauts, and increased robustness, none of these were incorporated in the software for the final three Apollo missions, LUMINARY 210, which was deemed “good enough” and the benefit of the changes not worth the risk and effort to test and incorporate them. Programmers seeking this kind of adventure today will not find it at NASA or its contractors, but instead in the innovative “New Space” and smallsat industries.

Howe, Steven D. Wrench and Claw. Seattle: Amazon Digital Services, 2011. ASIN B005JPZ74A.
In the conclusion of the author's Honor Bound Honor Born (May 2014), an explorer on the Moon discovers something that just shouldn't be there, which calls into question the history of the Earth and Moon and humanity's place in it. This short novel (or novella—it's 81 pages in a print edition) explores how that anomaly came to be and presents a brilliantly sketched alternative history which reminds the reader just how little we really know about the vast expanses of time which preceded our own species' appearance on the cosmic stage.

Vesquith is an Army lieutenant assigned to a base on the Moon. The base is devoted to research, exploration, and development of lunar resources to expand the presence on the Moon, but more recently has become a key asset in Earth's defence, as its Lunar Observation Post (LOP) allows monitoring the inner solar system. This has become crucial since the Martian colony, founded with high hopes, has come under the domination of self-proclaimed “King” Rornak, whose religious fanatics infiltrated the settlement and now threaten the Earth with an arsenal of nuclear weapons they have somehow obtained and are using to divert asteroids to exploit their resources for the development of Mars.

Independently, Bob, a field paleontologist whose expedition is running short of funds, is enduring a fundraising lecture at a Denver museum by a Dr Dietlief, a crowd-pleasing science populariser who regales his audiences with illustrations of how little we really know about the Earth's past, which stretches over expanses of time vast compared to the interval since the emergence of modern humans, and with wild speculations about what might have come and gone during those aeons, including the rise and fall of advanced technological civilisations whose works may have disappeared without a trace in a million years or so after their demise due to corrosion, erosion, and the incessant shifting of the continents and recycling of the Earth's surface. How do we know that, somewhere beneath our feet, yet to be discovered by paleontologists who probably wouldn't understand what they'd found, lies “something like a crescent wrench clutched in a claw?” Dietlief suggests that even if paleontologists came across what remained of such evidence after dozens of millions of years, they'd probably not recognise it, because they weren't looking for such a thing and didn't have the specialised equipment needed to detect it.

On the Moon, Vesquith and his crew return to base to find it has been attacked, presumably by an advance party from Mars, wiping out a detachment of Amphibious Marines sent to guard the LOP and disabling it, rendering Earth blind to attack from Mars. The survivors must improvise with the few resources remaining from the attack to meet their needs, try to restore communications with Earth to warn of a possible attack and request a rescue mission, and defend against possible additional assaults on their base. This is put to the test when another contingent of invaders arrives to put the base permanently out of commission and open the way for a general attack on Earth.

Bob, meanwhile, thanks to funds raised by Dr Dietlief's lecture, has been able to extend his fieldwork, add some assistants, and equip his on-site lab with some new analytic equipment….

This is a brilliant story which rewrites the history of the Earth and sets the stage for the second volume in the Earth Rise series, Honor Bound Honor Born. There is so much going on and so many surprises that I can't really say much more without venturing into spoiler territory, so I won't. The only shortcoming is that, like many self-published works, it stumbles over the humble apostrophe, and particularly its shock troops, the “its/it's” brigade.

During the author's twenty-year career at the Los Alamos National Laboratory, he worked on a variety of technologies including nuclear propulsion and applications of nuclear power to space exploration and development. Since the 1980s he has been an advocate of a “power rich” approach to space missions, in particular lunar and Mars bases. The lunar base described in the story implements this strategy, but it's not central to the story and doesn't intrude upon the adventure.

This book is presently available only in a Kindle edition, which is free for Kindle Unlimited subscribers.

Smyth, Henry D. Atomic Energy for Military Purposes. Stanford, CA: Stanford University Press, [1945] 1990. ISBN 978-0-8047-1722-9.
This document was released to the general public by the United States War Department on August 12th, 1945, just days after nuclear weapons had been dropped on Japan (Hiroshima on August 6th and Nagasaki on August 9th). The author, Prof. Henry D. Smyth of Princeton University, had worked on the Manhattan Project since early 1941, was involved in a variety of theoretical and practical aspects of the effort, and possessed security clearances which gave him access to all of the laboratories and production facilities involved in the project. In May, 1944, Smyth, who had suggested such a publication, was given the go-ahead by the Manhattan Project's Military Policy Committee to prepare an unclassified summary of the bomb project. This would have a dual purpose: to disclose to citizens and taxpayers what had been done on their behalf, and to provide scientists and engineers involved in the project a guide to what they could discuss openly in the postwar period: if it was in the “Smyth Report” (as it came to be called), it was public information, otherwise mum's the word.

The report is an introduction to the physics underlying nuclear fission and its use in both steady-state reactors and explosives, to the production of fissile material (both the separation of fissile Uranium-235 from the much more abundant Uranium-238 and the production of Plutonium-239 in nuclear reactors), and to the administrative history and structure of the project. Viewed as a historical document, the report is as interesting in what it left out as in what was disclosed. Essentially none of the key details discovered and developed by the Manhattan Project which might be of use to aspiring bomb makers appear here. The key pieces of information which were not known to interested physicists in 1940, before the curtain of secrecy descended upon anything related to nuclear fission, were inherently disclosed by the very fact that a fission bomb had been built, detonated, and produced a very large explosive yield.

  • It was possible to achieve a fast fission reaction with substantial explosive yield.
  • It was possible to prepare a sufficient quantity of fissile material (uranium or plutonium) to build a bomb.
  • The critical mass required by a bomb was within the range which could be produced by a country with the industrial resources of the United States and small enough that it could be delivered by an aircraft.

None of these were known at the outset of the Manhattan Project (which is why it was such a gamble to undertake it), but after the first bombs were used, they were apparent to anybody who was interested, most definitely including the Soviet Union (who, unbeknownst to Smyth and the political and military leaders of the Manhattan Project, already had the blueprints for the Trinity bomb and extensive information on all aspects of the project from their spies.)

Things never disclosed in the Smyth Report include the critical masses of uranium and plutonium, the problem of contamination of reactor-produced plutonium with the Plutonium-240 isotope and the consequent impossibility of using a gun-type design with plutonium, the technique of implosion and the technologies required to achieve it such as explosive lenses and pulsed power detonators (indeed, the word “implosion” appears nowhere in the document), and the chemical processes used to separate plutonium from uranium and fission products irradiated in a production reactor. In many places, it is explicitly said that military security prevents discussion of aspects of the project, but in others nasty surprises which tremendously complicated the effort are simply not mentioned—left for others wishing to follow in its path to discover for themselves.

Reading the first part of the report, you get the sense that it had not yet been decided whether to disclose the existence or scale of the Los Alamos operation. Only toward the end of the work is Los Alamos named and the facilities and tasks undertaken there described. The bulk of the report was clearly written before the Trinity test of the plutonium bomb on July 16, 1945. It is described in an appendix which reproduces verbatim the War Department press release describing the test, which was only issued after the bombs were used on Japan.

This document is of historical interest only. If you're interested in the history of the Manhattan Project and the design of the first fission bombs, more recent works such as Richard Rhodes' The Making of the Atomic Bomb are much better sources. For those aware of the scope and details of the wartime bomb project, the Smyth report is an interesting look at what those responsible for it felt comfortable disclosing and what they wished to continue to keep secret. The foreword by General Leslie R. Groves reminds readers that “Persons disclosing or securing additional information by any means whatsoever without authorization are subject to severe penalties under the Espionage Act.”

I read a Kindle edition from another publisher which is much less expensive than the Stanford paperback but contains a substantial number of typographical errors probably introduced by scanning a paper source document with inadequate subsequent copy editing.

 Permalink

December 2019

Klemperer, Victor. I Will Bear Witness. Vol. 2. New York: Modern Library, [1942–1945, 1995, 1999] 2001. ISBN 978-0-375-75697-9.
This is the second volume in Victor Klemperer's diaries of life as a Jew in Nazi Germany. Volume 1 (February 2009) covers the years from 1933 through 1941, in which the Nazis seized and consolidated their power, began to increasingly persecute the Jewish population, and rearm in preparation for their military conquests which began with the invasion of Poland in September 1939.

I described that book as “simultaneously tedious, depressing, and profoundly enlightening”. The author (a cousin of the conductor Otto Klemperer) was a respected professor of Romance languages and literature at the Technical University of Dresden when Hitler came to power in 1933. Although the son of a Reform rabbi, Klemperer had been baptised in a Christian church and considered himself a protestant Christian and entirely German. He volunteered for the German army in World War I and served at the front in the artillery and later, after recovering from a serious illness, in the army book censorship office on the Eastern front. As a fully assimilated German, he opposed all appeals to racial identity politics, Zionist as well as Nazi.

Despite his conversion to protestantism, military service to Germany, exalted rank as a professor, and decades of marriage to a woman deemed “Aryan” under the racial laws promulgated by the Nazis, Klemperer was considered a “full-blooded Jew” and was subject to ever-escalating harassment, persecution, humiliation, and expropriation as the Nazis tightened their grip on Germany. As civil society spiralled toward barbarism, Klemperer lost his job, his car, his telephone, his house, his freedom of movement, the right to shop in “Aryan stores”, access to public and lending libraries, and even the typewriter on which he continued to write in the hope of maintaining his sanity. His world shrank from that of a cosmopolitan professor fluent in many European languages to a single “Jews' house” in Dresden, shared with other once-prosperous families similarly evicted from their homes.

As 1942 begins, it is apparent to many in Germany, even Jews deprived of the “privilege” of reading newspapers and listening to the radio, not to mention foreign broadcasts, that the momentum of German conquest in the East has stalled and that the Soviet winter counterattack has begun to push the ill-equipped and ill-supplied German troops back from the lines they held in the fall of 1941. This is reported with euphemisms such as “shortening our line”, but it is obvious to everybody that the Soviets, not long ago reported breathlessly as “annihilated”, are nothing of the sort and that the Nazi hope of a quick victory in the East, like the fall of France in 1940, is not in the cards.

In Dresden, where Klemperer and his wife Eva remained after being forced out of their house (to which, in formalism-obsessed Germany, he retained title and responsibility for maintenance), Jews were subjected to a never-ending ratchet of abuse, oppression, and terror. Klemperer was forced to wear the yellow star (concealing it meant immediate arrest and likely “deportation” to the concentration camps in the East) and was randomly abused by strangers on the street (but would get smiles and quiet words of support from others), with each event shaking or bolstering his confidence in those who, before Hitler, he considered his “fellow Germans”.

He is prohibited from riding the tram, and must walk long distances, avoiding crowded streets where the risk of abuse from passers-by was greater. Another blow falls when Jews are forbidden to use the public library. With his typewriter seized long ago, he can only pursue his profession with pen, ink, and whatever books he can exchange with other Jews, including those left behind by those “deported”. As ban follows ban, even the simplest things such as getting shoes repaired, obtaining coal to heat the house, doing laundry, and securing food to eat become major challenges. Jews are subject to random “house searches” by the Gestapo, in which the discovery of something like his diaries might mean immediate arrest—he arranges to store the work with an “Aryan” friend of Eva, who deposits pages as they are completed. The house searches in many cases amount to pure shakedowns, where rationed and difficult-to-obtain goods such as butter, sugar, coffee, and tobacco, even if purchased with the proper coupons, are simply stolen by the Gestapo goons.

By this time every Jew knows individuals and families who have been “deported”, and the threat of joining them is ever present. Nobody seems to know precisely what is going on in those camps in the East (whose names are known: Auschwitz, Dachau, Theresienstadt, etc.) but what is obvious is that nobody sent there has ever been seen again. Sometimes relatives receive a letter saying the deportee died of disease in the camp, which seemed plausible, while others get notices their loved one was “killed while trying to escape”, which was beyond belief in the case of elderly prisoners who had difficulty walking. In any case, being “sent East” was considered equivalent to a death sentence which, for most, it was. As a war veteran and married to an “Aryan”, Klemperer was more protected than most Jews in Germany, but there was always the risk that the slightest infraction might condemn him to the camps. He knew many others who had been deported shortly after the death of their Aryan wives.

As the war in the East grinds on, it becomes increasingly clear that Germany is losing. The back-and-forth campaign in North Africa was first to show cracks in the Nazi aura of invincibility, but after the disaster at Stalingrad in the winter of 1942–1943, it is obvious the situation is dire. Goebbels proclaims “total war”, and all Germans begin to feel the privation brought on by the war. The topic on everybody's lips in whispered, covert conversations is “How long can it go on?” With each reverse there are hopes that perhaps a military coup will depose the Nazis and seek peace with the Allies.

For Klemperer, such grand matters of state and history are of relatively little concern. Much more urgent are obtaining the necessities of life which, as the economy deteriorates and oppression of the Jews increases, often amount to coal to stay warm and potatoes to eat, hauled long distances by manual labour. Klemperer, like all able-bodied Jews (the definition of which is flexible: he suffers from heart disease and often has difficulty walking long distances or climbing stairs, and has vision problems as well) is assigned “war work”, which in his case amounts to menial labour tending machines producing stationery and envelopes in a paper factory. Indeed, what appear in retrospect as the pivotal moments of the war in Europe: the battles of Stalingrad and Kursk, Axis defeat and evacuation of North Africa, the fall of Mussolini and Italy's leaving the Axis, the Allied D-day landings in Normandy, the assassination plot against Hitler, and more almost seem to occur off-stage here, with news filtering in bit by bit after the fact and individuals trying to piece it together and make sense of it all.

One event which is not off stage is the bombing of Dresden between February 13 and 15, 1945. The Klemperers were living at the time in the Jews' house they shared with several other families, which was located some distance from the city centre. There was massive damage in the area, but it was outside the firestorm which consumed the main targets. Victor and Eva became separated in the chaos, but were reunited near the end of the attack. Given the devastation and collapse of infrastructure, Klemperer decided to bet his life on the hope that the attack had at least temporarily put the Gestapo out of commission: he removed the yellow star, discarded all identity documents marking him as a Jew, and joined the mass of refugees, many also without papers, fleeing the ruins of Dresden. He and Eva made their way on what remained of the transportation system toward Bavaria and eastern Germany, where they had friends who might accommodate them, at least temporarily. Despite some close calls, the ruse worked, and they survived the end of the war, the fall of the Nazi regime, and the arrival of United States occupation troops.

After a period in which he discovered that the American occupiers, while meaning well, were completely overwhelmed trying to meet the needs of the populace amid the ruins, the Klemperers decided to make it on their own back to Dresden, which was in the Soviet zone of occupation, where they hoped their house still stood and would be restored to them as their property. The book concludes with a description of this journey across ruined Germany and final arrival at the house they occupied before the Nazis came to power.

After the war, Victor Klemperer was appointed a professor at the University of Leipzig and resumed his academic career. As political life resumed in what was then the Soviet sector and later East Germany, he joined the Socialist Unity Party of Germany, usually referred to in English as the East German Communist Party, which was under the thumb of Moscow. Subsequently, he became a cultural ambassador of sorts for East Germany. He seems to have been a loyal communist, although in his later diaries he expressed frustration at the impotence of the “parliament” in which he was a delegate for eight years. Not to be unkind to somebody who survived as much oppression and adversity as he did, but he didn't seem to have much of a problem with a totalitarian, one-party, militaristic, intrusive-surveillance police state as long as it wasn't directly persecuting him.

The author was a prolific diarist who wrote thousands of pages from the early 1900s throughout his long life. The original 1995 German publication of the 1933–1945 diaries as Ich will Zeugnis ablegen bis zum letzten was a substantial abridgement of the original document and even so ran to almost 1700 pages. This English translation further abridges the diaries and still often seems repetitive. End notes provide historical context, identify the many people who figure in the diary, and translate the foreign phrases the author liberally sprinkles throughout the text.

 Permalink

Anonymous Conservative [Michael Trust]. The Evolutionary Psychology Behind Politics. Macclenny, FL: Federalist Publications, [2012, 2014] 2017. ISBN 978-0-9829479-3-7.
One of the puzzles noted by observers of the contemporary political and cultural scene is the division of the population into two factions (called, in the sloppy terminology of the United States, “liberal” and “conservative”), and the fact that if you pick a member of either faction and observe his or her position on one of the divisive issues of the time, you can, with a high probability of accuracy, predict their preferences on all of a long list of other issues which do not, on the face of it, seem to have very much to do with one another. For example, here is a list of present-day hot-button issues, presented in no particular order.

  1. Health care, socialised medicine
  2. Climate change, renewable energy
  3. School choice
  4. Gun control
  5. Higher education subsidies, debt relief
  6. Free speech (hate speech laws, Internet censorship)
  7. Deficit spending, debt, and entitlement reform
  8. Immigration
  9. Tax policy, redistribution
  10. Abortion
  11. Foreign interventions, military spending

What a motley collection of topics! About the only thing they have in common is that the omnipresent administrative super-state has become involved in them in one way or another, and therefore partisans of policies affecting them view it important to influence the state's action in their regard. And yet, pick any one, tell me what policies you favour, and I'll bet I can guess at where you come down on at least eight of the other ten. What's going on?

Might there be some deeper, common thread or cause which explains this otherwise curious clustering of opinions? Maybe there's something rooted in biology, possibly even heritable, which predisposes people to choose the same option on disparate questions? Let's take a brief excursion into ecological modelling and see if there's something of interest there.

As with all modelling, we start with a simplified, almost cartoon abstraction of the gnarly complexity of the real world. Consider a closed territory (say, an island) with abundant edible vegetation and no animals. Now introduce a species, such as rabbits, which can eat the vegetation and turn it into more rabbits. We start with a small number, P, of rabbits. Now, once they get busy with bunny business, the population will expand at a rate r which is essentially constant over a large population. If r is greater than zero (which for rabbits it certainly will be, with litter sizes between 4 and 10 depending on the breed, and gestation time around a month) the population will increase. Since the rate of increase is constant and the total increase is proportional to the size of the existing population, this growth will be exponential. Ask any Australian.

Now, what will eventually happen? Will the island disappear under a towering pile of rabbits inexorably climbing to the top of the atmosphere? No—eventually the number of rabbits will increase to the point where they are eating all the vegetation the territory can produce. This number, K, is called the “carrying capacity” of the environment, and it is an absolute number for a given species and environment. This can be expressed as a differential equation called the Verhulst model, as follows:

\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)

It's a maxim among popular science writers that every equation you include cuts your readership by a factor of two, so among the hardy half who remain, let's see how this works. It's really very simple (and indeed, far simpler than actual population dynamics in a real environment). The left side, “dP/dt” simply means “the rate of growth of the population P with respect to time, t”. On the right hand side, “rP” accounts for the increase (or decrease, if r is less than 0) in population, proportional to the current population. The population is limited by the carrying capacity of the habitat, K, which is modelled by the factor “(1 − P/K)”. Now think about how this works: when the population is very small, P/K will be close to zero and, subtracted from one, will yield a number very close to one. This, then, multiplied by the increase due to rP will have little effect and the growth will be largely unconstrained. As the population P grows and begins to approach K, however, P/K will approach unity and the factor will fall to zero, meaning that growth has completely stopped due to the population reaching the carrying capacity of the environment—it simply doesn't produce enough vegetation to feed any more rabbits. If the rabbit population overshoots, this factor will go negative and there will be a die-off which eventually brings the population P below the carrying capacity K. (Sorry if this seems tedious; one of the great things about learning even a very little about differential equations is that all of this is apparent at a glance from the equation once you get over the speed bump of understanding the notation and algebra involved.)
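
For readers who would rather watch the equation run than stare at it, here is a minimal numerical sketch in Python (my own illustration, not from the book; the values of r, K, and the starting population are arbitrary) which integrates the Verhulst model with crude Euler steps:

    # Minimal sketch of the Verhulst (logistic) model, integrated with
    # simple Euler steps.  All parameter values are arbitrary, chosen
    # only to illustrate the behaviour described above.
    r = 0.5        # per-capita growth rate
    K = 1000.0     # carrying capacity of the environment
    P = 10.0       # initial population
    dt = 0.1       # time step

    for step in range(601):
        if step % 100 == 0:
            print(f"t = {step * dt:5.1f}   P = {P:7.1f}")
        P += r * P * (1.0 - P / K) * dt
    # Output: near-exponential growth while P is small compared to K,
    # then growth tapers off and P levels out at the carrying capacity.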

This is grossly over-simplified. In fact, real populations are prone to oscillations and even chaotic dynamics, but we don't need to get into any of that for what follows, so I won't.

Let's complicate things in our bunny paradise by introducing a population of wolves. The wolves can't eat the vegetation, since their digestive systems cannot extract nutrients from it, so their only source of food is the rabbits. Each wolf eats many rabbits every year, so a large rabbit population is required to support a modest number of wolves. Now if we go back and look at the equation for wolves, K represents the number of wolves the rabbit population can sustain, in the steady state, where the number of rabbits eaten by the wolves just balances the rabbits' rate of reproduction. This will often result in a rabbit population smaller than the carrying capacity of the environment, since their population is now constrained by wolf predation and not K.
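
The book doesn't develop the coupled rabbit-wolf dynamics mathematically, but a standard way to sketch them is a predator-prey system of the Lotka-Volterra type with logistic growth for the prey. In this toy model (all coefficients invented for illustration; nothing here comes from the book), the rabbit population settles well below its carrying capacity, exactly the situation described above:

    # Toy predator-prey sketch: rabbits R grow logistically toward K
    # but are eaten by wolves W; wolves starve without rabbits.
    # All coefficients are invented for illustration.
    R, W = 800.0, 20.0
    r, K = 0.5, 1000.0    # rabbit growth rate and carrying capacity
    a = 0.002             # predation rate per wolf
    b = 0.0004            # conversion of eaten rabbits into new wolves
    d = 0.3               # wolf death rate in the absence of prey
    dt = 0.05

    for step in range(4001):
        if step % 1000 == 0:
            print(f"t = {step * dt:6.1f}   rabbits = {R:6.1f}   wolves = {W:5.1f}")
        dR = r * R * (1.0 - R / K) - a * R * W
        dW = b * R * W - d * W
        R += dR * dt
        W += dW * dt
    # The system settles with rabbits near d/b = 750, below the
    # carrying capacity K = 1000: predation, not vegetation, now
    # limits the rabbits, while the supply of rabbits limits the wolves.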

What happens as this (oversimplified) system cranks away, generation after generation, and Darwinian evolution kicks in? Evolution consists of two processes: variation, which is largely random, and selection, which is sensitively dependent upon the environment. The rabbits are unconstrained by K, the carrying capacity of their environment. If their numbers increase beyond a population P substantially smaller than K, the wolves will simply eat more of them and bring the population back down. The rabbit population, then, is not at all constrained by K, but rather by r: the rate at which they can produce new offspring. Population biologists call this an r-selected species: evolution will select for individuals who produce the largest number of progeny in the shortest time, and hence for a life cycle which minimises parental investment in offspring and against mating strategies, such as lifetime pair bonding, which would limit their numbers. Rabbits which produce fewer offspring will lose a larger fraction of them to predation (which affects all rabbits, essentially at random), and the genes which they carry will be selected out of the population. An r-selected population, sometimes referred to as r-strategists, will tend to be small, with short gestation time, high fertility (offspring per litter), rapid maturation to the point where offspring can reproduce, and broad distribution of offspring within the environment.

Wolves operate under an entirely different set of constraints. Their entire food supply is the rabbits, and since it takes a lot of rabbits to keep a wolf going, there will be fewer wolves than rabbits. What this means, going back to the Verhulst equation, is that the 1 − P/K factor will largely determine their population: the carrying capacity K of the environment supports a much smaller population of wolves than their food source, rabbits, and if their rate of population growth r were to increase, it would simply mean that more wolves would starve due to insufficient prey. This results in an entirely different set of selection criteria driving their evolution: the wolves are said to be K-selected or K-strategists. A successful wolf (defined by evolution theory as more likely to pass its genes on to successive generations) is not one which can produce more offspring (who would merely starve by hitting the K limit before reproducing), but rather highly optimised predators, able to efficiently exploit the limited supply of rabbits, and to pass their genes on to a small number of offspring, produced infrequently, which require substantial investment by their parents to train them to hunt and, in many cases, acquire social skills to act as part of a group that hunts together. These K-selected species tend to be larger, live longer, have fewer offspring, and have parents who spend much more effort raising them and training them to be successful predators, either individually or as part of a pack.

“K or r, r or K: once you've seen it, you can't look away.”

Just as our island of bunnies and wolves was over-simplified, the dichotomy of r- and K-selection is rarely precisely observed in nature (although rabbits and wolves are pretty close to the extremes, which is why I chose them). Many species fall somewhere in the middle and, more importantly, are able to shift their strategy on the fly, much faster than evolution by natural selection, based upon the availability of resources. These r/K shape-shifters react to their environment. When resources are abundant, they adopt an r-strategy, but as their numbers approach the carrying capacity of their environment, they shift to life cycles you'd expect from K-selection.

What about humans? At a first glance, humans would seem to be a quintessentially K-selected species. We are large, have long lifespans (about twice as long as we “should” based upon the number of heartbeats per lifetime of other mammals), usually only produce one child (and occasionally two) per gestation, with around a one year turn-around between children, and massive investment by parents in raising infants to the point of minimal autonomy and many additional years before they become fully functional adults. Humans are “knowledge workers”, and whether they are hunter-gatherers, farmers, or denizens of cubicles at The Company, live largely by their wits, which are a combination of the innate capability of their hypertrophied brains and what they've learned in their long apprenticeship through childhood. Humans are not just predators on what they eat, but also on one another. They fight, and they fight in bands, which means that they either develop the social skills to defend themselves and meet their needs by raiding other, less competent groups, or get selected out in the fullness of evolutionary time.

But humans are also highly adaptable. Since modern humans appeared some time between fifty and two hundred thousand years ago they have survived, prospered, proliferated, and spread into almost every habitable region of the Earth. They have been hunter-gatherers, farmers, warriors, city-builders, conquerors, explorers, colonisers, traders, inventors, industrialists, financiers, managers, and, in the Final Days of their species, WordPress site administrators.

In many species, the selection of a predominantly r or K strategy is a mix of genetics and switches that get set based upon experience in the environment. It is reasonable to expect that humans, with their large brains and ability to override inherited instinct, would be especially sensitive to signals directing them to one or the other strategy.

Now, finally, we get back to politics; this is, after all, a book about politics. I hope you've been thinking about it as we spent time on the island of bunnies and wolves, among the cruel realities of natural selection, and in the arcana of differential equations.

What does r-selection produce in a human population? Well, it might, say, be averse to competition and to all means of selection by measures of performance. It would favour the production of large numbers of offspring at an early age, through early onset of mating, promiscuity, and the raising of children by single mothers with minimal investment by them and little or none by the fathers (leaving the raising of children to the State). It would welcome other r-selected people into the community, and hence favour immigration from heavily r-selected populations. It would oppose any kind of selection based upon performance, whether by intelligence tests, academic records, physical fitness, or job performance. It would strive to create the ideal r environment of unlimited resources, where all are provided their basic needs without having to do anything but consume. It would oppose and be repelled by the K component of the population, seeking to marginalise it as toxic, privileged, or exploiters of the real people. It might even welcome sending the society's K warriors into conflict with those of adversaries, reducing their numbers in otherwise pointless foreign adventures.

And K-troop? Once a society in which they initially predominated creates sufficient wealth to support a burgeoning r population, they will find themselves outnumbered and outvoted, especially once the r wave removes the firebreaks put in place when K was king to guard against majoritarian rule by an urban underclass. The K population will continue to do what they do best: preserving the institutions and infrastructure which sustain life, defending the society in the military, building and running businesses, creating the basic science and technologies to cope with emerging problems and expand the human potential, and governing an increasingly complex society made up, with every generation, of a population, and voters, who are fundamentally unlike them.

Note that the r/K model completely explains the “crunchy to soggy” evolution of societies which has been remarked upon since antiquity. Human societies always start out, as our genetic heritage predisposes us to, K-selected. We work to better our condition and turn our large brains to problem-solving and, before long, the privation our ancestors endured turns into a pretty good life and then, eventually, abundance. But abundance is what selects for the r strategy. Those who would not have reproduced, or would not have had as many children, in the K days of yore now have babies-a-poppin' as in the introduction to Idiocracy, and before long, not waiting for genetics to do its inexorable work but purely through a shift in incentives, the rs outvote the Ks and the Ks begin to count the days until their society runs out of the wealth which can be plundered from them.

But recall that equation. In our simple bunnies and wolves model, the resources of the island were static. Nothing the wolves could do would increase K and permit a larger rabbit and wolf population. This isn't the case for humans. K humans dramatically increase the carrying capacity of their environment by inventing new technologies such as agriculture, selective breeding of plants and animals, discovering and exploiting new energy sources such as firewood, coal, and petroleum, and exploring and settling new territories and environments which may require their discoveries to render habitable. The rs don't do these things. And as the rs predominate and take control, this momentum stalls and begins to recede. Then the hard times ensue. As Heinlein said many years ago, “This is known as bad luck.”

And then the Gods of the Copybook Headings will, with terror and slaughter, return. And K-selection will, with them, again assert itself.

Is this a complete model, a Rosetta stone for human behaviour? I think not: there are a number of things it doesn't explain, and the shifts in behaviour based upon incentives are much too fast to account for by genetics. Still, when you look at those eleven issues I listed so many words ago through the r/K perspective, you can almost immediately see how each strategy maps onto one side or the other of each one, and how they are consistent with the policy preferences of “liberals” and “conservatives”. There is also some rather fuzzy evidence for genetic differences (in particular the DRD4-7R allele of the dopamine receptor and the size of the right amygdala) which appear to correlate with ideology.

Still, if you're on one side of the ideological divide and confronted with somebody on the other and try to argue from facts and logical inference, you may end up throwing up your hands (if not your breakfast) and saying, “They just don't get it!” Perhaps they don't. Perhaps they can't. Perhaps there's a difference between you and them as great as that between rabbits and wolves, which can't be worked out by predator and prey sitting down and voting on what to have for dinner. This may not be a hopeful view of the political prospect in the near future, but hope is not a strategy and to survive and prosper requires accepting reality as it is and acting accordingly.

 Permalink

Carroll, Michael. Europa's Lost Expedition. Cham, Switzerland: Springer International, 2017. ISBN 978-3-319-43158-1.
In the epoch in which this story is set, the expansion of the human presence into the solar system was well advanced, with large settlements on the Moon and Mars, exploitation of the abundant resources of the main asteroid belt, and research outposts in exotic environments such as Jupiter's enigmatic moon Europa. Then civilisation on Earth was consumed, as so often seems to happen when too many primates who evolved to live in small bands are packed into a limited space, by a global conflict which the survivors, a decade later, refer to simply as “The War”, its horrors and costs having dwarfed all previous human conflicts.

Now, with The War over and recovery underway, scientific work is resuming, and an international expedition has been launched to explore the southern hemisphere of Europa, where the icy crust of the moon is sufficiently thin to provide access to the liquid water ocean beneath, and where the complex orbital dynamics of Jupiter's moons are expected to trigger a once-in-a-decade eruption of geysers, cracks in the ice allowing the ocean to spew into space and providing an opportunity to sample it “for free”.

Europa is not a hospitable environment for humans. Orbiting deep within Jupiter's magnetosphere, it is in the heart of the giant planet's radiation belts, which are sufficiently powerful to kill an unprotected human within minutes. But the radiation is not uniform and humans are clever. The main base on Europa, Taliesen, is located on the face of the moon that points away from Jupiter, and in the leading hemisphere where radiation is least intense. On Europa, abundant electrical power is available simply by laying out cables along the surface, in which Jupiter's magnetic field induces powerful currents as they cut it. This power is used to erect a magnetic shield around the base which protects it from the worst, just as Earth's magnetic field shields life on its surface. Brief ventures into the “hot zone” are made possible by shielded rovers and advanced anti-radiation suits.
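
As a rough plausibility check on this power scheme (my own back-of-envelope figures, not the author's), one can estimate the motional EMF induced in a surface cable by Jupiter's co-rotating magnetosphere sweeping past Europa:

    # Back-of-envelope estimate of the motional EMF available to a
    # cable on Europa's surface.  All figures are approximate published
    # values; none come from the novel.
    import math

    B = 420e-9                  # Jupiter's magnetic field at Europa, tesla
    r_orbit = 670_900e3         # Europa's orbital radius, metres
    T_jupiter = 9.925 * 3600    # Jupiter's rotation period, seconds
    v_europa = 13.74e3          # Europa's orbital speed, m/s

    v_corotate = 2 * math.pi * r_orbit / T_jupiter  # plasma co-rotation speed
    v_rel = v_corotate - v_europa                   # plasma sweeping past Europa

    print(f"co-rotation speed = {v_corotate / 1e3:.0f} km/s")
    print(f"EMF = {v_rel * B * 1e3:.0f} volts per kilometre of cable")
    # Around 44 V/km, so a 100 km cable sees kilovolts of open-circuit
    # EMF; how much power can actually be extracted depends on closing
    # the circuit through Europa's tenuous plasma environment.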

The present expedition will not be the first to attempt exploration of the southern hemisphere. Before the War, an expedition with similar objectives ended in disaster, with the loss of all members under circumstances which remain deeply mysterious, and of which the remaining records, incomplete and garbled by radiation, provide few clues as to what happened to them. Hadley Nobile, expedition leader, is not so much concerned with the past as making the most of this rare opportunity. Her deputy and long-term collaborator, Gibson van Clive, however, is fascinated by the mystery and spends hours trying to recover and piece together the fragmentary records from the lost expedition and research the backgrounds of its members and the physical evidence, some of which makes no sense at all. The other members of the new expedition are known from their scientific reputations, but not personally to the leaders. Many people have blanks in their curricula vitae during the War years, and those who lived through that time are rarely inclined to probe too deeply.

Once the party arrive at Taliesen and begin preparations for their trip to the south, a series of “accidents” befall some members, who are found dead in circumstances which seem implausible based upon their experience. Down to the bare minimum team, with a volunteer replacement from the base's complement, Hadley decides to press on—the geysers wait for no one.

Thus begins what is basically a murder mystery, explicitly patterned on Agatha Christie's And Then There Were None, layered upon the enigmas of the lost expedition, the backgrounds of those in the current team, and the biosphere which may thrive in the ocean beneath the ice, driven by the tides raised by Jupiter and the other moons and fed by undersea plumes similar to those where some suspect life began on Earth.

As a mystery, there is little more that can be said without crossing the line into plot spoilers, so I will refrain from further description. As befits a tale in the Christie tradition, there are many twists and turns, and few things are as they seem on the surface.

As in his previous novel, On the Shores of Titan's Farthest Sea (December 2016), the author, a distinguished scientific illustrator and popular science writer, goes to great lengths to base the exotic locale in which the story is set upon the best presently-available scientific knowledge. An appendix, “The Science Behind the Story”, provides details and source citations for the setting of the story and the technologies which figure in it.

While the science and technology are plausible extrapolations from what is presently known, the characters sometimes seem to behave more in the interests of advancing the plot than as real people would in such circumstances. If you were the leader or part of an expedition several members of which had died under suspicious circumstances at the base camp, would you really be inclined to depart for a remote field site with spotty communications along with all of the prime suspects?

 Permalink

Dutton, Edward. How to Judge People by What they Look Like. Oulu, Finland: Thomas Edward Press, 2018. ISBN 978-1-9770-6797-5.
In The Picture of Dorian Gray, Oscar Wilde wrote,

People say sometimes that Beauty is only superficial. That may be so. But at least it is not as superficial as Thought. To me, Beauty is the wonder of wonders. It is only shallow people who do not judge by appearances.

From childhood, however, we have been exhorted not to judge people by their appearances. In Skin in the Game (August 2019), Nassim Nicholas Taleb advises choosing the surgeon who “doesn't look like a surgeon” because their success is more likely due to competence than first impressions.

Despite this, physiognomy, assessing a person's characteristics from their appearance, is as natural to humans as breathing and has been an instinctual part of human behaviour for as long as our species has existed. Thinkers and writers from Aristotle through the great novelists of the 19th century believed that an individual's character was reflected in, and could be inferred from, their appearance, and crafted and described their characters accordingly. Jules Verne would often spend a paragraph describing the appearance of his characters and what that implied for their behaviour.

Is physiognomy all nonsense, a pseudoscience like phrenology, which purported to predict mental characteristics by measuring bumps on the skull, claimed to indicate the development of “cerebral organs” with specific functions? Or is there something to it after all? Humans are a social species and, as such, have evolved to be exquisitely sensitive to signals sent by others of their kind, conveyed through subtle means such as a tone of voice, facial expression, or posture. Might we also be able to perceive and interpret messages which indicate properties such as honesty, intelligence, courage, impulsiveness, criminality, diligence, and more? Such an ability, if it existed, would be advantageous to individuals in interacting with others and, by contributing to success in reproducing and raising offspring, would be selected for by evolution.

In this short book (or long essay—the text is just 85 pages), the author examines the evidence and concludes that there are legitimate correlations between appearance and behaviour, and that human instincts are picking up genuine signals which are useful in interacting with others. This seems perfectly plausible: the development of the human body and face are controlled by the genetic inheritance of the individual and modulated through the effects of hormones, and it is well-established that both genetics and hormones are correlated with a variety of behavioural traits.

Let's consider a reasonably straightforward example. A study published in 2008 found a statistically significant correlation between the width of the face (the cheekbone-to-cheekbone distance compared to the distance from brow to upper lip) and aggressiveness (measured by the number of penalty minutes received) among a sample of 90 ice hockey players. Now, a wide face is also known to correlate with a high testosterone level in males, and testosterone correlates with aggressiveness and selfishness. So it shouldn't be surprising to find the wide-face morphology correlated with the consequences of high-testosterone behaviour.
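
For readers curious what a “statistically significant correlation” amounts to operationally, here is a minimal sketch of the test involved, run on synthetic data (I do not have the study's actual measurements) for 90 hypothetical players:

    # Sketch of the correlation test behind such a study, using
    # synthetic stand-in data: facial width-to-height ratios (fWHR)
    # and penalty minutes for 90 hypothetical players.
    import random
    from scipy.stats import pearsonr

    random.seed(1)
    fwhr = [random.gauss(1.9, 0.12) for _ in range(90)]
    # Penalty minutes built with a modest positive dependence on fWHR
    # plus noise, mimicking the kind of effect the study reports.
    penalties = [40.0 * (w - 1.9) + random.gauss(30.0, 10.0) for w in fwhr]

    r, p = pearsonr(fwhr, penalties)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")
    # "Statistically significant" means the p value falls below a
    # conventional threshold such as 0.05: a correlation this strong
    # would rarely appear among 90 players by chance alone.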

In fact, testosterone and other hormone levels play a substantial part in many of the correlations between appearance and behaviour discussed by the author. Many people believe they can identify, with reasonable reliability, homosexuals just from their appearance: the term “gaydar” has come into use for this ability. In 2017, researchers trained an artificial intelligence program with a set of photographs of individuals with known sexual orientations and then tested the program on a set of more than 35,000 images. The program correctly identified the sexual orientation of men 81% of the time and of women 74% of the time.

Of course, appearance goes well beyond factors which are inherited or determined by hormones. Tattoos, body piercings, and other irreversible modifications of appearance correlate with high time preference (valuing the present over the future), which correlates with low intelligence and the other characteristics of an r-selected lifestyle. Choices of clothing indicate an individual's self-identification, although fashion trends change rapidly and differ from region to region, so misinterpretation is a risk.

The author surveys a wide variety of characteristics including fat/thin body type, musculature, skin and hair, height, face shape, breast size in women, baldness and beards in men, eye spacing, tattoos, hair colour, facial symmetry, handedness, and finger length ratio, and presents citations to research, most published recently, supporting correlations between these aspects of appearance and behaviour. He cautions that while people may be good at sensing and interpreting these subtle signals among members of their own race, there are substantial and consistent differences between the races, so no such inferences can be drawn across racial lines, nor are members of one race generally able to read the signals of members of another.

One gets the sense (although less strongly) that this is another field where advances in genetics and data science are piling up a mass of evidence which will roll over the stubborn defenders of the “blank slate” like a truth tsunami. And again, this is an area where people's instincts, honed by millennia of evolution, are still relied upon despite the scorn of “experts”. (So afraid were the authors of the Wikipedia page on physiognomy [retrieved 2019-12-16] of the “computer gaydar” paper mentioned above that they declined to cite the peer reviewed paper in the Journal of Personality and Social Psychology but instead linked to a BBC News piece which dismissed it as “dangerous” and “junk science”. Go on whistling, folks, as the wave draws near and begins to crest….)

Is the case for physiognomy definitively made? I think not, and as I suspect the author would agree, there are many aspects of appearance and a multitude of personality traits, some of which may be significantly correlated and others not at all. Still, there is evidence for some linkage, and it appears to be growing as more work in the area (which is perilous to the careers of those who dare investigate it) accumulates. The scientific evidence, summarised here, seems to be, as so often happens, confirming the instincts honed over hundreds of generations by the inexorable process of evolution: you can form some conclusions just by observing people, and this information is useful in the competition which is life on Earth. Meanwhile, when choosing programmers for a project team, the one who shows up whose eyebrows almost meet their hairline, sporting a plastic baseball cap worn backward with the adjustment strap on the smallest peg, with a scraggly soybeard, pierced nose, and visible tattoos isn't likely to be my pick. She's probably a WordPress developer.

 Permalink

Walton, David. Three Laws Lethal. Jersey City, NJ: Pyr, 2019. ISBN 978-1-63388-560-8.
In the near future, autonomous vehicles, “autocars”, are available from a number of major automobile manufacturers. The self-driving capability, while not infallible, has been approved by regulatory authorities after having demonstrated that it is, on average, safer than the population of human drivers on the road and not subject to human frailties such as driving under the influence of alcohol or drugs, while tired, or distracted by others in the car or electronic gadgets. While self-driving remains a luxury feature with which a minority of cars on the road are equipped, regulators are confident that as it spreads more widely and improves over time, the highway accident rate will decline.

But placing an algorithm and sensors in command of a vehicle with a mass of more than a tonne hurtling down the road at 100 km per hour or faster is not just a formidable technical problem; it is one with serious and unavoidable moral implications. These come into stark focus when, in an incident on a highway near Seattle, an autocar swerves to avoid a tree falling onto the highway and hits and kills a motorcyclist in an adjacent lane, of whose presence the car's sensors must have been aware. The car appears to have made a choice, valuing the lives of its passengers, a mother and her two children, over that of the motorcyclist. What really happened, and how the car decided what to do in that split second, is opaque, because the software controlling it was, like all such software, proprietary and closed to independent inspection and audit by third parties. It's one thing to acknowledge that self-driving vehicles are safer, as a whole, than those with humans behind the wheel, but entirely another to cede to them the moral agency of life and death on the highway. Should an autocar value the lives of its passengers over those of others? What if there were a sole passenger in the car and two on the motorcycle? And who is liable for the death of the motorcyclist: the auto manufacturer, the developers of the software, the owner of the car, the driver who switched it into automatic mode, or the regulators who approved its use on public roads? The case was headed for court, and all would be watching the precedents it might establish.

Tyler Daniels and Brandon Kincannon, graduate students in the computer science department of the University of Pennsylvania, were convinced they could do better. The key was going beyond individual vehicles which tried to operate autonomously based upon what their own sensors could glean from their immediate environment, toward an architecture in which vehicles communicated with one another and coordinated their activities. This would allow sharing information over a wider area and avoiding accidents which result from individual vehicles acting without knowledge of the actions of others. Further, they wanted to re-architect individual ground transportation from a model of individually-owned and operated vehicles to transportation as a service, where customers would summon an autocar on demand with their smartphone, the vehicle network dispatching the closest free car to their location. This would dramatically change the economics of personal transportation. The typical private car spends twenty-two out of twenty-four hours parked, taking up a parking space and depreciating as it sits idle. The transportation service autocar would be in constant service (except for downtime for maintenance, refuelling, and times of reduced demand), generating revenue for its operator. An angel investor believes their story and, most importantly, believes in them sufficiently to write a check for the initial demonstration phase of their project, and they set to work.
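
The dispatch model at the heart of the service is simple to state. A minimal sketch of the idea (my own illustration, not the book's software) might look like this:

    # Minimal sketch of "send the closest free car to the customer".
    from math import hypot

    def dispatch(cars, customer):
        """Return the closest free car to the customer's (x, y) location."""
        free = [c for c in cars if c["free"]]
        if not free:
            return None     # no car available; the customer must wait
        return min(free, key=lambda c: hypot(c["x"] - customer[0],
                                             c["y"] - customer[1]))

    fleet = [{"id": 1, "x": 0.0, "y": 2.0, "free": True},
             {"id": 2, "x": 5.0, "y": 1.0, "free": False},
             {"id": 3, "x": 1.0, "y": 1.0, "free": True}]
    print(dispatch(fleet, (4.0, 1.0))["id"])    # prints 3

A real fleet would weigh estimated time of arrival, traffic, and anticipated demand rather than raw distance, which is exactly where the choice of objective function, discussed below, comes in.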

Their team consists of Tyler and Brandon, plus Abby and Naomi Sumner, sisters who differed in almost every way: Abby outgoing and vivacious, with an instinct for public relations and marketing, and Naomi the super-nerd, verging on being “on the spectrum”. The big day of the public roll-out of the technology arrives, and ends in disaster, killing Abby in what was supposed to be a demonstration of the system's inherent safety. The disaster puts an end to the venture and the surviving principals go their separate ways. Tyler signs on as a consultant and expert witness for the lawyers bringing the suit on behalf of the motorcyclist killed in Seattle, using the exposure to advocate for open source software being a requirement for autonomous vehicles. Brandon uses money inherited after the death of his father to launch a new venture, Black Knight, offering transportation as a service initially in the New York area and then expanding to other cities. Naomi, whose university experiment in genetic software implemented as non-player characters (NPCs) in a virtual world was the foundation of the original venture's software, sees Black Knight as a way to preserve the world and beings she has created as they develop and require more and more computing resources. Characters in the virtual world support themselves and compete by driving Black Knight cars in the real world, and as generation follows generation and natural selection works its wonders, customers and competitors are amazed at how Black Knight vehicles anticipate the needs of their users and maintain an unequalled level of efficiency.

Tyler leverages his recognition from the trial into a new self-driving venture based on open source software called “Zoom”, which spreads across the U.S. west coast and eventually comes into competition with Black Knight in the east. Somehow, Zoom's algorithms, despite being open and having a large community contributing to their development, never seem able to equal the service provided by Black Knight, which is so secretive that even Brandon, the CEO, doesn't know how Naomi's software does it.

In approaching any kind of optimisation problem, such as scheduling a fleet of vehicles to anticipate and respond to real-time demand, a key question is choosing the “objective function”: how the performance of the system is evaluated based upon the stated goals of its designers. This is especially crucial when the optimisation is applied to a system connected to the real world. The parable of the “Clippy Apocalypse” illustrates the danger: an artificial intelligence put in charge of a paperclip factory and trained to maximise the production of paperclips escapes into the wild and eventually converts first its home planet, then the rest of the solar system, and eventually the entire visible universe into paperclips. The system worked as designed—but the objective function was poorly chosen.

Naomi's NPCs literally (or virtually) lived or died based upon their ability to provide transportation service to Black Knight's customers, and natural selection, running at the accelerated pace of the simulation they inhabited, relentlessly selected them with the objective of improving their service and expanding Black Knight's market. To the extent that, within their simulation, they perceived opposition to these goals, they would act to circumvent it—whatever it takes.
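
To make the mechanism concrete, here is a toy version of such a selection loop (entirely my own illustration; the novel never shows Naomi's code): each agent carries one heritable “strategy” parameter, an objective function scores it, and the fittest reproduce with mutation.

    # Toy evolutionary selection loop.  Agents have a single heritable
    # "strategy" number; the objective (fitness) function is a made-up
    # stand-in which peaks at a hypothetical optimum of 0.8.
    import random

    def fitness(strategy):
        return -(strategy - 0.8) ** 2

    population = [random.random() for _ in range(50)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                  # selection
        population = [s + random.gauss(0.0, 0.02)    # reproduction
                      for s in survivors             # with mutation
                      for _ in range(5)]
    print(f"evolved strategy = {max(population, key=fitness):.3f}")
    # The population converges on whatever the objective function
    # rewards, whether or not that is what its designers intended.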

This sets the stage for one of the more imaginative tales of how artificial general intelligence might arrive through the back door: not designed in a laboratory but emerging through the process of evolution in a complex system subjected to real-world constraints and able to operate in the real world. The moral dimensions of this go well beyond the trolley problem often cited in connection with autonomous vehicles, dealing with questions of whether artificial intelligences we create for our own purposes are tools, servants, or slaves, and what happens when their purposes diverge from those for which we created them.

This is a techno-thriller, with plenty of action in the conclusion of the story, but also a cerebral exploration of the moral questions which something as seemingly straightforward and beneficial as autonomous vehicles may pose in the future.

 Permalink

Taloni, John. The Compleat Martian Invasion. Seattle: Amazon Digital Services, 2016. ASIN B01HLTZ7MS.
A number of years have elapsed since the Martian Invasion chronicled by H.G. Wells in The War of the Worlds. The damage inflicted on the Earth was severe, and the protracted process of recovery, begun in the British Empire in the last years of Queen Victoria's reign, now continues under Queen Louise, Victoria's sixth child and eldest surviving heir after the catastrophe of the invasion. Just as Earth is beginning to return to normalcy, another crisis has emerged. John Bedford, who had retreated into an opium haze after the horrors of his last expedition, is summoned to Windsor Castle where Queen Louise shows him a photograph. “Those are puffs of gas on the Martian surface. The Martians are coming again, Mr. Bedford. And in far greater numbers.” Defeated the last time only due to their vulnerability to Earth's microbes, there is every reason to expect that this time the Martians will have taken precautions against that threat to their plans for conquest.

Earth's only hope to thwart the invasion before it reaches the surface and unleashes further devastation on its inhabitants is deploying weapons on platforms employing the anti-gravity material Cavorite, but the secret of manufacturing it rests with its creator, Cavor, who has been taken prisoner by the ant-like Selenites in the expedition from which Mr Bedford narrowly escaped, as chronicled in Mr Wells's The First Men in the Moon. Now, Bedford must embark on a perilous attempt to recover the Cavorite sphere lost at the end of his last adventure and then join an expedition to the Moon to rescue Cavor from the caves of the Selenites.

Meanwhile, on Barsoom (Mars), John Carter and Dejah Thoris find their beloved city of Helium threatened by the Khondanes, whose deadly tripods wreaked so much havoc on Earth not long ago and who are now turning their envious eyes back to the plunder that eluded them on the last attempt.

Queen Louise must assemble an international alliance, calling on all of her crowned relatives: Czar Nicholas, Kaiser Wilhelm, and even those troublesome republican Americans, plus all the resources they can summon—the inventions of the Serbian, Tesla; the research of Maria Skłodowska and her young Swiss assistant Albert, discovered toiling away in the patent office; the secrets recovered from Captain Nemo's island; and the mysterious interventions of the Time Traveller, who flickers in and out of existence at various moments, pursuing his own inscrutable agenda. As the conflict approaches and battle is joined, an interplanetary effort is required to save Earth from calamity.

As you might expect from this description, this is a rollicking good romp replete with references and tips of the hat to the classics of science fiction and their characters. What seems like a straightforward tale of battle and heroism takes a turn at the very end into the inspiring, with a glimpse of how different human history might have been.

At present, only a Kindle edition is available, which is free for Kindle Unlimited subscribers.

 Permalink

Page, Joseph T., II. Vandenberg Air Force Base. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-4671-3209-1.
Prior to World War II, the sleepy rural part of the southern California coast between Santa Barbara and San Luis Obispo was best known as the location where, in September 1923, despite a lighthouse having been in operation at Point Arguello since 1901, the U.S. Navy suffered its worst peacetime disaster, when seven destroyers, travelling at 20 knots, ran aground at Honda Point, resulting in the loss of all seven ships and the deaths of 23 crewmembers. In the 1930s, following additional wrecks in the area, a lifeboat station was established in conjunction with the lighthouse.

During World War II, the Army acquired 92,000 acres (372 km²) in the area for a training base which was called Camp Cooke, after a cavalry general who served in the Civil War, in wars with Indian tribes, and in the Mexican-American War. The camp was used for training Army troops in a variety of weapons and in tank maneuvers. After the end of the war, the base was closed and placed on inactive status, but was re-opened after the outbreak of war in Korea to train tank crews. It was once again mothballed in 1953, and remained inactive until 1957, when 64,000 acres were transferred to the U.S. Air Force to establish a missile base on the West Coast, initially called Cooke Air Force Base, intended to train missile crews and also serve as the U.S.'s first operational intercontinental ballistic missile (ICBM) site. On October 4th, 1958, the base was renamed Vandenberg Air Force Base in honour of the late General Hoyt Vandenberg, former Air Force Chief of Staff and Director of Central Intelligence.

On December 15, 1958, a Thor intermediate range ballistic missile was launched from the new base, the first of hundreds of launches which would follow and continue up to the present day. Starting in September 1959, three Atlas ICBMs armed with nuclear warheads were deployed on open launch pads at Vandenberg, the first U.S. intercontinental ballistic missiles to go on alert. The Atlas missiles remained part of the U.S. nuclear force until their retirement in May 1964.

With the advent of Earth satellites, Vandenberg became a key part of the U.S. military and civil space infrastructure. Launches from Cape Canaveral in Florida are restricted to a corridor directed eastward over the Atlantic Ocean. While this is fine for satellites bound for equatorial orbits, such as the geostationary orbits used by many communication satellites, a launch into polar orbit, preferred for military reconnaissance satellites and Earth resources satellites because it allows them to overfly and image locations anywhere on Earth, would result in the rockets used to launch them dropping spent stages on land, which would vex taxpayers to the north and hot-headed Latin neighbours to the south.

Vandenberg Air Force Base, however, situated on a point extending from the California coast, had nothing to the south but open ocean all the way to Antarctica. Launching southward, satellites could be placed into polar or Sun synchronous orbits without disturbing anybody but the fishes. Vandenberg thus became the prime launch site for U.S. reconnaissance satellites which, in the early days when satellites were short-lived and returned film to the Earth, required a large number of launches. The Corona spy satellites alone accounted for 144 launches from Vandenberg between 1959 and 1972.

With plans in the 1970s to replace all U.S. expendable launchers with the Space Shuttle, facilities were built at Vandenberg (Space Launch Complex 6) to process and launch the Shuttle, using a very different architecture than was employed in Florida. The Shuttle stack would be assembled on the launch pad, protected by a movable building that would retract prior to launch. The launch control centre was located just 365 metres from the launch pad (as opposed to 4.8 km away at the Kennedy Space Center in Florida), so the plan in case of a catastrophic launch accident on the pad essentially seemed to be “hope that never happens”. In any case, after more than US$4 billion had been spent on the facilities, plans for Shuttle launches from Vandenberg were abandoned in the wake of the Challenger disaster in 1986, and the facility was mothballed until being adapted, years later, to launch other rockets.

This book, part of the “Images of America” series, is a collection of photographs (all black and white) covering all aspects of the history of the site from before World War II to the present day. Introductory text for each chapter and detailed captions describe the items shown and their significance to the base's history. The production quality is excellent, and I noted only one factual error in the text (the names of the crew of Gemini 5). For a book of just 128 pages, the paperback is very expensive (US$22 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.

 Permalink

Andrew, Christopher and Vasili Mitrokhin. The Sword and the Shield. New York: Basic Books, 1999. ISBN 978-0-465-00312-9.
Vasili Mitrokhin joined the Soviet intelligence service as a foreign intelligence officer in 1948, at a time when the MGB (later to become the KGB) and the GRU were unified into a single service called the Committee of Information. By the time he was sent to his first posting abroad in 1952, the two services had split and Mitrokhin stayed with the MGB. Mitrokhin's career began in the paranoia of the final days of Stalin's regime, when foreign intelligence officers were sent on wild goose chases hunting down imagined Trotskyist and Zionist conspirators plotting against the regime. He later survived the turbulence after the death of Stalin and the execution of MGB head Lavrenti Beria, and the consolidation of power under his successors.

During the Khrushchev years, Mitrokhin became disenchanted with the regime, considering Khrushchev an uncultured barbarian whose banning of avant-garde writers betrayed the tradition of Russian literature. He began to entertain dissident thoughts, hoping not for an overthrow of the Soviet regime but rather for its reform by a new generation of leaders untainted by the legacy of Stalin. These thoughts were reinforced by the crushing of the reform-minded regime in Czechoslovakia in 1968 and his own observation of how his service, now called the KGB, manipulated the Soviet justice system to suppress dissent within the Soviet Union. He began to covertly listen to Western broadcasts and read samizdat publications by Soviet dissidents.

In 1972, the First Chief Directorate (FCD: foreign intelligence) moved from the cramped KGB headquarters in the Lubyanka in central Moscow to a new building near the ring road. Mitrokhin had sole responsibility for checking and inventorying the FCD's entire archive, around 300,000 documents, for transfer to the new building. These files documented the operations of the KGB and its predecessors dating back to 1918, and included the most secret records, those of Directorate S, which ran “illegals”: secret agents operating abroad under false identities. Probably no other individual ever read as many of the KGB's most secret archives as Mitrokhin. Appalled by much of the material he reviewed, he covertly began to make his own notes of the details. He started by committing key items to memory and transcribing them every evening at home, but later made covert notes on scraps of paper which he smuggled out of KGB offices in his shoes. Each weekend he would take the notes to his dacha outside Moscow, type them up, and hide them in a series of locations which became increasingly elaborate as their volume grew.

Mitrokhin would continue to review, make notes, and add them to his hidden archive for the next twelve years until his retirement from the KGB in 1984. After Mikhail Gorbachev became party leader in 1985 and called for more openness (glasnost), Mitrokhin, shaken by what he had seen in the files regarding Soviet actions in Afghanistan, began to think of ways he might spirit his files out of the Soviet Union and publish them in the West.

After the collapse of the Soviet Union, Mitrokhin tested the new freedom of movement by visiting the capital of one of the now-independent Baltic states, carrying a sample of the material from his archive concealed in his luggage. He crossed the border with no problems and walked into the British embassy to make a deal. After several more trips, interviews with British Secret Intelligence Service (SIS) officers, and the provision of more sample material, the British agreed to arrange the exfiltration of Mitrokhin, his entire family, and the entire archive: six cases of notes. He was debriefed at a series of safe houses in Britain and began several years of work typing handwritten notes, arranging the documents, and answering questions from the SIS, all in complete secrecy. In 1995, he arranged a meeting with Christopher Andrew, co-author of the present book, to prepare a history of KGB foreign intelligence as documented in the archive.

Mitrokhin's exfiltration (I'm not sure one can call it a “defection”, since the country whose information he disclosed ceased to exist before he contacted the British) and delivery of the archive is one of the most stunning intelligence coups of all time, and the material he delivered will be an essential primary source for historians of the twentieth century. This is not just a whistle-blower disclosing operations of limited scope over a short period of time, but an authoritative summary of the entire history of the foreign intelligence and covert operations of the Soviet Union from its inception until the time it began to unravel in the mid-1980s. Mitrokhin's documents name names; identify agents, both Soviet officers and recruits in other countries, by codename; describe secret operations, including assassinations, subversion, “influence operations” which planted propaganda in adversary media and corrupted journalists and politicians, the supply of weapons to insurgents, and the hiding of caches of weapons and demolition materials in Western countries to support special forces in case of war; and trace the internal politics and conflicts within the KGB and its predecessors, and between the service and the Party and its rivals, particularly military intelligence (the GRU).

Any doubts about the degree of penetration of Western governments by Soviet intelligence agents are laid to rest by the exhaustive documentation here. During the 1930s and throughout World War II, the Soviet Union had highly-placed agents throughout the British and American governments, military, diplomatic and intelligence communities, and science and technology projects. At the same time, these supposed allies had essentially zero visibility into the Soviet Union: neither the American OSS nor the British SIS had a single agent in Moscow.

And yet, despite success in infiltrating other countries and recruiting agents within them (particularly prior to the end of World War II, when many agents, such as the “Magnificent Five” [Donald Maclean, Kim Philby, John Cairncross, Guy Burgess, and Anthony Blunt] in Britain, were motivated by idealistic admiration for the Soviet project, as opposed to later, when sources tended to be in it for the money), exploitation of this vast trove of purloined secret information was uneven and often ineffective. Although they reached their apogee during the Stalin years, paranoia and intrigue are as Russian as borscht, and they compromised the interpretation and use of intelligence throughout the history of the Soviet Union. Although the KGB had loyal spies in high places in governments around the world, whenever an agent provided information which seemed “too good” or conflicted with the preconceived notions of senior KGB officials or Party leaders, it was likely to be dismissed as disinformation: either planted by British counterintelligence, to which the Soviets attributed almost supernatural powers, or fed to the Centre by agents who had been turned. This was particularly evident in the period before the Nazi attack on the Soviet Union in 1941. KGB archives record more than a hundred warnings of preparations for the attack forwarded to Stalin between January and June 1941, all dismissed as disinformation or error because of Stalin's idée fixe that Germany would not attack: it was too dependent on raw materials supplied by the Soviet Union and would not risk a two-front war while Britain remained undefeated.

Further, throughout the entire history of the Soviet Union, the KGB was hesitant to report intelligence which contradicted the beliefs of its masters in the Politburo or documented the failures of their policies and initiatives. In 1985, shortly after coming to power, Gorbachev lectured KGB leaders “on the impermissibility of distortions of the factual state of affairs in messages and informational reports sent to the Central Committee of the CPSU and other ruling bodies.”

Another manifestation of paranoia was deep suspicion of those who had spent time in the West. Often the most effective agents, who had worked undercover in the West for many years, found their reports ignored due to fears that they had “gone native” or been doubled by Western counterintelligence. Spending too much time on assignment in the West was not conducive to advancement within the KGB, which resulted in the service's senior leadership having little direct experience of the West and being prone to fantastic misconceptions about the institutions and personalities of the adversary. This led to delusional schemes, such as the idea of recruiting stalwart anticommunist figures like Zbigniew Brzezinski as KGB agents.

This is a massive compilation of data: 736 pages in the paperback edition, including almost 100 pages of detailed end notes and source citations. I would be less than candid if I gave the impression that this reads like a spy thriller: it is nothing of the sort. Although such information would have been of immense value during the Cold War, long lists of the handlers who worked with undercover agents in the West, recitations of codenames for individuals, and exhaustive descriptions of now largely forgotten episodes such as the KGB's campaign against “Eurocommunism” in the 1970s and 1980s, which it was feared would thwart Moscow's control over communist parties in Western Europe, make for heavy going for the reader.

The KGB's operations in the West were far from flawless. For decades, the Communist Party of the United States (CPUSA) received substantial subsidies from the KGB despite consistently promising great breakthroughs and delivering nothing. From the 1950s until 1975, KGB money was funneled to the CPUSA through two undercover agents, brothers named Morris and Jack Childs, in deliveries of cash often exceeding a million dollars a year. Both brothers were awarded the Order of the Red Banner in 1975 for their work, with Morris receiving his from Leonid Brezhnev in person. Unbeknownst to the KGB, both of the Childs brothers had been working for, and receiving salaries from, the FBI since the early 1950s, reporting where the money came from and where it went (well, except for the five percent they embezzled before passing it on). In the 1980s, the KGB increased the CPUSA's subsidy to two million dollars a year, despite the party's never having more than 15,000 members (some of whom, no doubt, were FBI agents).

A second doorstop of a book (736 pages) based upon the Mitrokhin archive, The World Was Going Our Way, published in 2005, details the KGB's operations in the Third World during the Cold War. U.S. diplomats who looked at the globe and saw communist subversion almost everywhere were, as the KGB's own files reveal, accurately reporting the situation on the ground.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink