Last 10 Books Read, Newest to Oldest

Bookmark this link to come directly here.

December 2017

Benford, Gregory. The Berlin Project. New York: Saga Press, 2017. ISBN 978-1-4814-8765-8.
In September 1938, Karl Cohen returned from a postdoctoral position in France to the chemistry department at Columbia University in New York, where he had obtained his Ph.D. two years earlier. Accompanying him was his new wife, Marthe, daughter of a senior officer in the French army. Cohen went to work for Harold Urey, professor of chemistry at Columbia and winner of the 1934 Nobel Prize in chemistry for the discovery of deuterium. At the start of 1939, the fields of chemistry and nuclear physics were stunned by the discovery of nuclear fission: researchers at the Kaiser Wilhelm Institute in Berlin had discovered that the nucleus of uranium-235 could be split into two lighter nuclei when it absorbed a neutron, releasing a large amount of energy along with additional neutrons which might fission other uranium nuclei, creating a “chain reaction” which might permit tapping the enormous binding energy of the nucleus to produce abundant power—or a bomb.

The discovery seemed to open a path to nuclear power, but it was clear from the outset that the practical challenges were going to be daunting. Natural uranium is composed of two principal isotopes, U-238 and U-235. The heavier U-238 isotope makes up 99.27% of natural uranium, while U-235 accounts for only 0.72%. Only U-235 can readily be fissioned, so in order to build a bomb, it would be necessary to separate the two isotopes and isolate near-pure U-235. Isotopes differ only in the number of neutrons in their nuclei, but have the same number of protons and electrons. Since chemistry is exclusively determined by the electron structure of an atom, no chemical process can separate two isotopes: it must be done physically, based upon their mass difference. And since U-235 and U-238 differ in mass only by around 1.25%, any process, however clever, would necessarily be inefficient and expensive. It was clear that nuclear energy or weapons would require an industrial-scale effort, not something which could be done in a university laboratory.
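The scale of the challenge follows from a little arithmetic. The sketch below (using standard atomic masses, which are not figures from the review) computes the relative mass difference and, for the gaseous diffusion process, the ideal single-stage separation factor, which by Graham's law of effusion is the square root of the ratio of the molecular masses:

```python
# Illustrative arithmetic (standard atomic masses, in unified atomic
# mass units) showing why uranium isotope separation is so hard.
m_u235 = 235.044
m_u238 = 238.051
m_f = 18.998          # fluorine, for uranium hexafluoride (UF6)

rel_diff = (m_u238 - m_u235) / m_u238
print(f"Relative mass difference: {rel_diff:.2%}")        # about 1.26%

# Gaseous diffusion works on UF6 gas, whose six fluorine atoms dilute
# the mass difference even further.  By Graham's law, the ideal
# single-stage separation factor is the square root of the ratio of
# the molecular masses -- barely above one.
alpha = ((m_u238 + 6 * m_f) / (m_u235 + 6 * m_f)) ** 0.5
print(f"Ideal diffusion separation factor: {alpha:.4f}")  # about 1.0043
```

With a single-stage factor of only about 1.004, thousands of stages in cascade are needed to approach weapons-grade enrichment, which is why any of these processes demanded plants of industrial scale.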

Several candidate processes were suggested: electromagnetic separation, thermal or gaseous diffusion, and centrifuges. Harold Urey believed a cascade of high-speed centrifuges, fed with uranium hexafluoride gas, was the best approach, and he was the world's foremost expert on gas centrifuges. The nascent uranium project, eventually to become the Manhattan Project, was inclined toward the electromagnetic and gaseous diffusion processes, since they were believed to be well-understood and only required a vast scaling up as opposed to demonstration of a novel and untested technology.

Up to this point, everything in this alternative history novel is completely factual, and all of the characters existed in the real world (Karl Cohen was the author's father-in-law). Historically, Urey was unable to raise the funds to demonstrate the centrifuge technology, and the Manhattan Project proceeded with the electromagnetic and gaseous diffusion routes to separate U-235 while, in parallel, pursuing plutonium production from natural uranium in graphite-moderated reactors. Benford adheres strictly to the rules of the alternative history game: only one thing is changed, and everything else follows as a consequence of that change.

Here, Karl Cohen contacts a prominent Manhattan rabbi known to his mother who, seeing a way to combine protecting Jews in Europe from Hitler, advancing the Zionist cause, and making money from patents on a strategic technology, assembles a syndicate of wealthy and like-minded investors, raising a total of a hundred thousand dollars (US$ 1.8 million in today's funny money) to fund Urey's prototype centrifuge project in return for rights to patents on the technology. Urey succeeds, and by mid-1941 the centrifuge has been demonstrated and contacts made with Union Carbide to mass-produce and operate a centrifuge separation plant. Then, in early December of that year, everything changed, and by early 1942 the Manhattan Project had bought out the investors at a handsome profit and put the centrifuge separation project in high gear. As Urey's lead on the centrifuge project, Karl Cohen finds himself in the midst of the rapidly-developing bomb project, meeting and working with all of the principals.

Thus begins the story of a very different Manhattan Project and World War II. With the centrifuge project starting in earnest shortly after Pearl Harbor, by June 6th, 1944 the first uranium bomb is ready, and the Allies decide to use it on Berlin as a decapitation strike simultaneous with the D-Day landings in Normandy. The war takes a very different course, both in Europe and the Pacific, and a new Nazi terror weapon, first hinted at in a science fiction story, complicates the conflict. A different world is the outcome, seen from a retrospective at the end.

Karl Cohen's central position in the Manhattan Project introduces us to a panoply of key players including Leslie Groves, J. Robert Oppenheimer, Edward Teller, Leo Szilard, Freeman Dyson, John W. Campbell, Jr., and Samuel Goudsmit. He participates in a secret mission to Switzerland to assess German progress toward a bomb in the company of professional baseball catcher turned spy Moe Berg, who is charged with assassinating Heisenberg if Cohen judges that Heisenberg knows too much.

This is a masterpiece of alternative history, based firmly in fact, and entirely plausible. The description of the postwar consequences is of a world in which I would prefer to have been born. I won't discuss the details to avoid spoiling your discovery of how they all work out in the hands of a master storyteller who really knows his stuff (Gregory Benford is a Professor Emeritus of physics at the University of California, Irvine).


October 2017

Morton, Oliver. The Planet Remade. Princeton: Princeton University Press, 2015. ISBN 978-0-691-17590-4.
We live in a profoundly unnatural world. Since the start of the industrial revolution, and rapidly accelerating throughout the twentieth century, the actions of humans have begun to influence the flow of energy and materials in the Earth's biosphere on a global scale. Earth's current human population and standard of living are made possible entirely by industrial production of nitrogen-based fertilisers and crop plants bred to efficiently exploit them. Industrial production of fixed (chemically reactive) nitrogen from the atmosphere now substantially exceeds all of that produced by the natural soil bacteria on the planet which, prior to 1950, accounted for almost all of the nitrogen required to grow plants. Fixing nitrogen by the Haber-Bosch process is energy-intensive, consuming around 1.5 percent of all the world's energy and, as a feedstock, 3–5% of the natural gas produced worldwide. When we eat these crops, or animals fed on them, we are, in a sense, eating fossil fuels. On the order of four out of five nitrogen atoms in your body were fixed in a factory by the Haber-Bosch process. We are the children, not of nature, but of industry.

The industrial production of fertiliser, along with crops tailored to use them, is entirely responsible for the rapid growth of the Earth's population, which has increased from around 2.5 billion in 1950, when industrial fertiliser and “green revolution” crops came into wide use, to more than 7 billion today. This was accompanied not by the collapse into global penury predicted by Malthusian doom-sayers, but rather a broad-based rise in the standard of living, with extreme poverty and malnutrition falling to all-time historical lows. In the lifetimes of many people, including this scribbler, our species has taken over the flow of nitrogen through the Earth's biosphere, replacing a process mediated by bacteria for billions of years with one performed in factories. The flow of nitrogen from atmosphere to soil, to plants and the creatures who eat them, back to soil, sea, and ultimately the atmosphere is now largely in the hands of humans, and their very lives have become dependent upon it.

This is an example of “geoengineering”—taking control of what was a natural process and replacing it with an engineered one to produce a desired outcome: in this case, the ability to feed a much larger population with an unprecedented standard of living. In the case of nitrogen fixation, there wasn't a grand plan drawn up to do all of this: each step made economic sense to the players involved. (In fact, one of the motivations for developing the Haber-Bosch process was not to produce fertiliser, but rather to produce feedstocks for the manufacture of military and industrial explosives, which had become dependent on nitrates obtained from guano imported to Europe from South America.) But the outcome was the same: ours is an engineered world. Those who are repelled by such an intervention in natural processes or who are concerned by possible detrimental consequences of it, foreseen or unanticipated, must come to terms with the reality that abandoning this world-changing technology now would result in the collapse of the human population, with at least half of the people alive today starving to death, and many of the survivors reduced to subsistence in abject poverty. Sadly, one encounters fanatic “greens” who think this would be just fine (and, doubtless, imagining they'd be among the survivors).

Just mentioning geoengineering—human intervention in and management of previously natural processes on a global scale—may summon in the minds of many Strangelove-like technological megalomania or the hubris of Bond villains, so it's important to bear in mind that we're already doing it, and have become utterly dependent upon it. We face the challenge of accommodating a population which is expected to grow to ten billion by mid-century (and, absent catastrophe, this is almost a given: the parents of the ten billion are mostly alive today), who will demand and deserve a standard of living comparable to what they see in industrial economies. In meeting that challenge, while carefully weighing the risks and uncertainties involved, it may be unwise to rule out other geoengineering interventions to mitigate undesirable consequences of supporting the human population.

In parallel with the human takeover of the nitrogen cycle, another geoengineering project has been underway, also rapidly accelerating in the 20th century, driven both by population growth and industrialisation of previously agrarian societies. For hundreds of millions of years, the Earth also cycled carbon through the atmosphere, oceans, biosphere, and lithosphere. Carbon dioxide (CO₂) was metabolised from the atmosphere by photosynthetic plants, extracting carbon for their organic molecules and producing oxygen released to the atmosphere, then passed along as plants were eaten, returned to the soil, or dissolved in the oceans, where creatures incorporated carbonates into their shells, which eventually became limestone rock and, over geological time, was subducted as the continents drifted, reprocessed far below the surface, and expelled back into the atmosphere by volcanoes. (This is a gross oversimplification of the carbon cycle, but we don't need to go further into it for what follows. The point is that it's something which occurs on a time scale of tens to hundreds of millions of years and on which humans, prior to the twentieth century, had little influence.)

The natural carbon cycle is not leakproof. Only part of the carbon sequestered by marine organisms and immured in limestone is recycled by volcanoes; it is estimated that this loss of carbon will bring the era of multicellular life on Earth to an end around a billion years from now. The carbon in some plants is not returned to the biosphere when they die. Sometimes, the dead vegetation accumulates in dense beds where it is protected against oxidation and eventually forms deposits of peat, coal, petroleum, and natural gas. Other than natural seeps and releases of the latter substances, their carbon is also largely removed from the biosphere. Or at least it was until those talking apes came along….

The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
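The conversions in the paragraph above can be sanity-checked in a few lines; the barrel-of-oil energy equivalent (about 6.1 GJ) and the Earth's roughly 30% albedo are assumed standard values, not figures from the review:

```python
# Sanity-checking the energy figures above.  The barrel-of-oil
# equivalent (~6.1 GJ) and the Earth's ~30% albedo are assumed
# standard values.
EJ = 1e18                   # one exajoule, in joules
BARREL = 6.1e9              # joules per barrel of oil equivalent

print(f"1 EJ ~ {EJ / BARREL / 1e6:.0f} million barrels of oil")   # ~164

SOLAR_CONSTANT = 1360       # W/m^2 at the top of the atmosphere
R_EARTH = 6.371e6           # Earth's radius, metres
YEAR = 3.156e7              # seconds per year

# The Earth intercepts sunlight over its cross-sectional disc.
intercepted = SOLAR_CONSTANT * 3.141593 * R_EARTH**2 * YEAR
absorbed = intercepted * (1 - 0.30)   # ~30% reflected straight back
print(f"Solar energy absorbed: {absorbed / EJ / 1e6:.1f} million EJ/year")
```

The absorbed figure comes out near 4 million exajoules per year, consistent with the number quoted, and dwarfing the 550 exajoules of human consumption.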

Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂?

Well, that's interesting. CO₂ is a trace gas in the atmosphere (about a twenty-fifth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere, as plants take it up while growing and release it after they die. The other two processes are much slower. An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth) and increases their root-to-shoot ratio, tending to move carbon into the soil, from which it is recycled more slowly into the biosphere.

But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels has released a quantity of carbon into the atmosphere, over a time scale negligible on the geological scale, far beyond the ability of natural processes to recycle. For the last half million years, the CO₂ concentration in the atmosphere has varied between 280 parts per million during interglacials (warm periods) and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentration are known with reasonable precision over this period thanks to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet).

But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age. Atmospheric CO₂ is now around 410 parts per million, a value not seen anywhere in the ice-core record, and it's rising at a rate of 2 parts per million every year, accelerating as global use of fossil fuels increases. An excursion this large and this rapid is not only unique in the human experience; nothing like it appears in that record. What does it all mean? What are the consequences? And what, if anything, should we do about it?

(Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.)

We have been increasing the fraction of CO₂ in the atmosphere to levels unseen in the history of complex terrestrial life. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched to as much as 1000 parts per million experience no deleterious consequences, and that level is more than double the current fraction in the Earth's atmosphere and, at the current rate of growth, won't be reached for three centuries. The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy the Earth absorbs from the Sun and the energy it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity, are negligible at this scale.)
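The "three centuries" figure is simple arithmetic on the numbers quoted above:

```python
# Checking the "three centuries" figure: at the current growth rate,
# how long until the atmosphere reaches greenhouse-enrichment levels?
current_ppm = 410        # today's atmospheric CO2 concentration
enriched_ppm = 1000      # enrichment level used in greenhouses
growth_per_year = 2      # current growth, ppm per year

years = (enriched_ppm - current_ppm) / growth_per_year
print(f"About {years:.0f} years")   # 295 -- roughly three centuries
```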

Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (By the Stefan-Boltzmann law, the energy emitted by a black body rises as the fourth power of its absolute temperature, measured in kelvins, so it doesn't take a large increase in temperature to radiate away the extra energy.)
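Just how small an increase suffices can be sketched with Stefan-Boltzmann arithmetic. Treating the Earth as a black body radiating at an effective temperature of about 255 K (an assumed textbook value implied by the solar input and albedo, not a figure from the review), linearising P = σT⁴ gives the warming needed to radiate away one extra watt per square metre:

```python
# A sketch of the Stefan-Boltzmann balance.  255 K is an assumed
# textbook value for the Earth's effective radiating temperature.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0           # effective radiating temperature, kelvins

flux = SIGMA * T_EFF**4
print(f"Radiated flux: {flux:.0f} W/m^2")           # ~240

# Linearising P = sigma * T^4 gives dP/dT = 4 * sigma * T^3, so the
# warming needed to radiate away one extra W/m^2 is:
delta_t = 1.0 / (4 * SIGMA * T_EFF**3)
print(f"Warming per extra W/m^2: {delta_t:.2f} K")  # ~0.27
```

A quarter of a kelvin per watt per square metre is the bare black-body response; the whole climate-model argument is over how feedbacks such as water vapour and clouds amplify or damp that number.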

So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait—it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth, and reduce the amount of solar radiation it absorbs. How does all of this interact? Well, that's where the global climate models get into the act, and everything becomes very fuzzy in a vast panel of twiddle knobs, all of which interact with one another and few of which are based upon unambiguous measurements of the climate system.

Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle they see on their smartphones of the people who got to the industrial age a century before them, and technocratic rule of the masses by their unelected self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind!

But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂ and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond?

Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly observed to cool the Earth. A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect sufficient sunlight to reduce, cancel, or even reverse the effects of global warming. The cost of such a programme would be affordable by a single benevolent tech billionaire, a would-be Bond villain turned benefactor (“Greenfinger”), and it could be implemented in a couple of years. The effect of the veil would be much smaller than that of a major volcanic eruption, and would be imperceptible other than making sunsets a bit more colourful.

Marine cloud brightening. By spraying finely-dispersed salt water from the ocean into the air, the salt particles would serve as nucleation sites, brightening low clouds above the ocean and increasing the reflectivity (albedo) of the Earth. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather.

Carbon sequestration. What about taking the carbon dioxide back out of the atmosphere? This sounds like a great idea, and appeals to clueless philanthropists like Bill Gates who are ignorant of thermodynamics, but removing a trace gas is really difficult and expensive. The best place to capture it is where it's densest, such as the flue of a power plant, where it makes up around 10% of the exhaust. The technology to do this, “carbon capture and sequestration” (CCS), exists, but has not yet been deployed on any full-scale power plant.

Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into marine organisms, it is removed from the biosphere for tens to hundreds of millions of years. What constrains how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? It's iron, which is rare in the oceans. A calculation made in the 1990s suggested that if you added one tonne of iron to the ocean, the bloom of organisms it would spawn would suck a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be maybe a hundred times less, but then iron is cheap and it doesn't cost much to dump it from ships.
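The leverage quoted above works out as follows; the figure of roughly 10 billion tonnes of carbon emitted per year is an assumed round number, not from the review:

```python
# The iron-fertilisation leverage arithmetic from the paragraph above.
# The ~10 billion tonnes of carbon emitted per year is an assumed
# round number for current human emissions.
carbon_per_tonne_iron_1990s = 100_000                      # original estimate
carbon_per_tonne_iron_revised = carbon_per_tonne_iron_1990s / 100  # ~100x less

annual_emissions_tonnes_c = 10e9
iron_needed = annual_emissions_tonnes_c / carbon_per_tonne_iron_revised
print(f"Iron to offset a year's emissions: {iron_needed / 1e6:.0f} million tonnes")
```

Even at the revised, hundredfold-smaller ratio, the tonnage of iron involved is modest by the standards of world shipping, which is part of the idea's appeal.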

Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while monitoring their effects on the climate, and easily and quickly reversed should they be found to have unintended detrimental consequences. But when thinking about affecting something on the scale of the climate of a planet, there's a tendency to think big, and a number of grand scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to manage the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are currently considered financially infeasible, but might become increasingly attractive if the cost of space transportation falls dramatically in the future. It's worth observing that the cost estimates for such alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists.

We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. As we cope with the consequences of that human action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and provides an alternative to restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions such as vast fields of unsightly bird shredders. But as David Deutsch says in The Beginning of Infinity, “Problems are inevitable”, but “Problems are soluble.” It is inevitable that the large scale geoengineering which is the foundation of our developed society—taking over the Earth's natural carbon and nitrogen cycles—will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning these pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.


September 2017

Scoles, Sarah. Making Contact. New York: Pegasus Books, 2017. ISBN 978-1-68177-441-1.
There are few questions in our scientific inquiry into the universe and our place within it more profound than “are we alone?” As we have learned more about our world and the larger universe in which it exists, this question has become ever more fascinating. We now know that our planet, once thought the centre of the universe, is but one of what may be hundreds of billions of planets in our own galaxy, which is one of hundreds of billions of galaxies in the observable universe. Not long ago, we knew only of the planets in our own solar system, and some astronomers believed planetary systems were rare, perhaps formed by freak encounters between two stars following their orbits around the galaxy. But now, thanks to exoplanet hunters and, especially, the Kepler spacecraft, we know that it's “planets, planets, everywhere”—most stars have planets, and many stars have planets where conditions may be suitable for the origin of life.

If this be the case, then when we gaze upward at the myriad stars in the heavens, might there be other eyes (or whatever sense organs they use for the optical spectrum) looking back from planets of those stars toward our Sun, wondering if they are alone? Many are the children, and adults, who have asked themselves that question when standing under a pristine sky. For the ten-year-old Jill Tarter, it set her on a path toward a career which has been almost coterminous with humanity's efforts to discover communications from extraterrestrial civilisations—an effort which continues today, benefitting from advances in technology unimagined when she undertook the quest.

World War II had seen tremendous advances in radio communications, in particular the short wavelengths (“microwaves”) used by radar to detect enemy aircraft and submarines. After the war, this technology provided the foundation for the new field of radio astronomy, which expanded astronomers' window on the universe from the traditional optical spectrum into wavelengths that revealed phenomena never before observed nor, indeed, imagined, and hinted at a universe which was much larger, more complicated, and more violent than previously envisioned.

In 1959, Philip Morrison and Giuseppe Cocconi published a paper in Nature in which they calculated that using only technologies and instruments already existing on the Earth, intelligent extraterrestrials could send radio messages across the distances to the nearby stars, and that these messages could be received, detected, and decoded by terrestrial observers. This was the origin of SETI—the Search for Extraterrestrial Intelligence. In 1960, Frank Drake used a radio telescope to search for signals from two nearby star systems; he heard nothing.

As they say, absence of evidence is not evidence of absence, and this is acutely the case in SETI. First of all, you must decide what kind of signal aliens might send. If it's something which can't be distinguished from natural sources, there's little hope you'll be able to tease it out of the cacophony which is the radio spectrum. So we must assume they're sending something that doesn't appear natural. But what is the variety of natural sources? There's a dozen or so Ph.D. projects just answering that question, including some surprising discoveries of natural sources nobody imagined, such as pulsars, which were sufficiently strange that when first observed they were called “LGM” sources for “Little Green Men”. On what frequency are they sending (in other words, where do we have to turn our dial to receive them, for those geezers who remember radios with dials)? The most efficient signals will be those with a very narrow frequency range, and there are billions of possible frequencies the aliens might choose. We could be pointed in the right place, at the right time, and simply be tuned to the wrong station.

Then there's that question of “the right time”. It would be absurdly costly to broadcast a beacon signal in all directions at all times: that would require energy comparable to that emitted by a star (which, if you think about it, does precisely that). So it's likely that any civilisation with energy resources comparable to our own would transmit in a narrow beam to specific targets, switching among them over time. If we didn't happen to be listening when they were sending, we'd never know they were calling.

If you put all of these constraints together, you come up with what's called an “observational phase space”—a multidimensional space of frequency, intensity, duration of transmission, angular extent of transmission, bandwidth, and other parameters which determine whether you'll detect the signal. And that assumes you're listening at all, which depends upon people coming up with the money to fund the effort and pursue it over the years.

It's beyond daunting. The space to be searched is so large, and our ability to search it so limited, that negative results, even after decades of observation, are equivalent to walking down to the seashore, sampling a glass of ocean water, and concluding that based on the absence of fish, the ocean contained no higher life forms. But suppose you find a fish? That would change everything.
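Rough numbers make the ocean analogy vivid. Both figures below are approximate public estimates, used only to convey the scale:

```python
# Back-of-envelope scale of the "glass of ocean water" analogy.
ocean_km3 = 1.3e9                    # approximate volume of Earth's oceans
ocean_litres = ocean_km3 * 1e12      # 1 km³ = 10¹² litres
glass_litres = 0.25                  # one drinking glass

fraction = glass_litres / ocean_litres
print(f"fraction of the ocean sampled: {fraction:.0e}")   # about 2e-22
```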

Jill Tarter began her career in the mainstream of astronomy. Her Ph.D. research at the University of California, Berkeley was on brown dwarfs (bodies more massive than gas giant planets but too small to sustain the nuclear fusion reactions which cause stars to shine—a brown dwarf emits weakly in the infrared as it slowly radiates away the heat from the gravitational contraction which formed it). Her work was supported by a federal grant, which made her uncomfortable—what relevance did brown dwarfs have to those compelled to pay taxes to fund investigating them? During her Ph.D. work, she was asked by a professor in the department to help with an aged computer she'd used in an earlier project. To acquaint her with the project, the professor asked her to read the Project Cyclops report. It was a conversion experience.

Project Cyclops was a NASA study conducted in 1971 on how to perform a definitive search for radio communications from intelligent extraterrestrials. Its report [18.2 Mb PDF], issued in 1972, remains the “bible” for radio SETI, although advances in technology, particularly in computing, have rendered some of its recommendations obsolete. The product of a NASA which was still conducting missions to the Moon, it was grandiose in scale, envisioning a large array of radio telescope dishes able to search for signals from stars up to 1000 light years in distance (note that this is still a tiny fraction of the stars in the galaxy, which is around 150,000 light years in diameter). The estimated budget for the project was between 6 and 10 billion dollars (multiply those numbers by around six to get present-day funny money) spent over a period of ten to fifteen years. The report cautioned that there was no guarantee of success during that period, and that the project should be viewed as a long-term endeavour with ongoing funding to operate the system and continue the search.

The Cyclops report arrived at a time when NASA was downsizing and scaling back its ambitions: the final three planned lunar landing missions had been cancelled in 1970, and production of additional Saturn V launch vehicles had been terminated the previous year. The budget climate wasn't hospitable to Apollo-scale projects of any description, especially those which wouldn't support lots of civil service and contractor jobs in the districts and states of NASA's patrons in congress. Unsurprisingly, Project Cyclops simply landed on the pile of ambitious NASA studies that went nowhere. But to some who read it, it was an inspiration. Tarter thought, “This is the first time in history when we don't just have to believe or not believe. Instead of just asking the priests and philosophers, we can try to find an answer. This is an old and important question, and I have the opportunity to change how we try to answer it.” While some might consider searching the sky for “little green men” frivolous and/or absurd, to Tarter this, not the arcana of brown dwarfs, was something worthy of support, and of her time and intellectual effort, “something that could impact people's lives profoundly in a short period of time.”

The project to which Tarter had been asked to contribute, Project SERENDIP (a painful acronym for Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) was extremely modest compared to Cyclops. It had no dedicated radio telescopes at all, nor even dedicated time on existing observatories. Instead, it would “piggyback” on observations made for other purposes, listening to the feed from the telescope with an instrument designed to detect the kind of narrow-band beacons envisioned by Cyclops. To cope with the problem of not knowing the frequency on which to listen, the receiver would monitor 100 channels simultaneously. Tarter's job was programming the PDP-8/S computer to monitor the receiver's output and search for candidate signals. (Project SERENDIP is still in operation today, employing hardware able to simultaneously monitor 128 million channels.)

From this humble start, Tarter's career direction was set. All of her subsequent work was in SETI. It would be a roller-coaster ride all the way. In 1975, NASA had started a modest study to research (but not build) technologies for microwave SETI searches. In 1978, the program came into the sights of Senator William Proxmire, who bestowed upon it his “Golden Fleece” award. The program initially survived his ridicule, but in 1982 its funding was zeroed out of the budget. Carl Sagan personally intervened with Proxmire, and in 1983 the funding was reinstated, continuing work on a more capable spectral analyser which could be used with existing radio telescopes.

Buffeted by the start-stop support from NASA and encouraged by Hewlett-Packard executive Bernard Oliver, a supporter of SETI from its inception, Tarter decided that SETI needed its own institutional home, one dedicated to the mission and able to seek its own funding independent of the whims of congressmen and bureaucrats. In 1984, the SETI Institute was incorporated in California. Initially funded by Oliver, over the years major contributions have been made by technology moguls including William Hewlett, David Packard, Paul Allen, Gordon Moore, and Nathan Myhrvold. The SETI Institute receives no government funding whatsoever, although some researchers in its employ, mostly those working on astrobiology, exoplanets, and other topics not directly related to SETI, are supported by research grants from NASA and the National Science Foundation. Fundraising was a skill which did not come naturally to Tarter, but it was mission critical, and so she mastered the art. Today, the SETI Institute is considered one of the most savvy privately-funded research institutions, both in seeking large donations and in grass-roots fundraising.

By the early 1990s, it appeared the pendulum had swung once again, and NASA was back in the SETI game. In 1992, a program was funded to conduct a two-pronged effort: a targeted search of 800 nearby stars, and an all-sky survey looking for stronger beacons. Both would employ what were then state-of-the-art spectrum analysers able to monitor 15 million channels simultaneously. After just a year of observations, congress once again pulled the plug. The SETI Institute would have to go it alone.

Tarter launched Project Phoenix, to continue the NASA targeted search program using the orphaned NASA spectrometer hardware and whatever telescope time could be purchased from donations to the SETI Institute. In 1995, observations resumed at the Parkes radio telescope in Australia, and subsequently at a telescope at the National Radio Astronomy Observatory in Green Bank, West Virginia, and the 300 metre dish at Arecibo Observatory in Puerto Rico. The project continued through 2004.

What should SETI look like in the 21st century? Much had changed since the early days in the 1960s and 1970s. Digital electronics and computers had increased in power a billionfold, not only making it possible to scan a billion channels simultaneously and automatically search for candidate signals, but to combine the signals from a large number of independent, inexpensive antennas (essentially, glorified satellite television dishes), synthesising the aperture of a huge, budget-busting radio telescope. With progress in electronics expected to continue in the coming decades, any capital investment in antenna hardware would yield an exponentially growing science harvest as the ability to analyse its output grew over time. But to take advantage of this technological revolution, SETI could no longer rely on piggyback observations, purchased telescope time, or allocations at the whim of research institutions: “SETI needs its own telescope”—one optimised for the mission and designed to benefit from advances in electronics over its lifetime.
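The aperture synthesis mentioned above, combining many cheap dishes so they act as one large one, comes down to summing the antenna signals with the right per-antenna phase shifts. Here is a toy delay-and-sum sketch; the geometry, spacing, frequency, and antenna count are my own assumptions for illustration, not the Allen Telescope Array's actual design:

```python
import numpy as np

# Toy delay-and-sum beamforming: antennas along a line, a plane wave arriving
# from a known direction. All parameters are invented for illustration.
c = 3.0e8                           # speed of light, m/s
f = 1.42e9                          # observing frequency, Hz (hydrogen line)
n_ant = 42                          # dishes (the ATA's initial count)
positions = np.arange(n_ant) * 5.0  # antenna positions, metres along a line

def phases(angle_deg):
    """Phase of the wavefront at each antenna for a source at this angle."""
    delays = positions * np.sin(np.deg2rad(angle_deg)) / c
    return 2 * np.pi * f * delays

# Each antenna sees the same unit-amplitude tone, shifted by its own phase.
signals = np.exp(1j * phases(30.0))        # source actually at 30 degrees

# Steering the array at the source aligns all 42 contributions coherently.
steered = abs(np.sum(signals * np.exp(-1j * phases(30.0))))
# Steering 5 degrees off leaves the contributions out of phase.
mis_steered = abs(np.sum(signals * np.exp(-1j * phases(35.0))))

print(steered)       # ~42: the full coherent gain of the array
print(mis_steered)   # far smaller
```

The same digital electronics can apply several sets of phase weights at once, which is how independently steerable sub-arrays observe different targets simultaneously.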

In a series of meetings from 1998 to 2000, the specifications of such an instrument were drawn up: 350 small antennas, each 6 metres in diameter, independently steerable (and thus able to be used all together, or in segments to simultaneously observe different targets), with electronics to combine the signals, providing an effective aperture of 900 metres with all dishes operating. With initial funding from Microsoft co-founder Paul Allen (and with his name on the project, the Allen Telescope Array), the project began construction in 2004. In 2007, observations began with the first 42 dishes. By that time, Paul Allen had lost interest in the project, and construction of additional dishes was placed on hold until a new benefactor could be found. In 2011, a funding crisis caused the facility to be placed in hibernation, and the observatory was sold to SRI International for US$ 1. Following a crowdfunding effort led by the SETI Institute, the observatory was re-opened later that year, and continues operations to this date. No additional dishes have been installed: current work concentrates on upgrading the electronics of the existing antennas to increase sensitivity.

Jill Tarter retired as co-director of the SETI Institute in 2012, but remains active in its scientific, fundraising, and outreach programs. There has never been more work in SETI underway than at the present. In addition to observations with the Allen Telescope Array, the Breakthrough Listen project, funded at US$ 100 million over ten years by Russian billionaire Yuri Milner, is using thousands of hours of time on large radio telescopes, with a goal of observing a million nearby stars and the centres of a hundred galaxies. All data are available to the public for analysis. A new frontier, unimagined in the early days of SETI, is optical SETI. A pulsed laser, focused through a telescope of modest aperture, is able to easily outshine the Sun in a detector sensitive to its wavelength and pulse duration. In the optical spectrum, there's no need for fancy electronics to monitor a wide variety of wavelengths: all you need is a prism or diffraction grating. The SETI Institute has just successfully completed a US$ 100,000 Indiegogo campaign to crowdfund the first phase of the Laser SETI project, which has as its ultimate goal an all-sky, all-the-time search for short pulses of light which may be signals from extraterrestrials or new natural phenomena to which no existing astronomical instrument is sensitive.

People often ask Jill Tarter what it's like to spend your entire career looking for something and not finding it. But she, and everybody involved in SETI, always knew the search would not be easy, nor likely to succeed in the short term. The reward for engaging in it is being involved in founding a new field of scientific inquiry and inventing and building the tools which allow exploring this new domain. The search is vast, and to date we have barely scratched the surface. About all we can rule out, after more than half a century, is a Star Trek-like universe where almost every star system is populated by aliens chattering away on the radio. Today, the SETI enterprise, entirely privately funded and minuscule by the standards of “big science”, is strongly coupled to the exponential growth in computing power and hence roughly doubles its ability to search every two years.

The question “are we alone?” is one which has profound implications either way it is answered. If we discover one or more advanced technological civilisations (and they will almost certainly be more advanced than we—we've only had radio for a little more than a century, and there are stars and planets in the galaxy billions of years older than ours), it will mean it's possible to grow out of the daunting problems we face in the adolescence of our species and look forward to an exciting and potentially unbounded future. If, after exhaustive searches (which will take at least another fifty years of continued progress in expanding the search space), it looks like we're alone, then intelligent life is so rare that we may be its only exemplar in the galaxy and, perhaps, the universe. Then, it's up to us. Our destiny, and duty, is to ensure that this spark, lit within us, will never be extinguished.


August 2017

Casey, Doug and John Hunt. Drug Lord. Charlottesville, VA: HighGround Books, 2017. ISBN 978-1-947449-07-7.
This is the second novel in the authors' “High Ground” series, chronicling the exploits of Charles Knight, an entrepreneur and adventurer determined to live his life according to his own moral code, constrained as little as possible by the rules and regulations of coercive and corrupt governments. The first novel, Speculator (October 2016), follows Charles's adventures in Africa as an investor in a junior gold exploration company which just might have made the discovery of the century, and in the financial markets as he seeks to profit from what he's learned digging into the details. Charles comes onto the radar of ambitious government agents seeking to advance their careers by collecting his scalp.

Charles ends up escaping with his freedom and ethics intact, but with much of his fortune forfeit. He decides he's had enough of “the land of the free” and sets out on his sailboat to explore the world and sample the pleasures and opportunities it holds for one who thinks for himself. Having survived several attempts on his life and prevented a war in Africa in the previous novel, seven years later he returns to a really dangerous place, Washington DC, populated by the Morlocks of Mordor.

Charles has an idea for a new business. The crony capitalism of the U.S. pharmaceutical-regulatory complex has inflated the price of widely-used prescription drugs to many times that paid outside the U.S., where these drugs, whose patents have expired under legal regimes less easily manipulated than that of the U.S., are manufactured in a chemically-identical form by thoroughly professional generic drug producers. Charles understands, as fully as any engineer, that wherever there is nonlinearity the possibility for gain exists, and when that nonlinearity is the result of the action of coercive government, the potential profits from circumventing its grasp on the throat of the free market can be very large, indeed.

When Charles's boat docked in the U.S., he had an undeclared cargo: a large number of those little blue pills much in demand by men of a certain age, purchased for pennies from a factory in India through a cut-out in Africa he met on his previous adventure. He has the product, and a supplier able to obtain much more. Now, all he needs is distribution. He must venture into the dark underside of DC to make the connections that can get the product to the customers, and persuade potential partners that they can make much more and far more safely by distributing his products (which don't fall under the purview of the Drug Enforcement Administration, and to which local cops not only don't pay much attention, but may be potential customers).

Meanwhile, Charles's uncle Maurice, who has been managing what was left of his fortune during his absence, has made an investment in a start-up pharmaceutical company, Visioryme, whose first product, VR-210, or Sybillene, is threading its way through the FDA regulatory gauntlet toward approval for use as an antidepressant. Sybillene works through a novel neurochemical pathway, and promises to be an effective treatment for clinical depression while avoiding the many deleterious side effects of other drugs. In fact, Sybillene doesn't appear to have any side effects at all—or hardly any—there's that one curious thing that happened in animal testing, but not wishing to commit corporate seppuku, Visioryme hasn't mentioned it to the regulators or even its major investor, Charles.

Charles pursues his two pharmaceutical ventures in parallel: one in the DC ghetto and Africa; the other in the tidy suburban office park where Visioryme is headquartered. The first business begins to prosper, and Charles must turn his ingenuity to solving the problems attendant to any burgeoning enterprise: supply, transportation, relations with competitors (who, in this sector of the economy, not only are often armed but inclined to shoot first), expanding the product offerings, growing the distribution channels, and dealing with all of the money that's coming in, entirely in cash, without coming onto the radar of any of the organs of the slavers and their pervasive snooper-state.

Meanwhile, Sybillene finally obtains FDA approval, and Visioryme begins to take off and ramp up production. Charles's connections in Africa help the company obtain the supplies of bamboo required in production of the drug. It seems like he now has two successful ventures, on the dark and light sides, respectively, of the pharmaceutical business (which is dark and which is light depending on your view of the FDA).

Then, curious reports start to come in about doctors prescribing Sybillene off-label in large doses to their well-heeled patients. Off-label prescription is completely legal and not uncommon, but one wonders what's going on. Then there's the talk Charles is picking up from his other venture of demand for a new drug on the street: Sybillene, which goes under names such as Fey, Vatic, Augur, Covfefe, and most commonly, Naked Emperor. Charles's lead distributor reports, “It helps people see lies for what they are, and liars too. I dunno. I never tried it. Lots of people are asking though. Society types. Lawyers, businessmen, doctors, even cops.” It appears that Sybillene, or Naked Emperor, taken in a high dose, is a powerful nootropic which doesn't so much increase intelligence as, unlike most psychoactive drugs, allow the user to think more clearly, and see through the deception that pollutes the intellectual landscape of a modern, “developed”, society.

In that fœtid city by the Potomac, the threat posed by such clear thinking dwarfs that of other “controlled substances” which merely turn their users into zombies. Those atop an empire built on deceit, deficits, and debt cannot run the risk of a growing fraction of the population beginning to see through the funny money, Ponzi financing, Potemkin military, manipulation of public opinion, erosion of the natural rights of citizens, and the sham which is replacing the last vestiges of consensual government. Perforce, Sybillene must become Public Enemy Number One, and if a bit of lying and even murder is required, well, that's the price of preserving the government's ability to lie and murder.

Suddenly, Charles is involved in two illegal pharmaceutical ventures. As any wise entrepreneur would immediately ask himself, “might there be synergies?”

Thus begins a compelling, instructive, and inspiring tale of entrepreneurship and morality confronted with dark forces constrained by no limits whatsoever. We encounter friends and foes from the first novel, as once again Charles finds himself on point position defending those in the enterprises he has created. As I said in my review of Speculator, this book reminds me of Ayn Rand's The Fountainhead, but it is even more effective because Charles Knight is not a super-hero but rather a person with a strong sense of right and wrong who is making up his life as he goes along and learning from the experiences he has: good and bad, success and failure. Charles Knight, even without Naked Emperor, has that gift of seeing things precisely as they are, unobscured by the fog, cant, spin, and lies which are the principal products of the city in which it is set.

These novels are not just page-turning thrillers, they're simultaneously an introductory course in becoming an international man (or woman), transcending the lies of the increasingly obsolescent nation-state, and finding the liberty that comes from seizing control of one's own destiny. They may be the most powerful fictional recruiting tool for the libertarian and anarcho-capitalist world view since the works of Ayn Rand and L. Neil Smith. Speculator was my fiction book of the year for 2016, and this sequel is in the running for 2017.


Egan, Greg. Dichronauts. New York: Night Shade Books, 2017. ISBN 978-1-59780-892-7.
One of the more fascinating sub-genres of science fiction is “world building”: creating the setting in which a story takes place by imagining an environment radically different from any in the human experience. This can run the gamut from life in the atmosphere of a gas giant planet (Saturn Rukh), on the surface of a neutron star (Dragon's Egg), or on an enormous alien-engineered wheel surrounding a star (Ringworld). When done well, the environment becomes an integral part of the tale, shaping the characters and driving the plot. Greg Egan is one of the most accomplished of world builders. His fiction includes numerous examples of alien environments, with the consequences worked out and woven into the story.

The present novel may be his most ambitious yet: a world in which the fundamental properties of spacetime are different from those in our universe. Unfortunately, for this reader, the execution was unequal to the ambition and the result disappointing. I'll explain this in more detail, but let's start with the basics.

We inhabit a spacetime which is well-approximated by Minkowski space. (In regions where gravity is strong, spacetime curvature must be taken into account, but this can be neglected in most circumstances including those in this novel.) Minkowski space is a flat four-dimensional space where each point is identified by three space and one time coordinate. It is thus spoken of as a 3+1 dimensional space. The space and time dimensions are not interchangeable: when computing the spacetime separation of two events, the square of their spacetime interval is given by −t²+x²+y²+z², where t, x, y, and z are the coordinate differences between the events. Minkowski space is said to have a metric signature of (−,+,+,+), from the signs of the four coordinates in the distance (metric) equation.

Why does our universe have a dimensionality of 3+1? Nobody knows—string theorists who argue for a landscape of universes in an infinite multiverse speculate that the very dimensionality of a universe may be set randomly when the baby universe is created in its own big bang bubble. Max Tegmark has argued that universes with other dimensionalities would not permit the existence of observers such as us, so we shouldn't be surprised to find ourselves in one of the universes which is compatible with our own existence, nor should we rule out a multitude of other universes with different dimensionalities, all of which may be devoid of observers.

But need they necessarily be barren? The premise of this novel is, “not necessarily so”, and Egan has created a universe with a metric signature of (−,−,+,+), a 2+2 dimensional spacetime with two spacelike dimensions and two timelike dimensions. Note that “timelike” refers to the sign of the dimension in the distance equation, and the presence of two timelike dimensions is not equivalent to two time dimensions. There is still a single dimension of time, t, in which events occur in a linear order just as in our universe. The second timelike dimension, which we'll call u, behaves like a spatial dimension in that objects can move within it as they can along the other x and y spacelike dimensions, but its contribution in the distance equation is negative: −t²−u²+x²+y². This results in a seriously weird, if not bizarre world.
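The difference between the two distance equations is easy to make concrete: a metric signature is just the pattern of signs applied to the squared coordinate differences. This little sketch (mine, not the author's) shows the same unit step coming out spacelike in our universe and timelike in Egan's:

```python
def interval2(signature, deltas):
    """Squared separation of two events under a given metric signature."""
    return sum(s * d * d for s, d in zip(signature, deltas))

minkowski   = (-1, +1, +1, +1)   # coordinates (t, x, y, z): -t² + x² + y² + z²
dichronauts = (-1, -1, +1, +1)   # coordinates (t, u, x, y): -t² - u² + x² + y²

# A unit step along the second axis is spacelike (positive) in our universe...
print(interval2(minkowski, (0, 1, 0, 0)))     # 1: a step along x
# ...but timelike (negative) in the Dichronauts universe, where it is u.
print(interval2(dichronauts, (0, 1, 0, 0)))   # -1: a step along u
```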

From this point on, just about everything I'm going to say can be considered a spoiler if your intention is to read the book from front to back and not consult the extensive background information on the author's Web site. Conversely, I shall give away nothing regarding the plot or ending which is not disclosed in the background information or the technical afterword of the novel. I do not consider this material as spoilers; in fact, I believe that many readers who do not first understand the universe in which the story is set are likely to abandon the book as simply incomprehensible. Some of the masters of world building science fiction introduce the reader to the world as an ongoing puzzle as the story unfolds but, for whatever reason, Egan did not choose to do that here, or else he did so sufficiently poorly that this reader didn't even notice the attempt. I think the publisher made a serious mistake in not alerting the reader to the existence of the technical afterword, the reading of which I consider a barely sufficient prerequisite for understanding the setting in which the novel takes place.

In the Dichronauts universe, there is a “world” around which a smaller “star” orbits (or maybe the other way around; it's just a coordinate transformation). The geometry of the spacetime dominates everything. While in our universe we're free to move in any of the three spatial dimensions, in this spacetime motion in the x and y dimensions is as for us, but if you're facing in the positive x dimension—let's call it east—you cannot rotate outside the wedge from northeast to southeast, and as you rotate the distance equation causes a stretching to occur, like the distortions in relativistic motion in special relativity. It is no more possible to turn all the way to the northeast than it is to attain the speed of light in our universe. If you were born east-facing, the only way you can see to the west is to bend over and look between your legs. The beings who inhabit this world seem to be born randomly east- or west-facing.
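The stretching with rotation follows directly from the signature: in the (x, u) plane a “rotation” must preserve x² − u² rather than x² + u², which makes it a hyperbolic rotation, mathematically identical to a Lorentz boost in special relativity. A quick sketch (my own illustration, not taken from the novel):

```python
import math

# Hyperbolic "rotation" in the (x, u) plane, parameterised by a rapidity eta.
# It preserves x² - u², just as a Lorentz boost preserves x² - t².
def hyper_rotate(x, u, eta):
    return (x * math.cosh(eta) + u * math.sinh(eta),
            x * math.sinh(eta) + u * math.cosh(eta))

for eta in (0.5, 2.0, 5.0):
    x, u = hyper_rotate(1.0, 0.0, eta)      # a being facing due "east"
    print(f"eta={eta}: ({x:.3f}, {u:.3f})  invariant={x*x - u*u:.4f}  "
          f"slope={u/x:.5f}")

# The invariant stays 1, the vector stretches without bound, and the slope
# u/x (= tanh eta) approaches but never reaches 1: turning all the way to
# "northeast" is as impossible as reaching the speed of light.
```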

Light only propagates within the cone defined by the spacelike dimensions. Any light source has a “dark cone” defined by a 45° angle around the timelike u dimension. In this region, vision does not work, so beings are blind to their sides. The creatures who inhabit the world are symbiotic pairs: bipeds who call themselves “walkers”, and slug-like creatures, “siders”, who live inside the walkers' skulls and receive their nutrients from the walker's bloodstream. Siders are equipped with “pingers”, which use echolocation like terrestrial bats to sense within the dark cone. While light cannot propagate there, physical objects can move in that direction, including the density waves which carry sound. Walkers and siders are linked at the brain level and can directly perceive each other's views of the world and communicate without speaking aloud. Both symbionts are independently conscious, bonded at a young age, and can, like married couples, have acrimonious disputes. While walkers cannot turn outside the 90° cone, they can move in the timelike north-south direction by “sidling”, relying upon their siders to detect obstacles within their cone of blindness.

Due to details of the structure of their world, the walker/sider society, which seems to be at a pre-industrial level (perhaps due to the fact that many machines would not work in the weird geometry they inhabit), is forced to permanently migrate to stay within the habitable zone between latitudes which are seared by the rays of the star and those too cold for agriculture. For many generations, the town of Baharabad has migrated along a river, but now the river appears to be drying up, creating a crisis. Seth (walker) and Theo (sider), are surveyors, charged with charting the course of their community's migration. Now they are faced with the challenge of finding a new river to follow, one which has not already been claimed by another community. On an expedition to the limits of the habitable zone, they encounter what seems to be the edge of the world. Is it truly the edge, and if not what lies beyond? They join a small group of explorers who probe regions of their world never before seen, and discover clues to the origin of their species.

This didn't work for me. If you read all of the background information first (which, if you're going to dig into this novel, I strongly encourage you to do), you'll appreciate the effort the author went to in order to create a mathematically consistent universe with two timelike dimensions, and to work out the implications of this for a world within it and the beings who live there. But there is a tremendous amount of arm waving behind the curtain which, if you peek, subverts the plausibility of everything. For example, the walker/sider creatures are described as having what seems to be a relatively normal metabolism: they eat fruit, grow crops, breathe, drink, urinate and defecate, and otherwise behave as biological organisms. But biology as we know it, and all of these biological functions, requires the complex stereochemistry of the organic molecules upon which organisms are built. If the motion of molecules were constrained to a cone, and their shape stretched with rotation, the operation of enzymes and other biochemistry wouldn't work. And yet that doesn't seem to be a problem for these beings.

Finally, the story simply stops in the middle, with the great adventure and resolution of the central crisis unresolved. There will probably be a sequel. I shall not read it.


Hirsi Ali, Ayaan. The Challenge of Dawa. Stanford, CA: Hoover Institution Press, 2017.
Ayaan Hirsi Ali was born in Somalia in 1969. In 1992 she was admitted to the Netherlands and granted political asylum on the basis of escaping an arranged marriage. She later obtained Dutch citizenship, and was elected to the Dutch parliament, where she served from 2001 through 2006. In 2004, she collaborated with Dutch filmmaker Theo van Gogh on the short film Submission, about the abuse of women in Islamic societies. After release of the film, van Gogh was assassinated, with a note containing a death threat for Hirsi Ali pinned to his corpse with a knife. Thereupon, she went into hiding with a permanent security detail to protect her against ongoing threats. In 2006, she moved to the U.S., taking a position at the American Enterprise Institute. She is currently a Fellow at the Hoover Institution.

In this short book (or long pamphlet: it is just 105 pages, with 70 pages of main text), Hirsi Ali argues that almost all Western commentators on the threat posed by Islam have fundamentally misdiagnosed the nature of the challenge it poses to Western civilisation and the heritage of the Enlightenment. Failing to understand the tactics of Islam's ambition to dominate the world, which date to Mohammed's revelations in Medina and his actions in that period of his life, these commentators have adopted strategies which are ineffective and in some cases counterproductive in confronting the present danger.

The usual picture of Islam presented by politicians and analysts in the West (at least those who admit there is any problem at all) is that most Muslims are peaceful, productive people who have no problems becoming integrated in Western societies, but there is a small minority, variously called “radical”, “militant”, “Islamist”, “fundamentalist”, or other names, who are bent on propagating their religion by means of violence, either in guerrilla or conventional wars, or by terror attacks on civilian populations. This view has led to involvement in foreign wars, domestic surveillance, and often intrusive internal security measures to counter the threat, which is often given the name of “jihad”. A dispassionate analysis of these policies over the last decade and a half must conclude that they are not working: despite trillions of dollars spent and thousands of lives lost, turning air travel into a humiliating and intimidating circus, and invading the privacy of people worldwide, the Islamic world seems to be, if anything, more chaotic than it was in the year 2000, and the frequency and seriousness of so-called “lone wolf” terrorist attacks against soft targets does not seem to be abating. What if we don't really understand what we're up against? What if jihad isn't the problem, or only a part of something much larger?

Dawa (or dawah, da'wah, daawa, daawah—there doesn't seem to be anything associated with this religion which isn't transliterated at least three different ways—the Arabic is “دعوة”) is an Arabic word which literally means “invitation”. In the context of Islam, it is usually translated as “proselytising”, or spreading the religion by nonviolent means, as is done by missionaries of many other religions. But here, Hirsi Ali contends that dawa, which is grounded in the fundamental scripture of Islam (the Koran and the Hadiths, the sayings of Mohammed), is something very different when interpreted and implemented by what she calls “political Islam”. As opposed to a distinction between moderate and radical Islam, she argues that Islam is more accurately divided into “spiritual Islam”, as revealed in the earlier Mecca suras of the Koran, and “political Islam”, embodied in those dating from Medina. Spiritual Islam defines a belief system, prayers, rituals, and duties of believers, but remains largely within the same bounds as other major religions. Political Islam, however, is a comprehensive system of politics, civil and criminal law, economics, the relationship with and treatment of nonbelievers, and military strategy, and imposes a duty to spread Islam into new territories.

Seen through the lens of political Islam, dawa and those engaged in it, often funded today by the deep coffers of petro-tyrannies, are nothing like the activities of, say, Roman Catholic or Mormon missionaries. Implemented through groups such as the Council on American-Islamic Relations (CAIR), centres on Islamic and Middle East studies on university campuses, mosques and Islamic centres in communities around the world, and so-called “charities” and non-governmental organisations, all bankrolled by fundamentalist champions of political Islam, dawa in the West operates much like the apparatus of Communist subversion described almost sixty years ago by J. Edgar Hoover in Masters of Deceit. You have the same pattern of apparently nonviolent and innocuously-named front organisations, efforts to influence the influential (media figures, academics, politicians), infiltration of institutions along the lines of Antonio Gramsci's “long march”, exploitation of Western traditions such as freedom of speech and freedom of religion to achieve goals diametrically opposed to them, and redefinition of the vocabulary and intimidation of any who dare state self-evident facts (mustn't be called “islamophobic”!), all funded from abroad. Unlike communists in the heyday of the Comintern and afterward during the Cold War, Islamic subversion is assisted by large scale migration of Muslims into Western countries, especially in Europe, where the organs of dawa encourage them to form their own separate communities, avoiding assimilation, and demanding the ability to implement their own sharia law and that others respect their customs. Dawa is directed at these immigrants as well, with the goal of increasing their commitment to Islam and recruiting them for its political agenda: the eventual replacement of Western institutions with sharia law and submission to a global Islamic caliphate.
This may seem absurdly ambitious for communities which, in most countries, aren't much greater than 5% of the population, but they're patient: they've been at it for fourteen centuries, and they're out-breeding the native populations in almost every country where they've become established.

Hirsi Ali argues persuasively that the problem isn't jihad: jihad is a tactic which can be employed as part of dawa when persuasion, infiltration, and subversion prove insufficient, or as a final step to put the conquest over the top, but it's the commitment to global hegemony, baked right into the scriptures of Islam, which poses the most dire risk to the West, especially since so few decision makers seem to be aware of it or, if they are, dare not speak candidly of it lest they be called “islamophobes” or worse. This is something about which I don't need to be persuaded: I've been writing about it since 2015; see “Clash of Ideologies: Communism, Islam, and the West”. I sincerely hope that this work by an eloquent observer who has seen political Islam from the inside will open more eyes to the threat it poses to the West. A reasonable set of policy initiatives to confront the threat is presented at the end. The only factual error I noted is the claim on p. 57 that Joseph R. McCarthy was in charge of the House Committee on Un-American Activities—in fact, McCarthy, a Senator, presided over the Senate Permanent Subcommittee on Investigations.

This is a publication of the Hoover Institution. It has no ISBN and cannot be purchased through usual booksellers. Here is the page for the book, whence you can download the PDF file for free.

 Permalink

Cline, Ernest. Ready Player One. New York: Broadway Books, 2011. ISBN 978-0-307-88744-3.
By the mid-21st century, the Internet has become largely subsumed as the transport layer for the OASIS (Ontologically Anthropocentric Sensory Immersive Simulation), a massively multiuser online virtual reality environment originally developed as a multiplayer game, but which rapidly evolved into a platform for commerce, education, social interaction, and entertainment used by billions of people around the world. The OASIS supports immersive virtual reality, limited only by the user's budget for hardware used to access the network. With top-of-the-line visors and sound systems, body motion sensors, and haptic feedback, coupled to a powerful interface console, a highly faithful experience was possible. The OASIS was the creation of James Halliday, a legendary super-nerd who made his first fortune designing videogames for home computers in the 1980s, and then re-launched his company in 2012 as Gregarious Simulation Systems (GSS), with the OASIS as its sole product. The OASIS was entirely open source: users could change things within the multitude of worlds within the system (within the limits set by those who created them), or create their own new worlds. Using a distributed computing architecture which pushed much of the processing power to the edge of the network, on users' own consoles, the system was able to grow without bound without requiring commensurate growth in GSS data centres. And it was free, or almost so. To access the OASIS, you paid only a one-time lifetime sign-up fee of twenty-five cents, just like the quarter you used to drop into the slot of an arcade videogame. Users paid nothing to use the OASIS itself: their only costs were the hardware they used to connect (which varied widely in cost and quality of the experience) and the bandwidth to connect to the network. But since most of the processing was done locally, the latter cost was modest. GSS made its money selling or renting virtual real estate (“surreal estate”) within the simulation. 
If you wanted to open, say, a shopping mall or build your own Fortress of Solitude on an asteroid, you had to pay GSS for the territory. GSS also sold virtual goods: clothes, magical artefacts, weapons, vehicles of all kinds, and buildings. Most were modestly priced, but since they cost nothing to manufacture, they were pure profit to the company.

As the OASIS permeated society, GSS prospered. Halliday remained the majority shareholder in the company, having bought back the stake once owned by his co-founder and partner Ogden (“Og”) Morrow, after what was rumoured to be a dispute between the two, the details of which were never revealed. By 2040, Halliday's fortune, almost all in GSS stock, had grown to more than two hundred and forty billion dollars. And then, after fifteen years of self-imposed isolation which some said was due to insanity, Halliday died of cancer. He was a bachelor, with no living relatives, no heirs, and, it was said, no friends. His death was announced on the OASIS in a five-minute video titled Anorak's Invitation (“Anorak” was the name of Halliday's all-powerful avatar within the OASIS). In the film, Halliday announces that his will places his entire fortune in escrow until somebody completes the quest he has programmed within the OASIS:

Three hidden keys open three secret gates,
Wherein the errant will be tested for worthy traits,
And those with the skill to survive these straits,
Will reach The End where the prize awaits.

The prize is Halliday's entire fortune and, with it, super-user control of the principal medium of human interaction, business, and even politics. Before fading out, Halliday shows three keys: copper, jade, and crystal, which must be obtained to open the three gates. Only after passing through the gates and passing the tests within them, will the intrepid paladin obtain the Easter egg hidden within the OASIS and gain control of it. Halliday provided a link to Anorak's Almanac, more than a thousand pages of journal entries made during his life, many of which reflect his obsession with 1980s popular culture, science fiction and fantasy, videogames, movies, music, and comic books. The clues to finding the keys and the Egg were widely believed to be within this rambling, disjointed document.

Given the stakes, and the contest's being open to anybody in the OASIS, what immediately came to be called the Hunt became a social phenomenon, all-consuming to some. Egg hunters, or “gunters”, immersed themselves in Halliday's journal and every pop culture reference within it, however obscure. All of this material was freely available on the OASIS, and gunters memorised every detail of anything which had caught Halliday's attention. As time passed, and nobody succeeded in finding even the copper key (Halliday's memorial site displayed a scoreboard of those who achieved goals in the Hunt, so far blank), many lost interest in the Hunt, but a dedicated hard core persisted, often to the exclusion of all other diversions. Some gunters banded together into “clans”, some very large, agreeing to exchange information and, if one found the Egg, to share the proceeds with all members. More sinister were the activities of Innovative Online Industries—IOI—a global Internet and communications company which controlled much of the backbone that underlay the OASIS. It had assembled a large team of paid employees, backed by the research and database facilities of IOI, with their sole mission to find the Egg and turn control of the OASIS over to IOI. These players, all with identical avatars and names consisting of their six-digit IOI employee numbers, all of which began with the digit “6”, were called “sixers” or, more often in the gunter argot, “Sux0rz”.

Gunters detested IOI and the sixers, because it was no secret that if they found the Egg, IOI's intention was to close the architecture of the OASIS, begin to charge fees for access, plaster everything with advertising, destroy anonymity, snoop indiscriminately, and use their monopoly power to put their thumb on the scale of all forms of communication including political discourse. (Fortunately, that couldn't happen to us with today's enlightened, progressive Silicon Valley overlords.) But IOI's financial resources were such that whenever a rare and powerful magical artefact (many of which had been created by Halliday in the original OASIS, usually requiring the completion of a quest to obtain, but freely transferrable thereafter) came up for auction, IOI was usually able to outbid even the largest gunter clans and add it to their arsenal.

Wade Watts, a lone gunter whose avatar is named Parzival, became obsessed with the Hunt on the day of Halliday's death, and, years later, devotes almost every minute of his life not spent sleeping or in school (like many, he attends school in the OASIS, and is now in the last year of high school) to the Hunt, reading and re-reading Anorak's Almanac and reading, listening to, playing, and viewing everything mentioned therein, to the extent that he can recite the dialogue of the movies from memory. He makes copious notes in his “grail diary”, named after the one kept by Indiana Jones. His friends, none of whom he has ever met in person, are all gunters who congregate on-line in virtual reality chat rooms such as that run by his best friend, Aech.

Then, one day, bored to tears and daydreaming in Latin class, Parzival has a flash of insight. Putting together a message buried in the Almanac that he and many other gunters had discovered but failed to understand, with a bit of Latin and his encyclopedic knowledge of role playing games, he decodes the clue and, after a demanding test, finds himself in possession of the Copper Key. His name, alone, now appears at the top of the scoreboard, with 10,000 points. The path to the First Gate was now open.

Discovery of the Copper Key was a sensation: suddenly Parzival, a humble level 10 gunter, is a worldwide celebrity (although his real identity remains unknown, as he refuses all media offers which would reveal or compromise it). Knowing that the key can be found re-energises other gunters, not to speak of IOI, and Parzival's footprints in the OASIS are scrupulously examined for clues to his achievement. (Finding a key and opening a gate does not render it unavailable to others. Those who subsequently pass the tests will receive their own copies of the key, although there is a point bonus for finding it first.)

So begins an epic quest by Parzival and other gunters, contending with the evil minions of IOI, whose potential gain is so high and ethics so low that the risks may extend beyond the OASIS into the real world. For the reader, it is a nostalgic romp through every aspect of the popular culture of the 1980s: the formative era of personal computing and gaming. The level of detail is just staggering: this may be the geekiest nerdfest ever published. Heck, there's even a reference to an erstwhile Autodesk employee! The only goof I noted is a mention of the “screech of a 300-baud modem during the log-in sequence”. Three hundred baud modems did not have the characteristic squawk and screech sync-up of faster modems which employ trellis coding. While there are a multitude of references to details which will make people who were there, then, smile, readers who were not immersed in the 1980s and/or less familiar with its cultural minutiæ can still enjoy the challenges, puzzles solved, intrigue, action, and epic virtual reality battles which make up the chronicle of the Hunt. The conclusion is particularly satisfying: there may be a bigger world than even the OASIS.

A movie based upon the novel, directed by Steven Spielberg, is scheduled for release in March 2018.

 Permalink

Gleick, James. Time Travel. New York: Pantheon Books, 2016. ISBN 978-0-307-90879-7.
In 1895, a young struggling writer who earned his precarious living by writing short humorous pieces for London magazines, often published without a byline, buckled down and penned his first long work, a novella of some 33,000 words. When published, H. G. Wells's The Time Machine would not only help to found a new literary genre—science fiction—but would introduce an entirely new concept to storytelling: time travel. Many of the themes of modern fiction can be traced to the myths of antiquity, but here was something entirely new: imagining a voyage to the future to see how current trends would develop, or back into the past, perhaps not just to observe history unfold and resolve its persistent mysteries, but possibly to change the past, opening the door to paradoxes which have been the subject not only of a multitude of subsequent stories but of theories and speculation by serious scientists. So new was the concept of travel through time that the phrase “time travel” first appeared in the English language only in 1914, in a reference to Wells's story.

For much of human history, there was little concept of a linear progression of time. People lived lives much the same as those of their ancestors, and expected their descendants to inhabit much the same kind of world. Their lives seemed to be governed by a series of cycles: day and night, the phases of the Moon, the seasons, planting and harvesting, and successive generations of humans, rather than the ticking of an inexorable clock. Even great disruptive events such as wars, plagues, and natural disasters seemed to recur over time, even if not on a regular, predictable schedule. This led to the philosophical view of “eternal return”, which appears in many ancient cultures and in Western philosophy from Pythagoras to Nietzsche. In mathematics, the Poincaré recurrence theorem formally demonstrated that an isolated finite system will eventually return to a given state (although possibly only after a time much longer than the age of the universe) and repeat its evolution an infinite number of times.
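The flavour of Poincaré recurrence can be illustrated with a toy model of my own (not from Gleick's book): Arnold's cat map acting on a finite grid of points. Because the state space is finite and the map is invertible, every starting point must eventually return to itself, just as the theorem promises for isolated finite systems.

```python
def cat_map(point, n):
    """One step of Arnold's cat map on the n-by-n integer torus:
    (x, y) -> (2x + y, x + y) mod n.  The map is invertible, so orbits
    cannot merge and every point lies on a closed cycle."""
    x, y = point
    return ((2 * x + y) % n, (x + y) % n)

def recurrence_time(point, n):
    """Number of steps until the starting point first recurs.
    Finiteness of the state space guarantees this loop terminates."""
    p, steps = cat_map(point, n), 1
    while p != point:
        p = cat_map(p, n)
        steps += 1
    return steps

# Every point on the finite torus returns to itself after finitely many steps.
for n in (2, 5, 10):
    print(f"grid {n}x{n}: point (1, 1) recurs after {recurrence_time((1, 1), n)} steps")
```

Recurrence is guaranteed, but the waiting time grows with the size of the state space; for a macroscopic physical system the analogous recurrence time dwarfs the age of the universe, which is why we never observe it.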

But nobody (except perhaps a philosopher) who had lived through the 19th century in Britain could really believe that. Over the space of a human lifetime, the world and the human condition had changed radically and seemed to be careening into a future difficult to envision. Steam power, railroads, industrialisation of manufacturing, the telegraph and telephone, electricity and the electric light, anaesthesia, antiseptics, steamships and global commerce, submarine cables and near-instantaneous international communications, had all remade the world. The idea of progress was not just an abstract concept of the Enlightenment, but something anybody could see all around them.

But progress through what? In the fin de siècle milieu that Wells inhabited, through time: a scroll of history being written continually by new ideas, inventions, creative works, and the social changes flowing from these events which changed the future in profound and often unknowable ways. The intellectual landscape was fertile for utopian ideas, many of which Wells championed. Among the intellectual élite, the fourth dimension was much in vogue, often a fourth spatial dimension but also the concept of time as a dimension comparable to those of space. This concept first appears in the work of Edgar Allan Poe in 1848, but was fully fleshed out by Wells in The Time Machine: “ ‘Clearly,’ the Time Traveller proceeded, ‘any real body must have extension in four dimensions: it must have Length, Breadth, Thickness, and—Duration.’ ” But if we can move freely through the three spatial directions (although less so in the vertical in Wells's day than the present), why cannot we also move back and forth in time, unshackling our consciousness and will from the tyranny of the timepiece just as the railroad, steamship, and telegraph had loosened the constraints of locality?

Just ten years after The Time Machine, Einstein's special theory of relativity resolved puzzles in electrodynamics and mechanics by demonstrating that time and space mixed depending upon the relative states of motion of observers. In 1908, Hermann Minkowski reformulated Einstein's theory in terms of a four dimensional space-time. He declared, “Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” (Einstein was, initially, less than impressed with this view, calling it “überflüssige Gelehrsamkeit”: superfluous learnedness, but eventually accepted the perspective and made it central to his 1915 theory of gravitation.) But further, embedded within special relativity, was time travel—at least into the future.

According to the equations of special relativity, which have been experimentally verified as precisely as anything in science and are fundamental to the operation of everyday technologies such as the Global Positioning System, a moving observer will measure time to flow more slowly than a stationary observer. We don't observe this effect in everyday life because the phenomenon only becomes pronounced at velocities which are a substantial fraction of the speed of light, but even at the modest velocity of orbiting satellites, it cannot be neglected. Due to this effect of time dilation, if you had a space ship able to accelerate at a constant rate of one Earth gravity (people on board would experience the same gravity as they do while standing on the Earth's surface), you would be able to travel from the Earth to the Andromeda galaxy and back to Earth, a round trip of around five million light years, in a time, measured by the ship's clock and your own subjective and biological perception of time, of less than sixty years. But when you arrived back at the Earth, you'd discover that in its reference frame, more than four million years of time would have elapsed. What wonders would our descendants have accomplished in that distant future, or would they be digging for grubs with blunt sticks while living in a sustainable utopia having finally thrown off the shackles of race, class, and gender which make our present civilisation a living Hell?
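These figures can be checked numerically with the standard relativistic rocket formulas for a ship that accelerates at one gravity to the midpoint of each leg and decelerates for the second half. This is a sketch of my own, not from the book, taking the distance to Andromeda as roughly 2.5 million light years:

```python
import math

C = 299_792_458.0        # speed of light, m/s
G = 9.80665              # one Earth gravity, m/s^2
YEAR = 365.25 * 86_400   # Julian year, s
LY = C * YEAR            # light year, m

def leg_times(distance_m, a=G):
    """Ship (proper) time and Earth (coordinate) time, in years, for one
    leg of the trip: accelerate at a to the midpoint, then decelerate
    to rest.  Uses tau = 2(c/a) acosh(1 + aD/2c^2) and the matching
    coordinate-time expression."""
    x = 1.0 + a * (distance_m / 2) / C**2
    tau = 2 * (C / a) * math.acosh(x) / YEAR
    t = 2 * (C / a) * math.sqrt(x**2 - 1.0) / YEAR
    return tau, t

tau_leg, t_leg = leg_times(2.5e6 * LY)
print(f"one way:    ship {tau_leg:.1f} yr, Earth {t_leg:.3e} yr")
print(f"round trip: ship {2 * tau_leg:.1f} yr, Earth {2 * t_leg:.3e} yr")
```

The striking asymmetry, a round trip of a few decades of ship time against millions of years back on Earth, falls straight out of the hyperbolic functions: ship time grows only logarithmically with distance, while Earth time grows linearly.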

This is genuine time travel into the future and, although it's far beyond our present technological capabilities, it violates no law of physics and, to a more modest yet still measurable degree, happens every time you travel in an automobile or airplane. But what about travel into the past? Travel into the future doesn't pose any potential paradoxes. It's entirely equivalent to going into hibernation and awaking after a long sleep—indeed, this is a frequently-used literary device in fiction depicting the future. Travel into the past is another thing entirely. For example, consider the grandfather paradox: suppose you have a time machine able to transport you into the past. You go back in time and kill your own grandfather (it's never the grandmother—beats me). Then who are you, and how did you come into existence in the first place? The grandfather paradox exists whenever altering an event in the past changes conditions in the future so as to be inconsistent with the alteration of that event.

Or consider the bootstrap paradox or causal loop. An elderly mathematician (say, age 39), having struggled for years before finally succeeding in proving a difficult theorem, travels back in time and provides a key hint to his twenty-year-old self to set him on the path to the proof—the same hint he remembers finding on his desk that morning so many years before. Where did the idea come from? In 1991, physicist David Deutsch demonstrated that a computer incorporating travel back in time (formally, a closed timelike curve) could solve NP problems in polynomial time. I wonder where he got that idea….

All of this would be academic were time travel into the past just a figment of fictioneers' imagination. This has been the view of many scientists, and the chronology protection conjecture asserts that the laws of physics conspire to prevent travel to the past which, in the words of a 1992 paper by Stephen Hawking, “makes the universe safe for historians.” But the laws of physics, as we understand them today, do not rule out travel into the past! Einstein's 1915 general theory of relativity, which so far has withstood every experimental test for over a century, admits solutions, such as the Gödel metric, discovered in 1949 by Einstein's friend and colleague Kurt Gödel, which contain closed timelike curves. In the Gödel universe, which consists of a homogeneous sea of dust particles, rotating around a centre point and with a nonzero cosmological constant, it is possible, by travelling on a closed path and never reaching or exceeding the speed of light, to return to a point in one's own past. Now, the Gödel solution is highly contrived, and there is no evidence that it describes the universe we actually inhabit, but the existence of such a solution leaves the door open that somewhere in the other exotica of general relativity such as spinning black holes, wormholes, naked singularities, or cosmic strings, there may be a loophole which allows travel into the past. If you discover one, could you please pop back and send me an E-mail about it before I finish this review?

This book is far more about the literary and cultural history of time travel than scientific explorations of its possibility and consequences. Thinking about time travel forces one to confront questions which can usually be swept under the rug: is the future ours to change, or do we inhabit a block universe where our perception of time is just a delusion as the cursor of our consciousness sweeps out a path in a space-time whose future is entirely determined by its past? If we have free will, where does it come from, when according to the laws of physics the future can be computed entirely from the past? If we can change the future, why not the past? If we changed the past, would it change the present for those living in it, or create a fork in the time line along which a different history would develop? All of these speculations are rich veins to be mined in literature and drama, and are explored here. Many technical topics are discussed only briefly, if at all, for example the Wheeler-Feynman absorber theory, which resolves a mystery in electrodynamics by positing a symmetrical solution to Maxwell's equations in which the future influences the past just as the present influences the future. Gleick doesn't go anywhere near my own experiments with retrocausality or the “presponse” experiments of investigators such as Dick Bierman and Dean Radin. I get it—pop culture beats woo-woo on the bestseller list.

The question of time has puzzled people for millennia. Only recently have we thought seriously about travel in time and its implications for our place in the universe. Time travel has been, and will doubtless continue to be, a source of speculation and entertainment, and this book is an excellent survey of its short history as a genre of fiction and of the science upon which it is founded.

 Permalink

Rahe, Paul A. The Spartan Regime. New Haven, CT: Yale University Press, 2016. ISBN 978-0-300-21901-2.
This thin volume (just 232 pages in the hardcover edition, only around 125 of which are the main text and appendices—the rest being extensive source citations, notes, and indices of subjects and people and place names) is intended as the introduction to an envisioned three-volume work on Sparta covering its history from the archaic period through the second battle of Mantinea in 362 b.c., where the defeat of a Sparta-led alliance at the hands of the Thebans paved the way for the Macedonian conquest of Greece.

In this work, the author adopts the approach to political science used in antiquity by writers such as Thucydides, Xenophon, and Aristotle: the principal factor determining the character of a political community is its constitution, or form of government, the rules which define membership in the community and which its members are expected to obey. The character of the citizens, in turn, is largely determined by the system of education and moral formation which shapes them.

Discerning these characteristics in any ancient society is difficult, but especially so in the case of Sparta, which was a society of warriors, not philosophers and historians. Almost all of the contemporary information we have about Sparta comes from outsiders who either visited the city at various times in its history or based their work upon the accounts of others who had. Further, the Spartans were famously secretive about the details of their society, so when ancient accounts differ, it is difficult to determine which, if any, is correct. One gets the sense that all of the direct documentary information we have about Sparta would fit on one floppy disc: everything else is interpretations based upon that meagre foundation. In recent centuries, scholars studying Sparta have seen it as everything from the prototype of constitutional liberty to a precursor of modern day militaristic totalitarianism.

Another challenge facing the modern reader and, one suspects, many ancients, in understanding Sparta was how profoundly weird it was. On several occasions whilst reading the book, I was struck that rarely in science fiction does one encounter a description of a society so thoroughly alien to those with which we are accustomed from our own experience or a study of history. First of all, Sparta was tiny: there were never as many as ten thousand full-fledged citizens. These citizens were descended from Dorians who had invaded the Peloponnese in the archaic period and subjugated the original inhabitants, who became helots: essentially serfs who worked the estates of the Spartan aristocracy in return for half of the crops they produced (about the same fraction of the fruit of their labour the helots of our modern enlightened self-governing societies are allowed to retain for their own use). Every full citizen, or Spartiate, was a warrior, trained from boyhood to that end. Spartiates not only did not engage in trade or work as craftsmen: they were forbidden to do so—such work was performed by non-citizens. With the helots outnumbering Spartiates by a factor of from four to seven (and even more as the Spartan population shrunk toward the end), the fear of an uprising was ever-present, and required maintenance of martial prowess among the Spartiates and subjugation of the helots.

How were these warriors formed? Boys were taken from their families at the age of seven and placed in a barracks with others of their age. Henceforth, they would return to their families only as visitors. They were subjected to a regime of physical and mental training, including exercise, weapons training, athletics, and mock warfare, as well as music and dancing. They learned the poetry, legends, and history of the city. All learned to read and write. After intense scrutiny and regular tests, the young man would face a rite of passage, krupteía, in which, for a full year, armed only with a dagger, he had to survive on his own in the wild, stealing what he needed, and instilling fear among the helots, whom he was authorised to kill if found in violation of curfew. Only after surviving this ordeal would the young Spartan be admitted as a member of a sussitíon, a combination of a men's club, a military mess, and the basic unit in the Spartan army. A Spartan would remain a member of this same group all his life and, even after marriage and fatherhood, would live and dine with them every day until the age of forty-five.

From the age of twelve, a boy in training would usually have a patron, or surrogate father, who was expected to initiate him into the world of the warrior and instruct him in the duties of citizenship. It was expected that there would be a homosexual relationship between the two, and that this would further cement the bond of loyalty to his brothers in arms. Upon becoming a full citizen and warrior, the young man was expected to take on a boy and continue the tradition. As with many modern utopian social engineers, the family was seen as an obstacle to the citizen's identification with the community (or, in modern terminology, the state), and the entire process of raising citizens seems to have been designed to transfer this inherent biological solidarity with kin to peers in the army and the community as a whole.

The political structure which sustained and, in turn, was sustained by these cultural institutions was similarly alien and intricate—so much so that I found myself wishing that Professor Rahe had included a diagram to help readers understand all of the moving parts and how they interacted. After finishing the book, I found this one on Wikipedia.

Structure of Government in Sparta
Image by Wikipedia user Putinovac licensed under the
Creative Commons Attribution 3.0 Unported license.

The actual relationships are even more complicated and subtle than expressed in this diagram, and given the extent to which scholars dispute the details of the Spartan political institutions (which occupy many pages in the end notes), it is likely that the author would find fault with some aspects of this illustration. I present it purely because it provides a glimpse of the complexity and helped me organise my thoughts about the description in the text.

Start with the kings. That's right, “kings”—there were two of them—both traditionally descended from Hercules, but through different lineages. The kings shared power and acted as a check on each other. They were commanders of the army in time of war, and high priests in peace. The kingship was hereditary and for life.

Five overseers, or ephors, were elected annually by the citizens as a whole. Scholars dispute whether ephors could serve more than one term, but the author notes that no ephor is known to have done so, and it is thus likely they were term-limited to a single year. During their year in office, the board of five ephors (one from each of the villages of Sparta) exercised almost unlimited power in both domestic and foreign affairs. Even the kings were not immune to their power: the ephors could arrest a king and bring him to trial on a capital charge just like any other citizen, and this happened. On the other hand, at the end of their one-year term, ephors were subject to a judicial examination of their acts in office and were liable for misconduct. (Wouldn't it be great if present-day “public servants” received the same kind of scrutiny at the end of their terms in office? It would be interesting to see what a prosecutor could discover about how so many of these solons manage to amass great personal fortunes incommensurate with their salaries.) And then there was the “fickle meteor of doom” rule.

Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping the ephors didn't pick a night in mid-August for their vigil!

The ephors could also summon the council of elders, or gerousίa, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders.

All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might take for the invention of a Golden Age author, paid by the word, making ends meet by building fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S. Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, and suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage.

Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as many of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text.

In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.

 Permalink

July 2017

Smith, L. Neil. Blade of p'Na. Rockville, MD: Phoenix Pick, 2017. ISBN 978-1-61242-218-3.
This novel is set in the “Elders” universe, originally introduced in the 1990 novels Contact and Commune and Converse and Conflict, and now collected in an omnibus edition with additional material, Forge of the Elders. Around four hundred million years ago the Elders, giant mollusc-like aquatic creatures with shells the size of automobiles, conquered aging, and since then none has died except due to accident or violence. And precious few have succumbed to those causes: accident because the big squid are famously risk averse, and violence because, after a societal adolescence in which they tried and rejected many political and economic bad ideas, they settled on p'Na as the central doctrine of their civilisation: the principle that nobody has the right to initiate physical force against anybody else for any reason—much like the Principle of Non-Aggression, don't you know.

On those rare occasions when order is disturbed, the services of a p'Nan “debt assessor” are required. Trained in the philosophy of p'Na, martial arts, and psychology, and burnished through a long apprenticeship, assessors are called in either after an event in which force has been initiated or by those contemplating a course which might step over the line. The assessor has sole discretion in determining culpability and the form and magnitude of restitution due and, when no other restitution is possible, in enforcing the ultimate penalty on the guilty. The assessor's sword, the Blade of p'Na, is not just a badge of office but the means of restitution in such cases.

The Elders live on one of a multitude, possibly an infinity, of parallel Earths in a multiverse where each planet's history has diverged due to contingent events in its past. Some millennia after adopting p'Na, they discovered the means of observing, then moving among, these different universes and their variant Earths. Some millennia after achieving biological immortality and peace through p'Na, their curiosity and desire for novelty prompted them to begin collecting beings from across the multiverse. Some were rescues of endangered species, while others would be more accurately described as abductions. They referred to this with the euphemism of “appropriation”, as if that made any difference. The new arrivals: insectoid, aquatic, reptilian, mammalian, avian, and even sentient plants, mostly seemed happy in their new world, where the Elders managed to create the most diverse and peaceful society known in the universe.

This went on for a million years or so until, just like the revulsion against slavery in the 19th century in our timeline, somesquid happened to notice that the practice violated the fundamental principle of their society. Appropriations immediately ceased, debt assessors were called in, and before long all of the Elders implicated in appropriation committed suicide (some with a little help). But that left the question of restitution to the appropriated. Dumping them back into their original universes, often war-torn, barbarous, primitive, or with hostile and unstable environments after up to a million years of peace and prosperity on the Elders' planet didn't make the ethical cut. They settled on granting full citizenship to all the appropriated, providing them the gift of biological immortality, cortical implants to upgrade the less sentient to full intelligence, and one more thing…. The Elders had developed an unusual property: the tips of their tentacles could be detached and sent on errands on behalf of their parent bodies. While not fully sentient, the tentacles could, by communicating via cortical implants, do all kinds of useful work and allow the Elders to be in multiple places at once (recall that the Elders, like terrestrial squid, have ten tentacles—if they had twelve, they'd call them twelvicles, wouldn't they?). So for each of the appropriated species, the Elders chose an appropriate symbiote who, upgraded in intelligence and self-awareness and coupled to the host by their own implant, provided a similar benefit to them. For humanoids, it was dogs, or their species' canids.

(You might think that all of this constitutes spoilers, but it's just the background for the Elders' universe which is laid out in the first few chapters for the benefit of readers who haven't read the earlier books in the series.)

Hundreds of millions of years after the Great Restitution, Eichra Oren (those of his humanoid species always use both names) is a p'Na debt assessor. His symbiote, Oasam Otusam, a super-intelligent, indiscriminately libidinous, and wisecracking dog, prefers to go by “Sam”. So peaceful is the planet of the Elders that most of the cases Eichra Oren is called upon to resolve are routine and mundane, such as the current client, an arachnid about the size of a dinner table, seeking help in tracking down her fiancé, who has vanished three days before the wedding. This raises some ethical issues because, among their kind, traditionally “Saying ‘I do’ is the same as saying ‘bon appétit’ ”. Many, among sapient spiders, have abandoned the Old Ways, but some haven't. After discussion, in which Sam says, “You realize that in the end, she's going to eat him”, they decide, nonetheless, to take the case.

The caseload quickly grows as the assessor is retained by investors in a project led by an Elder named Misterthoggosh, whose fortune comes from importing reality TV from other universes (there is no multiverse copyright convention; the p'Na is cool with cultural appropriation) and distributing it to the multitude of species on the Elders' world. He (little is known of the Elders' biology…some say the females are non-sentient and vestigial) is now embarking on a new project, and the backers want a determination by an assessor that it will not violate p'Na, for which they would be jointly and severally responsible. The lead investor is a star-nosed mole obsessed by golf.

Things become even more complicated after a mysterious attack which appears to have been perpetrated by the “greys”, creatures who inhabit the mythology and nightmares of a million sapient species. This raises the suspicion and fear that somewhere else in the multiverse, another species, with wicked intent, has developed the technology of opening gates between universes, something so far achieved only by the now-benign Elders.

What follows is a romp filled with interesting questions. Should you order the vegan plate in a restaurant run by intelligent plants? What are the ethical responsibilities of a cyber-assassin who is conscious yet incapable of refusing orders to kill? What is a giant squid's idea of a pleasure yacht? If two young spiders are amorously attracted, is it only pupæ love? The climax forces the characters to confront the question of the extent to which beings which are part of a hive mind are responsible for the actions of the collective.

L. Neil Smith's books have sometimes been criticised for being preachy libertarian tracts with a garnish of science fiction. I've never found them to be such, but you certainly can't accuse this one of that. It's set in a world governed for æons by the principle of non-aggression, but that foundation of civil society works so well that it takes an invasion from another universe to create the conflict which is central to the plot. Readers are treated to the rich and sometimes zany imagination of a world inhabited by almost all imaginable species, where the only tensions among them are due to atavistic instincts such as those of dogs toward tall plants, combined with the humour, ranging from broad to wry, of our canine narrator, Sam.

 Permalink