Last 10 Books Read, Newest to Oldest


July 2018

Carreyrou, John. Bad Blood. New York: Alfred A. Knopf, 2018. ISBN 978-1-984833-63-1.
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor's office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor's assistant who draws the blood (whom I've nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I'd slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you're curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.

Wouldn't it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor's office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn't quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.

This was the vision that came to 19-year-old Elizabeth Holmes after completing a summer internship at the Genome Institute of Singapore following her freshman year as a chemical engineering major at Stanford. Holmes had decided on a career in entrepreneurship from an early age and, after her first semester, told her father, “No, Dad, I'm not interested in getting a Ph.D. I want to make money.” And Stanford, in the heart of Silicon Valley, was surrounded by companies started by professors and graduates who had turned inventions into vast fortunes. With only one year of college behind her, she was sure she'd found her opportunity. She showed the patent application she'd drafted for an arm patch that would diagnose medical conditions to Channing Robertson, professor of chemical engineering at Stanford, and Shaunak Roy, the Ph.D. student in whose lab she had worked as an assistant during her freshman year. Robertson was enthusiastic, and when Holmes said she intended to leave Stanford and start a company to commercialise the idea, he encouraged her. When the company was incorporated in 2004, Roy, then a newly-minted Ph.D., became its first employee and Robertson joined the board.

From the outset, the company was funded by other people's money. Holmes persuaded a family friend, Tim Draper, a second-generation venture capitalist who had backed, among other companies, Hotmail, to invest US$ 1 million in first round funding. Draper was soon joined by Victor Palmieri, a corporate turnaround artist and friend of Holmes' father. The company was named Theranos, from “therapy” and “diagnosis”. Elizabeth, unlike this scribbler, had a lifelong aversion to needles, and the invention she described in the business plan pitched to investors was informed by this. A skin patch would draw tiny quantities of blood without pain by means of “micro-needles”, the blood would be analysed by micro-miniaturised sensors in the patch and, if needed, medication could be injected. A wireless data link would send results to the doctor.

This concept, together with Elizabeth's enthusiasm and high-energy pitch, allowed her to recruit additional investors, raising almost US$ 6 million in 2004. But there were some who failed to be persuaded: MedVentures Associates, a firm that specialised in medical technology, turned her down after discovering she had no answers for the technical questions raised in a meeting with the partners, who had in-depth experience with diagnostic technology. This would be a harbinger of the company's fund-raising in the future: in its entire history, not a single venture fund or investor with experience in medical or diagnostic technology would put money into the company.

Shaunak Roy, who, unlike Holmes, actually knew something about chemistry, quickly realised that Elizabeth's concept, while appealing to the uninformed, was science fiction, not science, and no amount of arm-waving about nanotechnology, microfluidics, or laboratories on a chip would suffice to build something which was far beyond the state of the art. This led to a “de-scoping” of the company's ambition—the first of many which would happen over succeeding years. Instead of Elizabeth's magical patch, a small quantity of blood would be drawn from a finger stick and placed into a cartridge around the size of a credit card. The disposable cartridge would then be placed into a desktop “reader” machine, which would, using the blood and reagents stored in the cartridge, perform a series of analyses and report the results. This was originally called Theranos 1.0, but after a series of painful redesigns, was dubbed the “Edison”. This was the prototype Theranos ultimately showed to potential customers and prospective investors.

This was a far cry from the original ambitious concept. The hundreds of laboratory tests doctors can order are divided into four major categories: immunoassays, general chemistry, hæmatology, and DNA amplification. In immunoassay tests, blood plasma is exposed to an antibody that detects the presence of a substance in the plasma. The antibody contains a marker which can be detected by its effect on light passed through the sample. Immunoassays are used in a number of common blood tests, such as the 25(OH)D assay used to test for vitamin D deficiency, but cannot perform other frequently ordered tests such as blood sugar and red and white blood cell counts. Edison could only perform what are called “chemiluminescent immunoassays”, and thus could only perform a fraction of the tests regularly ordered. The rationale for installing an Edison in the doctor's office was dramatically reduced if it could only do some tests but still required that a venous blood draw be sent off to the laboratory for the balance.

This didn't deter Elizabeth, who combined her formidable salesmanship with arm-waving about the capabilities of the company's products. She was working on a deal to sell four hundred Edisons to the Mexican government to cope with an outbreak of swine flu, which would generate immediate revenue. Money was much on the minds of Theranos' senior management. By the end of 2009, the company had burned through the US$ 47 million raised in its first three rounds of funding and, without a viable product or prospects for sales, would have difficulty keeping the lights on.

But the real bonanza loomed on the horizon in 2010. Drugstore giant Walgreens was interested in expanding their retail business into the “wellness market”: providing in-store health services to their mass market clientèle. Theranos pitched them on offering in-store blood testing. Doctors could send their patients to the local Walgreens to have their blood tested from a simple finger stick and eliminate the need to draw blood in the office or deal with laboratories. With more than 8,000 locations in the U.S., if each were to be equipped with one Edison, the revenue to Theranos (including the single-use testing cartridges) would put them on the map as another Silicon Valley disruptor that went from zero to hundreds of millions in revenue overnight. But here, as well, the Elizabeth effect was in evidence. Of the 192 tests she told Walgreens Theranos could perform, fewer than half were immunoassays the Edisons could run. The rest could be done only on conventional laboratory equipment, and certainly not on a while-you-wait basis.

Walgreens wasn't the only potential saviour on the horizon. Grocery godzilla Safeway, struggling with sales and earnings which seemed to have reached a peak, saw in-store blood testing with Theranos machines as a high-margin profit centre. They loaned Theranos US$ 30 million and began to plan for installation of blood testing clinics in their stores.

But there was a problem, and as the months wore on, this became increasingly apparent to people at both Walgreens and Safeway, although dismissed by those in senior management under the spell of Elizabeth's reality distortion field. Deadlines were missed. Simple requests, such as A/B comparison tests run on the Theranos hardware and at conventional labs, were first refused, then postponed, then run but with the results not disclosed. The list of tests which could be run, how blood for them would be drawn, and how they would be processed seemed to dissolve into fog whenever specific requests were made for this information, which was essential for planning the in-store clinics.

There was, indeed, a problem, and it was pretty severe, especially for a start-up which had burned through US$ 50 million and sold nothing. The product didn't work. Not only could the Edison only run a fraction of the tests its prospective customers had been led by Theranos to believe it could, for those it did run, the results were wildly unreliable. The small quantity of blood used in the test introduced random errors due to dilution of the sample; the small tubes in the cartridge were prone to clogging; and capillary blood collected from a finger stick was prone to errors due to “hæmolysis”, the rupture of red blood cells, which is minimal in a venous blood draw but so prevalent in finger stick blood it could lead to some tests producing values which indicated the patient was dead.

Meanwhile, people who came to work at Theranos quickly became aware that it was not a normal company, even by the eccentric standards of Silicon Valley. There was an obsession with security, with doors opened by badge readers; logging of employee movement; information restricted to narrow silos, prohibiting the collaboration between, say, engineering and marketing which is the norm in technological start-ups; monitoring of employee Internet access, E-mail, and social media presence; a security detail of menacing-looking people in black suits and earpieces (which eventually reached a total of twenty); a propensity of people, even senior executives, to “vanish”, Stalin-era purge-like, overnight; and a climate of fear that anybody, employee or former employee, who spoke about the company or its products to an outsider, especially the media, would be pursued, harassed, and bankrupted by lawsuits. There aren't many start-ups whose senior scientists are summarily demoted and subsequently commit suicide. That happened at Theranos. The company held no memorial for him.

Throughout all of this, a curious presence in the company was Ramesh (“Sunny”) Balwani, a Pakistani-born software engineer who had made a fortune of more than US$ 40 million in the dot-com boom and cashed out before the bust. He joined Theranos in late 2009 as Elizabeth's second in command and rapidly became known as a hatchet man and domineering boss, clueless when it came to the company's key technologies (on one occasion, an engineer mentioned a robotic arm's “end effector”, after which Sunny would frequently speak of its “endofactor”). Unbeknownst to employees and investors, Elizabeth and Sunny had been living together since 2005. Such an arrangement would be a major scandal in a public company, but even in a private firm, concealing such information from the board and investors is a serious breach of trust.

Let's talk about the board, shall we? Elizabeth was not only persuasive, but well-connected. She would parlay one connection into another, and before long had recruited many prominent figures including:

  • George Shultz (former U.S. Secretary of State)
  • Henry Kissinger (former U.S. Secretary of State)
  • Bill Frist (former U.S. Senator and medical doctor)
  • James Mattis (General, U.S. Marine Corps)
  • Riley Bechtel (Chairman and former CEO, Bechtel Group)
  • Sam Nunn (former U.S. Senator)
  • Richard Kovacevich (former Wells Fargo chairman and CEO)

Later, super-lawyer David Boies would join the board, and lead its attacks against the company's detractors. It is notable that, as with its investors, not a single board member had experience in medical or diagnostic technology. Bill Frist was an M.D., but his speciality was heart and lung transplants, not laboratory tests.

By 2014, Elizabeth Holmes had come onto the media radar. Photogenic, articulate, and with a story of high-tech disruption of an industry much in the news, she began to be featured as the “female Steve Jobs”, which must have pleased her, since she affected black turtlenecks, kale shakes, and even a car with no license plates to emulate her role model. She appeared on the cover of Fortune in January 2014, made the Forbes 400 list of the wealthiest Americans shortly thereafter, was featured in puff pieces in business and general market media, and was named by Time as one of the hundred most influential people in the world. The year 2014 closed with another glowing profile in the New Yorker. This would be the beginning of the end, as it happened to be read by somebody who actually knew something about blood testing.

Adam Clapper, a pathologist in Missouri, spent his spare time writing Pathology Blawg, with a readership of practising pathologists. Clapper read what Elizabeth was claiming to do with a couple of drops of blood from a finger stick and it didn't pass the sniff test. He wrote a sceptical piece on his blog and, as it passed from hand to hand, he became a lightning rod for others dubious of Theranos' claims, including those with direct or indirect experience with the company. Earlier, he had helped a Wall Street Journal reporter comprehend the tangled web of medical laboratory billing, and he decided to pass on the tip to the author of this book.

Thus began the unravelling of one of the greatest scams and scandals in the history of high technology, Silicon Valley, and venture investing. At the peak, privately-held Theranos was valued at around US$ 9 billion, with Elizabeth Holmes holding around half of its common stock, and with one of those innovative capital structures of which Silicon Valley is so fond, 99.7% of the voting rights. Altogether, over its history, the company raised around US$ 900 million from investors (including US$ 125 million from Rupert Murdoch in the US$ 430 million final round of funding). Most of the investors' money was ultimately spent on legal fees as the whole fairy castle crumbled.

The story of the decline and fall is gripping, involving the grandson of a Secretary of State, gumshoes following whistleblowers and reporters, what amounts to legal terrorism by the ever-slimy David Boies, courageous people who stood their ground in the interest of scientific integrity against enormous personal and financial pressure, and the saga of one of the most cunning and naturally talented confidence women ever, equipped with only two semesters of freshman chemical engineering, who managed to raise and blow through almost a billion dollars of other people's money without checking off the first box on the conventional start-up check list: “Build the product”.

I have, in my career, met three world-class con men. Three times, I (just barely) managed to pick up the warning signs and beg my associates to walk away. Each time I was ignored. After reading this book, I am absolutely sure that had Elizabeth Holmes pitched me on Theranos (about which I never heard before the fraud began to be exposed), I would have been taken in. Walker's law is “Absent evidence to the contrary, assume everything is a scam”. A corollary is “No matter how cautious you are, there's always a confidence man (or woman) who can scam you if you don't do your homework.”

Here is Elizabeth Holmes at Stanford in 2013, when Theranos was riding high and she was doing her “female Steve Jobs” act.

Elizabeth Holmes at Stanford: 2013

This is a CNN piece, filmed after the Theranos scam had begun to collapse, in which you can still glimpse the Elizabeth Holmes reality distortion field at full intensity directed at CNN medical correspondent Sanjay Gupta. There are several curious things about this video. The machine that Gupta is shown is the “miniLab”, a prototype second-generation machine which never worked acceptably, not the Edison, which was actually used in the Walgreens and Safeway tests. Gupta's blood is drawn and tested, but the process used to perform the test is never shown. The result reported is a cholesterol test, but the Edison cannot perform such tests. In the plans for the Walgreens and Safeway roll-outs, such tests were performed on purchased Siemens analysers which had been secretly hacked by Theranos to work with blood diluted well below their regulatory-approved specifications (the dilution was required due to the small volume of blood from the finger stick). Since the miniLab never really worked, the odds are that Gupta's blood was tested on one of the Siemens machines, not a Theranos product at all.

CNN: Inside the Theranos Lab (2016)

In a June 2018 interview, author John Carreyrou recounts the story of Theranos and his part in revealing the truth.

John Carreyrou on investigating Theranos (2018)

If you are a connoisseur of the art of the con, here is a masterpiece. After the Wall Street Journal exposé had broken, after retracting tens of thousands of blood tests, and after Theranos had been banned from running a clinical laboratory by its regulators, Holmes got up before an audience of 2500 people at the meeting of the American Association of Clinical Chemistry and turned up the reality distortion field to eleven. Watch a master at work. She comes on the stage at the six minute mark.

Elizabeth Holmes at the American Association of Clinical Chemistry (2016)


June 2018

Nury, Fabien and Thierry Robin. La Mort de Staline. Paris: Dargaud, [2010, 2012] 2014. ISBN 978-2-205-07351-5.
The 2017 film, The Death of Stalin, was based upon this French bande dessinée (BD, graphic novel, or comic). The story is based around the death of Stalin and the events that ensued: the scheming and struggle for power among the members of his inner circle, the reactions and relationships of his daughter Svetlana and wastrel son Vasily, the conflict between the Red Army and NKVD, the maneuvering over the arrangements for Stalin's funeral, and the all-encompassing fear and suspicion that Stalin's paranoia had infused into the Soviet society. This is a fictional account, grounded in documented historical events, in which the major characters were real people. But the authors are forthright in saying they invented events and dialogue to tell a story which is intended to give one a sense of the «folie furieuse de Staline et de son entourage» rather than provide a historical narrative.

The film adaptation is listed as a comedy and, particularly if you have a taste for black humour, is quite funny. This BD is not explicitly funny, except in an ironic sense, illustrating the pathological behaviour of those surrounding Stalin. Many of the sequences in this work could have been used as storyboards for the movie, but there are significant events here which did not make it into the screenplay. The pervasive strong language which earned the film an R rating is little in evidence here.

The principal characters and their positions are introduced by boxes overlaying the graphics, much as was done in the movie. Readers who aren't familiar with the players in Stalin's Soviet Union, such as Beria, Zhukov, Molotov, Malenkov, Khrushchev, Mikoyan, and Bulganin, may miss some of the nuances of their behaviour here, which is driven by this back-story. Their names are given using the French transliteration of Russian, which is somewhat different from that used in English (for example, “Krouchtchev” instead of “Khrushchev”). The artwork is intricately drawn in a realistic style, with comic idioms used only sparingly to illustrate things like gunshots.

I enjoyed both the movie (which I saw first, not knowing until the end credits that it was based upon this work) and the BD. They're different takes on the same story, and both work on their own terms. This is not the kind of story for which “spoilers” apply, so you'll lose nothing by enjoying both in either order.

The album cited above contains both volumes of the original print edition. The Kindle edition continues to be published in two volumes (Vol. 1, Vol. 2). An English translation of the graphic novel is available. I have not looked at it beyond the few preview pages available on Amazon.


Suarez, Daniel. Influx. New York: Signet, [2014] 2015. ISBN 978-0-451-46944-1.
Doesn't it sometimes seem that, sometime in the 1960s, the broad march of technology just stopped? Certainly, there has been breathtaking progress in some fields, particularly computation and data communication, but what about clean, abundant fusion power too cheap to meter, opening up the solar system to settlement, prevention and/or effective treatment of all kinds of cancer, anti-aging therapy, artificial general intelligence, anthropomorphic robotics, and the many other wonders we expected to be commonplace by the year 2000?

Decades later, Jon Grady is toiling in his obscure laboratory to make one of those dreams—gravity control—a reality. His lab is invaded by notorious Luddite terrorists who plan to blow up his apparatus and team. The fuse burns down into the charge, and all flashes white, then black. When he awakes, he finds himself, in good condition, in a luxurious office suite in a skyscraper, where he is introduced to the director of the Federal Bureau of Technology Control (BTC). The BTC, which appears in no federal organisation chart or budget, is charged with detecting potentially emerging disruptive technologies, controlling and/or stopping them (including deploying Luddite terrorists, where necessary), co-opting their developers into working in deep secrecy with the BTC, and releasing the technologies only when human nature and social and political institutions are “ready” for them—as determined by the BTC.

But of course those technologies exist within the BTC, and it uses them: unlimited energy, genetically engineered beings, clones, artificial intelligence, and mind control weapons. Grady is offered a devil's bargain: join the BTC and work for them, or suffer the worst they can do to those who resist and see his life's work erased. Grady turns them down.

At first, his fate doesn't seem that bad but then, as the creative and individualistic are wont to do, he resists and discovers the consequences when half a century's suppressed technologies are arrayed against a defiant human mind. How is he to recover his freedom and attack the BTC? Perhaps there are others, equally talented and defiant, in the same predicament? And, perhaps, the BTC, with such great power at its command, is not so monolithic and immune from rivalry, ambition, and power struggles as it would like others to believe. And what about other government agencies, fiercely protective of their own turf and budgets, and jealous of any rivals?

Thus begins a technological thriller very different from the author's earlier Dæmon (August 2010) and Freedom™ (January 2011), but equally compelling. How does a band of individuals take on an adversary which can literally rain destruction from the sky? What is the truth beneath the public face of the BTC? What does a superhuman operative do upon discovering everything has been a lie? And how can one be sure it never happens again?

With this novel Daniel Suarez reinforces his reputation as an emerging grand master of the techno-thriller. This book won the 2015 Prometheus Award for best libertarian novel.


Mills, Kyle. Enemy of the State. New York: Atria Books, 2017. ISBN 978-1-4767-8351-2.
This is the third novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. It is the sixteenth novel in the Mitch Rapp series (Flynn's first novel, Term Limits [November 2009], is set in the same world and shares characters with the Mitch Rapp series, but Rapp does not appear in it, so it isn't considered a Rapp novel). Mills continues to develop the Rapp story in new directions, while maintaining the action-packed and detail-rich style which made the series so successful.

When a covert operation tracking the flow of funds to ISIS discovers that a (minor) member of the Saudi royal family is acting as a bagman, the secret deal between the U.S. and Saudi Arabia struck in the days after the 2001 terrorist attacks on the U.S.—the U.S. would hide the ample evidence of Saudi involvement in the plot in return for the Saudis dealing with terrorists and funders of terrorism within the Kingdom—is called into question. The president of the U.S., who might be described in modern jargon as “having an anger management problem”, decides the time has come to get to the bottom of what the Saudis are up to: is it a few rogue ne'er-do-wells, or is the leadership up to their old tricks of funding and promoting radical Islamic infiltration and terrorism in the West? And if they are, he wants to make them hurt, so they don't even think about trying it again.

When it comes to putting the hurt on miscreants, the president's go-to-guy is Mitch Rapp, the CIA's barely controlled loose cannon, who has a way of getting the job done even if his superiors don't know, and don't want to know, the details. When the president calls Rapp into his office and says, “I think you need to have a talk … and at the end of that talk I think he needs to be dead” there is little doubt about what will happen after Rapp walks out of the office.

But there is a problem. Saudi Arabia is, nominally at least, an important U.S. ally. It keeps the oil flowing and prices down, not only benefitting the world economy, but putting a lid on the revenue of troublemakers such as Russia and Iran. Saudi Arabia is a major customer of U.S. foreign military sales. Saudi Arabia is also a principal target of Islamic revolutionaries, and however bad it is today, one doesn't want to contemplate a post-Saudi regime raising the black flag of ISIS, crying havoc, and letting slip the goats of war. Wet work involving the royal family must not just be deniable but totally firewalled from any involvement by the U.S. government. In accepting the mission, Rapp understands that if things blow up, he will not only be on his own but in all likelihood have the U.S. government actively hunting him down.

Rapp hands in his resignation to the CIA, ending a relationship which has existed over all of the previous novels. He meets with his regular mission team and informs them he “need[s] to go somewhere you … can't follow”: involving them would create too many visible ties back to the CIA. If he's going to go rogue, he decides he must truly do so, and sets off assembling a rogues' gallery, composed mostly of former adversaries we've met in previous books. When he recruits his friend Claudia, who previously managed logistics for an assassin Rapp confronted in the past, she says, “So, a criminal enterprise. And only one of the people at this table knows how to be a criminal.”

Assembling this band of dodgy, dangerous, and devious characters at the headquarters of an arms dealer in that paradise which is Juba, South Sudan, Rapp plots an operation to penetrate the security surrounding the Saudi princeling and find out how high the Saudi involvement in funding ISIS goes. What they learn is disturbing in the extreme.

After an operation gone pear-shaped, and with the CIA, FBI, Saudis, and Sudanese factions all chasing him, Rapp and his misfit mob have to improvise and figure out how to break the link between the Saudis and ISIS in a way which will allow him to deny everything and get back to whatever is left of his life.

This is a thriller which is full of action, suspense, and characters fans of the series will have met before acting in ways which may be surprising. After a shaky outing in the previous installment, Order to Kill (December 2017), Kyle Mills has regained his stride and, while preserving the essentials of Mitch Rapp, is breaking new ground. It will be interesting to see if the next novel, Red War, expected in September 2018, continues to involve any of the new team. While you can read this as a stand-alone thriller, you'll enjoy it more if you've read the earlier books in which the members of Rapp's team were principal characters.


Oliver, Bernard M., John Billingham, et al. Project Cyclops. Stanford, CA: Stanford/NASA Ames Research Center, 1971. NASA-CR-114445 N73-18822.
There are few questions in science as simple to state and profound in their implications as “are we alone?”—are humans the only species with a technological civilisation in the galaxy, or in the universe? This has been a matter of speculation by philosophers, theologians, authors of fiction, and innumerable people gazing at the stars since antiquity, but it was only in the years after World War II, which had seen the development of high-power microwave transmitters and low-noise receivers for radar, that it dawned upon a few visionaries that this had now become a question which could be scientifically investigated.

The propagation of radio waves through the atmosphere and the interstellar medium is governed by basic laws of physics, and the advent of radio astronomy demonstrated that many objects in the sky, some very distant, could be detected in the microwave spectrum. But if we were able to detect these natural sources, suppose we connected a powerful transmitter to our radio telescope and sent a signal to a nearby star? It was easy to calculate that, given the technology of the time (around 1960), existing microwave transmitters and radio telescopes could transmit messages across interstellar distances.

But, it's one thing to calculate that intelligent aliens with access to microwave communication technology equal or better than our own could communicate over the void between the stars, and entirely another to listen for those communications. The problems are simple to understand but forbidding to face: where do you point your antenna, and where do you tune your dial? There are on the order of a hundred billion stars in our galaxy. We now know, as early researchers suspected without evidence, that most of these stars have planets, some of which may have conditions suitable for the evolution of intelligent life. Suppose aliens on one of these planets reach a level of technological development where they decide to join the “Galactic Club” and transmit a beacon which simply says “Yo! Anybody out there?” (The beacon would probably announce a signal with more information which would be easy to detect once you knew where to look.) But for the beacon to work, it would have to be aimed at candidate stars where others might be listening (a beacon which broadcasted in all directions—an “omnidirectional beacon”—would require so much energy or be limited to such a short range as to be impractical for civilisations with technology comparable to our own).

Then there's the question of how many technological communicating civilisations there are in the galaxy. Note that it isn't enough that a civilisation have the technology which enables it to establish a beacon: it has to do so. And it is a sobering thought that more than six decades after we had the ability to send such a signal, we haven't yet done so. The galaxy may be full of civilisations with our level of technology and above which have the same funding priorities we do and choose to spend their research budget on intersectional autoethnography of transgender marine frobdobs rather than communicating with nerdy pocket-protector types around other stars who tediously ask Big Questions.

And suppose a civilisation decides it can find the spare change to set up and operate a beacon, inviting others to contact it. How long will it continue to transmit, especially since it's unlikely, given the finite speed of light and the vast distances between the stars, there will be a response in the near term? Before long, scruffy professors will be marching in the streets wearing frobdob hats and rainbow tentacle capes, and funding will be called into question. This is termed the “lifetime” of a communicating civilisation, or L, which is how long that civilisation transmits and listens to establish contact with others. If you make plausible assumptions for the other parameters in the Drake equation (which estimates how many communicating civilisations there are in the galaxy), a numerical coincidence results in the estimate of the number of communicating civilisations in the galaxy being roughly equal to their communicating life in years, L. So, if a typical civilisation is open to communication for, say, 10,000 years before it gives up and diverts its funds to frobdob research, there will be around 10,000 such civilisations in the galaxy. With 100 billion stars (and around as many planets which may be hosts to life), that's a 0.00001% chance that any given star where you point your antenna may be transmitting, and that has to be multiplied by the same probability they are transmitting their beacon in your direction while you happen to be listening. It gets worse. The galaxy is huge—around 150 thousand light years in diameter, and our technology can only communicate with comparable civilisations out to a tiny fraction of this, say 1000 light years for high-power omnidirectional beacons, maybe ten to a hundred times that for directed beacons, but then you have the constraint that you have to be listening in their direction when they happen to be sending.
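The arithmetic in the paragraph above is easy to check. Here is a minimal sketch of the Drake equation coincidence; every parameter value is an illustrative assumption, not a measurement (setting the factors other than L so their product is near one is what makes N come out roughly equal to L):

```python
# Illustrative Drake-equation arithmetic; all values are assumptions.
R_star = 1.0     # average star formation rate in the galaxy (stars/year)
f_p    = 1.0     # fraction of stars with planets
n_e    = 1.0     # habitable planets per star with planets
f_l    = 1.0     # fraction of habitable planets where life appears
f_i    = 1.0     # fraction of those which develop intelligence
f_c    = 1.0     # fraction of those which transmit detectable signals
L      = 10_000  # years a civilisation keeps communicating

# N = R* * fp * ne * fl * fi * fc * L — with the other factors near one,
# the number of communicating civilisations is roughly equal to L.
N = R_star * f_p * n_e * f_l * f_i * f_c * L

stars = 100e9    # ~100 billion stars in the galaxy
chance = N / stars
print(N, chance)  # 10000.0 1e-07, i.e. a 0.00001% chance per star
```

The point of the exercise is how brutally the factor of 10¹¹ stars dilutes even a generous estimate of L.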

It seems hopeless. It may be. But the 1960s were a time very different from our constrained age. Back then, if you had a problem, like going to the Moon in eight years, you said, “Wow! That's a really big nail. How big a hammer do I need to get the job done?” Toward the end of that era when everything seemed possible, NASA convened a summer seminar at Stanford University to determine what it would take to seriously investigate the question of whether we are alone. The result was Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life, prepared in 1971 and issued as a NASA report (no Library of Congress catalogue number or ISBN was assigned) in 1973; the link will take you to a NASA PDF scan of the original document, which is in the public domain. The project assembled leading experts in all aspects of the technologies involved: antennas, receivers, signal processing and analysis, transmission and control, and system design and costing.

They approached the problem from what might be called the “Apollo perspective”: what will it cost, given the technology we have in hand right now, to address this question and get an answer within a reasonable time? What they came up with was breathtaking, although no more so than Apollo. If you want to listen for beacons from communicating civilisations as distant as 1000 light years and incidental transmissions (“leakage”, like our own television and radar emissions) within 100 light years, you're going to need a really big bucket to collect the signal, so they settled on 1000 dishes, each 100 metres in diameter. Putting this into perspective, 100 metres is about the largest steerable dish anybody envisioned at the time, and they wanted to build a thousand of them, densely packed.

But wait, there's more. These 1000 dishes were not just a huge bucket for radio waves, but a phased array, where signals from all of the dishes (or a subset, used to observe multiple targets) were combined to provide the angular resolution of a single dish the size of the entire array. This required a precision of electronic design which was breathtaking at the time but is commonplace today (although an array of 1000 dishes spread over 16 km would still give most designers pause). The signals that might be received would not be fixed in frequency, but would drift due to Doppler shifts resulting from relative motion of the transmitter and receiver. With today's computing hardware, digging such a signal out of the raw data is something you can do on a laptop or mobile phone, but in 1971 the best solution was an optical data processor involving exposing, developing, and scanning film. It was exquisitely clever, although obsolete only a few years later, but recall the team had agreed to use only technologies which existed at the time of their design. Even more amazing (and today, almost bizarre) was the scheme to use the array as an imaging telescope. Again, with modern computers, this is a simple matter of programming, but in 1971 the designers envisioned a vast hall in which the signals from the antennas would be re-emitted by radio transmitters which would interfere in free space and produce an intensity image on an image surface where it would be measured by an array of receiver antennæ.
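The claim that the phased array delivers the angular resolution of a single dish the size of the entire array follows from the standard diffraction limit, θ ≈ λ/D. A rough sketch of the numbers (the 21 cm hydrogen-line wavelength and the 16 km baseline are assumptions drawn from the discussion; a real array's beam also depends on element spacing and illumination):

```python
import math

wavelength = 0.21  # metres: the 21 cm hydrogen line, a natural SETI band

def resolution_arcsec(diameter_m: float, wavelength_m: float = wavelength) -> float:
    """Approximate diffraction-limited resolution (theta ~ lambda/D) in arcseconds."""
    theta_rad = wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600

single_dish = resolution_arcsec(100)     # one 100 metre dish
full_array  = resolution_arcsec(16_000)  # 1000 dishes phased over ~16 km

print(single_dish, full_array)  # the phased array resolves ~160 times finer
```

The ratio is simply the ratio of apertures: the 16 km array beats a single 100 m dish by a factor of 160 in angular resolution.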

What would all of this cost? Lots—depending upon the assumptions used in the design (the cost was mostly driven by the antenna specifications, where extending the search to shorter wavelengths could double the cost, since antennas had to be built to greater precision), the total system capital cost was estimated at between 6 and 10 billion dollars (1971). Converting to 2018 dollars gives between 37 and 61 billion dollars. (By comparison, the Apollo project cost around 110 billion 2018 dollars.) But since the search for a signal may “almost certainly take years, perhaps decades and possibly centuries”, that initial investment must be backed by a long-term funding commitment to continue the search, maintain the capital equipment, and upgrade it as technology matures. Given governments' record in sustaining long-term efforts in projects which do not line politicians' or donors' pockets with taxpayer funds, such perseverance is not the way to bet. Perhaps participants in the study should have pondered how to incorporate sufficient opportunities for graft into the project, but even the early 1970s were still an idealistic time when we didn't yet think that way.

This study is the founding document of much of the work in the Search for Extraterrestrial Intelligence (SETI) conducted in subsequent decades. Many researchers first realised that answering this question, “Are we alone?”, was within our technological grasp when chewing through this difficult but inspiring document. (If you have an equation or chart phobia, it's not for you; they figure on the majority of pages.) The study has held up very well over the decades. There are a number of assumptions we might wish to revise today (for example, higher frequencies may be better for interstellar communication than were assumed at the time, and spread spectrum transmissions may be more energy efficient than the extreme narrowband beacons assumed in the Cyclops study).

Despite disposing of wealth, technological capability, and computing power of which the authors of the Project Cyclops report never dreamed, we only make little plans today. Most readers of this post, in their lifetimes, have experienced the expansion of their access to knowledge in the transition from being isolated to gaining connectivity to a global, high-bandwidth network. Imagine what it means to make the step from being confined to our single planet of origin to being plugged in to the Galactic Web, exchanging what we've learned with a multitude of others looking at things from entirely different perspectives. Heck, you could retire the entire capital and operating cost of Project Cyclops in the first three years just from advertising revenue on frobdob videos! (Did I mention they have very large eyes which are almost all pupil? Never mind the tentacles.)

This document has been subjected to intense scrutiny over the years. The SETI League maintains a comprehensive errata list for the publication.

 Permalink

May 2018

Schantz, Hans G. A Rambling Wreck. Huntsville, AL: ÆtherCzar, 2017. ISBN 978-1-5482-0142-5.
This is the second novel in the author's Hidden Truth series. In the first book (December 2017) we met high schoolers and best friends Pete Burdell and Amit Patel who found, in dusty library books, knowledge apparently discovered by the pioneers of classical electromagnetism (many of whom died young), but which does not figure in modern works, even purported republications of the original sources they had consulted. As they try to sort through the discrepancies, make sense of what they've found, and scour sources looking for other apparently suppressed information, they become aware that dark and powerful forces seem bent on keeping this seemingly obscure information hidden. People who dig too deeply have a tendency to turn up dead in suspicious “accidents”, and Amit coins the monicker “EVIL”: the Electromagnetic Villains International League, for their adversaries. Events turn personal and tragic, and Amit and Pete learn tradecraft, how to deal with cops (real and fake), and navigate the legal system with the aid of mentors worthy of a Heinlein story.

This novel finds the pair entering the freshman class at Georgia Tech—they're on their way to becoming “rambling wrecks”. Unable to pay their way with their own resources, Pete and Amit compete for and win full-ride scholarships funded by the Civic Circle, an organisation they suspect may be in cahoots in some way with EVIL. As a condition of their scholarship, they must take a course, “Introduction to Social Justice Studies” (the “Studies” should be tip-off enough) to become “social justice ambassadors” to the knuckle-walking Tech community.

Pete's Uncle Ron feared this might be a mistake, but Amit and Pete saw it as a way to burrow from within, starting their own “long march through the institutions”, and, incidentally, having a great deal of fun and, especially for Amit, an aspiring master of Game, meeting radical chicks. Once at Tech, it becomes clear that the first battles they must fight relate not to 19th century electrodynamics but the 21st century social justice wars.

Pete's family name resonates with history and tradition at Tech. In the 1920s, with a duplicate enrollment form in hand, enterprising undergraduates signed up the fictitious “George P. Burdell” for a full course load, submitted his homework, took his exams, and saw him graduate in 1930. Burdell went on to serve in World War II, and was listed on the Board of Directors of Mad magazine. Whenever Georgia Tech alumni gather, it is not uncommon to hear George P. Burdell being paged. Amit and Pete decide the time has come to enlist the school's most famous alumnus in the battle for its soul, and before long the merry pranksters of FOG—Friends of George—were mocking and disrupting the earnest schemes of the social justice warriors.

Meanwhile, Pete has taken a job as a laboratory assistant and, examining data that shouldn't be interesting, discovers a new phenomenon which might just tie in with his and Amit's earlier discoveries. These investigations, as his professor warns, can also be perilous, and before long he and Amit find themselves dealing with three separate secret conspiracies vying for control over the hidden knowledge, which may be much greater and rooted deeper in history than they had imagined. Another enigmatic document by an obscure missionary named Angus MacGuffin (!), who came to a mysterious and violent end in 1940, suggests a unification of the enigmas. And one of the greatest mysteries of twentieth century physics, involving one of its most brilliant figures, may be involved.

This series is a bit of Golden Age science fiction which somehow dropped into the early 21st century. It is a story of mystery, adventure, heroes, and villains, with interesting ideas and technical details which are plausible. The characters are interesting and grow as they are tested and learn from their experiences. And the story is related with a light touch, with plenty of smiles and laughs at the expense of those who richly deserve mockery and scorn. This book is superbly done and a worthy sequel to the first. I eagerly await the next, The Brave and the Bold.

I was delighted to see that Pete made the same discovery about triangles in physics and engineering problems that I made in my first year of engineering school. One of the first things any engineer should learn is to see if there's an easier way to get the answer out. I'll be adding “proglodytes”—progressive troglodytes—to my vocabulary.

For a self-published work, there are only a very few copy editing errors. The Kindle edition is free for Kindle Unlimited subscribers. In an “About the Author” section at the end, the author notes:

There's a growing fraternity of independent, self-published authors busy changing the culture one story at a time with their tales of adventure and heroism. Here are a few of my more recent discoveries.

With the social justice crowd doing their worst to wreck science fiction, the works of any of these authors are a great way to remember why you started reading science fiction in the first place.

 Permalink

Mercer, Ilana. Into the Cannibal's Pot. Mount Vernon, WA, 2011. ISBN 978-0-9849070-1-4.
The author was born in South Africa, the daughter of Rabbi Abraham Benzion Isaacson, a leader among the Jewish community in the struggle against apartheid. Due to her father's activism, the family, forced to leave the country, emigrated to Israel, where the author grew up. In the 1980s, she moved back to South Africa, where she married, had a daughter, and completed her university education. In 1995, following the first elections with universal adult suffrage which resulted in the African National Congress (ANC) taking power, she and her family emigrated to Canada with the proceeds of the sale of her apartment hidden in the soles of her shoes. (South Africa had adopted strict controls to prevent capital flight in the aftermath of the election of a black majority government.) After initially settling in British Columbia, her family subsequently emigrated to the United States where they reside today.

From the standpoint of a member of a small minority (the Jewish community) of a minority (whites) in a black majority country, Mercer has reason to be dubious of the much-vaunted benefits of “majority rule”. Describing herself as a “paleolibertarian”, her outlook is shaped not by theory but the experience of living in South Africa and the accounts of those who remained after her departure. For many in the West, South Africa scrolled off the screen as soon as a black majority government took power, but that was the beginning of the country's descent into violence, injustice, endemic corruption, expropriation of those who built the country and whose ancestors lived there since before the founding of the United States, and what can only be called a slow-motion genocide against the white farmers who were the backbone of the society.

Between 1994 and 2005, the white population of South Africa fell from 5.22 million to 4.37 million. Chief among the motivations for emigration have been an explosion of violent crime, often racially motivated and directed against whites, a policy of affirmative action which amounts to overt racial discrimination against whites, endemic corruption, and expropriation of businesses in the interest of “fairness”.

In the forty-four years of apartheid in South Africa from 1950 to 1993, there were a total of 309,583 murders in the country: an average of 7,036 per year. In the first eight years after the end of apartheid (1994–2001), under one-party black majority rule, 193,649 murders were reported, or 24,206 per year. And the latter figure is according to the statistics of the ANC-controlled South Africa Police Force, which both Interpol and the South African Medical Research Council say may be understated by as much as a factor of two. The United States is considered to be a violent country, with around 4.88 homicides per 100,000 people (by comparison, the rate in the United Kingdom is 0.92 and in Switzerland is 0.69). In South Africa, the figure is 34.27 (all estimates are 2015 figures from the United Nations Office on Drugs and Crime). And it isn't just murder: in South Africa, where 65 people are murdered every day, around 200 are raped and 300 are victims of assault and violent robbery.
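The per-year averages quoted above follow directly from the totals; a quick check of the arithmetic, using exactly the figures given in the text:

```python
# Verify the per-year murder averages quoted from the text.
apartheid_murders, apartheid_years = 309_583, 44  # 1950-1993
post_murders, post_years           = 193_649, 8   # 1994-2001

print(round(apartheid_murders / apartheid_years))  # 7036 per year
print(round(post_murders / post_years))            # 24206 per year
# Reported murders per year rose by roughly a factor of 3.4.
print(round((post_murders / post_years) / (apartheid_murders / apartheid_years), 1))
```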

White farmers, mostly Afrikaner, have frequently been targets of violence. In the periods 1996–2007 and 2010–2016 (no data were published for the years 2008 and 2009), according to statistics from the South African Police Service (which may be understated), there were 11,424 violent attacks on farms in South Africa, with a total of 1,609 homicides, in some cases killing entire farm families and some of their black workers. The motives for these attacks remain a mystery according to the government, whose leaders have been known to sing the stirring anthem “Kill the Boer” at party rallies. Farm attacks follow the pattern in Zimbabwe, where such attacks, condoned by the Mugabe regime, resulted in the emigration of almost all white farmers and the collapse of the country's agricultural sector (only 200 white farmers remain in the country, 5% of the number before black majority rule). In South Africa, white farmers who have not already emigrated find themselves trapped: they cannot sell to other whites who fear they would become targets of attacks and/or eventual expropriation without compensation, nor to blacks who expect they will eventually receive the land for free when it is expropriated.

What is called affirmative action in the U.S. is implemented in South Africa under the Black Economic Empowerment (BEE) programme, a set of explicitly racial preferences and requirements which cover most aspects of business operation including ownership, management, employment, training, supplier selection, and internal investment. Mining companies must cede co-ownership to blacks in order to obtain permits for exploration. Not surprisingly, in many cases the front men for these “joint ventures” are senior officials of the ruling ANC and their family members. So corrupt is the entire system that Archbishop Desmond Tutu, one of the most eloquent opponents of apartheid, warned that BEE has created a “powder keg”, where benefits accrue only to a small, politically-connected, black elite, leaving others in “dehumanising poverty”.

Writing from the perspective of one who got out of South Africa just at the point where everything started to go wrong (having anticipated in advance the consequences of pure majority rule) and settled in the U.S., Mercer then turns to the disturbing parallels between the two countries. Their histories are very different, and yet there are similarities and trends which are worrying. One fundamental problem with democracy is that people who would otherwise have to work for a living discover that they can vote for a living instead, and are encouraged in this by politicians who realise that a dependent electorate is a reliable electorate as long as the benefits continue to flow. Back in 2008, I wrote about the U.S. approaching a tipping point where nearly half of those who file income tax returns owe no income tax. At that point, among those who participate in the economy, there is a near-majority who pay no price for voting for increased government benefits paid for by others. It's easy to see how this can set off a positive feedback loop where the dependent population burgeons, the productive minority shrinks, the administrative state which extracts the revenue from that minority becomes ever more coercive, and those who channel the money from the producers to the dependent grow in numbers and power.

Another way to look at the tipping point is to compare the number of voters to taxpayers (those with income tax liability). In the U.S., this ratio is around two to one, which leaves the system dangerously exposed to the calamity described above. Now consider that in South Africa, this ratio is eleven to one. Is it any wonder that under universal adult suffrage the economy of that country is in a down-spiral?

South Africa prior to 1994 was in an essentially intractable position. By encouraging black and later Asian immigration over its long history (most of the ancestors of black South Africans arrived after the first white settlers), it arrived at a situation where a small white population (less than 10%) controlled the overwhelming majority of the land and wealth, and retained almost all of the political power. This situation, and the apartheid system which sustained it (which the author and her family vehemently opposed) was unjust and rightly was denounced and sanctioned by countries around the globe. But what was to replace it? The experience of post-colonial Africa was that democracy almost always leads to “One man, one vote, one time”: a leader of the dominant ethnic group wins the election, consolidates power, and begins to eliminate rival groups, often harking back to the days of tribal warfare which preceded the colonial era, but with modern weapons and a corresponding death toll. At the same time, all sources of wealth are plundered and “redistributed”, not to the general population, but to the generals and cronies of the Great Man. As the country sinks into savagery and destitution, whites and educated blacks outside the ruling clique flee. (Indeed, South Africa has a large black illegal immigrant population made of those who fled the Mugabe tyranny in Zimbabwe.)

Many expected this down-spiral to begin in South Africa soon after the ANC took power in 1994. The joke went, “What's the difference between Zimbabwe and South Africa? Ten years.” That it didn't happen immediately and catastrophically is a tribute to Nelson Mandela's respect for the rule of law and for his white partners in ending apartheid. But now he is gone, and a new generation of more radical leaders has replaced him. Increasingly, it seems like the punch line might be revised to be “Twenty-five years.”

The immediate priority one takes away from this book is the need to address the humanitarian crisis faced by the Afrikaner farmers who are being brutally murdered and face expropriation of their land without compensation as the regime becomes ever more radical. Civilised countries need to open immigration to this small, highly-productive, population. Due to persecution and denial of property rights, they may arrive penniless, but are certain to quickly become the backbone of the communities they join.

In the longer term, the U.S. and the rest of the Anglosphere and civilised world should be cautious and never indulge in the fantasy “it can't happen here”. None of these countries started out with the initial conditions of South Africa, but, over the last fifty years, much of their ruling class seems to have been bent on importing masses of third world immigrants with no tradition of consensual government, rule of law, or respect for property rights, concentrating them in communities where they can preserve the culture and language of the old country, and ensnaring them in a web of dependency which keeps them from climbing the ladder of assimilation and economic progress by which previous immigrant populations entered the mainstream of their adopted countries. With some politicians bent on throwing the borders open to savage, medieval, inbred “refugees” who breed much more rapidly than the native population, it doesn't take a great deal of imagination to see how the tragedy now occurring in South Africa could foreshadow the history of the latter part of this century in countries foolish enough to lay the groundwork for it now.

This book was published in 2011, but the trends it describes have only accelerated in subsequent years. It's an eye-opener to the risks of democracy without constraints or protection of the rights of minorities, and a warning to other nations of the grave risks they face should they allow opportunistic politicians to recreate the dire situation of South Africa in their own lands.

 Permalink

Brown, Dan. Origin. New York: Doubleday, 2017. ISBN 978-0-385-51423-1.
Ever since the breakthrough success of Angels & Demons, his first mystery/thriller novel featuring Harvard professor and master of symbology Robert Langdon, Dan Brown has found a formula which turns arcane and esoteric knowledge, exotic and picturesque settings, villains with grandiose ambitions, and plucky female characters into bestsellers, two of which, The Da Vinci Code and Angels & Demons, have been adapted into Hollywood movies.

This is the fifth novel in the Robert Langdon series. After reading the fourth, Inferno (May 2013), it struck me that Brown's novels have become so formulaic they could probably be generated by an algorithm. Since artificial intelligence figures in the present work, in lieu of a review, which would be difficult to write without spoilers, here are the parameters to the Marinchip Turbo Digital™ Thriller Wizard to generate the story.

Villain: Edmond Kirsch, billionaire computer scientist and former student of Robert Langdon. Made his fortune from breakthroughs in artificial intelligence, neuroscience, and robotics.

Megalomaniac scheme: “end the age of religion and usher in an age of science”.

Buzzword technologies: artificial general intelligence, quantum computing.

Big Questions: “Where did we come from?”, “Where are we going?”.

Religious adversary: The Palmarian Catholic Church.

Plucky female companion: Ambra Vidal, curator of the Guggenheim Museum in Bilbao (Spain) and fiancée of the crown prince of Spain.

Hero or villain? Details would be a spoiler but, as always, there is one.

Contemporary culture tie-in: social media, an InfoWars-like site called ConspiracyNet.com.

MacGuffins: the 47-character password from Kirsch's favourite poem (but which?), the mysterious “Winston”, “The Regent”.

Exotic and picturesque locales: The Guggenheim Museum Bilbao, Casa Milà and the Sagrada Família in Barcelona, Valle de los Caídos near Madrid.

Enigmatic symbol: a typographical mark one must treat carefully in HTML.

When Edmond Kirsch is assassinated moments before playing his presentation which will answer the Big Questions, Langdon and Vidal launch into a quest to discover the password required to release the presentation to the world. The murder of two religious leaders to whom Kirsch revealed his discoveries in advance of their public disclosure stokes the media frenzy surrounding Kirsch and his presentation, and spawns conspiracy theories about dark plots to suppress Kirsch's revelations which may involve religious figures and the Spanish monarchy.

After perils, adventures, conflict, and clues hidden in plain sight, Startling Revelations leave Langdon Stunned and Shaken but Cautiously Hopeful for the Future.

When the next Dan Brown novel comes along, see how well it fits the template. This novel will appeal to people who like this kind of thing: if you enjoyed the last four, this one won't disappoint. If you're looking for plausible speculation on the science behind the big questions or the technological future of humanity, it probably will. Now that I know how to crank them out, I doubt I'll buy the next one when it appears.

 Permalink

Hanson, Victor Davis. The Second World Wars. New York: Basic Books, 2017. ISBN 978-0-465-06698-8.
This may be the best single-volume history of World War II ever written. While it does not get into the low-level details of the war or its individual battles (don't expect to see maps with boxes, front lines, and arrows), it provides an encyclopedic view of the first truly global conflict with a novel and stunning insight every few pages.

Nothing like World War II had ever happened before and, thankfully, has not happened since. While earlier wars may have seemed to those involved in them as involving all of the powers known to them, they were at most regional conflicts. By contrast, in 1945, there were only eleven countries in the entire world which were neutral—not engaged on one side or the other. (There were, of course, far fewer countries then than now—most of Africa and South Asia were involved as colonies of belligerent powers in Europe.) And while war had traditionally been a matter for kings, generals, and soldiers, in this total war the casualties were overwhelmingly (70–80%) civilian. Far from being confined to battlefields, many of the world's great cities, from Amsterdam to Yokohama, were bombed, shelled, or besieged, often with disastrous consequences for their inhabitants.

“Wars” in the title refers to Hanson's observation that what we call World War II was, in reality, a collection of often unrelated conflicts which happened to occur at the same time. The settling of ethnic and territorial scores across borders in Europe had nothing to do with Japan's imperial ambitions in China, or Italy's in Africa and Greece. It was sometimes difficult even to draw a line dividing the two sides in the war. Japan occupied colonies in Indochina under the administration of Vichy France, notwithstanding Japan and Vichy both being nominal allies of Germany. The Soviet Union, while making a massive effort to defeat Nazi Germany on the land, maintained a non-aggression pact with Axis power Japan until days before its surrender and denied use of air bases in Siberia to Allied air forces for bombing campaigns against the home islands.

Combatants in different theatres might well have been fighting in entirely different wars, and sometimes in different centuries. Air crews on long-range bombing missions above Germany and Japan had nothing in common with Japanese and British forces slugging it out in the jungles of Burma, nor with attackers and defenders fighting building to building in the streets of Stalingrad, or armoured combat in North Africa, or the duel of submarines and convoys to keep the Atlantic lifeline between the U.S. and Britain open, or naval battles in the Pacific, or the amphibious landings on islands they supported.

World War II did not start as a global war, and did not become one until the German invasion of the Soviet Union and the Japanese attack on U.S., British, and Dutch territories in the Pacific. Prior to those events, it was a collection of border wars, launched by surprise by Axis powers against weaker neighbours which were, for the most part, successful. Once what Churchill called the Grand Alliance (Britain, the Soviet Union, and the United States) was forged, the outcome was inevitable, yet the road to victory was long and costly, and its length impossible to foresee at the outset.

The entire war was unnecessary, and its horrific cost can be attributed to a failure of deterrence. From the outset, there was no way the Axis could have won. If, as seemed inevitable, the U.S. were to become involved, none of the Axis powers possessed the naval or air resources to strike the U.S. mainland, much less contemplate invading and occupying it. While all of Germany and Japan's industrial base and population were, as the war progressed, open to bombardment day and night by long-range, four engine, heavy bombers escorted by long-range fighters, the Axis possessed no aircraft which could reach the cities of the U.S. east coast, the oil fields of Texas and Oklahoma, or the industrial base of the midwest. While the U.S. and Britain fielded aircraft carriers which allowed them to project power worldwide, Germany and Italy had no effective carrier forces and Japan's were reduced by constant attacks by U.S. aviation.

This correlation of forces was known before the outbreak of the war. Why did Japan and then Germany launch wars which were almost certain to result in forces ranged against them which they could not possibly defeat? Hanson attributes it to a mistaken belief that, to use Hitler's terminology, the will would prevail. The West had shown itself unwilling to effectively respond to aggression by Japan in China, Italy in Ethiopia, and Germany in Czechoslovakia, and Axis leaders concluded from this, catastrophically for their populations, that despite their industrial, demographic, and strategic military weakness, there would be no serious military response to further aggression (the “bore war” which followed the German invasion of Poland and the declarations of war on Germany by France and Britain had to reinforce this conclusion). Hanson observes, writing of Hitler, “Not even Napoleon had declared war in succession on so many great powers without any idea how to destroy their ability to make war, or, worse yet, in delusion that tactical victories would depress stronger enemies into submission.” Of the Japanese, who attacked the U.S. with no credible capability or plan for invading and occupying the U.S. homeland, he writes, “Tojo was apparently unaware or did not care that there was no historical record of any American administration either losing or quitting a war—not the War of 1812, the Mexican War, the Civil War, the Spanish American War, or World War I—much less one that Americans had not started.” (Maybe they should have waited a few decades….)

Compounding the problems of the Axis was that it was an alliance in name only. There was little or no co-ordination among its parties. Hitler provided Mussolini no advance notice of the attack on the Soviet Union. Mussolini did not warn Hitler of his attacks on Albania and Greece. The Japanese attack on Pearl Harbor was as much a surprise to Germany as to the United States. Japanese naval and air assets played no part in the conflict in Europe, nor did German technology and manpower contribute to Japan's war in the Pacific. By contrast, the Allies rapidly settled on a division of labour: the Soviet Union would concentrate on infantry and armoured warfare (indeed, four out of five German soldiers who died in the war were killed by the Red Army), while Britain and the U.S. would deploy their naval assets to blockade the Axis, keep the supply lines open, and deliver supplies to the far-flung theatres of the war. U.S. and British bomber fleets attacked strategic targets and cities in Germany day and night. The U.S. became the untouchable armoury of the alliance, delivering weapons, ammunition, vehicles, ships, aircraft, and fuel in quantities which eventually surpassed those of all other combatants on both sides combined. Britain and the U.S. shared technology and cooperated in its development in areas such as radar, antisubmarine warfare, aircraft engines (including jet propulsion), and nuclear weapons, and shared intelligence gleaned from British codebreaking efforts.

As a classicist, Hanson examines the war in its incarnations in each of the elements of antiquity: Earth (infantry), Air (strategic and tactical air power), Water (naval and amphibious warfare), and Fire (artillery and armour), and adds People (supreme commanders, generals, workers, and the dead). He concludes by analysing why the Allies won and what they ended up winning—and losing. Britain lost its empire and position as a great power (although due to internal and external trends, that might have happened anyway). The Soviet Union ended up keeping almost everything it had hoped to obtain through its initial partnership with Hitler. The United States emerged as the supreme economic, industrial, technological, and military power in the world and promptly entangled itself in a web of alliances which would cause it to underwrite the defence of countries around the world and involve it in foreign conflicts far from its shores.

Hanson concludes,

The tragedy of World War II—a preventable conflict—was that sixty million people had perished to confirm that the United States, the Soviet Union, and Great Britain were far stronger than the fascist powers of Germany, Japan, and Italy after all—a fact that should have been self-evident and in no need of such a bloody laboratory, if not for prior British appeasement, American isolationism, and Russian collaboration.

At 720 pages, this is not a short book (the main text is 590 pages; the rest are sources and end notes), but there is so much wisdom and so many startling insights among those pages that you will be amply rewarded for the time you spend reading them.


Thor, Brad. Use of Force. New York: Atria Books, 2017. ISBN 978-1-4767-8939-2.
This is the seventeenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). As this book begins, Scot Harvath, operative for the Carlton Group, a private outfit that does “the jobs the CIA won't do”, is under cover at the Burning Man festival in the Black Rock Desert of Nevada. He and his team are tracking a terrorist thought to be conducting advance surveillance for attacks within the U.S. Only as the operation unfolds does he realise he's walked into the middle of a mass casualty attack already in progress. He manages to disable his target, but another suicide bomber detonates in a crowded area, leaving many dead and injured.

Meanwhile, following the capsizing of a boat smuggling “migrants” into Sicily, the body of a much-wanted and long-sought terrorist chemist, known to be researching chemical and biological weapons of mass destruction, is fished out of the Mediterranean. Why would he, after flying under the radar for years in the Near East and Maghreb, be heading to Europe? The CIA reports, “Over the last several months, we've been picking up chatter about an impending series of attacks, culminating in something very big, somewhere in Europe” … “We think that whatever he was planning, it's ready to go operational.”

With no leads other than the knowledge, from a few survivors of the sinking, that the boat sailed from Libya, and the name of the migrant smuggler who arranged their passage, Harvath sets off under cover to that country to try to find out who arranged the chemist's passage and his intended destination in Europe. Accompanied by his pick-up team from Burning Man (given the urgency, there wasn't time to recruit one more familiar with the region), Harvath begins, in his unsubtle way, to locate the smuggler and find out what he knows. Unfortunately, as is so often the case in such operations, there is somebody else along with the team who doesn't figure in its official roster—a fellow named Murphy.

Libya is chaotic and dangerous enough under any circumstances, but when you whack the hornets' nest, things can get very exciting in short order, and not in a good way. Harvath and his team find themselves in a mad chase and shoot-out, and having to summon assets which aren't supposed to be there, in order to survive.

Meanwhile, another savage terrorist attack in Europe has confirmed the urgency of the threat and that more are likely to come. And back in the imperial capital, intrigue within the CIA seems aimed at targeting Harvath's boss and the head of the operation. Is it connected somehow? It's time to deploy the diminutive super-hacker Nicholas and one of the CIA's most secret and dangerous computer security exploits in a honeypot operation to track down the source of the compromise.

If it weren't bad enough being chased by Libyan militias while trying to unravel an ISIS terror plot, Harvath soon finds himself in the lair of the Calabrian Mafia, and being thwarted at every turn by civil servants insisting he play by the rules when confronting those who make their own rules. Finally, multiple clues begin to limn the outline of the final attack, and it is dire indeed. Harvath must make an improbable and uneasy alliance to confront it.

The pacing of the book is somewhat odd. There is a tremendous amount of shoot-’em-up action in the middle, but as the conclusion approaches and the ultimate threat must be dealt with, it's as if the author felt himself running out of typewriter ribbon (anybody remember what that was?) and having to wind things up in just a few pages. Were I his editor, I'd have suggested trimming some of the detail in the middle and making the finale more suspenseful. But then, what do I know? Brad Thor has sold nearly fifteen million books, and I haven't. This is a perfectly workable thriller which will keep you turning the pages, but I didn't find it as compelling as some of his earlier novels. The attention to detail and accuracy are, as one has come to expect, superb. You don't need to have read any of the earlier books in the series to enjoy this one; what few details you need to know are artfully mentioned in passing.

The next installment in the Scot Harvath saga, Spymaster, will be published in July 2018.
