This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90. But while my earlier analysis examined global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today's developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”: unwelcome in polite discourse, although non-controversial among those who study these matters in detail.
While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I'm talking about you; among the more recent industrialisers are China, Korea, and [to a lesser extent] Japan.)

If you've followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, measured by standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is.

What's going on, then? The authors join James Flynn in concluding that what we're seeing is an increase in the population's proficiency in taking IQ tests, not an actual increase in general intelligence (g). Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn't make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we're seeing may be improvement in specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores.

And this is precisely what appears to be happening. Norway, Sweden, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database, starting in 1950, of men in these countries, updated yearly. What is seen is an increase in IQ as expected from the Flynn effect from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.)

There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests.
They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since.

If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China.

And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this crosses the “ick factor”? What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn't make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we've learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans.

Or, maybe we won't.
In which case, “Winter has come and it's only going to get colder. Wrap up warm.” Here is a James Delingpole interview of the authors and discussion of the book.
First, we observe that each sample (x_i) from egg i consists of 200 bits, each with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).
Then, for each sample, we can compute its Z-score as Z_i = (x_i − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.
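To make the arithmetic concrete, here is a minimal sketch in Python of the per-sample statistics just described. The function names and the worked example are mine, not the project's:

```python
# Per-sample statistics for one egg: 200 bits, P(one) = 0.5.
# A sketch for illustration, not the GCP's production software.
import numpy as np
from scipy.stats import norm

N_BITS = 200
P_ONE = 0.5

mu = N_BITS * P_ONE                             # expected mean: 100
sigma = np.sqrt(N_BITS * P_ONE * (1 - P_ONE))   # sqrt(50) ≈ 7.071

def sample_z(x):
    """Z-score of a single sample (the count of one bits)."""
    return (x - mu) / sigma

def chance_probability(z):
    """Two-tailed probability of a deviation at least this large."""
    return 2 * norm.sf(abs(z))

# Example: an egg reports 112 one bits in a second.
z = sample_z(112)            # ≈ 1.70
p = chance_probability(z)    # ≈ 0.09
```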
It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

Z = (Z_1 + Z_2 + … + Z_k) / √k
over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance. Squaring this composite Z-score over all k eggs gives a chi-squared distributed value we shall call V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds.

The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.
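Before looking at the plot, here is how the whole per-second pipeline might look in code, continuing the sketch above. The two-dimensional samples array (one row per second, one column per egg, NaN where an egg did not report) is an assumed layout for illustration, not the GCP's archive format, and the running sum of Z² − 1 is one common convention for keeping the chance expectation of a cumulative deviation plot flat at zero:

```python
# Network-wide statistics per second, plus the cumulative deviation
# series.  A sketch under an assumed data layout; not GCP production code.
import numpy as np
from scipy.stats import chi2

mu, sigma = 100.0, np.sqrt(50.0)   # per-sample mean and sd, from above

def stouffer_z(row):
    """Stouffer's Z over the k eggs reporting in one second."""
    zs = (row[~np.isnan(row)] - mu) / sigma
    return zs.sum() / np.sqrt(len(zs))

def window_probability(samples):
    """Chance probability of the deviation over a window of seconds:
    sum the per-second Z² values (chi-squared, 1 d.f. each) and test
    against a chi-squared distribution with one d.f. per second."""
    v = np.array([stouffer_z(row) ** 2 for row in samples])
    return chi2.sf(v.sum(), df=len(v))

def cumulative_deviation(samples):
    """Running sum of (Z² − 1): pure chance wanders around zero."""
    v = np.array([stouffer_z(row) ** 2 for row in samples])
    return np.cumsum(v - 1.0)
```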
What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)? Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.
Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111. What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here.

But the real test is not in potentially cherry-picked events such as I've shown you here, but the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes.) The significance of the entire ensemble of tests can be computed from the 500 formal predictions in the Hypothesis Registry and the network results for the periods where a non-random effect was predicted. To compute this effect, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.
Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the results observed were due to chance with a probability of less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. But could it be expected in truly random data? To check this, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.
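The ensemble statistic and the pseudorandom control are both easy to sketch. Here the per-event Z-scores are simulated stand-ins drawn from a normal distribution; the real analysis would use the network's scores for the registered events:

```python
# Stouffer combination across registered events, plus a pseudorandom
# control run.  Illustrative only; inputs are simulated stand-ins.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2001)

def ensemble_z(event_z):
    """Composite Z over one Z-score per formally registered event."""
    event_z = np.asarray(event_z)
    return event_z.sum() / np.sqrt(len(event_z))

# 500 control "experiments", each replacing the 500 real event scores
# with standard normal draws, as described in the text.
controls = [ensemble_z(rng.standard_normal(500)) for _ in range(500)]

print(max(abs(z) for z in controls))  # for pure chance, rarely beyond ~3.5
print(norm.sf(7.31))                  # ≈ 1.3e-13: the reported composite Z
```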
Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.
The final probability was 0.028, which is equivalent to an odds ratio of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.
Had this form of analysis and time window been chosen a priori, it would have been calculated to have a chance probability of 0.000075, or less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.
Another analysis looks at a 20-hour period centred on the attacks and smooths the Z-scores by averaging them within a one-hour sliding window, then squares the average and converts to odds against chance.
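That smoothing analysis might be sketched like this; the 3600-second window and the rescaling of the windowed mean by √n (needed so that squaring it yields a valid chance probability) are my assumptions about the details:

```python
# Sliding-window smoothing of per-second Z-scores, converted to odds
# against chance.  A sketch; window length and rescaling are assumptions.
import numpy as np
from scipy.stats import norm

def sliding_odds(z_per_second, window=3600):
    kernel = np.ones(window) / window
    z_mean = np.convolve(z_per_second, kernel, mode="valid")
    # The mean of n unit-variance Z-scores has sd 1/sqrt(n); rescale
    # before computing the two-tailed chance probability.
    p = 2 * norm.sf(np.abs(z_mean) * np.sqrt(window))
    return 1.0 / p     # odds against chance at each window position
```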
Dean Radin performed an independent analysis of the day's data, binning the Z-scores into five-minute intervals over the period from September 6 to 13, then calculating the odds against the result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of each day.
The following is the result when the actual GCP data from September 2001 is replaced with pseudorandom data for the same period.
So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing.

Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.

If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere. A Kindle edition is available.

The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.

All graphs in this article are as posted on the Global Consciousness Project Web site.
Realism is the belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world. This has been part of the scientific worldview since antiquity, and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn't possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists.

From the formulation of the modern quantum theory in the 1920s, often called the Copenhagen interpretation after the location of the institute where one of its architects, Niels Bohr, worked, a number of eminent physicists including Einstein and Louis de Broglie were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a physical position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when squared) gave the probability of finding it at that location.

In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and is defined by the medium through which it propagates. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”.
But that wasn't what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant. In the 1950s, David Bohm independently invented a pilot wave theory which he developed into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continue to explore the implications of his theory, working on extending it to be compatible with special relativity.

From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton's third law of motion holds that every action has an equal and opposite reaction, and in Einstein's general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune from influence of the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm's theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers.

Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wavefunction due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations.

The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein's geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole. In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don't fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn't so: a simple experiment shows that a clock runs slower when it's in a gravitational field. Quantum mechanics says that it isn't possible to determine the position of a particle without its interacting with another particle, but general relativity requires the knowledge of precise positions of particles to determine how spacetime curves and governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”.
In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution, while general relativity is nonlinear, where trying to disentangle the relationships of parts of the systems quickly goes pear-shaped and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces. Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light).

Many people have tried and failed to “quantise general relativity”. Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realistic one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He argues that candidate theories must embody a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object).

This argues that what we perceive, at the human scale and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances. Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today's near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today.

This needn't forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (it's an example of a combinatorial explosion). You might try, for example, to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth.
I suggested this almost twenty years ago as a test of Rupert Sheldrake's theory of morphic resonance, but (although doubtless Smolin would shun me for associating his theory with that one) it might produce interesting results. The book concludes with a very personal look at the challenges facing a working scientist who has concluded that the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:
There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn't puzzle me. Ever since I studied classical economics, I've learned to look at the incentives in a system. When you regard academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905 with no academic career or reputation at risk you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today's institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today.

Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.
To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, 8 of the 9 admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies.

And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”. (In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.)

But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. As the vanguard of the proletarian revolution, by definition those states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to or actively bent upon the destruction of the peoples' state. With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of its territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which might give Britain supremacy on the seas and in its far-flung colonies, while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union.
Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine. Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated to eventually fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course.

Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the czar's empire which had been lost after the 1917 revolution.

Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.

On September 1st, 1939, Nazi Germany invaded Poland, just over a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside and continue to trade with Nazi Germany while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border.

After the start of hostilities, a state of “phoney war” existed until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared to be precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Tsars.
Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere. But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East.

After consolidating the position in the west and indefinitely deferring an invasion of Britain due to inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier. Disinformation, spread where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, said that the troop concentrations were in preparation for an assault on British positions in the Near East or to blackmail the Soviet Union into granting, for example, a long-term lease on its breadbasket, the Ukraine. Hitler, acutely aware that it was a two-front war which spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but its only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, which took just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it was previously paying market prices, but which were now available for the cost of extraction and shipping.

The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs that this was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important. Indeed, in 1940, 40 percent of the nickel, 55 percent of the manganese, 65 percent of the chromium, 67 percent of the asbestos, 34 percent of the petroleum, and a million tonnes of grain and timber which supported the Nazi war machine were delivered by the Soviet Union. Hours before the Nazi onslaught began, well after the order for it was given, a Soviet train delivering grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with dawn.

This is a magisterial work. It is unlikely it will ever be equalled. There is abundant rich detail on every page. Want to know what the telephone number for the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses.
Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you ultimately have to understand Stalin, who succeeded at it for more than thirty years until felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower.

Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people. He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being whatever the cost might be. So it is today, and let us hope those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia.

Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.
Just imagine if William the Bastard had succeeded in conquering England. We'd probably be speaking some unholy crossbreed of French and English…. The Republic is the only country in the world that recognizes allodial title,…. When Congress declares war, they have to elect one of their own to be a sacrificial victim,…. “There was a man from the state capitol who wanted to give us government funding to build what he called a ‘proper’ school, but he was run out of town, the poor dear.”

Pirates, of course, must always keenly scan the horizon for those who might want to put an end to the fun. And so it is for buccaneers sailing the Hertzian waves. You'll enjoy every minute getting to the point where you find out how it ends. And then, when you think it's all over, another door opens into a wider, and weirder, world in which we may expect further adventures.

The second volume in the series, Five Million Watts, was published in April, 2019. At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is very inexpensive.
I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was, thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to that he imagined in boyhood.

Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file dispatches as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures. He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described along with the rest of the Mahdist War in his book, The River War.

In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. Covering the conflict, he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book.

Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,
Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat. Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,
A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure. Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said in 1939, after Stalin moved against Latvia, Lithuania, and Estonia, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.” Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,
You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,
No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated.

Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,
Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity.

The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together), plus another five million words of public speeches, and won the Nobel Prize in Literature for 1953. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any historian who ever lived. Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom.

Here is an Uncommon Knowledge interview with the author about Churchill and this biography. This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.
Everything else in the modern world is of Christian origin, even everything that seems most anti-Christian. The French Revolution is of Christian origin. The newspaper is of Christian origin. The anarchists are of Christian origin. Physical science is of Christian origin. The attack on Christianity is of Christian origin. There is one thing, and one thing only, in existence at the present day which can in any sense accurately be said to be of pagan origin, and that is Christianity.

Much more is at stake than one sect (albeit the largest) of Christianity. The infiltration, subversion, and overt attacks on the Roman Catholic church are an assault upon an institution which has been central to Western civilisation for two millennia. If it falls, and it is falling, in large part due to self-inflicted wounds, the forces of darkness will be coming for the smaller targets next. Whatever your religion, or whether you have one or not, collapse of one of the three pillars of our cultural identity is something to worry about and work to prevent. In the author's words, “What few on the political Right have grasped is that the most important component in this trifecta isn't capitalism, or even democracy, but Christianity.” With all three under assault from all sides, this book makes an eloquent argument to secular free marketeers and champions of consensual government not to ignore the cultural substrate which allowed both to emerge and flourish.
In June and July [1961], detailed specifications for the spacecraft hardware were completed. By the end of July, the Requests for Proposals were on the street. In August, the first hardware contract was awarded to M.I.T.'s Instrumentation Laboratory for the Apollo guidance system. NASA selected Merritt Island, Florida, as the site for a new spaceport and acquired 125 square miles of land. In September, NASA selected Michoud, Louisiana, as the production facility for the Saturn rockets, acquired a site for the Manned Spacecraft Center—the Space Task Group grown up—south of Houston, and awarded the contract for the second stage of the Saturn [V] to North American Aviation. In October, NASA acquired 34 square miles for a Saturn test facility in Mississippi. In November, the Saturn C-1 was successfully launched with a cluster of eight engines, developing 1.3 million pounds of thrust. The contract for the command and service module was awarded to North American Aviation. In December, the contract for the first stage of the Saturn [V] was awarded to Boeing and the contract for the third stage was awarded to Douglas Aircraft. By January of 1962, construction had begun at all of the acquired sites and development was under way at all of the contractors.

Such was the urgency with which NASA was responding to Kennedy's challenge and deadline that all of these decisions and work were done before deciding on how to get to the Moon—the so-called “mission mode”. There were three candidates: direct-ascent, Earth orbit rendezvous (EOR), and lunar orbit rendezvous (LOR).

Direct ascent was the simplest, and much like the idea of a Moon ship in golden age science fiction. One launch from Earth would send a ship to the Moon which would land there, then take off and return directly to Earth. There would be no need for rendezvous and docking in space (which had never been attempted, and which nobody was sure was even possible), and no need for multiple launches per mission, which was seen as an advantage at a time when rockets were only marginally reliable and notorious for long delays from their scheduled launch time. The downside of direct-ascent was that it would require an enormous rocket: planners envisioned a monster called Nova which would have dwarfed the Saturn V eventually used for Apollo and required new manufacturing, test, and launch facilities to accommodate its size. Also, it is impossible to design a ship which is optimised both for landing under rocket power on the Moon and re-entering Earth's atmosphere at high speed. Still, direct-ascent seemed to involve the fewest technological unknowns. Ever wonder why the Apollo service module had that enormous Service Propulsion System engine? When it was specified, the mission mode had not been chosen, and it was made powerful enough to lift the entire command and service module off the lunar surface and return them to the Earth after a landing in direct-ascent mode.

Earth orbit rendezvous was similar to what Wernher von Braun envisioned in his 1950s popular writings about the conquest of space. Multiple launches would be used to assemble a Moon ship in low Earth orbit, and then, when it was complete, it would fly to the Moon, land, and then return to Earth. Such a plan would not necessarily even require a booster as large as the Saturn V.
One might, for example, launch the lunar landing and return vehicle on one Saturn I, the stage which would propel it to the Moon on a second, and finally the crew on a third, who would board the ship only after it was assembled and ready to go. This was attractive in not requiring the development of a giant rocket, but it required on-time launches of multiple rockets in quick succession, orbital rendezvous and docking (and in some schemes, refuelling), and still had the problem of designing a craft suitable both for landing on the Moon and returning to Earth.

Lunar orbit rendezvous was originally considered a distant third in the running. A single large rocket (but smaller than Nova) would launch two craft toward the Moon. One ship would be optimised for flight through the Earth's atmosphere and return to Earth, while the other would be designed solely for landing on the Moon. The Moon lander, operating only in vacuum and the Moon's weak gravity, need not be streamlined or structurally strong, and could potentially be much lighter than a ship able both to land on the Moon and to return to Earth. Finally, once its mission was complete and the landing crew were safely back in the Earth return ship, it could be discarded, meaning that the hardware needed solely for landing on the Moon need not be carried back to Earth. This option was attractive, requiring only a single launch and no gargantuan rocket, and allowing the lander to be optimised for its mission (for example, providing its pilots better visibility of the landing site), but it required not only rendezvous and docking, but performing them in lunar orbit where, if they failed, the lander's crew would be stranded in orbit around the Moon with no hope of rescue. After a high-stakes technical struggle, in the latter part of 1962, NASA selected lunar orbit rendezvous as the mission mode, with each landing mission to be launched on a single Saturn V booster, making the decision final with the selection of Grumman as contractor for the Lunar Module in November of that year. Had another mission mode been chosen, it is improbable in the extreme that the landing would have been accomplished in the 1960s.

The Apollo architecture was now in place. All that remained was building machines which had never been imagined before, learning to do things (on-time launches, rendezvous and docking in space, leaving spacecraft and working in the vacuum, precise navigation over distances no human had ever travelled before, and assessing all of the “unknown unknowns” [radiation risks, effects of long-term weightlessness, properties of the lunar surface, ability to land on lunar terrain, possible chemical or biological threats on the Moon, etc.]) and developing plans to cope with them.

This masterful book is the story of how what is possibly the largest collection of geeks and nerds ever assembled and directed at a single goal, funded with the abundant revenue from an economic boom, spurred by a geopolitical competition against the sworn enemy of liberty, took on these daunting challenges and, one by one, overcame them, found a way around, or simply accepted the risk because it was worth it. They learned how to tame giant rocket engines that randomly blew up by setting off bombs inside them. They abandoned the careful step-by-step development of complex rockets in favour of “all-up testing” (stack all of the untested pieces the first time, push the button, and see what happens) because “there wasn't enough time to do it any other way”.
People were working 16–18–20 hours a day, seven days a week. Flight surgeons in Mission Control handed out “go and whoa pills”—amphetamines and barbiturates—to keep the kids on the console awake at work and asleep those few hours they were at home—hey, it was the Sixties!

This is not a tale of heroic astronauts and their exploits. The astronauts, as they have been the first to say, were literally at the “tip of the spear” and would not have been able to complete their missions without the work of almost half a million uncelebrated people who made them possible, not to mention the hundred million or so U.S. taxpayers who footed the bill.

This was not a straight march to victory. Three astronauts died in a launch pad fire, the investigation of which revealed shockingly slapdash quality control in the assembly of their spacecraft and NASA's ignoring the lethal risk of fire in a pure oxygen atmosphere at sea level pressure. The second flight of the Saturn V was a near calamity due to multiple problems, some entirely avoidable (and yet the decision was made to man the next flight of the booster and send the crew to the Moon). Neil Armstrong narrowly escaped death in May 1968 when the Lunar Landing Research Vehicle he was flying ran out of fuel and crashed. And the division of responsibility between the crew in the spacecraft and mission controllers on the ground had to be worked out before it was tested in flight, where getting things right could mean the difference between life and death.

What can we learn from Apollo, fifty years on? Other than standing in awe at what was accomplished given the technology and state of the art of the time, and on a breathtakingly short schedule, little or nothing that is relevant to the development of space in the present and future. Apollo was the product of a set of circumstances which happened to come together at one point in history and are unlikely to ever recur. Although some of those who worked on making it a reality were dreamers and visionaries who saw it as the first step in expanding the human presence beyond the home planet, to those who voted to pay the forbidding bills (at its peak, NASA's budget, mostly devoted to Apollo, was more than 4% of all Federal spending; in recent years, it has settled at around one half of one percent: a national commitment to space eight times smaller as a fraction of total spending), Apollo was seen as a key battle in the Cold War. Allowing the Soviet Union to continue to achieve milestones in space while the U.S. played catch-up or forfeited the game would reinforce the Soviet message to the developing world that their economic and political system was the wave of the future, leaving decadent capitalism in the dust. A young, ambitious, forward-looking president, smarting from being scooped once again by Yuri Gagarin's orbital flight and the humiliation of the débâcle at the Bay of Pigs in Cuba, seized on a bold stroke that would show the world the superiority of the U.S. by deploying its economic, industrial, and research resources toward a highly visible goal. And after Kennedy was assassinated two and a half years later, his successor, a space enthusiast who had directed a substantial part of NASA's spending to his home state and those of his political allies, presented the program as the legacy of the martyred president and vigorously defended it against those who tried to kill it or reduce its priority. The U.S.
was in an economic boom which would last through most of the Apollo program until after the first Moon landing, and was the world's unchallenged economic powerhouse. And finally, the federal budget had not yet been devoured by uncontrollable “entitlement” spending, and the national debt was modest and manageable: if the national will was there, Apollo was affordable. This confluence of circumstances was unique to its time and has not been repeated in the half century thereafter, nor is it likely to recur in the foreseeable future.

Space enthusiasts who look at Apollo and what it accomplished in such a short time often err in assuming that a similar program (government funded, on a massive scale with lavish budgets, focussed on a single goal, and based on special-purpose disposable hardware suited only for its specific mission) is the only way to open the space frontier. They are not only wrong in this assumption, but they are dreaming if they think there is the public support and political will to do anything like Apollo today. In fact, Apollo was not even particularly popular in the 1960s: only at one point in 1965 did public support for funding of human trips to the Moon poll higher than 50%, and only around the time of the Apollo 11 landing did 50% of the U.S. population believe Apollo was worth what was being spent on it.

Indeed, despite being motivated as a demonstration of the superiority of free people and free markets, Project Apollo was a quintessentially socialist space program. It was funded by money extracted by taxation, its priorities were set by politicians, and its operations were centrally planned and managed in a top-down fashion of which the Soviet functionaries at Gosplan could only dream. Its goals were set by politics, not economic benefits, science, or building a valuable infrastructure. This was not lost on the Soviets. Here is Soviet Minister of Defence Dmitriy Ustinov speaking at a Central Committee meeting in 1968, quoted by Boris Chertok in volume 4 of Rockets and People.
…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

This kind of socialist operation works fine in a wartime crash program driven by time pressure, where unlimited funds and manpower are available, and where there is plenty of capital which can be consumed or borrowed to pay for it. But it does not create sustainable enterprises. Once the goal is achieved, the war won (or lost), or it runs out of other people's money to spend, the whole thing grinds to a halt or stumbles along, continuing to consume resources while accomplishing little. This was the predictable trajectory of Apollo. Apollo was one of the noblest achievements of the human species and we should celebrate it as a milestone in the human adventure, but trying to repeat it is pure poison to the human destiny in the solar system and beyond.

This book is a superb recounting of the Apollo experience, told mostly through the largely unknown people who confronted the daunting technical problems and, one by one, found solutions which, if not perfect, were good enough to land on the Moon in 1969. Later chapters describe key missions, again concentrating on the problem solving which went on behind the scenes to achieve their goals or, in the case of Apollo 13, get home alive. Looking back on something that happened fifty years ago, especially if you were born afterward, it may be difficult to appreciate just how daunting the idea of flying to the Moon was in May 1961. This book is the story of the people who faced that challenge, pulled it off, and are largely forgotten today.

Both the 1989 first edition and the 2004 revised paperback edition are out of print and available only at absurd collectors' prices. The Kindle edition, which is based upon the 2004 edition with small revisions to adapt it to digital reader devices, is available at a reasonable price, as is an unabridged audio book, which is a reading of the 2004 edition. You'd think there would have been a paperback reprint of this valuable book in time for the fiftieth anniversary of the landing of Apollo 11 (and the thirtieth anniversary of its original publication), but there wasn't.

Project Apollo is such a huge, sprawling subject that no book can possibly cover every aspect of it. For those who wish to delve deeper, here is a reading list of excellent sources. I have read all of these books and recommend every one. For those I have reviewed, I link to my review; for others, I link to a source where you can obtain the book.
In the distance, glistening partitions, reminiscent of the algal membranes that formed the cages in some aquatic zoos, swayed back and forth gently, as if in time to mysterious currents. Behind each barrier the sea changed color abruptly, the green giving way to other bright hues, like a fastidiously segregated display of bioluminescent plankton.

Oh, wow. And then, it stops. I don't mean ends, as that would imply that everything that's been thrown up in the air is somehow resolved. There is an attempt to close the circle with the start of the story, but a whole universe of questions is left unanswered. The human perspective is inadequate to describe a place where Planck length objects interact in Planck time intervals and the laws of physics are made up on the fly. Ultimately, the story failed for me since it never engaged me with the characters—I didn't care what happened to them. I'm a fan of hard science fiction, but this was just too adamantine to be interesting. The title, Schild's Ladder, is taken from a method in differential geometry which is used to approximate the parallel transport of a vector along a curve.
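For the mathematically curious, here is a single rung of the ladder (my summary of the standard construction, not anything from the novel; exp denotes the exponential map, which follows a geodesic from a point in the direction of a vector):

    \[
    \begin{aligned}
    X_0 &= \exp_{A_0}(V) && \text{mark the point at the tip of the vector}\\
    M   &= \text{geodesic midpoint of } X_0 \text{ and } A_1\\
    X_1 &= \exp_{A_0}\!\left(2\,\exp_{A_0}^{-1}(M)\right) && \text{double the geodesic from } A_0 \text{ through } M\\
    V_1 &= \exp_{A_1}^{-1}(X_1) && \text{the transported vector, now based at } A_1
    \end{aligned}
    \]

Here A₀ and A₁ are nearby points on the curve and V is the vector to be carried from A₀ to A₁. Iterating the rung along ever finer subdivisions of the curve converges to the Levi-Civita parallel transport of V, using nothing more than the ability to draw geodesics and bisect them.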
In his world, you didn't let wrongs go unanswered—not wrongs like this, and especially when you had the ability to do something. Vengeance was a necessary function of a civilized world, particularly at its margins, in its most remote and wild regions. Evildoers, unwilling to submit to the rule of law, needed to lie awake in their beds at night worried about when justice would eventually come for them. If laws and standards were not worth enforcing, then they certainly couldn't be worth following.

Harvath forms tenuous alliances with those he encounters, and then must confront an all-out assault by élite mercenaries who, apparently unsatisfied with the fear induced by fanatic Russian operatives, model themselves on the Nazi SS. Then, after survival, it's time for revenge. Harvath has done his biochemistry homework and learned well the off-label applications of suxamethonium chloride. Sux to be you, Boris. This is a tightly-crafted thriller which is, in my opinion, one of the best of Brad Thor's novels. There is no political message or agenda, nor any of the Washington intrigue which has occupied recent books. Here it is a pure struggle between a resourceful individual, on his own against amoral forces of pure evil, in an environment as deadly as his human adversaries.
He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,
Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains. The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilisters”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses dare to push back, having a limited tolerance for fatuous nonsense, or for being scolded by those who have been consistently wrong about, well, everything, and dare to vote for candidates and causes which make sense to them and seem better aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few.

One of the reasons we seem to have many more IYIs around than we used to, and that they have more influence over our lives, is related to scaling. As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them from the blandishments of IYIs.
America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.

The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, for 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them.

The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had more than doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920.

At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (or such as existed in private companies a fraction of the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, enacted a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years.

Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco.
Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public. As he left Vermont for Washington, he said, “I believe I can swing it.”

As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it.

He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort. Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid as when they were applied in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try.

It worked. The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved, from 70% to 31%. Achieving these goals required constant vigilance and an unceasing struggle with Congress, where politicians of both parties regarded any budget surplus, or increase in revenue generated by lower tax rates and a booming economy, as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defense aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest, and that, “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states.
The New York Times approved, “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.”

When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day. Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes.

Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics, but which in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood, and sixteen-year-old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss of his son and never forgave himself for bringing his son to Washington, where the injury occurred.

In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo.

In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) considered ten years in office, which would have been longer than any previous president had served, too long for any individual. Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, had few tools with which to intervene in any case. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water.
But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to. Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt, raised the top marginal personal income tax rate to 63%, and raised corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed re-election campaign in 1932. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace.

Calvin Coolidge was arguably the last U.S. president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to that of an executive in a business: seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis.

This is a superb history of Coolidge and his time, full of lessons for our age, which has veered so far from the constitutional framework he so respected.
The only way to smash this racket is to conscript capital and industry and labor before the nations [sic] manhood can be conscripted. One month before the Government can conscript the young men of the nation—it must conscript capital and industry. Let the officers and the directors and the high-powered executives of our armament factories and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted—to get $30 a month, the same wage as the lads in the trenches get. Let the workers in these plants get the same wages—all the workers, all presidents, all directors, all managers, all bankers—yes, and all generals and all admirals and all officers and all politicians and all government office holders—everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches! Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and majors [I think “mayors” was intended —JW] pay half their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds. Why shouldn't they?

Butler goes on to recommend that any declaration of war require approval by a national plebiscite in which voting would be restricted to those subject to conscription in a military conflict. (Writing in 1935, he never foresaw that young men and women would be sent into combat without so much as a declaration of war being voted by Congress.) Further, he would restrict all use of military force to genuine defence of the nation, in particular limiting the Navy to operating no more than 200 miles (320 km) from the coastline. This is an impassioned plea against the folly of foreign wars by a man whose career was as a warrior. One can argue that there is a legitimate interest in, say, assuring freedom of navigation in international waters, but looking back on the results of U.S. foreign wars in the 21st century, it is difficult to argue they can be justified any more than the “Banana Wars” Butler fought in his time.
Listen, I understand who you are, and what this is. Please let me be clear that I have no intention to cooperate with you. I'm not going to cooperate with any intelligence service. I mean no disrespect, but this isn't going to be that kind of meeting. If you want to search my bag, it's right here. But I promise you, there's nothing in it that can help you.

And that was that. Edward Snowden could have kept quiet, done his job, collected his handsome salary, continued to live in a Hawaiian paradise, and shared his life with Lindsay, but he threw it all away on a matter of principle and duty to his fellow citizens and the Constitution he had sworn to defend when taking the oath upon joining the Army and the CIA. On the basis of the law, he is doubtless guilty of the three federal crimes with which he has been charged, sufficient to lock him up for as many as thirty years should the U.S. lay its hands on him. But he believes he did the correct thing in an attempt to right wrongs which were intolerable. I agree, and can only admire his courage. If anybody is deserving of a Presidential pardon, it is Edward Snowden.

There is relatively little discussion here of the actual content of the documents which were disclosed and the surveillance programs they revealed. For full details, visit the Snowden Surveillance Archive, which has copies of all of the documents which have been disclosed by the media to date. U.S. government employees and contractors should read the warning on the site before viewing this material.
But what I do know is that the U.S. isn't ready. If Halabi's figured out a way to hit us with something big—something biological—what's our reaction going to be? The politicians will run for the hills and point fingers at each other. And the American people…. They faint if someone uses insensitive language in their presence and half of them couldn't run up a set of stairs if you put a gun to their head. What'll happen if the real s*** hits the fan? What are they going to do if they're faced with something that can't be fixed by a Facebook petition?

So Rapp is as ruthless with his superiors as with the enemy, and obtains the free hand he needs to get the job done. Eventually Rapp and his team identify a potentially catastrophic threat and must swing into action, despite the political and diplomatic repercussions, to avert disaster. And then it is time to settle some scores. Kyle Mills has delivered another thriller which is in the tradition of Mitch Rapp and also further develops his increasingly complex character in new ways.
Again our computations have been flushed and the LM is still flying. In Cambridge someone says, “Something is stealing time.” … Some dreadful thing is active in our computer and we do not know what it is or what it will do next. Unlike Garman [AGC support engineer for Mission Control] in Houston I know too much. If it were in my hands, I would call an abort.

As the Lunar Module passed 3000 feet, another alarm, this time a 1201—VAC areas exhausted—flashed. This is another indication of overload, but of a different kind. Mission Control immediately calls up “We're go. Same type. We're go.” Well, it wasn't the same type, but they decided to press on. Descending through 2000 feet, the DSKY (computer display and keyboard) goes blank and stays blank for ten agonising seconds. Seventeen seconds later another 1202 alarm, and a blank display for two seconds—Armstrong's heart rate reaches 150. A total of five program alarms and resets had occurred in the final minutes of landing. But why? And could the computer be trusted to fly the return from the Moon's surface to rendezvous with the Command Module?

While the Lunar Module was still on the lunar surface, Instrumentation Laboratory engineer George Silver figured out what happened. During the landing, the Lunar Module's rendezvous radar (used only during the return to the Command Module) was powered on and set to a position where its reference timing signal came from an internal clock rather than the AGC's master timing reference. If these clocks were in a worst-case out-of-phase condition, the rendezvous radar would flood the AGC with what we used to call “nonsense interrupts” back in the day, at a rate of 12,800 per second (6,400 from each of the two radar angle counters), each consuming one 11.72 microsecond memory cycle. This imposed an additional load of more than 13% on the AGC, which pushed it over the edge and caused tasks deemed non-critical (such as updating the DSKY) not to be completed on time, resulting in the program alarms and restarts. The fix was simple: don't enable the rendezvous radar until you need it, and when you do, put the switch in the position that synchronises it with the AGC's clock. But the AGC had proved its excellence as a real-time system: in the face of unexpected and unknown external perturbations it had completed the mission flawlessly, while alerting its developers to a problem which required their attention.

The creativity of the AGC software developers, and the merit of computer systems sufficiently simple that the small number of people who designed them completely understood every aspect of their operation, were demonstrated on Apollo 14. As the Lunar Module was checked out prior to the landing, the astronauts in the spacecraft and Mission Control saw the abort signal come on, which was supposed to indicate the big Abort button on the control panel had been pushed. This button, if pressed during descent to the lunar surface, immediately aborted the landing attempt and initiated a return to lunar orbit. This was a “one and done” operation: no Microsoft-style “Do you really mean it?” tea ceremony before ending the mission. Tapping the switch made the signal come and go, and it was concluded the most likely cause was a piece of metal contamination floating around inside the switch and occasionally shorting the contacts.
The abort signal caused no problems during lunar orbit, but if it should happen during the descent, perhaps jostled by vibration from the descent engine, it would be disastrous: wrecking a mission costing hundreds of millions of dollars and, coming on the heels of Apollo 13's mission failure and narrow escape from disaster, possibly bringing an end to the Apollo lunar landing programme. The Lunar Module AGC team, with Don Eyles as the lead, was faced with an immediate challenge: was there a way to patch the software to ignore the abort switch, protecting the landing, while still allowing an abort to be commanded, if necessary, from the computer keyboard (DSKY)? The answer was immediately apparent: no. The landing software, like all AGC programs, ran from read-only rope memory which had been woven on the ground months before the mission and could not be changed in flight.

But perhaps there was another way. Eyles and his colleagues dug into the program listing, traced the path through the logic, cobbled together a procedure, and then tested it in the simulator at the Instrumentation Laboratory. While the AGC's programming was fixed, the AGC operating system provided low-level commands which allowed the crew to examine and change bits in locations in the read-write memory. Eyles discovered that by setting the bit which indicated that an abort was already in progress, the abort switch would be ignored at the critical moments during the descent. As with all software hacks, this had other consequences requiring their own work-arounds, but by the time Apollo 14's Lunar Module emerged from behind the Moon on course for its landing, a complete procedure had been developed, which was radioed up from Houston and worked perfectly, resulting in a flawless landing.

These and many other stories of the development and flight experience of the AGC lunar landing software are related here by the person who wrote most of it and supported every lunar landing mission as it happened. Where technical detail is required to understand what is happening, no punches are pulled, even to the level of bit-twiddling and hideously clever programming tricks such as using an overflow condition to skip over an EXTEND instruction, converting the following instruction from double precision to single precision, all in order to save around forty words of precious non-bank-switched memory.

In addition, this is a personal story, set in the context of the turbulent 1960s and early ’70s, of the author and other young people accomplishing things no humans had ever before attempted. It was a time when everybody was making it up as they went along, learning from experience, and improvising on the fly; a time when a person who had never written a line of computer code would write, as his first program, the code that would land men on the Moon; and a time when the creativity and hard work of individuals made all the difference.

Already, by the end of the Apollo project, the curtain was ringing down on this era. Even though a number of improvements had been developed for the LM AGC software which improved precision landing capability, reduced the workload on the astronauts, and increased robustness, none of these was incorporated in the software for the final three Apollo missions, LUMINARY 210, which was deemed “good enough”, the benefit of the changes judged not worth the risk and effort of testing and incorporating them.
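The shape of the Apollo 14 trick is easy to convey in modern terms. Here is a toy sketch of my own in Python, with invented names, emphatically not the AGC's assembly code: the descent program consults the abort switch only when no abort is already flagged, so pre-setting the flag turns the flaky switch into a no-op while a keyboard-commanded abort remains available.

    # Toy model of the Apollo 14 workaround (invented names, not AGC code).
    class DescentProgram:
        def __init__(self):
            self.abort_in_progress = False  # the erasable-memory bit the procedure pre-set
            self.landing = True

        def poll_abort_switch(self, switch_closed):
            # Normal logic: a switch closure starts an abort, but only
            # if one is not already believed to be under way.
            if switch_closed and not self.abort_in_progress:
                self.abort_in_progress = True
                self.abort()

        def dsky_abort(self):
            # The crew could still command an abort from the keyboard.
            self.abort()

        def abort(self):
            self.landing = False
            print("ABORT: stage and return to lunar orbit")

    lm = DescentProgram()
    lm.abort_in_progress = True          # the patch: claim an abort is already in progress
    lm.poll_abort_switch(True)           # flaky switch chatter is now ignored
    print("still landing:", lm.landing)  # True: the descent continues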
Programmers seeking this kind of adventure today will not find it at NASA or its contractors, but instead in the innovative “New Space” and smallsat industries.
dP/dt = rP(1 − P/K)

It's a maxim among popular science writers that every equation you include cuts your readership by a factor of two, so among the hardy half who remain, let's see how this works. It's really very simple (and indeed, far simpler than actual population dynamics in a real environment). The left side, “dP/dt”, simply means “the rate of growth of the population P with respect to time, t”. On the right-hand side, “rP” accounts for the increase (or decrease, if r is less than 0) in population, proportional to the current population. The population is limited by the carrying capacity of the habitat, K, which is modelled by the factor “(1 − P/K)”.

Now think about how this works: when the population is very small, P/K will be close to zero and, subtracted from one, will yield a number very close to one. This, then, multiplied by the increase due to rP will have little effect and the growth will be largely unconstrained. As the population P grows and begins to approach K, however, P/K will approach unity and the factor will fall to zero, meaning that growth has completely stopped due to the population reaching the carrying capacity of the environment—it simply doesn't produce enough vegetation to feed any more rabbits. If the rabbit population overshoots, this factor will go negative and there will be a die-off which eventually brings the population P below the carrying capacity K. (Sorry if this seems tedious; one of the great things about learning even a very little about differential equations is that all of this is apparent at a glance from the equation once you get over the speed bump of understanding the notation and algebra involved.)

This is grossly over-simplified. In fact, real populations are prone to oscillations and even chaotic dynamics, but we don't need to get into any of that for what follows, so I won't.

Let's complicate things in our bunny paradise by introducing a population of wolves. The wolves can't eat the vegetation, since their digestive systems cannot extract nutrients from it, so their only source of food is the rabbits. Each wolf eats many rabbits every year, so a large rabbit population is required to support a modest number of wolves. Now if we look at the same equation, this time for the wolves, K represents the number of wolves the rabbit population can sustain, in the steady state, where the number of rabbits eaten by the wolves just balances the rabbits' rate of reproduction. This will often result in a rabbit population smaller than the carrying capacity of the environment, since their population is now constrained by wolf predation and not K.

What happens as this (oversimplified) system cranks away, generation after generation, and Darwinian evolution kicks in? Evolution consists of two processes: variation, which is largely random, and selection, which is sensitively dependent upon the environment. The rabbits are now unconstrained by K, the carrying capacity of their environment: if their numbers increase beyond a level substantially smaller than K, the wolves will simply eat more of them and bring the population back down. The rabbit population, then, is not constrained by K at all, but rather by r: the rate at which they can produce new offspring.
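If you'd rather watch the equation do its thing than stare at it, here is a minimal numerical sketch (Python, crude Euler integration; the parameter values are mine and purely illustrative):

    # Logistic (Verhulst) growth, dP/dt = r*P*(1 - P/K), integrated
    # with small Euler steps and illustrative parameters.
    r = 0.8        # per-capita growth rate, per year
    K = 1000.0     # carrying capacity of the island
    P = 10.0       # starting rabbit population
    dt = 0.01      # time step, in years

    for year in range(1, 16):
        for _ in range(int(1 / dt)):      # integrate one year
            P += r * P * (1 - P / K) * dt
        if year % 3 == 0:
            print(f"year {year:2d}: {P:6.1f} rabbits")
    # Near-exponential growth at first (r dominates), then a flattening
    # just below K as the (1 - P/K) factor chokes off the increase.

In the rabbits-and-wolves world, predation holds P down in the steep early part of this curve, where r is everything.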
Population biologists call this an r-selected species: evolution will select for individuals who produce the largest number of progeny in the shortest time, and hence for a life cycle which minimises parental investment in offspring and against mating strategies, such as lifetime pair bonding, which would limit their numbers. Rabbits which produce fewer offspring will lose a larger fraction of them to predation (which affects all rabbits, essentially at random), and the genes which they carry will be selected out of the population. An r-selected population, sometimes referred to as r-strategists, will tend to be small, with short gestation time, high fertility (offspring per litter), rapid maturation to the point where offspring can reproduce, and broad distribution of offspring within the environment.

Wolves operate under an entirely different set of constraints. Their entire food supply is the rabbits, and since it takes a lot of rabbits to keep a wolf going, there will be fewer wolves than rabbits. What this means, going back to the Verhulst equation, is that the 1 − P/K factor will largely determine their population: the carrying capacity K of the environment supports a much smaller population of wolves than of their food source, rabbits, and if their rate of population growth r were to increase, it would simply mean that more wolves would starve due to insufficient prey. This results in an entirely different set of selection criteria driving their evolution: the wolves are said to be K-selected or K-strategists. A successful wolf (defined by evolution theory as one more likely to pass its genes on to successive generations) is not one which produces more offspring (who would merely starve by hitting the K limit before reproducing), but rather a highly optimised predator, able to efficiently exploit the limited supply of rabbits and to pass its genes on to a small number of offspring, produced infrequently, which require substantial investment by their parents to train them to hunt and, in many cases, to acquire the social skills to act as part of a group that hunts together. These K-selected species tend to be larger, live longer, have fewer offspring, and have parents who spend much more effort raising them and training them to be successful predators, either individually or as part of a pack.

“K or r, r or K: once you've seen it, you can't look away.”

Just as our island of bunnies and wolves was over-simplified, the dichotomy of r- and K-selection is rarely precisely observed in nature (although rabbits and wolves are pretty close to the extremes, which is why I chose them). Many species fall somewhere in the middle and, more importantly, are able to shift their strategy on the fly, much faster than evolution by natural selection, based upon the availability of resources. These r/K shape-shifters react to their environment. When resources are abundant, they adopt an r-strategy, but as their numbers approach the carrying capacity of their environment, they shift to life cycles you'd expect from K-selection.

What about humans? At first glance, humans would seem to be a quintessentially K-selected species.
We are large, have long lifespans (about twice as long as we “should”, judging by the number of heartbeats per lifetime of other mammals), usually produce only one child (and occasionally two) per gestation, with around a one year turn-around between children, and massive investment by parents in raising infants to the point of minimal autonomy and many additional years before they become fully functional adults. Humans are “knowledge workers”, and whether they are hunter-gatherers, farmers, or denizens of cubicles at The Company, live largely by their wits, which are a combination of the innate capability of their hypertrophied brains and what they've learned in their long apprenticeship through childhood. Humans are not just predators on what they eat, but also on one another. They fight, and they fight in bands, which means that they either develop the social skills to defend themselves and meet their needs by raiding other, less competent groups, or get selected out in the fullness of evolutionary time.

But humans are also highly adaptable. Since modern humans appeared some time between fifty and two hundred thousand years ago, they have survived, prospered, proliferated, and spread into almost every habitable region of the Earth. They have been hunter-gatherers, farmers, warriors, city-builders, conquerors, explorers, colonisers, traders, inventors, industrialists, financiers, managers, and, in the Final Days of their species, WordPress site administrators. In many species, the selection of a predominantly r or K strategy is a mix of genetics and switches that get set based upon experience in the environment. It is reasonable to expect that humans, with their large brains and ability to override inherited instinct, would be especially sensitive to signals directing them to one or the other strategy.

Now, finally, we get back to politics; this was, recall, a post about politics. I hope you've been thinking about it as we spent time on the island of bunnies and wolves, the cruel realities of natural selection, and the arcana of differential equations.

What does r-selection produce in a human population? Well, it might, say, be averse to competition and all means of selection by measures of performance. It would favour the production of large numbers of offspring at an early age, by early onset of mating, promiscuity, and the raising of children by single mothers with minimal investment by them and little or none by the fathers (leaving the raising of children to the State). It would welcome other r-selected people into the community, and hence favour immigration from heavily r populations. It would oppose any kind of selection based upon performance, whether by intelligence tests, academic records, physical fitness, or job performance. It would strive to create the ideal r environment of unlimited resources, where all are provided their basic needs without having to do anything but consume. It would oppose and be repelled by the K component of the population, seeking to marginalise it as toxic, privileged, or exploiters of the real people. It might even welcome conflicts which reduce the numbers of its own K warriors in otherwise pointless foreign adventures.

And K-troop? Once a society in which they initially predominated creates sufficient wealth to support a burgeoning r population, they will find themselves outnumbered and outvoted, especially once the r wave removes the firebreaks put in place when K was king to guard against majoritarian rule by an urban underclass.
The K population will continue to do what they do best: preserving the institutions and infrastructure which sustain life, defending the society in the military, building and running businesses, creating the basic science and technologies to cope with emerging problems and expand the human potential, and governing an increasingly complex society made up, with every generation, of a population, and voters, who are fundamentally unlike them.

Note that the r/K model neatly explains the “crunchy to soggy” evolution of societies which has been remarked upon since antiquity. Human societies always start out, as our genetic heritage predisposes us to, K-selected. We work to better our condition and turn our large brains to problem-solving and, before long, the privation our ancestors endured turns into a pretty good life and then, eventually, abundance. But abundance is what selects for the r strategy. Those who would not have reproduced, or would not have had as many children, in the K days of yore now have babies-a-poppin' as in the introduction to Idiocracy, and before long, not waiting for genetics to do its inexorable work but purely through a shift in incentives, the rs outvote the Ks, and the Ks begin to count the days until their society runs out of the wealth which can be plundered from them.

But recall that equation. In our simple bunnies-and-wolves model, the resources of the island were static. Nothing the wolves could do would increase K and permit a larger rabbit and wolf population. This isn't the case for humans. K humans dramatically increase the carrying capacity of their environment by inventing new technologies such as agriculture and the selective breeding of plants and animals, discovering and exploiting new energy sources such as firewood, coal, and petroleum, and exploring and settling new territories and environments which may require their discoveries to render habitable. The rs don't do these things. And as the rs predominate and take control, this momentum stalls and begins to recede. Then the hard times ensue. As Heinlein said many years ago, “This is known as bad luck.” And then the Gods of the Copybook Headings will, with terror and slaughter, return. And K-selection will, with them, again assert itself.

Is this a complete model, a Rosetta stone for human behaviour? I think not: there are a number of things it doesn't explain, and the shifts in behaviour based upon incentives are much too fast to account for by genetics. Still, when you look at those eleven issues I listed so many words ago through the r/K perspective, you can almost immediately see how each strategy maps onto one side or the other of each one, and how they are consistent with the policy preferences of “liberals” and “conservatives”. There is also some rather fuzzy evidence for genetic differences (in particular, the DRD4-7R allele of the dopamine receptor and the size of the right amygdala) which appear to correlate with ideology.

Still, if you're on one side of the ideological divide and confronted with somebody on the other and try to argue from facts and logical inference, you may end up throwing up your hands (if not your breakfast) and saying, “They just don't get it!” Perhaps they don't. Perhaps they can't. Perhaps there's a difference between you and them as great as that between rabbits and wolves, which can't be worked out by predator and prey sitting down and voting on what to have for dinner.
This may not be a hopeful view of the political prospect in the near future, but hope is not a strategy and to survive and prosper requires accepting reality as it is and acting accordingly.
People say sometimes that Beauty is only superficial. That may be so. But at least it is not as superficial as Thought. To me, Beauty is the wonder of wonders. It is only shallow people who do not judge by appearances.

From childhood, however, we have been exhorted not to judge people by their appearances. In Skin in the Game (August 2019), Nassim Nicholas Taleb advises choosing the surgeon who “doesn't look like a surgeon”, because their success is more likely due to competence than to first impressions. Despite this, physiognomy, assessing a person's characteristics from their appearance, is as natural to humans as breathing, and has been an instinctive part of human behaviour for as long as our species has existed. Thinkers and writers from Aristotle through the great novelists of the 19th century believed that an individual's character was reflected in, and could be inferred from, their appearance, and crafted and described their characters accordingly. Jules Verne would often spend a paragraph describing the appearance of his characters and what that implied for their behaviour.

Is physiognomy all nonsense, a pseudoscience like phrenology, which purported to predict mental characteristics by measuring bumps on the skull which were claimed to indicate the development of “cerebral organs” with specific functions? Or is there something to it, after all? Humans are a social species and, as such, have evolved to be exquisitely sensitive to signals sent by others of their kind, conveyed through subtle means such as tone of voice, facial expression, or posture. Might we also be able to perceive and interpret messages which indicate properties such as honesty, intelligence, courage, impulsiveness, criminality, diligence, and more? Such an ability, if possible, would be advantageous to individuals in interacting with others and, by contributing to success in reproducing and raising offspring, would be selected for by evolution.

In this short book (or long essay—the text is just 85 pages), the author examines the evidence and concludes that there are legitimate correlations between appearance and behaviour, and that human instincts are picking up genuine signals which are useful in interacting with others. This seems perfectly plausible: the development of the human body and face is controlled by the genetic inheritance of the individual and modulated through the effects of hormones, and it is well established that both genetics and hormones are correlated with a variety of behavioural traits.

Let's consider a reasonably straightforward example. A study published in 2008 found a statistically significant correlation between the width of the face (cheekbone-to-cheekbone distance compared to brow to upper lip) and aggressiveness (measured by the number of penalty minutes received) among a sample of 90 ice hockey players. Now, a wide face is also known to correlate with a high testosterone level in males, and testosterone correlates with aggressiveness and selfishness. So it shouldn't be surprising to find the wide-face morphology correlated with the consequences of high-testosterone behaviour. In fact, testosterone and other hormone levels play a substantial part in many of the correlations between appearance and behaviour discussed by the author.

Many people believe they can identify, with reasonable reliability, homosexuals just from their appearance: the term “gaydar” has come into use for this ability.
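Claims like this can now be tested mechanically: train a classifier on labelled photographs and see whether it beats chance on faces it has never seen. Here is the skeleton of such an experiment (a schematic sketch only, with stand-in random data, and no connection to any particular study's methodology):

    # Schematic of a face-to-trait classification study (stand-in data;
    # a real study would compute embeddings of photographs with a
    # face-recognition network rather than draw random vectors).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 128))    # 1000 "faces" as 128-dimensional embeddings
    y = rng.integers(0, 2, size=1000)   # binary trait labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")
    # With random data this hovers near 50%; a consistent lift above
    # chance on real embeddings is evidence the trait is readable from the face.

Accuracies well above the 50% coin-flip line, on photographs the model has never seen, are the entire content of results like the following.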
In 2017, researchers trained an artificial intelligence program with a set of photographs of individuals with known sexual orientations and then tested the program on a set of more than 35,000 images. The program correctly identified the sexual orientation of men 81% of the time and of women 74% of the time.

Of course, appearance goes well beyond factors which are inherited or determined by hormones. Tattoos, body piercings, and other irreversible modifications of appearance correlate with high time preference (heavily discounting the future), which correlates with low intelligence and the other characteristics of an r-selected lifestyle. Choices of clothing indicate an individual's self-identification, although fashion trends change rapidly and differ from region to region, so misinterpretation is a risk.

The author surveys a wide variety of characteristics including fat/thin body type, musculature, skin and hair, height, face shape, breast size in women, baldness and beards in men, eye spacing, tattoos, hair colour, facial symmetry, handedness, and finger length ratio, and presents citations to research, most published recently, supporting correlations between these aspects of appearance and behaviour. He cautions that while people may be good at sensing and interpreting these subtle signals among members of their own race, there are substantial and consistent differences between the races, so inferences cannot be drawn across racial lines, nor are members of one race generally able to read the signals of members of another.

One gets the sense (although less strongly) that this is another field where advances in genetics and data science are piling up a mass of evidence which will roll over the stubborn defenders of the “blank slate” like a truth tsunami. And again, this is an area where people's instincts, honed by millennia of evolution, are still relied upon despite the scorn of “experts”. (So afraid were the authors of the Wikipedia page on physiognomy [retrieved 2019-12-16] of the “computer gaydar” paper mentioned above that they declined to cite the peer-reviewed paper in the Journal of Personality and Social Psychology but instead linked to a BBC News piece which dismissed it as “dangerous” and “junk science”. Go on whistling, folks, as the wave draws near and begins to crest….)

Is the case for physiognomy definitively made? I think not, and I suspect the author would agree: there are many aspects of appearance and a multitude of personality traits, some of which may be significantly correlated and others not at all. Still, there is evidence for some linkage, and it appears to be growing as more work in the area (which is perilous to the careers of those who dare investigate it) accumulates. The scientific evidence, summarised here, seems to be, as so often happens, confirming the instincts honed over hundreds of generations by the inexorable process of evolution: you can form some conclusions just by observing people, and this information is useful in the competition which is life on Earth.

Meanwhile, when choosing programmers for a project team, the one who shows up whose eyebrows almost meet their hairline, sporting a plastic baseball cap worn backward with the adjustment strap on the smallest peg, with a scraggly soybeard, pierced nose, and visible tattoos isn't likely to be my pick. She's probably a WordPress developer.