2004
The author, brother of William F. Buckley, is the founder of a school of public speaking and the author of several books on public speaking and two novels. Here, however, we have Buckley's impassioned, idiosyncratic, and (as far as I can tell) self-published rant against the iniquities of contemporary U.S. morals, politics, and culture. Bottom line: he doesn't like it—the last two sentences are “The supine and swinish American public is the reason why our society has become so vile. We are vile.” This book would have been well served had the author enlisted brother Bill or his editor to red-pencil the manuscript. How the humble apostrophe causes self-published authors to stumble! On page 342 we trip over the “biography of John Quincy Adam's”, among numerous other exemplars of proletarian mispunctuation. On page 395, Michael Behe, author of Darwin's Black Box, has his name given as “Rehe” (and in the index too). On page 143, he misquotes Alan Guth's Inflationary Universe as saying the grand unification energy is “1016 GeV”, thereby getting it wrong by thirteen orders of magnitude compared to the 10¹⁶ GeV a sharp-eyed proofreader would have caught. All of this, along with Buckley's meandering off into anecdotes of his beloved hometown of Camden, South Carolina and philosophical disquisitions, distracts from the central question posed in the book, which is both profound and disturbing: can a self-governing republic survive without a consensus moral code shared by a large majority of its citizens? This is a question stalwarts of Western civilisation need to be asking themselves in this non-judgemental, multi-cultural age, and I wish Buckley had posed it more clearly in this book, which despite the title, has nothing whatsoever to do with that regrettable yet prefixally-eponymous McNewspaper.
Question: Why is it important to screen bags for IEDs [Improvised Explosive Devices]?

- The IED batteries could leak and damage other passenger bags.
- The wires in the IED could cause a short to the aircraft wires.
- IEDs can cause loss of lives, property, and aircraft.
- The ticking timer could worry other passengers.

I wish I were making this up. The inspector general of the “Homeland Security Department” declined to say how many of the “screeners” who intimidate citizens, feel up women, and confiscate fingernail clippers and putatively dangerous and easily-pocketed jewelry managed to answer this one correctly. I call Bovard a “crypto-libertarian” because he clearly bases his analysis on libertarian principles, yet rarely observes that any polity with unconstrained government power and sedated sheeple for citizens will end badly, regardless of who wins the elections. As with his earlier books, sources for this work are exhaustively documented in 41 pages of endnotes.
2005
This companion to Hypersonic: The Story of the North American X-15 (March 2004) contains more than 400 photos, 50 in colour, which didn't make the cut for the main volume, as well as some which came to hand only after its publication. There's nothing really startling, but if you can't get enough of this beautiful flying machine, here's another hefty dose of well-captioned period photos, many never before published. The two-page spread on pp. 58–59 is interesting. It's a North American Aviation presentation from 1962 on how the X-15 could be used for various advanced propulsion research programs, including ramjets, variable cycle turboramjets, scramjets, and liquid air cycle engines (LACE) burning LH2 and air liquefied on board. More than forty years later, these remain “advanced propulsion” concepts, with scant progress to show for the intervening decades. None of the X-15 propulsion research programs was ever flown.
/sbin/iptables -A INPUT -p tcp --syn --dport 80 -m iplimit \
    --iplimit-above 20 --iplimit-mask 32 -j REJECT

Anybody who tries to open more than 20 connections will get whacked on each additional SYN packet. You can see whether this rule is affecting too many legitimate connections with the status query:

/sbin/iptables -L -v

Geekly reading, to be sure, but just the thing if you're responsible for defending an Internet server or site from malefactors in the Internet Slum.
Authors of popular science books are cautioned that each equation they include (except, perhaps, E=mc²) will halve the sales of their book. Penrose laughs in the face of such fears. In this “big damned fat square book” of 1050 pages of main text, there's an average of one equation per page, which, according to conventional wisdom, should reduce readership by a factor of 2⁻¹⁰⁵⁰ or 8.3×10⁻³¹⁷, so the single copy printed would have to be shared among the 10⁸⁰ elementary particles in the universe over an extremely long time. But, according to the Amazon sales ranking as of today, this book is number 71 in sales—go figure.
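The conventional-wisdom arithmetic is easy to check; here is a quick sketch in Python (my illustration, not from the book or review):

```python
import math

# Conventional wisdom: each equation halves a popular-science book's sales.
equations = 1050                     # roughly one equation per page of main text
exp = equations * math.log10(2)      # orders of magnitude lost, about 316.08
e = math.ceil(exp)                   # decimal exponent: 317
m = 10 ** (e - exp)                  # mantissa: about 8.3

print(f"2^-{equations} is about {m:.1f} x 10^-{e}")   # → about 8.3 x 10^-317
```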
Don't deceive yourself; in committing to read this book you are making a substantial investment of time and brain power to master the underlying mathematical concepts and their application to physical theories. If you've noticed my reading being lighter than usual recently, both in terms of number of books and their intellectual level, it's because I've been chewing through this tome for the last two and a half months and it's occupied my cerebral capacity to the exclusion of other works. But I do not regret for a second the time I've spent reading this work and working the exercises, and I will probably make a second pass through it in a couple of years to reinforce the mathematical toolset into my aging neurons. As an engineer whose formal instruction in mathematics ended with differential equations, I found chapters 12–15 to be the “hump”—after making it through them (assuming you've mastered their content), the rest of the book is much more physical and accessible. There's kind of a phase transition between the first part of the book and chapters 28–34. In the latter part of the book, Penrose gives free rein to his own view of fundamental physics, introducing his objective reduction of the quantum state function (OR) by gravity, twistor theory, and a deconstruction of string theory which may induce apoplexy in researchers engaged in that programme. But when discussing speculative theories, he takes pains to identify his own view when it differs from the consensus, and to caution the reader where his own scepticism is at variance with a widely accepted theory (such as cosmological inflation). If you really want to understand contemporary physics at the level of professional practitioners, I cannot recommend this book too highly. After you've mastered this material, you should be able to read research reports in the General Relativity and Quantum Cosmology preprint archives like the folks who write and read them.
Imagine if, instead of two or three hundred taxpayer funded specialists, four or five thousand self-educated people impassioned with figuring out how nature does it contributed every day to our unscrewing of the inscrutable. Why, they'll say it's a movement. And that's exactly what it will be.

In this book, mathematician and philosopher William A. Dembski attempts to lay the mathematical and logical foundation for inferring the presence of intelligent design in biology. Note that “intelligent design” needn't imply divine or supernatural intervention—the “directed panspermia” theory of the origin of life proposed by co-discoverer of the structure of DNA and Nobel Prize winner Francis Crick is a theory of intelligent design which invokes no deity, and my perpetually unfinished work The Rube Goldberg Variations and the science fiction story upon which it is based involve searches for evidence of design in scientific data, not in scripture.
You certainly won't find any theology here. What you will find is logical and mathematical arguments which sometimes ascend (or descend, if you wish) into prose like (p. 153), “Thus, if P characterizes the probability of E₀ occurring and f characterizes the physical process that led from E₀ to E₁, then P∘f⁻¹ characterizes the probability of E₁ occurring and P(E₀) ≤ P∘f⁻¹(E₁) since f(E₀) = E₁ and thus E₀ ⊂ f⁻¹(E₁).” OK, I did cherry-pick that sentence from a particularly technical section which the author advises readers to skip if they're willing to accept the less formal argument already presented. Technical arguments are well-supplemented by analogies and examples throughout the text.
Dembski argues that what he terms “complex specified information” is conclusive evidence for the presence of design. Complexity (the Shannon information measure) is insufficient—all possible outcomes of flipping a coin 100 times in a row are equally probable—but presented with a sequence of all heads, all tails, alternating heads and tails, or a pattern in which heads occurred only for prime numbered flips, the evidence for design (in this case, cheating or an unfair coin) would be considered overwhelming. Complex information is considered specified if it is compressible in the sense of Chaitin-Kolmogorov-Solomonoff algorithmic information theory, which measures the randomness of a bit string by the length of the shortest computer program which could produce it. The overwhelming majority of 100 bit strings cannot be expressed more compactly than simply by listing the bits; the examples given above, however, are all highly compressible. This is the kind of measure, albeit not rigorously computed, which SETI researchers would use to identify a signal as of intelligent origin, which courts apply in intellectual property cases to decide whether similarity is accidental or deliberate copying, and archaeologists use to determine whether an artefact is of natural or human origin. Only when one starts asking these kinds of questions about biology and the origin of life does controversy erupt!
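True Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives the flavour of the test. The following sketch (my illustration; zlib is only a crude proxy for algorithmic information content) shows patterned 100-bit strings compressing far better than a typical random one:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Bytes needed to store the bit string after zlib compression."""
    return len(zlib.compress(bits.encode()))

random.seed(1)
random_bits = "".join(random.choice("01") for _ in range(100))
all_heads   = "1" * 100     # highly compressible: "print 100 ones"
alternating = "10" * 50     # also compressible: "print '10' 50 times"

# The patterned strings need noticeably fewer bytes than the random one.
for name, s in [("random", random_bits), ("all heads", all_heads),
                ("alternating", alternating)]:
    print(name, compressed_size(s))
```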
Chapter 3 proposes a “Law of Conservation of Information” which, if you accept it, would appear to rule out the generation of additional complex specified information by the process of Darwinian evolution. This would mean that while evolution can and does account for the development of resistance to antibiotics in bacteria and pesticides in insects, modification of colouration and pattern due to changes in environment, and all the other well-confirmed cases of the Darwinian mechanism, that innovation of entirely novel and irreducibly complex (see chapter 5) mechanisms such as the bacterial flagellum require some external input of the complex specified information they embody. Well, maybe…but one should remember that conservation laws in science, unlike invariants in mathematics, are empirical observations which can be falsified by a single counter-example. Niels Bohr, for example, prior to its explanation due to the neutrino, theorised that the energy spectrum of nuclear beta decay could be due to a violation of conservation of energy, and his theory was taken seriously until ruled out by experiment.
Let's suppose, for the sake of argument, that Darwinian evolution does explain the emergence of all the complexity of the Earth's biosphere, starting with a single primordial replicating lifeform. Then one still must explain how that replicator came to be in the first place (since Darwinian evolution cannot work on non-replicating organisms), and where the information embodied in its molecular structure came from. The smallest present-day bacterial genomes belong to symbiotic or parasitic species, and are in the neighbourhood of 500,000 base pairs, or roughly 1 megabit of information. Even granting that the ancestral organism might have been much smaller and simpler, it is difficult to imagine a replicator capable of Darwinian evolution with an information content 1000 times smaller than these bacteria. Yet randomly assembling even 500 bits of precisely specified information seems to be beyond the capacity of the universe we inhabit. If you imagine every one of the approximately 10⁸⁰ elementary particles in the universe trying combinations every Planck interval, 10⁴⁵ times every second, it would still take about a billion times the present age of the universe to randomly discover a 500 bit pattern. Of course, there are doubtless many patterns which would work, but when you consider how conservative all the assumptions are which go into this estimate, and reflect upon the evidence that life seemed to appear on Earth just about as early as environmental conditions permitted it to exist, it's pretty clear that glib claims that evolution explains everything and there are just a few details to be sorted out are arm-waving at best and propaganda at worst, and that it's far too early to exclude any plausible theory which could explain the mystery of the origin of life.
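The estimate is easy to reproduce; here is a rough version in Python (my own sketch, using the particle count and Planck-interval rate from the text and a 13.8-billion-year cosmic age). It lands around 10⁸ universe ages, the same order of magnitude as the figure quoted above:

```python
import math

particles   = 1e80                       # elementary particles in the universe
per_second  = 1e45                       # one trial per Planck interval
age_seconds = 13.8e9 * 365.25 * 86400    # present age of the universe, ~4.4e17 s

trials_available = particles * per_second * age_seconds   # ~4.4e142 trials
patterns = 2 ** 500                                       # ~3.3e150 bit patterns

ages_needed = patterns / trials_available
print(f"about 10^{math.log10(ages_needed):.1f} universe ages")
```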
Although there are many points in this book with which you may take issue, and it does not claim in any way to provide answers, it is valuable in understanding just how difficult the problem is and how many holes exist in other, more accepted, explanations. A clear challenge posed to purely naturalistic explanations of the origin of terrestrial life is to suggest a prebiotic mechanism which can assemble adequate specified information (say, 500 bits as the absolute minimum) to serve as a primordial replicator from the materials available on the early Earth in the time between the final catastrophic bombardment and the first evidence for early life.
The year is 2030, and every complacent person who asked rhetorically, “How much worse can it get?” has seen the question answered beyond their worst nightmares. What's left of the United States is fighting to put down the secessionist mountain states of New Columbia, and in the cities of the East, people are subject to random searches by jackbooted Lightning Squads, when they aren't shooting up clandestine nursery schools operated by anarchist parents who refuse to deliver their children into government indoctrination. This is the kind of situation which cries out for a superhero and, lo and behold, onto the stage steps The Black Arrow and his deadly serious but fun-loving band to set things right through the time-tested strategy of killing the bastards. The Black Arrow has a lot in common with Batman—actually maybe a tad too much. Like Batman, he's a rich and resourceful man with a mission (but no super powers), he operates in New York City, which is called “Gotham” in the novel, and he has a secret lair in a cavern deep beneath the city.
There is a modicum of libertarian background and philosophy, but it never gets in the way of the story. There is enough explicit violence and copulation for an R rated movie—kids and those with fragile sensibilities should give this one a miss. Some of the verbal imagery in the story is so vivid you can almost see it erupting from the page—this would make a tremendous comic book adaptation or screenplay for an alternative universe Hollywood where stories of liberty were welcome.
Gregg Herken, a senior historian and curator at the National Air and Space Museum, draws upon these resources to explore the accomplishments, conflicts, and controversies surrounding Lawrence, Oppenheimer, and Teller, and the cold war era they played such a large part in defining. The focus is almost entirely on the period in which the three were active in weapons development and policy—there is little discussion of their prior scientific work, nor of Teller's subsequent decades on the public stage. This is a serious academic history, with almost 100 pages of source citations and bibliography, but the story is presented in an engaging manner which leaves the reader with a sense of the personalities involved, not just their views and actions. The author writes with no discernible ideological bias, and I noted only one insignificant technical goof.
A certain segment of the dogma-based community of postmodern academics and their hangers-on seems to have no difficulty whatsoever believing that Darwinian evolution explains every aspect of the origin and diversification of life on Earth while, at the same time, denying that genetics—the mechanism which underlies evolution—plays any part in differentiating groups of humans. Doublethink is easy if you never think at all. Among those to whom evidence matters, here's a pretty astonishing fact to ponder. In the last four Olympic games prior to the publication of this book in the year 2000, there were thirty-two finalists in the men's 100-metre sprint. All thirty-two were of West African descent—a region which accounts for just 8% of the world's population. If finalists in this event were randomly chosen from the entire global population, the probability of this concentration occurring by chance is 0.08³², or about 8×10⁻³⁶, which is significant at the level of more than twelve standard deviations. The hardest of results in the flintiest of sciences—null tests of conservation laws and the like—are rarely significant above 7 to 8 standard deviations.
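The significance claim can be checked with the normal tail function in the Python standard library (a sketch of my own; the p-value-to-sigma comparison is my framing, not the book's):

```python
import math

p = 0.08 ** 32        # chance all 32 finalists come from 8% of humanity

def sigma_tail(z: float) -> float:
    """One-sided tail probability of a z-standard-deviation event."""
    return math.erfc(z / math.sqrt(2)) / 2

print(f"p ~ {p:.1e}")                          # ~ 7.9e-36
print(f"12-sigma tail ~ {sigma_tail(12):.1e}")  # ~ 1.8e-33
print(f"13-sigma tail ~ {sigma_tail(13):.1e}")  # ~ 6.1e-39
# p falls between the 12- and 13-sigma tails: "more than twelve standard
# deviations", as the text says.
```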
Now one can certainly imagine any number of cultural and other non-genetic factors which predispose those with West African ancestry toward world-class performance in sprinting, but twelve standard deviations? The fact that running is something all humans do without being taught, and that training for running doesn't require any complicated or expensive equipment (as opposed to sports such as swimming, high-diving, rowing, or equestrian events), and that champions of West African ancestry hail from countries around the world, should suggest a genetic component to all but the most blinkered of blank slaters.
Taboo explores the reality of racial differences in performance in various sports, and the long and often sordid entangled histories of race and sports, including the tawdry story of race science and eugenics, over-reaction to which has made most discussion of human biodiversity, as the title of book says, taboo. The equally forbidden subject of inherent differences in male and female athletic performance is delved into as well, with a look at the hormone dripping “babes from Berlin” manufactured by the cruel and exploitive East German sports machine before the collapse of that dismal and unlamented tyranny.
Those who know some statistics will have no difficulty understanding what's going on here—the graph on page 255 tells the whole story. I wish the book had gone into a little more depth about the phenomenon of a slight shift in the mean performance of a group—much smaller than individual variation—causing a huge difference in the number of group members found in the extreme tail of a normal distribution. Another valuable, albeit speculative, insight is that if one supposes that there are genes which confer advantage to competitors in certain athletic events, then given the intense winnowing process world-class athletes pass through before they reach the starting line at the Olympics, it is plausible all of them at that level possess every favourable gene, and that the winner is determined by training, will to win, strategy, individual differences, and luck, just as one assumed before genetics got mixed up in the matter. It's just that if you don't have the genes (just as if your legs aren't long enough to be a runner), you don't get anywhere near that level of competition.
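That tail effect is easy to demonstrate numerically. In this sketch (the half-sigma shift and 4-sigma cutoff are my own illustrative choices), a shift in the group mean far smaller than individual variation multiplies the population found in the extreme tail severalfold:

```python
import math

def tail(z_cut: float, mean_shift: float = 0.0) -> float:
    """Fraction of a unit-variance normal population beyond z_cut."""
    return math.erfc((z_cut - mean_shift) / math.sqrt(2)) / 2

base    = tail(4.0)        # no shift:         ~3.2e-5 of the population
shifted = tail(4.0, 0.5)   # mean up 0.5 sigma: ~2.3e-4 of the population

print(f"{shifted / base:.1f}x as many beyond 4 sigma")   # ~7.3x
```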
Unless research in these areas is suppressed due to an ill-considered political agenda, it is likely that the key genetic components of athletic performance will be identified in the next couple of decades. Will this mean that world-class athletic competition can be replaced by DNA tests? Of course not—it's just that one factor in the feedback loop of genetic endowment, cultural reinforcement of activities in which group members excel, and the individual striving for excellence which makes competitors into champions will be better understood.
From this viewpoint, every compromise with fear societies and their tyrants in the interest of “stability” and “geopolitics” is always ill-considered, not just in terms of the human rights of those who live there, but in the self-interest of all free people. Fear societies require an enemy, internal or external, to unite their victims behind the tyrant, and history shows how fickle the affections of dictators can be when self-interest is at stake.
The disastrous example of funding Arafat's ugly dictatorship over the Palestinian people is dissected in detail, but the message is applicable everywhere diplomats argue for a “stable partner” over the inherent human right of people to own their own lives and govern themselves. Sharansky is forthright in saying it's better to face a democratically elected fanatic opponent than a dictator “we can do business with”, because ultimately the democratic regime will converge on meeting the needs of its citizens, while the dictator will focus on feathering his own nest at the expense of those he exploits.
If you're puzzled about which side to back in all the myriad conflicts around the globe, you could do a lot worse than simply picking the side which comes out best in Sharansky's “town square test”. Certainly, the world would be a better place if the diplomats who prattle on about “complexity” and realpolitik were hit over the head with the wisdom of an author who spent 13 years in Siberian labour camps rather than compromise his liberty.
They will give the sense of every article of the constitution, that may from time to time come before them. And in their decisions they will not confine themselves to any fixed or established rules, but will determine, according to what appears to them, the reason and spirit of the constitution. The opinions of the supreme court, whatever they may be, will have the force of law; because there is no power provided in the constitution, that can correct their errors, or controul [sic] their adjudications. From this court there is no appeal.

The fact that politicians are at loggerheads over the selection of judges has little or nothing to do with ideology and everything to do with judges having usurped powers explicitly reserved for representatives accountable to their constituents in regular elections. How to fix it? Well, I proposed my own humble solution here not so long ago, and the author of this book suggests 12-year terms for Supreme Court judges, staggered with a three-year expiry. Given how far the unchallenged assertion of judicial supremacy has gone, a constitutional remedy in the form of a legislative override of judicial decisions (with the same super-majority as required to override an executive veto) might also be in order.
Their bodies go from being the little white creatures they are to light. But when they become light, they first become like cores of light, like molten light. The appearance (of the core of light) is one of solidity. They change colors and a haze is projected around the (interior core which is centralized; surrounding this core in an immediate environment is a denser, tighter) haze (than its outer peripheries). The eyes are the last to go (as one perceives the process of the creatures disappearing into the light), and then they just kind of disappear or are absorbed into this. … We are or exist through our flesh, and they are or exist through whatever it is they are.

Got that? If not, there is much, much more along these lines in the extended babblings of this and a dozen other abductees, developed during the author's therapy sessions with them. Now, de mortuis nihil nisi bonum (Mack was killed in a traffic accident in 2004), and having won a Pulitzer Prize for his biography of T.E. Lawrence in addition to his career as a professor of psychiatry at the Harvard Medical School and founder of the psychiatry department at Cambridge Hospital, his credentials incline one to hear him out, however odd the message may seem to be. One's mind, however, eventually summons up Thomas Jefferson's (possibly apocryphal) remark upon hearing of two Yale professors who investigated a meteor fall in Connecticut and pronounced it genuine, “Gentlemen, I would rather believe that two Yankee professors would lie than believe that stones fall from heaven.” Well, nobody's accusing Professor Mack of lying, but the leap from the oh-wow, New Age accounts elicited by hypnotic regression and presented here, to the conclusion that they are the result of a genuine phenomenon of some kind, possibly contact with “another plane of reality”, is an awfully big one, and simply wading through the source material proved more than I could stomach on my first attempt.
So, the book went back on the unfinished shelf, where it continued to glare at me balefully until a few days ago when, looking for something to read, I exclaimed, “Hey, if I can make it through The Ghosts of Evolution, surely I can finish this one!” So I did, picking up from the bookmark I left where my first assault on the summit petered out. In small enough doses, much of this material can be quite funny. This paperback edition includes two appendices added to address issues raised after the publication of the original hardcover. In the first of these (p. 390), Mack argues that the presence of a genuine phenomenon of some kind is strongly supported by “…the reports of the experiencers themselves. Although varied in some respects, these are so densely consistent as to defy conventional psychiatric explanations.” Then, a mere three pages later, we are informed:
The aliens themselves seem able to change or disguise their form, and, as noted, may appear initially to the abductees as various kinds of animals, or even as ordinary human beings, as in Peter's case. But their shape-shifting abilities extend to their vehicles and to the environments they present to the abductees, which include, in this sample, a string of motorcycles (Dave), a forest and conference room (Catherine), images of Jesus in white robes (Jerry), and a soaring cathedral-like structure with stained glass windows (Sheila). One young woman, not written about in this book, recalled at age seven seeing a fifteen-foot kangaroo in a park, which turned out to be a small spacecraft.

Now that's “densely consistent”! One is also struck by how insipidly banal are the messages the supposed aliens deliver, which usually amount to New Age cerebral suds like “All is one”, “Treat the Earth kindly”, and the rest of the stuff which appeals to those who are into these kinds of things in the first place. Occam's razor seems to glide much more smoothly over the supposition that we are dealing with seriously delusional people endowed with vivid imaginations than that these are “transformational” messages sent by superior beings to avert “planetary destruction” by “for-profit business corporations” (p. 365, Mack's words, not those of an abductee). Fifteen-foot kangaroo? Well, anyway, now this book can hop onto the dubious shelf in the basement and stop making me feel guilty! For a sceptical view of the abduction phenomenon, see Philip J. Klass's UFO Abductions: A Dangerous Game.
21: A sour, foggy Sunday.
22: Heavy downpour, but good for the crops.
23: Second day of rain. Father went to work under cover at the mill.
24: Clear day. Worked in the fields. Some of the corn has washed away.

The laconic diary entries are spun into a fictionalised but plausible story of farm life focusing on the self-reliant lifestyle and the tools and techniques upon which it was founded. Noah Blake was atypical in being an only child at a time when large families were the norm; Sloane takes advantage of this in showing Noah learning all aspects of farm life directly from his father. The numerous detailed illustrations provide a delightful glimpse into the world of two centuries ago and an appreciation for the hard work and multitude of skills it took to make a living from the land in those days.
Saudi engineers calculated that the soil particulates beneath the surface of most of their three hundred known reserves are so fine that radioactive releases there would permit the contamination to spread widely through the soil subsurface, carrying the radioactivity far under the ground and into the unpumped oil. This gave Petro SE the added benefit of ensuring that even if a new power in the Kingdom could rebuild the surface infrastructure, the oil reserves themselves might be unusable for years.

Hey, you guys in the back—enough with the belly laughs! Did any of the editors at Random House think to work out, even if you stipulated that radioactive contamination could somehow migrate from the surface down through hundreds to thousands of metres of rock (how, due to the abundant rain?), just how much radioactive contaminant you'd have to mix with the estimated two hundred and sixty billion barrels of crude oil in the Saudi reserves to render it dangerously radioactive? In any case, even if you could magically transport the radioactive material into the oil bearing strata and supernaturally mix it with the oil, it would be easy to separate during the refining process. Finally, there's the question of why, if the Saudis have gone to all the trouble to rig their oil facilities to self-destruct, it has remained a secret waiting to be revealed in this book. From a practical standpoint, almost all of the workers in the Saudi oil fields are foreigners. Certainly some of them would be aware of such a massive effort and, upon retirement, say something about it which the news media would pick up. But even if the secret could be kept, we're faced with the same question of deterrence which arose in the conclusion of Dr. Strangelove with the Soviet doomsday machine—it's idiotic to build a doomsday machine and keep it a secret! Its only purpose is to deter a potential attack, and if attackers don't know there's a doomsday machine, they won't be deterred.
Precisely the same logic applies to the putative Saudi self-destruct button. Now none of this argumentation proves in any way that the Saudis haven't rigged their oil fields to blow up and scatter radioactive material on the debris, just that it would be a phenomenally stupid thing for them to try to do. But then, there are plenty of precedents for the Saudis doing dumb things—they have squandered the greatest fortune in the history of the human race and, while sitting on a quarter of all the world's oil, seen their per capita GDP erode to fall between that of Poland and Latvia. If, indeed, they have done something so stupid as this scorched earth scheme, let us hope they manage the succession to the throne, looming in the near future, in a far more intelligent fashion.
“The nice thing about this betatron,” said Channing, “is the fact that it can and does run both ends on the same supply. The current and voltage phases are correct so that we do not require two supplies which operate in a carefully balanced condition. The cyclotron is one of the other kinds; though the one supply is strictly D.C., the strength of the field must be controlled separately from the supply to the oscillator that runs the D plates. You're sitting on a fence, juggling knobs and stuff all the time you are bombarding with a cyc.” (From “Recoil”, p. 95)

Notwithstanding such passages, and how quaint an interplanetary radio relay station based on vacuum tubes with a staff of 2700 may seem to modern readers, these are human stories which are, on occasions, breathtaking in their imagination and modernity. The account of the impact of an “efficiency expert” on a technology-based operation in “QRM—Interplanetary” is as trenchant (and funny) as anything in Dilbert. The pernicious effect of abusive patent litigation on innovation, the economics of a technological singularity created by what amounts to a nanotechnological assembler, and the risk of identity theft, are the themes of other stories which it's difficult to imagine having been written half a century ago, along with timeless insights into engineering. One, in particular, from “Firing Line” (p. 259) so struck me when I read it thirty-odd years ago that it has remained in my mind ever since as one of the principal differences between the engineer and the tinkerer, “They know one simple rule about the universe. That rule is that if anything works once, it may be made to work again.” The tinkerer is afraid to touch something once it mysteriously starts to work; an engineer is eager to tear it apart and figure out why.
I found the account of the end of Venus Equilateral in “Mad Holiday” disturbing when I first read it, but now see it as a celebration of technological obsolescence as an integral part of progress, to be welcomed, and the occasion for a blow-out party, not long faces and melancholy. Arthur C. Clarke, who contributes the introduction to this collection and read these stories while engaged in his own war work, in copies of Astounding sent from America by Willy Ley, acknowledges that these tales of communication relays in space may have played a part in his coming up with that idea. This book is out of print, but inexpensive used copies are readily available.
Here, it is relevant to describe a corridor meeting with a mature colleague - keen on Quantum Mechanical calculations, - who had not the friends to give him good grades in his grant applications and thus could not employ students to work with him. I commiserated on his situation, - a professor in a science department without grant money. How can you publish I blurted out, rather tactlessly. “Ah, but I have Lili” he said (I've changed his wife's name). I knew Lili, a pleasant European woman interested in obscure religions. She had a high school education but no university training. “But” … I began to expostulate. “It's ok, ok”, said my colleague. “Well, we buy the programs to calculate bond strengths, put it in the computer and I tell Lili the quantities and she writes down the answer the computer gives. Then, we write a paper.” The program referred to is one which solves the Schrödinger equation and provides energy values, e.g., for bond strength in chemical compounds.

Now sit back, close your eyes, and imagine five hundred pages of this; in spelling, grammar, accuracy, logic, and command of the subject matter it reads like a textbook-length Slashdot post. Several recurrent characteristics are manifest in this excerpt. The author repeatedly, though not consistently, capitalises Important Words within Sentences; he uses hyphens where em-dashes are intended, and seems to have invented his own punctuation sign: a comma followed by a hyphen, which is used interchangeably with commas and em-dashes. The punctuation gives the impression that somebody glanced at the manuscript and told the author, “There aren't enough commas in it”, whereupon he went through and added three or four thousand in completely random locations, however inane. There is an inordinate fondness for “e.g.”, “i.e.”, and “cf.”, and they are used in ways which make one suspect the author isn't completely clear on their meaning or the distinctions among them.
And regarding the footnote quoted above, did I mention that the author's wife is named “Lily”, and hails from Austria? Further evidence of the attention to detail and respect for the reader can be found in chapter 3, where most of the source citations in the last thirty pages are incorrect, and in the blank cross-references scattered throughout the text. Not only is it obvious the book has not been fact checked, nor even proofread; it has never even been spelling checked—common words are misspelled all over. Bockris never manages the Slashdot hallmark of misspelling “the”, but on page 475 he misspells “to” as “ot”. Throughout you get the sense that what you're reading is not so much a considered scientific exposition and argument, but rather the raw unedited output of a keystroke capturing program running on the author's computer. Some readers may take me to task for being too harsh in these remarks, noting that the book was self-published by the author at age 82. (How do I know it was self-published? Because my copy came with the order from Amazon to the publisher to ship it to their warehouse folded inside, and the publisher's address in this document is directly linked to the author.) Well, call me unkind, but permit me to observe that readers don't get a quality discount based on the author's age from the price of US$34.95, which is on the very high end for a five hundred page paperback, nor is there a disclaimer on the front or back cover that the author might not be firing on all cylinders. Certainly, an eminent retired professor ought to be able to call on former colleagues and/or students to review a manuscript which is certain to become an important part of his intellectual legacy, especially as it attempts to expound a new paradigm for science. Even the most cursory editing to remove needless and tedious repetition could knock 100 pages off this book (and eliminating the misinformation and nonsense could probably slim it down to about ten).
The vast majority of citations are to secondary sources, many popular science or new age books. Apart from these drawbacks, Bockris, like many cranks, seems compelled to personally attack Einstein, claiming his work was derivative, hinting at plagiarism, arguing that its significance is less than its reputation implies, and relating an unsourced story claiming Einstein was a poor husband and father (and even if he were, what does that have to do with the correctness and importance of his scientific contributions?). In chapter 2, he rants upon environmental and economic issues, calls for a universal dole (p. 34) for those who do not work (while on p. 436 he decries the effects of just such a dole on Australian youth), calls (p. 57) for censorship of music, compulsory population limitation, and government mandated instruction in philosophy and religion along with promotion of religious practice. Unlike many radical environmentalists of the fascist persuasion, he candidly observes (p. 58) that some of these measures “could not achieved under the present conditions of democracy”. So, while repeatedly inveighing against the corruption of government-funded science, he advocates what amounts to totalitarian government—by scientists.
Contrast the present — think how different was a meeting in the 2020s of the National Joint Council, which has been retained for form's sake. On the one side sit the I.Q.s of 140, on the other the I.Q.s of 99. On the one side the intellectual magnates of our day, on the other honest, horny-handed workmen more at home with dusters than documents. On the one side the solid confidence born of hard-won achievement; on the other the consciousness of a just inferiority.

Seriously, anybody who doesn't see the satire in this must be none too Swift. Although the book is cast as a retrospective from 2038, and there are passing references to atomic stations, home entertainment centres, school trips to the Moon and the like, technologically the world seems very much like that of the 1950s. There is one truly frightening innovation, however. On p. 110, discussing the shrinking job market for shop attendants, we're told, “The large shop with its more economical use of staff had supplanted many smaller ones, the speedy spread of self-service in something like its modern form had reduced the number of assistants needed, and piped distribution of milk, tea, and beer was extending rapidly.” To anybody with personal experience with British plumbing and English beer, the mere thought of the latter being delivered through the former is enough to induce dystopic shivers of 1984 magnitude. Looking backward from almost fifty years on, this book can be read as an alternative history of the last half-century. In the eyes of many with a libertarian or conservative inclination, just when the centuries-long battle against privilege and prejudice was finally being won: in the 1950s and early 60s when Young's book appeared, the dream of equal opportunity so eloquently embodied in Dr.
Martin Luther King's “I Have a Dream” speech began to evaporate in favour of equality of results (by forced levelling and dumbing down if that's what it took), group identity and entitlements, and the creation of a permanently dependent underclass from which escape was virtually impossible. The best works of alternative history are those which change just one thing in the past and then let the ripples spread outward over the years. You can read this story as a possible future in which equal opportunity really did completely triumph over egalitarianism in the sixties. For those who assume that would have been an unqualifiedly good thing, here is a cautionary tale well worth some serious reflexion.
Paging Friar Ockham! If unnecessarily multiplied hypotheses are stubble indicating a fuzzy theory, it's pretty clear which of these is in need of the razor! Further, while one can imagine scientific investigation discovering evidence for Theory 1, almost all of the mechanisms which underlie Theory 2 remain, barring some conceptual breakthrough equivalent to looking inside a black hole, forever hidden from science by an impenetrable horizon through which no causal influence can propagate. So severe is this problem that chapter 9 of the book is devoted to the question of how far theoretical physics can go in the total absence of experimental evidence. What's more, unlike virtually every theory in the history of science, which attempted to describe the world we observe as accurately and uniquely as possible, Theory 2 predicts every conceivable universe and says, hey, since we do, after all, inhabit a conceivable universe, it's consistent with the theory. To one accustomed to the crystalline inevitability of Newtonian gravitation, general relativity, quantum electrodynamics, or the laws of thermodynamics, this seems by comparison like a California blonde saying “whatever”—the cosmology of despair. Scientists will, of course, immediately rush to attack Theory 1, arguing that a being such as the one it posits would necessarily be “indistinguishable from magic”, capable of explaining anything, and hence unfalsifiable and beyond the purview of science. (Although note that on pp. 192–197 Susskind argues that Popperian falsifiability should not be a rigid requirement for a theory to be deemed scientific. See Lee Smolin's Scientific Alternatives to the Anthropic Principle for the argument against the string landscape theory on the grounds of falsifiability, and the 2004 Smolin/Susskind debate for a more detailed discussion of this question.) But let us look more deeply at the attributes of what might be called the First Cause of Theory 2.
It not only permeates all of our universe, potentially spawning a bubble which may destroy it and replace it with something different, it pervades the abstract landscape of all possible universes, populating them with an infinity of independent and diverse universes over an eternity of time: omnipresent in spacetime. When a universe is created, all the parameters which ultimately govern its ultimate evolution (under the probabilistic laws of quantum mechanics, to be sure) are fixed at the moment of creation: omnipotent to create any possibility, perhaps even varying the mathematical structures underlying the laws of physics. As a budded off universe evolves, whether a sterile formless void or teeming with intelligent life, no information is ever lost in its quantum evolution, not even down a black hole or across a cosmic horizon, and every quantum event splits the universe and preserves all possible outcomes. The ensemble of universes is thus omniscient of all its contents. Throw in intelligent and benevolent, and you've got the typical deity, and since you can't observe the parallel universes where the action takes place, you pretty much have to take it on faith. Where have we heard that before? Lest I be accused of taking a cheap shot at string theory, or advocating a deistic view of the universe, consider the following creation story which, after John A. Wheeler, I shall call “Creation without the Creator”. Many extrapolations of continued exponential growth in computing power envision a technological singularity in which super-intelligent computers designing their own successors rapidly approach the ultimate physical limits on computation. 
Such computers would be sufficiently powerful to run highly faithful simulations of complex worlds, including intelligent beings living within them which need not be aware they were inhabiting a simulation, but thought they were living at the “top level”, who eventually passed through their own technological singularity, created their own simulated universes, populated them with intelligent beings who, in turn,…world without end. Of course, each level of simulation imposes a speed penalty (though, perhaps not much in the case of quantum computation), but it's not apparent to the inhabitants of the simulation since their own perceived time scale is in units of the “clock rate” of the simulation. If an intelligent civilisation develops to the point where it can build these simulated universes, will it do so? Of course it will—just look at the fascination crude video game simulations have for people today. Now imagine a simulation as rich as reality and unpredictable as tomorrow, actually creating an inhabited universe—who could resist? As unlimited computing power becomes commonplace, kids will create innovative universes and evolve them for billions of simulated years for science fair projects. Call the mean number of simulated universes created by intelligent civilisations in a given universe (whether top-level or itself simulated) the branching factor. If this is greater than one, and there is a single top-level non-simulated universe, then it will be outnumbered by simulated universes which grow exponentially in numbers with the depth of the simulation. Hence, by the Copernican principle, or principle of mediocrity, we should expect to find ourselves in a simulated universe, since they vastly outnumber the single top-level one, which would be an exceptional place in the ensemble of real and simulated universes. 
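The arithmetic behind this outnumbering argument is easy to check. Here is a minimal sketch in Python (the branching factor and depth values are illustrative assumptions, not figures from the text):

```python
# With one non-simulated universe at level 0, and a mean branching
# factor b of simulated universes created per universe, level d of
# the hierarchy contains b**d universes.
def simulated_fraction(b, depth):
    """Fraction of all universes, down to the given simulation depth,
    which are themselves simulations (i.e. everything but level 0)."""
    total = sum(b ** d for d in range(depth + 1))  # all universes
    return (total - 1) / total                     # all but the top level

# Even a modest branching factor makes simulated universes dominate:
print(simulated_fraction(2, 10))   # ~0.9995
```

For any branching factor greater than one the fraction approaches unity as the depth grows, which is all the Copernican argument requires.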
Now here's the point: if, as we should expect from this argument, we do live in a simulated universe, then our universe is the product of intelligent design and Theory 1 is an absolutely correct description of its origin. Suppose this is the case: we're inside a simulation designed by a freckle-faced superkid for extra credit in her fifth grade science class. Is this something we could discover, or must it, like so many aspects of Theory 2, be forever hidden from our scientific investigation? Surprisingly, this variety of Theory 1 is quite amenable to experiment: neither revelation nor faith is required. What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.

Theory 1: Intelligent Design. An intelligent being created the universe and chose the initial conditions and physical laws so as to permit the existence of beings like ourselves.
Theory 2: String Landscape. The laws of physics and initial conditions of the universe are chosen at random from among 10^500 possibilities, only a vanishingly small fraction of which (probably no more than one in 10^120) can support life. The universe we observe, which is infinite in extent and may contain regions where the laws of physics differ, is one of an infinite number of causally disconnected “pocket universes” which spontaneously form from quantum fluctuations in the vacuum of parent universes, a process which has been occurring for an infinite time in the past and will continue in the future, time without end. Each of these pocket universes which, together, make up the “megaverse”, has its own randomly selected laws of physics, and hence the overwhelming majority are sterile. We find ourselves in one of the tiny fraction of hospitable universes because if we weren't in such an exceptionally rare universe, we wouldn't exist to make the observation. Since there are an infinite number of universes, however, every possibility not only occurs, but occurs an infinite number of times, so not only are there an infinite number of inhabited universes, there are an infinite number identical to ours, including an infinity of identical copies of yourself wondering if this paragraph will ever end. Not only does the megaverse spawn an infinity of universes, each universe itself splits into two copies every time a quantum measurement occurs. Our own universe will eventually spawn a bubble which will destroy all life within it, probably not for a long, long time, but you never know. Evidence for all of the other universes is hidden behind a cosmic horizon and may remain forever inaccessible to observation.
Might we expect surprises as we subject our simulated universe to ever more precise scrutiny, perhaps even astonishing the being which programmed it with our cunning and deviousness (as the author of any software package has experienced at the hands of real-world users)? Who knows, we might run into round-off errors which “hit us like a ton of bricks”! Suppose there were some quantity, say, that was supposed to be exactly zero but, if you went and actually measured the geometry way out there near the edge and crunched the numbers, you found out it differed from zero in the 120th decimal place. Why, you might be as shocked as the naïve Perl programmer who ran the program “printf("%.18f", 0.2)” and was aghast when it printed “0.200000000000000011” until somebody explained that with about 56 bits of mantissa in IEEE double precision floating point, you only get about 17 decimal digits (log10 2^56) of precision. So, what does a round-off in the 120th digit imply? Not Theory 2, with its infinite number of infinitely reproducing infinite universes, but simply that our Theory 1 intelligent designer used 400 bit numbers (log2 10^120) in the simulation and didn't count on our noticing—remember you heard it here first, and if pointing this out causes the simulation to be turned off, sorry about that, folks!
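The round-off arithmetic above can be reproduced in a few lines. This is a Python rendering of the quoted Perl one-liner, using the figures given in the text (56 mantissa bits, a discrepancy in the 120th digit):

```python
import math

# 0.2 has no exact binary representation, so printing it to 18 decimal
# places exposes the round-off, just as in the quoted Perl program.
print(f"{0.2:.18f}")                   # 0.200000000000000011

# About 56 bits of mantissa give roughly 17 decimal digits of precision:
print(round(math.log10(2 ** 56), 2))   # 16.86

# A discrepancy in the 120th decimal digit would correspond to a
# simulation computing with roughly 400-bit numbers:
print(round(math.log2(10 ** 120), 1))  # 398.6
```

The same conversion between bits and decimal digits works in either direction: multiply a bit count by log10 2 to get digits, or a digit count by log2 10 to get bits.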
Surprises from future experiments which would be suggestive (though not probative) that we're in a simulated universe would include failure to find any experimental signature of quantum gravity (general relativity could be classical in the simulation, since potential conflicts with quantum mechanics would be hidden behind event horizons in the present-day universe, and extrapolating backward to the big bang would be meaningless if the simulation were started at a later stage, say at the time of big bang nucleosynthesis), and discovery of limits on the ability to superpose wave functions for quantum computation which could result from limited precision in the simulation as opposed to the continuous complex values assumed by quantum mechanics. An interesting theoretical program would be to investigate feasible experiments which, by magnifying physical effects similar to proposed searches for quantum gravity signals, would detect round-off errors of magnitude comparable to the cosmological constant.
But seriously, this is an excellent book and anybody who's interested in the strange direction in which the string theorists are veering these days ought to read it; it's well-written, authoritative, reasonably fair to opposing viewpoints (although I'm surprised the author didn't address the background spacetime criticism of string theory raised so eloquently by Lee Smolin), and provides a roadmap of how string theory may develop in the coming years. The only nagging question you're left with after finishing the book is whether after thirty years of theorising which comes to the conclusion that everything is predicted and nothing can be observed, it's about science any more.

So prolific was Jules Verne that more than a century and a half after he began his writing career, new manuscripts keep turning up among his voluminous papers. In the last two decades, Paris au XXe siècle, the original un-mangled version of La chasse au météore (October 2002), and the present volume have finally made their way into print. Verne transformed the account of his own trip into a fictionalised travel narrative of a kind quite common in the 19th century but rarely encountered today. The fictional form gave him freedom to add humour, accentuate detail, and highlight aspects of the country and culture he was visiting without crossing the line into that other venerable literary genre, the travel tall tale. One suspects that the pub brawl in chapter 16 is an example of such embroidery, along with the remarkable steam powered contraption on p. 159 which prefigured Mrs. Tweedy's infernal machine in Chicken Run. The description of the weather, however, seems entirely authentic. Verne offered the manuscript to Hetzel, who published most of his work, but it was rejected and remained forgotten until it was discovered in a cache of Verne papers acquired by the city of Nantes in 1981.
This 1989 edition is its first appearance in print, and includes six pages of notes on the history of the work and its significance in Verne's œuvre, notes on changes in the manuscript made by Verne, and a facsimile manuscript page.
What is remarkable in reading this novel is the extent to which it is a fully-developed “template” for Verne's subsequent Voyages extraordinaires: here we have an excitable and naïve voyager (think Michel Ardan or Passepartout) paired with a more stolid and knowledgeable companion (Barbicane or Phileas Fogg), the encyclopedist's exultation in enumeration, fascination with all forms of locomotion, and fun with language and dialect (particularly poor Jacques who beats the Dickens out of the language of Shakespeare). Often, when reading the early works of writers, you sense them “finding their voice”—not here. Verne is in full form, the master of his language and the art of story-telling, and fully ready, a few years later, with just a change of topic, to invent science fiction. This is not “major Verne”, and you certainly wouldn't want to start with this work, but if you've read most of Verne and are interested in how it all began, this is a genuine treat.
This book is out of print. If you can't locate a used copy at a reasonable price at the Amazon link above, try abebooks.com. For comparison with copies offered for sale, the cover price in 1989 was FRF 95, which is about €14.50 at the final fixed rate.
He was born Graf Heinrich Karl Wilhelm Otto Friedrich von Übersetzenseehafenstadt, but changed his name to Nigel St. John Gloamthorpby, a.k.a. Lord Woadmire, in 1914. In his photograph, he looks every inch a von Übersetzenseehafenstadt, and he is free of the cranial geometry problem so evident in the older portraits. Lord Woadmire is not related to the original ducal line of Qwghlm, the Moore family (Anglicized from the Qwghlmian clan name Mnyhrrgh) which had been terminated in 1888 by a spectacularly improbable combination of schistosomiasis, suicide, long-festering Crimean war wounds, ball lightning, flawed cannon, falls from horses, improperly canned oysters, and rogue waves.

On p. 352 we find one of the most lucid and concise explanations I've ever read of why it is far more difficult to escape the grasp of now-obsolete technologies than most technologists may wish.
(This is simply because the old technology is universally understood by those who need to understand it, and it works well, and all kinds of electronic and software technology has been built and tested to work within that framework, and why mess with success, especially when your profit margins are so small that they can only be detected by using techniques from quantum mechanics, and any glitches vis-à-vis compatibility with old stuff will send your company straight into the toilet.)

In two sentences on p. 564, he lays out the essentials of the original concept for Autodesk, which I failed to convey (providentially, in retrospect) to almost every venture capitalist in Silicon Valley in thousands more words and endless, tedious meetings.
“ … But whenever a business plan first makes contact with the actual market—the real world—suddenly all kinds of stuff becomes clear. You may have envisioned half a dozen potential markets for your product, but as soon as you open your doors, one just explodes from the pack and becomes so instantly important that good business sense dictates that you abandon the others and concentrate all your efforts.”

And how many New York Times Best-Sellers contain working source code (p. 480) for a Perl program? A 1168 page mass market paperback edition is now available, but given the unwieldiness of such an edition, how much you're likely to thumb through it to refresh your memory on little details as you read it, the likelihood you'll end up reading it more than once, and the relatively small difference in price, the trade paperback cited at the top may be the better buy. Readers interested in the cryptographic technology and culture which figure in the book will find additional information in the author's Cryptonomicon cypher-FAQ.
…I think all this superstring stuff is crazy and it is in the wrong direction. … I don't like that they're not calculating anything. I don't like that they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say “Well, it still might be true.”

Feynman was careful to hedge his remark as being that of an elder statesman of science, who collectively have a history of foolishly considering the speculations of younger researchers to be nonsense, and he would almost certainly have opposed any effort to cut off funding for superstring research, as it might be right, after all, and should be pursued in parallel with other promising avenues until they make predictions which can be tested by experiment, falsifying and leading to the exclusion of those candidate theories whose predictions are incorrect. One wonders, however, what Feynman's reaction would have been had he lived to contemplate the contemporary scene in high energy theoretical physics almost twenty years later. String theory and its progeny still have yet to make a single, falsifiable prediction which can be tested by a physically plausible experiment. This isn't surprising, because after decades of work and tens of thousands of scientific publications, nobody really knows, precisely, what superstring (or M, or whatever) theory really is; there is no equation, or set of equations from which one can draw physical predictions. Leonard Susskind, a co-founder of string theory, observes ironically in his book The Cosmic Landscape (March 2006), “On this score, one might facetiously say that String Theory is the ultimate epitome of elegance. With all the years that String Theory has been studied, no one has ever found a single defining equation! The number at present count is zero. We know neither what the fundamental equations of the theory are or even if it has any.” (p. 204).
String theory might best be described as the belief that a physically correct theory exists and may eventually be discovered by the research programme conducted under that name.
From the time Feynman spoke through the 1990s, the goal toward which string theorists were working was well-defined: to find a fundamental theory which reproduces at the low energy limit the successful results of the standard model of particle physics, and explains, from first principles, the values of the many (there are various ways to count them, slightly different—the author gives the number as 18 in this work) free parameters of that theory, whose values are not predicted by any theory and must be filled in by experiment. Disturbingly, theoretical work in the early years of this century has convinced an increasing number of string theorists (but not all) that the theory (whatever it may turn out to be), will not predict a unique low energy limit (or “vacuum state”), but rather an immense “landscape” of possible universes, with estimates like 10^100 and 10^500 and even more bandied around (by comparison, there are only about 10^80 elementary particles in the entire observable universe—a minuscule number compared to such as these). Most of these possible universes would be hideously inhospitable to intelligent life as we know and can imagine it (but our imagination may be limited), and hence it is said that the reason we find ourselves in one of the rare universes which contain galaxies, chemistry, biology, and the National Science Foundation is due to the anthropic principle: a statement, bordering on tautology, that we can only observe conditions in the universe which permit our own existence, and that perhaps either in a “multiverse” of causally disjoint or parallel realities, all the other possibilities exist as well, most devoid of observers, at least those like ourselves (triune glorgs, feeding on bare colour in universes dominated by quark-gluon plasma would doubtless deem our universe unthinkably cold, rarefied, and dead).
But adopting the “landscape” view means abandoning the quest for a theory of everything and settling for what amounts to a “theory of anything”. For even if string theorists do manage to find one of those 10^100 or whatever solutions in the landscape which perfectly reproduces all the experimental results of the standard model (and note that this is something nobody has ever done and appears far out of reach, with legitimate reasons to doubt it is possible at all), then there will almost certainly be a bewildering number of virtually identical solutions with slightly different results, so that any plausible experiment which measures a quantity to more precision or discovers a previously unknown phenomenon can be accommodated within the theory simply by tuning one of its multitudinous dials and choosing a different solution which agrees with the experimental results. This is not what many of the generation who built the great intellectual edifice of the standard model of particle physics would have considered doing science.
Now if string theory were simply a chimæra being pursued by a small band of double-domed eccentrics, one wouldn't pay it much attention. Science advances by exploring lots of ideas which may seem crazy at the outset and discarding the vast majority which remain crazy after they are worked out in more detail. Whatever remains, however apparently crazy, stays in the box as long as its predictions are not falsified by experiment. It would be folly of the greatest magnitude, comparable to attempting to centrally plan the economy of a complex modern society, to try to guess in advance, by some kind of metaphysical reasoning, which ideas were worthy of exploration. The history of the S-matrix or “bootstrap” theory of the strong interactions recounted in chapter 11 is an excellent example of how science is supposed to work. A beautiful theory, accepted by a large majority of researchers in the field, which was well in accord with experiment and philosophically attractive, was almost universally abandoned in a few years after the success of the quark model in predicting new particles and the stunning deep inelastic scattering results at SLAC in the 1970s. String theory, however, despite not having made a single testable prediction after more than thirty years of investigation, now seems to risk becoming a self-perpetuating intellectual monoculture in theoretical particle physics. Among the 22 tenured professors of theoretical physics in the leading six faculties in the United States who received their PhDs after 1981, fully twenty specialise in string theory (although a couple now work on the related brane-world models). These professors employ graduate students and postdocs who work in their area of expertise, and when a faculty position opens up, may be expected to support candidates working in fields which complement their own research. 
This environment creates a great incentive for talented and ambitious students aiming for one of the rare permanent academic appointments in theoretical physics to themselves choose string theory, as that's where the jobs are. After a generation, this process runs the risk of operating on its own momentum, with nobody in a position to step back and admit that the entire string theory enterprise, judged by the standards of genuine science, has failed, and does not merit the huge human investment by the extraordinarily talented and dedicated people who are pursuing it, nor the public funding it presently receives. If Edward Witten believes there's something still worth pursuing, fine: his self-evident genius and massive contributions to mathematical physics more than justify supporting his work. But this enterprise, which is cranking out hundreds of PhDs and postdocs who are spending their most intellectually productive years learning a fantastically complicated intellectual structure with no grounding whatsoever in experiment, most of whom will have no hope of finding permanent employment in the field they have invested so much to aspire toward, is much more difficult to justify or condone. The problem, to state it in a manner more inflammatory than the measured tone of the author, and in a word of my choosing which I do not believe appears at all in his book, is that contemporary academic research in high energy particle theory is corrupt. As is usually the case with such corruption, the root cause is socialism, although the look-only-left blinders almost universally worn in academia today hide this from most observers there. Dwight D. Eisenhower, however, twigged to it quite early.
In his farewell address of January 17th, 1961, which academic collectivists endlessly cite for its (prescient) warning about the “military-industrial complex”, he went on to say, although this is rarely quoted:

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government. Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers. The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

And there, of course, is precisely the source of the corruption. This enterprise of theoretical elaboration is funded by taxpayers, who have no say in how their money, taken under threat of coercion, is spent. Which researchers receive funds for what work is largely decided by the researchers themselves, acting as peer review panels. While peer review may work to vet scientific publications, as soon as money becomes involved, the disposition of which can make or break careers, all the venality and naked self- and group-interest which has undone every well-intentioned experiment in collectivism since Robert Owen comes into play, with the completely predictable and tediously repeated results. What began as an altruistic quest driven by intellectual curiosity to discover answers to the deepest questions posed by nature ends up, after a generation of grey collectivism, as a jobs program.
In a sense, string theory can be thought of like that other taxpayer-funded and highly hyped program, the space shuttle, which is hideously expensive, dangerous to the careers of those involved with it (albeit in a more direct manner), supported by a standing army composed of some exceptional people and a mass of the mediocre, difficult to close down because it has carefully cultivated a constituency whose own self-interest is invested in continuation of the program, and almost completely unproductive of genuine science. One of the author's concerns is that the increasingly apparent impending collapse of the string theory edifice may result in the de-funding of other promising areas of fundamental physics research. I suspect he may under-estimate how difficult it is to get rid of a government program, however absurd, unjustified, and wasteful it has become: consider the space shuttle, or mohair subsidies. But perhaps de-funding is precisely what is needed to eliminate the corruption. Why should U.S. taxpayers be spending on the order of thirty million dollars a year on theoretical physics not only devoid of any near- or even distant-term applications, but also mostly disconnected from experiment? Perhaps if theoretical physics returned to being funded by universities from their endowments and operating funds, and by money raised from patrons and voluntarily contributed by the public interested in the field, it would be, albeit a much smaller enterprise, a more creative and productive one. Certainly it would be more honest. Sure, there may be some theoretical breakthrough we might not find for fifty years instead of twenty with massive subsidies. But so what? The truth is out there, somewhere in spacetime, and why does it matter (since it's unlikely in the extreme to have any immediate practical consequences) how soon we find it, anyway? 
And who knows, it's just possible a research programme composed of the very, very best, whose work is of such obvious merit and creativity that it attracts freely-contributed funds, exploring areas chosen solely on their merit by those doing the work, and driven by curiosity instead of committee group-think, might just get there first. That's the way I'd bet. For a book addressed to a popular audience which contains not a single equation, many readers will find it quite difficult. If you don't follow these matters in some detail, you may find some of the more technical chapters rather bewildering. (The author, to be fair, acknowledges this at the outset.) For example, if you don't know what the hierarchy problem is, or why it is important, you probably won't be able to figure it out from the discussion here. On the other hand, policy-oriented readers will have little difficulty grasping the problems with the string theory programme and its probable causes even if they skip the gnarly physics and mathematics. An entertaining discussion of some of the problems of string theory, in particular the question of “background independence”, in which the string theorists universally assume the existence of a background spacetime which general relativity seems to indicate doesn't exist, may be found in Carlo Rovelli's "A Dialog on Quantum Gravity". For more technical details, see Lee Smolin's Three Roads to Quantum Gravity. There are some remarkable factoids in this book, one of the most stunning being that the proposed TeV class muon colliders of the future will produce neutrino (yes, neutrino) radiation which is dangerous to humans off-site. I didn't believe it either, but look here—imagine the sign: “DANGER: Neutrino Beam”! A U.S. edition is scheduled for publication at the end of September 2006. The author has operated the Not Even Wrong Web log since 2004; it is an excellent source for news and gossip on these issues. 
The unnamed “excitable … Harvard faculty member” mentioned on p. 227 and elsewhere is Luboš Motl (who is, however, named in the acknowledgements), and whose own Web log is always worth checking out.
A nuclear isomer is an atomic nucleus which, due to having a greater spin, different shape, or differing alignment of the spin orientation and axis of symmetry, has more internal energy than the ground state nucleus with the same number of protons and neutrons. Nuclear isomers are usually produced in nuclear fusion reactions when the addition of protons and/or neutrons to a nucleus in a high-energy collision leaves it in an excited state. Hundreds of nuclear isomers are known, but the overwhelming majority decay with gamma ray emission in about 10^-14 seconds. In a few species, however, this almost instantaneous decay is suppressed for various reasons, and metastable isomers exist with half-lives ranging from 10^-9 seconds (one nanosecond), to the isomer Tantalum-180m, which has a half-life of at least 10^15 years and may be entirely stable; it is the only nuclear isomer found in nature and accounts for about one atom in 8300 of tantalum metal.
Some metastable isomers with intermediate half-lives have a remarkably large energy compared to the ground state and emit correspondingly energetic gamma ray photons when they decay. The Hafnium-178m2 (the “m2” denotes the second lowest energy isomeric state) nucleus has a half-life of 31 years and decays (through the m1 state) with the emission of 2.45 MeV in gamma rays. Now the fact that there's a lot of energy packed into a radioactive nucleus is nothing new—people were calculating the energy of disintegrating radium and uranium nuclei at the end of the 19th century, but all that energy can't be used for much unless you can figure out some way to release it on demand—as long as it just dribbles out at random, you can use it for some physics experiments and medical applications, but not to make loud bangs or turn turbines. It was only the discovery of the fission chain reaction, where the fission of certain nuclei liberates neutrons which trigger the disintegration of others in an exponential process, which made nuclear energy, for better or for worse, accessible.
So, as long as there is no way to trigger the release of the energy stored in a nuclear isomer, it is nothing more than an odd kind of radioactive element, the subject of a reasonably well-understood and somewhat boring topic in nuclear physics. If, however, there were some way to externally trigger the decay of the isomer to the ground state, then the way would be open to releasing the energy in the isomer at will. It is possible to trigger the decay of the Tantalum-180 isomer by 2.8 MeV photons, but the energy required to trigger the decay is vastly greater than the 0.075 MeV it releases, so the process is simply an extremely complicated and expensive way to waste energy.
Researchers in the small community interested in nuclear isomers were stunned when, in the January 25, 1999 issue of Physical Review Letters, a paper by Carl Collins and his colleagues at the University of Texas at Dallas reported they had triggered the release of 2.45 MeV in gamma rays from a sample of Hafnium-178m2 by irradiating it with a second-hand dental X-ray machine, the sample of the isomer sitting on a styrofoam cup. Their report implied, even with the crude apparatus, an energy gain of sixty times break-even, which was more than a million times the rate predicted by nuclear theory, if triggering were possible at all. The result, if real, could have substantial technological consequences: the isomer could be used as a nuclear battery, which could store energy and release it on demand with a density which dwarfed that of any chemical battery and was only a couple of orders of magnitude less than a fission bomb. And, speaking of bombs, if you could manage to trigger a mass of hafnium all at once or arrange for it to self-trigger in a chain reaction, you could make a variety of nifty weapons out of it, including a nuclear hand grenade with a yield of two kilotons. You could also build a fission-free trigger for a thermonuclear bomb which would evade all of the existing nonproliferation safeguards, which are aimed at controlling access to fissile material. These are the kind of things that get the attention of folks in that big five-sided building in Arlington, Virginia.
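As a rough sanity check on the scale of these claims, here is a back-of-envelope calculation of the energy density implied by 2.45 MeV per Hf-178m2 nucleus (constants rounded; the TNT and fission comparison figures are approximate values of my choosing, not from the book):

```python
# Back-of-envelope energy density of Hf-178m2, using only the
# 2.45 MeV per nucleus figure quoted in the review.

EV_TO_J = 1.602e-19            # joules per electron-volt
AVOGADRO = 6.022e23            # nuclei per mole
MOLAR_MASS = 0.178             # kg per mole of Hf-178

energy_per_nucleus = 2.45e6 * EV_TO_J               # joules
nuclei_per_kg = AVOGADRO / MOLAR_MASS
energy_per_kg = energy_per_nucleus * nuclei_per_kg  # ~1.3e12 J/kg

TNT = 4.6e6          # J/kg, chemical explosive (approximate)
FISSION = 8e13       # J/kg, complete fission of U-235 (approximate)

print(f"isomer: {energy_per_kg:.2e} J/kg")
print(f"vs. TNT: {energy_per_kg / TNT:.0f} times more energetic")
print(f"vs. fission: {FISSION / energy_per_kg:.0f} times less")
```

The last ratio, a factor of about sixty, is what “only a couple of orders of magnitude less than a fission bomb” amounts to in round numbers.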
And so it came to pass, in a Pentagon bent on “transformational technologies” and concerned with emerging threats from potential adversaries, that in May of 2003 a Hafnium Isomer Production Panel (HIPP) was assembled to draw up plans for bulk production of the substance, with visions of nuclear hand grenades, clean bunker-busting fusion bombs, and even hafnium-powered bombers floating before the eyes of the out-of-the-box thinkers at DARPA, who envisioned a two-year budget of USD 30 million for the project—military science marches into the future. What's wrong with this picture? Well, actually rather a lot of things.
But bad science, absurd economics, a nonexistent phenomenon, damning evaluations by panels of authorities, lack of applications, and ridiculous radiation risk in the extremely improbable event of success pose no insurmountable barriers to a government project once it gets up to speed, especially one in which the relationships between those providing the funding and its recipients are complicated and unseemly cozy. It took an exposé in the Washington Post Magazine by the author and subsequent examination in Congress to finally drive a stake through this madness—maybe. As of the end of 2005, although DARPA was out of the hafnium business (at least publicly), there were rumours of continued funding thanks to a Congressional earmark in the Department of Energy budget.
This book is a well-researched and fascinating look inside the defence underworld where fringe science feeds on federal funds, and starkly demonstrates how weird and wasteful things can get when Pentagon bureaucrats disregard their own science advisors and substitute instinct and wishful thinking for the tedious, but ultimately reliable, scientific method. Many aspects of the story are also quite funny, although U.S. taxpayers who footed the bill for this madness may be less amused. The author has set up a Web site for the book, and Carl Collins, who conducted the original experiment with the dental X-ray and styrofoam cup which incited the mania has responded with his own, almost identical in appearance, riposte. If you're interested in more technical detail on the controversy than appears in Weinberg's book, the Physics Today article from May 2004 is an excellent place to start. The book contains a number of typographical and factual errors, none of which are significant to the story, but when the first line of the Author's Note uses “sited” when “cited” is intended, and in the next paragraph “wondered” instead of “wandered”, you have to—wonder.
It is sobering to realise that this folly took place entirely in the public view: in the open scientific literature, university labs, unclassified defence funding subject to Congressional oversight, and ultimately in the press, and yet over a period of years millions in taxpayer funds were squandered on nonsense. Just imagine what is going on in highly-classified “black” programs.
Adult mantis shrimp (Stomatopoda) live in burrows. The five anterior thoracic appendages are subchelate maxillipeds, and the abdomen bears pleopods and uropods. Some hatch as antizoeas: planktonic larvae that swim with five pairs of biramous thoracic appendages. These larvae gradually change into pseudozoeas, with subchelate maxillipeds and with four or five pairs of natatory pleopods. Other stomatopods hatch as pseudozoeas. There are no uropods in the larval stages. The lack of uropods and the form of the other appendages contrasts with the condition in decapod larvae. It seems improbable that stomatopod larvae could have evolved from ancestral forms corresponding to zoeas and megalopas, and I suggest that the Decapoda and the Stomatopoda acquired their larvae from different foreign sources.

In addition to the zoö-jargon, another deterrent to reading this book is the cost: a list price of USD 109, quoted at Amazon.com at this writing at USD 85, which is a lot of money for a 260-page monograph, however superbly produced and notwithstanding its small potential audience; so fascinating and potentially significant is the content that one would happily part with USD 15 to read a PDF, but at prices like this one's curiosity becomes constrained by the countervailing virtue of parsimony. Still, if Williamson is right, some of the fundamental assumptions underlying our understanding of life on Earth for the last century and a half may be dead wrong, and if his conjecture stands the test of experiment, we may have at hand an understanding of mysteries such as the Cambrian explosion of animal body forms and the apparent “punctuated equilibria” in the fossil record. There is a Nobel Prize here for somebody who confirms that this supposition is correct.
Lynn Margulis, whose own theory of the origin of eukaryotic cells by the incorporation of previously free-living organisms as endosymbionts is now becoming the consensus view, co-authors a foreword which endorses Williamson's somewhat similar view of larvae.
In this book, he presents these concepts to a popular audience, beginning by explaining the fundamentals of quantum mechanics and the principles of quantum computation, before moving on to the argument that the universe as a whole is a universal quantum computer whose future cannot be predicted by any simulation less complicated than the universe as a whole, nor any faster than the future actually evolves (a concept reminiscent of Stephen Wolfram's argument in A New Kind of Science [August 2002], but phrased in quantum mechanical rather than classical terms). He argues that all of the complexity we observe in the universe is the result of the universe performing a computation whose input is the random fluctuations created by quantum mechanics. But, unlike the proverbial monkeys banging on typewriters, the quantum mechanical primate fingers are, in effect, typing on the keys of a quantum computer which, like the cellular automata of Wolfram's book, has the capacity to generate extremely complex structures from very simple inputs. Why was the universe so simple shortly after the big bang? Because it hadn't had the time to compute very much structure. Why is the universe so complicated today? Because it's had sufficient time to perform 10^122 logical operations up to the present.
I found this book, on the whole, a disappointment. Having read the technical papers cited above before opening it, I didn't expect to learn any additional details from a popularisation, but I did hope the author would provide a sense of how the field evolved, of where he saw this research programme going in the future, and of how it might (or might not) fit with other approaches to the unification of quantum mechanics and gravitation. There are some interesting anecdotes about the discovery of the links between quantum mechanics, thermodynamics, statistical mechanics, and information theory, and the personalities involved in that work, but one leaves the book without any sense of where future research might be going, nor of how these theories might be tested by experiment in the near or even distant future. The level of the intended audience is difficult to discern. Unlike some popularisers of science, Lloyd does not shrink from using equations where they clarify physical relationships and even introduces and uses Dirac's “bra-ket” notation (for example, <φ|ψ>), yet almost everywhere he writes a number in scientific notation, he also gives it in the utterly meaningless form of (p. 165) “100 billion billion billion billion billion billion billion billion billion billion” (OK, I've done that myself, on one occasion, but I was having fun at the expense of a competitor). And finally, I find it dismaying that a popular science book by a prominent researcher published by a house as respectable as Knopf at a cover price of USD 26 lacks an index—this is a fundamental added value the reader deserves when parting with this much money (especially for a book of only 220 pages). If you know nothing about these topics, this volume will probably leave you only more confused, and possibly over-optimistic about the state of quantum computation. 
If you've followed the field reasonably closely, the author's professional publications (most available on-line), which are lucidly written and accessible to the non-specialist, may be more rewarding.
I remain dubious about grandiose claims for quantum computation, and nothing in this book dispelled my scepticism. From Democritus all the way to the present day, every single scientific theory which assumed the existence of a continuum has been proved wrong when experiments looked more closely at what was really going on. Yet quantum mechanics, albeit a statistical theory at the level of measurement, is completely deterministic and linear in the evolution of the wave function, with amplitudes given by continuous complex values which embody, theoretically, an infinite amount of information. Where is all this information stored? The Bekenstein bound gives an upper limit on the amount of information which can be represented in a given volume of spacetime, and implies that even if the quantum state were stored nonlocally in the entire causally connected universe, the amount of information would still be finite, albeit enormous. Extreme claims for quantum computation assume you can linearly superpose any number of wave functions and thus encode as much information as you like in a single computation. The entire history of science, and of quantum mechanics itself, makes me doubt that this is so—I'll bet that we eventually find some inherent granularity in the precision of the wave function (perhaps round-off errors in the simulation we're living within, but let's not revisit that). This is not to say, nor do I mean to imply, that quantum computation will not work; indeed, it has already been demonstrated in proof-of-concept laboratory experiments, and it may well hold the potential of extending the growth of computational power after the pure scaling of classical computers runs into physical limits. 
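For reference, a sketch of the Bekenstein bound in its standard form (S is the entropy within a sphere of radius R enclosing total energy E; the numerical constant in the second line follows from taking E = Mc² for a mass M):

```latex
% Bekenstein bound on the entropy within radius R containing energy E:
S \;\le\; \frac{2 \pi k_B R E}{\hbar c}
% Equivalently, as an information capacity in bits, with E = M c^2:
I \;\le\; \frac{2 \pi R E}{\hbar c \ln 2}
  \;\approx\; 2.58 \times 10^{43}
  \left(\frac{M}{\mathrm{kg}}\right)
  \left(\frac{R}{\mathrm{m}}\right)\ \text{bits}
```

Whatever the true quantum state of a finite region may be, this bound caps the information needed to specify it.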
But just as shrinking semiconductor devices is fundamentally constrained by the size of atoms, quantum computation may be limited by the ultimate precision of the discrete computational substrate of the universe which behaves, on the large scale, like a continuous wave function.
The failure of the Occupation could not, perhaps, have been averted in the very nature of the case. But it might have been mitigated. Its mitigation would have required the conquerors to do something they had never had to do in their history. They would have had to stop doing what they were doing and ask themselves some questions, hard questions, like, What is the German character? How did it get that way? What is wrong with its being that way? What way would be better, and what, if anything, could anybody do about it?

Wise questions, indeed, for any conqueror of any country. The writing is so superb that you may find yourself re-reading paragraphs just to savour how they're constructed. It is also thought-provoking to ponder how many things, from the perspective of half a century later, the author got wrong. In his view, the occupation of West Germany would fail to permanently implant democracy, German re-militarisation and eventual aggression were almost certain unless blocked by force, and the project of European unification was a pipe dream of idealists, doomed to failure. And yet, today, things seem to have turned out pretty well for Germany, the Germans, and their neighbours. The lesson of this may be that national character can be changed, but changing it is the work of generations, not a few years of military occupation. That is also something modern-day conquerors, especially Western societies with a short attention span, might want to bear in mind.
So long as spirituality was an idea, such as believing in God, it fell under religious control. However, if doctors redefined spirituality to mean a sensual phenomenon—a feeling—then doctors would control it, since feelings had long since passed into the medical profession's hands, the best example being unhappiness. Turning spirituality into a feeling would also help doctors square the phenomenon with their own ideology. If spirituality were redefined to mean a feeling rather than an idea, then doctors could group spirituality with all the other feelings, including unhappiness, thereby preserving their ideology's integrity. Spirituality, like unhappiness, would become a problem of neurotransmitters and a subclause of their ideology. (Page 226.)

A reader opening this book is confronted with 293 pages of this. This paragraph appears in chapter nine, “The Last Battle”, which describes the Manichean struggle between doctors and organised religion in the 1990s for the custody of the souls of Americans, ending in a total rout of religion. Oh, you missed that? Me too. Mass medication with psychotropic drugs is a topic which cries out for a statistical examination of its public health dimensions, but Dworkin relates only anecdotes of individuals he has known personally, all of whose minds he seems to be able to read, diagnosing their true motivations which even they don't perceive, and discerning their true destiny in life, which he believes they are failing to follow due to medication for unhappiness. And if things weren't muddled enough, he drags in “alternative medicine” (the modern, polite term for what used to be called “quackery”) and “obsessive exercise” as other sources of Artificial Happiness (which he capitalises everywhere), which is rather odd since he doesn't believe either works except through the placebo effect. 
Isn't it just a little bit possible that some of those people working out at the gym are doing so because it makes them feel better and likely to live longer? Dworkin tries to envision the future for the Happy American, decoupled from the traditional trajectory through life by the ability to experience chemically induced happiness at any stage. Here, he seems to simultaneously admire and ridicule the culture of the 1950s, of which his knowledge seems to be drawn from re-runs of “Leave it to Beaver”. In the conclusion, he modestly proposes a solution to the problem which requires completely restructuring medical education for general practitioners and redefining the mission of all organised religions. At least he doesn't seem to have a problem with self-esteem!
Precision cosmology has converged on the following picture of the universe in the large:
- At the largest scale, the geometry of the universe is indistinguishable from Euclidean (flat), and the distribution of matter and energy within it is homogeneous and isotropic.
- The universe evolved from an extremely hot, dense phase starting about 13.7 billion years ago from our point of observation, which resulted in the abundances of light elements observed today.
- The evidence of this event is imprinted on the cosmic background radiation which can presently be observed in the microwave frequency band. All large-scale structures in the universe grew from gravitational amplification of scale-independent quantum fluctuations in density.
- The flatness, homogeneity, and isotropy of the universe is best explained by a period of inflation shortly after the origin of the universe, which expanded a tiny region of space, smaller than a subatomic particle, to a volume much greater than the presently observable universe.
- Consequently, the universe we can observe today is bounded by a horizon, about forty billion light years away in every direction (greater than the 13.7 billion light years you might expect since the universe has been expanding since its origin), but the universe is much, much larger than what we can see; every year another light year comes into view in every direction.

Now, this may seem mind-boggling enough, but from these premises, which it must be understood are accepted by most experts who study the origin of the universe, one can deduce some disturbing consequences which seem to be logically unavoidable.
Let me walk you through it here. We assume the universe is infinite and unbounded, which is the best estimate from precision cosmology. Then, within that universe, there will be an infinite number of observable regions, which we'll call O-regions, each defined by the volume from which an observer at the centre can have received light since the origin of the universe. Now, each O-region has a finite volume, and quantum mechanics tells us that within a finite volume there are a finite number of possible quantum states. This number, although huge (on the order of 10^10^123 for a region the size of the one we presently inhabit), is not infinite, so consequently, with an infinite number of O-regions, even if quantum mechanics specifies the initial conditions of every O-region completely at random and they evolve randomly with every quantum event thereafter, there are only a finite number of histories they can experience (around 10^10^150). Which means that, at this moment, in this universe (albeit not within our current observational horizon), invoking nothing as fuzzy, weird, or speculative as the multiple world interpretation of quantum mechanics, there are an infinite number of you reading these words scribbled by an infinite number of me. In the vast majority of our shared universes things continue much the same, but from time to time they d1v3r93 r4ndtx#e~—….
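The counting step in this argument is pure pigeonhole, and can be illustrated with a toy simulation (STATES and EVENTS here are tiny stand-in values of my choosing, playing the role of the actual 10^10^150 possible histories):

```python
import random

# Toy pigeonhole illustration: if each observable region can realise
# only finitely many histories, then any collection of regions larger
# than that count must contain exact duplicates.

STATES = 2       # possible outcomes per quantum event (toy value)
EVENTS = 8       # events per history (toy value)
possible_histories = STATES ** EVENTS     # 256 distinct histories

# Sample one more region than there are possible histories, each
# history chosen completely at random...
regions = [tuple(random.randrange(STATES) for _ in range(EVENTS))
           for _ in range(possible_histories + 1)]

# ...and duplicates are guaranteed, however the randomness falls out.
assert len(set(regions)) < len(regions)
print(f"{possible_histories} possible histories, {len(regions)} regions:"
      " duplicates are unavoidable")
```

With an infinite number of O-regions in place of a mere 257, every possible history recurs not just once but infinitely often.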
Reset . . . Snap back to universe of origin . . . Reloading initial vacuum parameters . . . Restoring simulation . . . Resuming from checkpoint.

What was that? Nothing, I guess. Still, odd, that blip you feel occasionally. Anyway, here is a completely fascinating book by a physicist and cosmologist who is pioneering the ragged edge of what the hard evidence from the cosmos seems to be telling us about the apparently boundless universe we inhabit. What is remarkable about this model is how generic it is. If you accept the best currently available evidence for the geometry and composition of the universe in the large, and agree with the majority of scientists who study such matters how it came to be that way, then an infinite cosmos filled with observable regions of finite size and consequently limited diversity more or less follows inevitably, however weird it may seem to think of an infinity of yourself experiencing every possible history somewhere. Further, in an infinite universe, there are an infinite number of O-regions which contain every possible history consistent with the laws of quantum mechanics and the symmetries of our spacetime including those in which, as the author noted, perhaps using the phrase for the first time in the august pages of the Physical Review, “Elvis is still alive”. So generic is the prediction, there's no need to assume the correctness of speculative ideas in physics. The author provides a lukewarm endorsement of string theory and the “anthropic landscape” model, but is careful to distinguish its “multiverse” of distinct vacua with different moduli from our infinite universe with (as far as we know) a single, possibly evolving, vacuum state. But string theory could be completely wrong and the deductions from observational cosmology would still stand. 
For that matter, they are independent of the “eternal inflation” model the book describes in detail, since they rely only upon observables within the horizon of our single “pocket universe”. Although the evolution of the universe from shortly after the end of inflation (the moment we call the “big bang”) seems to be well understood, there are still deep mysteries associated with the moment of origin, and the ultimate fate of the universe remains an enigma. These questions are discussed in detail, and the author makes clear how speculative and tentative any discussion of such matters must be given our present state of knowledge. But we are uniquely fortunate to be living in the first time in all of history when these profound questions upon which humans have mused since antiquity have become topics of observational and experimental science, and a number of experiments now underway and expected in the next few years which bear upon them are described.
Curiously, the author consistently uses the word “google” for the number 10^100. The correct name for this quantity, coined in 1938 by nine-year-old Milton Sirotta, is “googol”. Edward Kasner, young Milton's uncle, then defined “googolplex” as 10^(10^100). “Google™” is an Internet search engine created by megalomaniac collectivists bent on monetising, without compensation, content created by others. The text is complemented by a number of delightful cartoons reminiscent of those penned by George Gamow, a physicist the author (and this reader) much admires.
WITHIN THIS VALE |
OF TOIL |
AND SIN |
YOUR HEAD GROWS BALD |
BUT NOT YOUR CHIN—USE |
THIRTY DAYS |
HATH SEPTEMBER |
APRIL |
JUNE AND THE |
SPEED OFFENDER |
Jacqueline Wexler was such an administrator. Gracious and charming in public, accommodating and willing to compromise at meetings, she nevertheless had the steel-hard will and sharp intellect to drive the ICU's ramshackle collection of egos toward goals that she herself selected. Widely known as ‘Attila the Honey,’ Wexler was all sweetness and smiles on the outside, and ruthless determination within.

After spending a third of page 70 on this paragraph, which makes my teeth ache just to re-read, the formidable Ms. Wexler walks off stage before the end of p. 71, never to re-appear. But fear not (or fear), there are many, many more such paragraphs in subsequent pages. An Earth-based space elevator, a science fiction staple, is central to the plot, and here Bova bungles the elementary science of such a structure in a laugh-out-loud chapter in which the three principal characters ride the elevator to a platform located at the low Earth orbit altitude of 500 kilometres. Upon arrival there, they find themselves weightless, while in reality the force of gravity would be only about 14 percent weaker than on the surface of the Earth! Objects in orbit are weightless because they are in free fall: gravity supplies exactly the centripetal acceleration their orbital velocity requires. But a platform at 500 kilometres on a space elevator is travelling only at the speed of the Earth's rotation, about one fifteenth of the orbital velocity at that altitude. The only place on a space elevator where weightlessness would be experienced is where the structure's rotation speed equals orbital velocity, and that is at geosynchronous altitude. This is not a small detail; it is central to the physics, engineering, and economics of space elevators, and it figured prominently in Arthur C. Clarke's 1979 novel The Fountains of Paradise which is alluded to here on p. 140. Nor does Bova restrain himself from what is becoming a science fiction cliché of the first magnitude: “nano-magic”. This is my term for using the “nano” prefix the way bad fantasy authors use “magic”.
For example, Lord Hacksalot draws his sword and cuts down a mighty oak tree with a single blow, smashing the wall of the evil prince's castle. The editor says, “Look, you can't cut down an oak tree with a single swing of a sword.” Author: “But it's a magic sword.” On p. 258 the principal character is traversing a tether between two parts of a ship in the asteroid belt which, for some reason, the author believes is filled with deadly radiation. “With nothing protecting him except the flimsy…suit, Bracknell felt like a turkey wrapped in a plastic bag inside a microwave oven. He knew that high-energy radiation was sleeting down on him from the pale, distant Sun and still-more-distant stars. He hoped that suit's radiation protection was as good as the manufacturer claimed.” Imaginary editor (who clearly never read this manuscript): “But the only thing which can shield you from heavy primary cosmic rays is mass, and lots of it. No ‘flimsy suit’, however it's made, can protect you against iron nuclei incoming near the speed of light.” Author: “But it's a nano suit!” Not only is the science wrong, the fiction is equally lame. Characters simply don't behave as people do in the real world, nor are events and their consequences plausible. We are expected to believe that the causes of and blame for a technological catastrophe which killed millions would be left to be decided by a criminal trial of a single individual in Ecuador without any independent investigation. Or that a conspiracy to cause said disaster involving a Japanese mega-corporation, two mass religious movements, rogue nanotechnologists, and numerous others could be organised, executed, and subsequently kept secret for a decade. The dénouement hinges on a coincidence so fantastically improbable that the plausibility of the plot would be improved were the direct intervention of God Almighty posited instead. Whatever became of Ben Bova, whose science was scientific and whose fiction was fun to read?
It would be uncharitable to attribute this waste of ink and paper to age, as many science fictioneers with far more years on the clock have penned genuine classics. But look at this! Researching the author's biography, I discovered that in 1996, at the age of 64, he received a doctorate in education from California Coast University, a “distance learning” institution. Now, remember back when you were in engineering school struggling with thermogoddamics and fluid mechanics how you regarded the student body of the Ed school? Well, I always assumed it was a selection effect—those who can do, and those who can't…anyway, it never occurred to me that somewhere in that dark, lowering building they had a nano brain mushifier which turned the earnest students who wished to dedicate their careers to educating the next generation into the cognitively challenged classes they graduated. I used to look forward to reading anything by Ben Bova; I shall, however, forgo further works by the present Doctor of Education.
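Returning to the space-elevator physics: the numbers behind the review's complaint are easy to check. Here is a minimal sketch using standard values for Earth's gravitational parameter, radius, and rotation rate (these constants and function names are mine, not the novel's), showing why a platform at 500 km on an elevator is nowhere near weightless:

```python
# Sketch of the space-elevator physics discussed above (standard constants).
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
OMEGA = 7.2921159e-5     # Earth's sidereal rotation rate, rad/s

def gravity(alt_m):
    """Gravitational acceleration at a given altitude (m/s^2)."""
    r = R_EARTH + alt_m
    return GM / r**2

def orbital_speed(alt_m):
    """Circular orbital speed at a given altitude (m/s)."""
    return math.sqrt(GM / (R_EARTH + alt_m))

def tower_speed(alt_m):
    """Speed of a point co-rotating with the Earth (riding the elevator)."""
    return OMEGA * (R_EARTH + alt_m)

alt = 500e3
print(gravity(0), gravity(alt))              # ~9.82 vs ~8.44 m/s^2: far from zero g
print(orbital_speed(alt), tower_speed(alt))  # ~7616 vs ~501 m/s

# Weightlessness on the elevator occurs only where co-rotation IS orbital motion:
r_geo = (GM / OMEGA**2) ** (1 / 3)
print((r_geo - R_EARTH) / 1e3)               # ~35,800 km: geosynchronous altitude
```

Gravity at 500 km is about 86% of its surface value; the platform co-rotates at roughly 500 m/s while a circular orbit there requires about 7.6 km/s, and the two speeds only coincide at geosynchronous altitude, which is where an elevator rider would actually float free.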
Finally, demand for fab labs as a research project, as a collection of capabilities, as a network of facilities, and even as a technological empowerment movement is growing beyond what can be handled by the initial collection of people and institutional partners that were involved in launching them. I/we welcome your thoughts on, and participation in, shaping their future operational, organizational, and technological form.

Well, I am but a humble programmer, but here's how I'd go about it. First of all, I'd create a “Fabrication Trailer” which could visit every community in the United States, Canada, and Mexico; I'd send it out on the road in every MIT vacation season to preach the evangel of “make” to every community it visited. In, say, one in eighty of the communities it visits, there will be a person who has dreamed of this happening in his or her lifetime and who will be empowered by seeing it happen; provide that person a template which, for the writing of a cheque, can replicate the fab lab, and watch it spread. And as it spreads, and creates wealth, it will spawn other Fab Labs. Then, after it's perfected in a couple of hundred North American copies, design a Fab Lab that fits into an ocean cargo container and can be shipped anywhere. If there isn't electricity and Internet connectivity, also deliver the diesel generator or solar panels and satellite dish. Drop these into places where they're most needed, along with a wonk who can bootstrap the locals into doing things with these tools which astound even those who created them. Humans are clever, tool-making primates; give us the tools to realise what we imagine and then stand back and watch what happens! The legacy media bombard us with conflict, murder, and mayhem. But the future is about creation and construction. What happens when an Army of Davids turns its creativity and ingenuity toward solutions to problems perceived and addressed by individuals? Why, they'll call it a renaissance! And that's exactly what it will be.
For more information, visit the Web site of The Center for Bits and Atoms at MIT, which the author directs. Fab Central provides links to Fab Labs around the world, the machines they use, and the open source software tools you can download and start using today.
Does it always take work to construct constraints? No, as we will soon see. Does it often take work to construct constraints? Yes. In those cases, the work done to construct constraints is, in fact, another coupling of spontaneous and nonspontaneous processes. But this is just what we are suggesting must occur in autonomous agents. In the universe as a whole, exploding from the big bang into this vast diversity, are many of the constraints on the release of energy that have formed due to a linking of spontaneous and nonspontaneous processes? Yes. What might this be about? I'll say it again. The universe is full of sources of energy. Nonequilibrium processes and structures of increasing diversity and complexity arise that constitute sources of energy that measure, detect, and capture those sources of energy, build new structures that constitute constraints on the release of energy, and hence drive nonspontaneous processes to create more such diversifying and novel processes, structures, and energy sources.

I have not cherry-picked this passage; there are hundreds of others like it. Given the complexity of the technical material and the difficulty of the concepts being explained, it seems to me that the straightforward, unaffected Point A to Point B style of explanation which Isaac Asimov employed would work much better. Pardon my audacity, but allow me to rewrite the above paragraph.
Autonomous agents require energy, and the universe is full of sources of energy. But in order to do work, they require energy to be released under constraints. Some constraints are natural, but others are constructed by autonomous agents which must do work to build novel constraints. A new constraint, once built, provides access to new sources of energy, which can be exploited by new agents, contributing to an ever-growing diversity and complexity of agents, constraints, and sources of energy.

Which is better? I rewrite; you decide. The tone of the prose is all over the place. In one paragraph he's talking about Tomasina the trilobite (p. 129) and Gertrude the ugly squirrel (p. 131), then the next thing you know it's “Here, the hexamer is simplified to 3'CCCGGG5', and the two complementary trimers are 5'GGG3' + 5'CCC3'. Left to its own devices, this reaction is exergonic and, in the presence of excess trimers compared to the equilibrium ratio of hexamer to trimers, will flow exergonically toward equilibrium by synthesizing the hexamer.” (p. 64). This flipping back and forth between colloquial and scholarly voices leads to a kind of comprehensional kinetosis. There are a few typographical errors, none serious, but I have to share this delightful one-sentence paragraph from p. 254 (ellipsis in the original):
By iteration, we can construct a graph connecting the founder spin network with its 1-Pachner move “descendants,” 2-Pachner move descendints…N-Pachner move descendents.

Good grief—is Oxford University Press outsourcing their copy editing to Slashdot? For the reasons given above, I found this a difficult read. But it is an important book, bristling with ideas which will get you looking at the big questions in a different way, and speculating, along with the author, that there may be some profound scientific insights which science has overlooked to date sitting right before our eyes—in the biosphere, the economy, and this fantastically complicated universe which seems to have emerged somehow from a near-thermalised big bang. While Kauffman is the first to admit that these are hypotheses and speculations, not science, they are eminently testable by straightforward scientific investigation, and there is every reason to believe that if there are, indeed, general laws that govern these phenomena, we will begin to glimpse them in the next few decades. If you're interested in these matters, this is a book you shouldn't miss, but be aware what you're getting into when you undertake to read it.
And since Phillips's argument is based upon the Republican party's support among religious groups as diverse as Southern Baptists, northern Midwest Lutherans, Pentecostals, Mormons, Hasidic Jews, and Eastern Rite and traditionalist Catholics, it is difficult to imagine precisely how the feared theocracy would function, given how little these separate religious groups agree upon. It would have to be an “ecumenical theocracy”, a creature for which I can recall no historical precedent. The greater part of the book discusses the threats to the U.S. posed by a global peak in petroleum production and the temptation of resource wars (of which he claims the U.S. intervention in Iraq is an example), and the explosion of debt, public and private, in the U.S., the consequent housing bubble, and the structural trade deficits which are flooding the world with greenbacks. But these are topics which have been discussed more lucidly and in greater detail by authors who know far more about them than Phillips, who cites secondary and tertiary sources and draws no novel observations. A theme throughout the work is comparison of the present situation of the U.S. with previous world powers which fell into decline: ancient Rome, Spain in the seventeenth century, the Netherlands in the second half of the eighteenth century, and Britain in the first half of the twentieth. The parallels here, especially as regards fears of “theocracy”, are strained to say the least. Constantine did not turn Rome toward Christianity until the fourth century A.D., by which time, even Gibbon concedes, the empire had been in decline for centuries. (Phillips seems to have realised this part of the way through the manuscript and ceases to draw analogies with Rome fairly early on.)
Few, if any, historians would consider Spain, Holland, or Britain in the periods in question theocratic societies; each had a clear separation between civil authority and the church, and in the latter two cases there is plain evidence of a decline in the influence of organised religion on the population as the nation's power approached a peak and began to ebb. Can anybody seriously contend that the Anglican church was responsible for the demise of the British Empire? Hello—what about the two world wars, which were motivated by power politics, not religion? Distilled to the essence (and I estimate a good editor could cut a third to half of this text just by flensing the mind-numbing repetition), Phillips has come to believe in the world view and policy prescriptions advocated by the left wing of the Democratic party. The Republican party does not agree with these things. Adherents of traditional religion share this disagreement, and consequently they predominantly vote for Republican candidates. Therefore, evangelical and orthodox religious groups form a substantial part of the Republican electorate. But how does that imply any trend toward “theocracy”? People choose to join a particular church because they are comfortable with the beliefs it espouses, and they likewise vote for candidates who advocate policies they endorse. Just because there is a correlation between preferences does not imply, especially in the absence of any evidence, some kind of fundamentalist conspiracy to take over the government and impose a religious dictatorship. Consider another divisive issue which has nothing to do with religion: the right to keep and bear arms. People who support the individual right to own and carry weapons for self-defence are highly likely to be Republican voters as well, because that party is more closely aligned with their views than the alternative. Correlation is not evidence of causality, not to speak of collusion.
Much of the writing is reminiscent of the lower tier of the UFO literature. There are dozens of statements like this one from p. 93 (my italics), “There are no records, but Cheney's reported early 2001 plotting may well have touched upon the related peril to the dollar.” May I deconstruct? So what's really being said here is, “Some conspiracy theorist, with no evidence to support his assertion, claims that Cheney was plotting to seize Iraqi oil fields, and it is possible that this speculated scheme might have been motivated by fears for the dollar.” There are more than thirty pages of end notes set in small type, but there is less documentation here than meets the eye. Many citations are to news stories in collectivist legacy media and postings on leftist advocacy Web sites. Picking page 428 at random, we find 29 citations, only five of which are to a total of three books, one by the present author. So blinded is the author by his own ideological bias that he seems completely oblivious to the fact that a right-wing stalwart could produce an almost completely parallel screed about the Democratic party being in thrall to a coalition of atheists, humanists, and secularists eager to use the power of the state to impose their own radical agenda. In fact, one already has. It is dubious that shrill polemics of this variety launched back and forth between the trenches of an increasingly polarised society promote the dialogue and substantive debate which is essential to confront the genuine and daunting challenges all its citizens ultimately share.
- A form of government in which God or a deity is recognized as the supreme civil ruler, the God's or deity's laws being interpreted by the ecclesiastical authorities.
- A system of government by priests claiming a divine commission.
- A commonwealth or state under such a form or system of government.
‘I do not believe we should negotiate with such people, as it will only encourage them in their criminal acts.’ … Who would be struck next? What Rome was facing was a threat very different from that posed by a conventional enemy. These pirates were a new type of ruthless foe, with no government to represent them and no treaties to bind them. Their bases were not confined to a single state. They had no unified system of command. They were a worldwide pestilence, a parasite which needed to be stamped out, otherwise Rome—despite her overwhelming military superiority—would never again know security or peace. … Any ruler who refuses to cooperate will be regarded as Rome's enemy. Those who are not with us are against us.

Harris resists the temptation of turning Rome into a soapbox for present-day political advocacy on any side, and quickly gets back to the political intrigue in the capital. (Not that the latter days of the Roman republic are devoid of relevance to the present situation; study of them may provide more insight into the news than all the pundits and political blogs on the Web. But the parallels are not exact, and the circumstances are different in many fundamental ways. Harris wisely sticks to the story and leaves the reader to discern the historical lessons.) The novel comes to a rather abrupt close with Cicero's election to the consulate in 63 B.C. I suspect that what we have here is the first volume of a trilogy. If that be the case, I look forward to future installments.
Whatever happens,
we have got,
the Maxim gun,
and they have not.
— Joseph Hilaire Pierre René Belloc, “The Modern Traveller”, 1898

But when it came to a fight, as happened surprisingly often in what one thinks of as the Pax Britannica era (the Appendix [pp. 174–176] lists 72 conflicts and military expeditions in the Victorian era), a small, tradition-bound force, accustomed to peace and the parade ground, too often fell victim to (p. xix) “a devil's brew of incompetence, unpreparedness, mistaken and inappropriate tactics, a reckless underestimating of the enemy, a brash overconfidence, a personal or psychological collapse, a difficult terrain, useless maps, raw and panicky recruits, skilful or treacherous opponents, diplomatic hindrance, and bone-headed leadership.” All of these are much in evidence in the campaigns recounted here: the 1838–1842 invasion of Afghanistan, the 1854–1856 Crimean War, the 1857–1859 Indian Mutiny, the Zulu War of 1879, and the first (1880–1881) and second (1899–1902) Boer Wars. Although this book was originally published more than thirty years ago and its subtitle, “Calamities of the British Army in the Victorian Age”, suggests it is a chronicle of a quaint and long-departed age, there is much to learn in these accounts of how highly-mobile, superbly trained, excellently equipped, and technologically superior military forces were humiliated and sometimes annihilated by indigenous armies with the power of numbers, knowledge of the terrain, and the motivation to defend their own land.
It is in this sense that Pascal's (Fania Pascal, an acquaintance of Wittgenstein in the 1930s, not Blaise—JW) statement is unconnected to a concern with the truth; she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true.

(The Punctuator applauds the use of colons and semicolons in the passage quoted above!) All of this is fine, but it seems to me that the author misses an important aspect of bullshit: the fact that in many cases—perhaps the overwhelming majority—the bullshittee is perfectly aware of being bullshitted by the bullshitter, and the bullshitter is conversely aware that the figurative bovid excrement emitted is being dismissed as such by those whose ears it befouls. Now, this isn't always the case: sometimes you find yourself in a tight situation faced with a difficult question and manage to bullshit your way through, but in the context of a “bull session”, only the most naïve would assume that what was said was sincere and indicative of the participants' true beliefs: the author cites bull sessions as a venue in which people can try on beliefs other than their own in a non-threatening environment.
Caesar primum suo, deinde omnium ex conspectu remotis equis, ut
aequato omnium periculo spem fugae tolleret, cohortatus suos proelium
commisit.
This passage, in Latin, is conventionally translated into English as something like this (from the rather stilted 1869 translation by W. A. McDevitte and W. S. Bohn):

Caesar, having removed out of sight first his own horse, then those of all, that he might make the danger of all equal, and do away with the hope of flight, after encouraging his men, joined battle.

but the Warner translation used here renders this as:
I first of all had my own horse taken out of the way and then the horses of other officers. I wanted the danger to be the same for everyone, and for no one to have any hope of escape by flight. Then I spoke a few words of encouragement to the men before joining battle. [1:24:17–30]

Now, whatever violence this colloquial translation does to the authenticity of Caesar's spare and eloquent Latin, from a dramatic standpoint it works wonderfully with the animated reading of award-winning narrator Charlton Griffin; the listener has the sense of being across the table in a tavern from GJC as he regales all present with his exploits. This is “just the facts” war reporting. Caesar viewed this work not as history, but rather the raw material for historians in the future. There is little discussion of grand strategy or, even in the commentaries on the civil war, of the political conflict which provoked the military confrontation between Caesar and Pompey. While these despatches doubtless served as propaganda on Caesar's part, he writes candidly of his own errors and the cost of the defeats they occasioned. (Of course, since these are the only extant accounts of most of these events, there's no way to be sure there isn't some Caesarian spin in his presentation, but since these commentaries were published in Rome, which received independent reports from officers and literate legionaries in Caesar's armies, it's unlikely he would have risked embellishing too much.) Two passages of unknown length in the final book of the Civil War commentaries have been lost—these are handled by the reader stopping in mid-sentence, with another narrator explaining the gap and the historical consensus of the events in the lost text. This audiobook is distributed in three parts, totalling 16 hours and 40 minutes.
That's a big investment of time in the details of battles which took place more than two thousand years ago, but I'll confess I found it fascinating, especially since some of the events described took place within sight of where I take the walks on which I listened to this recording over several weeks. An Audio CD edition is available.
For each topic, the author presents a meta-analysis of unimpeached published experimental results, controlling for quality of experimental design and estimating the maximum impact of the “file drawer effect”, calculating how many unpublished experiments with chance results would have to exist to reduce the probability of the reported results to the chance expectation. All of the effects reported are very small, but a meta-meta-analysis across all 1,019 experiments studied yields odds against the results being due to chance of 1.3×10^104 to 1.
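The “file drawer” estimate described above is, in its simplest standard form, Rosenthal's fail-safe N: the number of unpublished chance-level studies that would have to sit in file drawers to drag a combined result below significance. A minimal sketch, with made-up per-study Z scores (illustrative only, not Radin's data):

```python
# Hedged sketch of Rosenthal's fail-safe N with hypothetical Z scores.
import math

def stouffer_z(z_scores):
    """Combined Stouffer Z across k studies: sum(Z) / sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

def fail_safe_n(z_scores, alpha_z=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies with zero mean
    effect would reduce the combined Z to the one-tailed alpha threshold."""
    total = sum(z_scores)
    k = len(z_scores)
    return max(0.0, (total / alpha_z) ** 2 - k)

studies = [2.1, 1.4, 0.8, 2.6, 1.9]    # hypothetical per-study Z scores
print(round(stouffer_z(studies), 2))   # combined evidence across the studies
print(round(fail_safe_n(studies), 1))  # required size of the "file drawer"
```

The combined Z here is about 3.9, and roughly 24 unreported null studies would be needed to wash it out; the book's calculations are far more elaborate, but this is the shape of the argument.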
Radin draws attention to the similarities between psi phenomena, where events separated in space and time appear to have a connection which can't be explained by known means of communication, and the entanglement of particles resulting in correlations measured at spacelike separated intervals in quantum mechanics, and speculates that there may be a kind of macroscopic form of entanglement in which the mind is able to perceive information in a shared consciousness field (for lack of a better term) as well as through the senses. The evidence for such a field from the Global Consciousness Project (to which I have contributed software and host two nodes) is presented in chapter 11. Forty pages of endnotes provide extensive source citations and technical details. On several occasions I thought the author was heading in the direction of the suggestion I make in my Notes toward a General Theory of Paranormal Phenomena, but he always veered away from it. Perhaps the full implications of the multiverse are weirder than those of psi! There are a few goofs. On p. 215, a quote from Richard Feynman is dated from 1990, while Feynman died in 1988. Actually, the quote is from Feynman's 1985 book QED, which was reprinted in 1990. The discussion of the Quantum Zeno Effect on p. 259 states that “the act of rapidly observing a quantum system forces that system to remain in its wavelike, indeterminate state, rather than to collapse into a particular, determined state.” This is precisely backwards—rapidly repeated observations cause the system's state to repeatedly collapse, preventing its evolution. Consequently, this effect is also called the “quantum watched pot” effect, after the aphorism “a watched pot never boils”. On the other side of the balance, the discussion of Bell's theorem on pp. 227–231 is one of the clearest expositions for the layman I have ever read.
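The corrected statement about the Quantum Zeno Effect is easy to demonstrate numerically. A minimal toy model (mine, not the book's) of a two-level system rotating toward an orthogonal state shows that more frequent projective measurements freeze it in place:

```python
# Toy demonstration of the quantum Zeno effect: repeated collapse halts evolution.
import math

def survival(theta, n_measurements):
    """Probability the system is still found in its initial state after a
    total rotation angle theta, interrupted by n equally spaced projective
    measurements (each collapse restarts the rotation from the initial state)."""
    per_step = math.cos(theta / n_measurements) ** 2
    return per_step ** n_measurements

theta = math.pi / 2   # unwatched (one final measurement), survival is zero
for n in (1, 10, 100, 1000):
    print(n, round(survival(theta, n), 4))
# Survival probability climbs toward 1 as measurements become more frequent:
# the "watched pot" never boils.
```

With a single measurement the state has fully evolved away; with a thousand measurements it stays put with better than 99% probability, which is exactly the opposite of the book's description.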
I try to avoid the “Washington read”: picking up a book and immediately checking if my name appears in the index, but in the interest of candour since I am commending this book to your attention, I should note that it does here—I am mentioned on p. 195. If you'd like to experiment with this spooky stuff yourself, try Fourmilab's online RetroPsychoKinesis experiments, which celebrated their tenth anniversary on the Web in January of 2007 and to date have recorded 256,584 experiments performed by 24,862 volunteer subjects.In this book, Walid Phares makes the case for the first of these two statements. Born in Lebanon, after immigrating to the United States in 1990, he taught Middle East studies at several universities, and is currently a professor at Florida Atlantic University. He is the author of a number of books on Middle East history, and appears as a commentator on media outlets ranging from Fox News to Al Jazeera. Ever since the early 1990s, the author has been warning of what he argued was a constantly growing jihadist threat, which was being overlooked and minimised by the academic experts to whom policy makers turn for advice, largely due to Saudi-funded and -indoctrinated Middle East Studies programmes at major universities. Meanwhile, Saudi funding also financed the radicalisation of Muslim communities around the world, particularly the large immigrant populations in many Western European countries. In parallel to this top-down approach by the Wahabi Saudis, the Muslim Brotherhood and its affiliated groups, including Hamas and the Front Islamique du Salut in Algeria, pursued a bottom-up strategy of radicalising the population and building a political movement seeking to take power and impose an Islamic state. Since the Iranian revolution of 1979, a third stream of jihadism has arisen, principally within Shiite communities, promoted and funded by Iran, including groups such as Hezbollah. 
The present-day situation is placed in historical content dating back to the original conquests of Mohammed and the spread of Islam from the Arabian peninsula across three continents, and subsequent disasters at the hands of the Mongols and Crusaders, the reconquista of the Iberian peninsula, and the ultimate collapse of the Ottoman Empire and Caliphate following World War I. This allows the reader to grasp the world-view of the modern jihadist which, while seemingly bizarre from a Western standpoint, is entirely self-consistent from the premises whence the believers proceed. Phares stresses that modern jihadism (which he dates from the abolition of the Ottoman Caliphate in 1923, an event which permitted free-lance, non-state actors to launch jihad unconstrained by the central authority of a caliph), is a political ideology with imperial ambitions: the establishment of a new caliphate and its expansion around the globe. He argues that this is only incidentally a religious conflict: although the jihadists are Islamic, their goals and methods are much the same as believers in atheistic ideologies such as communism. And just as one could be an ardent Marxist without supporting Soviet imperialism, one can be a devout Muslim and oppose the jihadists and intolerant fundamentalists. Conversely, this may explain the curious convergence of the extreme collectivist left and puritanical jihadists: red diaper baby and notorious terrorist Carlos “the Jackal” now styles himself an Islamic revolutionary, and the corpulent caudillo of Caracas has been buddying up with the squinty dwarf of Tehran. The author believes that since the terrorist strikes against the United States in September 2001, the West has begun to wake up to the threat and begin to act against it, but that far more, both in realising the scope of the problem and acting to avert it, remains to be done. 
He argues, and documents from post-2001 events, that the perpetrators of future jihadist strikes against the West are likely to be home-grown second generation jihadists radicalised and recruited among Muslim communities within their own countries, aided by Saudi financed networks. He worries that the emergence of a nuclear armed jihadist state (most likely due to an Islamist takeover of Pakistan or Iran developing its own bomb) would create a base of operations for jihad against the West which could deter reprisal against it. Chapter thirteen presents a chilling scenario of what might have happened had the West not had the wake-up call of the 2001 attacks and begun to mobilise against the threat. The scary thing is that events could still go this way should the threat be real and the West, through fatigue, ignorance, or fear, cease to counter it. While defensive measures at home and direct action against terrorist groups are required, the author believes that only the promotion of democratic and pluralistic civil societies in the Muslim world can ultimately put an end to the jihadist threat. Toward this end, a good first step would be, he argues, for the societies at risk to recognise that they are not at war with “terrorism” or with Islam, but rather with an expansionist ideology with a political agenda which attacks targets of opportunity and adapts quickly to countermeasures. In all, I found the arguments somewhat over the top, but then, unlike the author, I haven't spent most of my career studying the jihadists, nor read their publications and Web sites in the original Arabic as he has. His warnings of cultural penetration of the West, misdirection by artful propaganda, and infiltration of policy making, security, and military institutions by jihadist covert agents read something like J. 
Edgar Hoover's Masters of Deceit, but then history, in particular the Venona decrypts, has borne out many of Hoover's claims which were scoffed at when the book was published in 1958. But still, one wonders how a “movement” composed of disparate threads, many of whom hate one another (for example, while the Saudis fund propaganda promoting the jihadists, most of the latter seek to eventually depose the Saudi royal family and replace it with a Taliban-like regime; Sunni and Shiite extremists view each other as heretics), can effectively co-ordinate complex operations against their enemies. A thirty-page afterword in this paperback edition provides updates on events through mid-2006. There are some curious things: while transliteration of Arabic and Farsi into English involves a degree of discretion, the author seems very fond of the letter “u”. He writes the name of the leader of the Iranian revolution as “Khumeini”, for example, which I've never seen elsewhere. The book is not well-edited: he occasionally uses “Khomeini”, spells Sayid Qutb's last name as “Kutb” on p. 64, and on p. 287 refers to “Hezbollah” and “Hizbollah” in the same sentence. The author maintains a Web site devoted to the book, as well as a personal Web site which links to all of his work.
- There is a broad-based, highly aggressive, well-funded, and effective jihadist movement which poses a dire threat not just to secular and pluralist societies in the Muslim world, but to civil societies in Europe, the Americas, and Asia.
- There isn't.
Yes, you read that right, they plan to sell data on our trash. Of course. We should have known that BellSouth was just another megacorporation waiting in the wings to swoop down on the data revealed once its fellow corporate cronies spychip the world. I mean, I agree entirely with the message of this book, having warned of modest steps in that direction eleven years before its publication, but prose like this makes me feel like I'm driving down the road in a 1964 Vance Packard getting all righteously indignant about things we'd be better advised to coldly and deliberately draw our plans against. This shouldn't be so difficult, in principle: polls show that once people grasp the potential invasion of privacy possible with RFID, between 2/3 and 3/4 oppose it. The problem is that it's being deployed via stealth, starting with bulk pallets in the supply chain and, once proven there, migrated down to the individual product level. Visibility is a precious thing, and one of the most insidious properties of RFID tags is their very invisibility. Is there a remotely-powered transponder sandwiched into the sole of your shoe, linked to the credit card number and identity you used to buy it, which “phones home” every time you walk near a sensor which activates it? Who knows? See how the paranoia sets in? But it isn't paranoia if they're really out to get you. And they are—for our own good, naturally, and for the children, as always. In the absence of a policy fix for this (and the extreme unlikelihood of any such being adopted given the natural alliance of business and the state in tracking every move of their customers/subjects), one extremely handy technical fix would be a broadband receiver, perhaps a software-defined radio, which listens on the frequency bands used by RFID tag readers and snoops on the transmissions of tags back to them. Passing the data stream to a package like RFDUMP would allow decoding the visible information in the RFID tags which were detected.
First of all, this would allow people to know if they were carrying RFID-tagged products unbeknownst to them. Second, a portable sniffer connected to a PDA would identify tagged products in stores, which customers could take to customer service desks and ask to be returned to the shelves because they were unacceptable for privacy reasons. After this happens several tens of thousands of times, it may have an impact, given the razor-thin margins in retailing. Finally, there are “active measures”. These RFID tags have large antennas which are connected to a super-cheap and hence fragile chip. Once we know the frequency it's talking on, why we could…. But you can work out the rest, and since these are all unlicensed radio bands, there may be nothing wrong with striking an electromagnetic blow for privacy.
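To give a concrete idea of just how much identifying information such a sniffer could recover, consider the most common retail tag format, the 96-bit SGTIN EPC. The sketch below is not RFDUMP's actual code, merely a minimal illustration of the published GS1 bit layout (header, filter, partition, company prefix, item reference, serial number); the sample tag value is the standard worked example from the EPC Tag Data Standard.

```python
# SGTIN-96 partition table: partition value -> (company-prefix bits,
# company-prefix digits, item-reference bits, item-reference digits)
PARTITIONS = {
    0: (40, 12, 4, 1), 1: (37, 11, 7, 2), 2: (34, 10, 10, 3),
    3: (30, 9, 14, 4), 4: (27, 8, 17, 5), 5: (24, 7, 20, 6),
    6: (20, 6, 24, 7),
}

def decode_sgtin96(hex_epc: str) -> dict:
    """Decode a 96-bit SGTIN EPC, given as a 24-digit hex string."""
    v = int(hex_epc, 16)
    header = v >> 88                    # 8-bit header, 0x30 for SGTIN-96
    if header != 0x30:
        raise ValueError("not an SGTIN-96 tag (header 0x%02X)" % header)
    filter_val = (v >> 85) & 0b111      # 3-bit filter (e.g. 3 = item level)
    partition = (v >> 82) & 0b111       # 3-bit partition selects field widths
    cbits, cdigits, ibits, idigits = PARTITIONS[partition]
    serial = v & ((1 << 38) - 1)        # low 38 bits: unique serial number
    item = (v >> 38) & ((1 << ibits) - 1)
    company = (v >> (38 + ibits)) & ((1 << cbits) - 1)
    return {
        "filter": filter_val,
        "company_prefix": str(company).zfill(cdigits),
        "item_reference": str(item).zfill(idigits),
        "serial": serial,
    }

# Canonical example tag from the EPC Tag Data Standard:
# company_prefix '0614141', item_reference '812345', serial 6789
print(decode_sgtin96("3074257BF7194E4000001A85"))
```

The company prefix and item reference together identify the exact product, and the 38-bit serial number distinguishes your individual item from every other one ever made, which is precisely the privacy problem the book is on about.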
EMP,
EMP!
Don't you put,
your tag on me!
2008 |
Well, that's the way to bet. As usual, economics trumps just about everything. Just how plausible is it that a global hegemon can continue to exert its dominance when its economy is utterly dependent upon its ability to borrow two billion dollars a day from its principal rivals: China and Japan, and from these borrowed funds, it pumps more than three hundred billion dollars a year into the coffers of its enemies: Saudi Arabia, Venezuela, Iran, and others to fund its addiction to petroleum? The last chapter presents a set of policy prescriptions to reverse the imminent disasters facing the U.S. Even if these policies could be sold to an electorate in which two generations have been brainwashed by collectivist nostrums, it still seems like “too little, too late”—once you've shipped your manufacturing industries offshore and become dependent upon immigrants for knowledge workers, how precisely do you get back to first world status? Beats me. Some will claim I am, along with the author, piling on recent headlines. I'd counsel taking a longer-term view, as I did when I decided to get out of the U.S. If you're into numbers, note the exchange rate of the U.S. dollar versus the Euro, and the price of gold and oil in U.S. dollars today, then compare them to the quotes five years hence. If the dollar has appreciated, then I'm wrong; if it's continuing its long-term slide into banana republic status, then maybe this rant wasn't as intemperate as you might have initially deemed it. His detractors call Pat Buchanan a “paleoconservative”, but how many “progressives” publish manuscripts written in the future? The acknowledgements (p. 266) are dated October 2008, ten months after I read it, but then I'm cool with that. Is the best of the free life behind us now?
Are the good times really over for good?
Odd and tragic coincidences in maritime history render a little more plausible the breathless meters of James Elroy Flecker (1884–1915): “The dragon-green, the luminous, the dark, the serpent-haunted sea.” That sea haunts me too, especially with the realization that Flecker died in the year of the loss of 1,154 lives on the Lusitania. More odd than tragic is this: the United States Secretary of State William Jennings Bryan (in H. L. Mencken's estimation “The National Tear-Duct”) officially protested the ship's sinking on May 13, 1915 which was the 400th anniversary, to the day, of the marriage of the Duke of Suffolk to Mary, the widow of Louis XII and sister of Henry VIII, after she had spurned the hand of the Archduke Charles. There is something ominous even in the name of the great hydrologist of the Massachusetts Institute of Technology who set the standards for water purification: Thomas Drown (1842–1904). Swinburne capitalized on the pathos: “… the place of the slaying of Itylus / The feast of Daulis, the Thracian sea.” And a singularly melancholy fact about the sea is that Swinburne did not end up in it. I noted several factual errors. For example, on p. 169, Chuck Yeager is said to have flown a “B-51 Mustang” in World War II (the correct designation is P-51). Such lapses make you wonder about the reliability of other details, which are far more arcane and difficult to verify. The author is opinionated and not at all hesitant to share his acerbic perspective: on p. 94 he calls Richard Wagner a “master of Nazi elevator music”. The vocabulary will send almost all readers other than William F. Buckley (who contributed a cover blurb to the book) to the dictionary from time to time. This is not a book you'll want to read straight through—your head will end up spinning with all the details and everything will dissolve into a blur. I found a chapter or two a day about right.
I'd sum it up with Abraham Lincoln's observation “Well, for those who like that sort of thing, I should think it is just about the sort of thing they would like.”
Between September 1939 and February 1943, HM Destroyer Forester steamed 200,000 miles, a distance equal to nine times round the world. In a single year the corvette Jonquil steamed a distance equivalent to more than three times round the world. In one year and four months HM Destroyer Wolfhound steamed over 50,000 miles and convoyed 3,000 ships. The message of British triumphalism is conveyed in part by omission: you will find only the barest hints in this narrative of the disasters of Britain's early efforts in the war, the cataclysmic conflict on the Eastern front, or the Pacific war waged by the United States against Japan. (On the other hand, the title is “What Britain Has Done”, so one might argue that tasks which Britain either didn't do or failed to accomplish do not belong here.) But this is not history, but propaganda, and as the latter it is a masterpiece. (Churchill's history, The Second World War, although placing Britain at the centre of the story, treats all of these topics candidly, except those relating to matters still secret, such as the breaking of German codes during the war.) This reprint edition includes a new introduction which puts the document into historical perspective and seven maps which illustrate operations in various theatres of the war.
The inevitable loss in privacy and freedom that has been a constant characteristic of the nation's reaction to any crisis that threatens America's future will more easily be accepted by a generation that willingly opts to share personal information with advertisers just for the sake of earning a few “freebies.” After 9/11 and the massacres at Columbine and Virginia Tech, Millennials are not likely to object to increased surveillance and other intrusions into their private lives if it means increased levels of personal safety. The shape of America's political landscape after a civic realignment is thus more likely to favor policies that involve collective action and individual accountability than the libertarian approaches so much favored by Gen-Xers. (p. 200) Note that the authors applaud these developments. Digital Imprimatur, here we come!
As the newest civic realignment evolves, the center of America's public policy will continue to shift away from an emphasis on individual rights and public morality toward a search for solutions that benefit the entire community in as equitable and orderly way as possible. Majorities will coalesce around ideas that involve the entire group in the solution and downplay the right of individuals to opt out of the process. (p. 250)
Millennials favor environmental protection even at the cost of economic growth by a somewhat wider margin than any other generation (43% for Millennials vs. 40% for Gen-Xers and 38% for Baby Boomers), hardly surprising, given the emphasis this issue received in their favorite childhood television programs such as “Barney” and “Sesame Street” (Frank N. Magid Associates, May 2007). (p. 263) Deep thinkers, those millennials! (Note that these “somewhat wider” margins are within the statistical sampling error of the cited survey [p. xiv].) The whole scheme of alternating idealist and civic epochs is presented with a historicist inevitability worthy of Hegel or Marx. While one can argue that this kind of cycle is like the oscillation between crunchy and soggy, it seems to me that the authors must be exceptionally stupid, oblivious to facts before their faces, or guilty of a breathtaking degree of intellectual dishonesty to ignore the influence of the relentless indoctrination of this generation with collectivist dogma in government schools and the legacy entertainment and news media—and I do not believe the authors are either idiots or imperceptive. What they are, however, are long-term activists (since the 1970s) in the Democratic party, who welcome the emergence of a “civic” generation which they view as the raw material for advancing the agenda which FDR launched with the aid of the previous large civic generation in the 1930s. Think about it. A generation which has been inculcated with the kind of beliefs illustrated by the quotations above, and which is largely ignorant of history (and much of the history they've been taught is bogus, agenda-driven propaganda), whose communications are mostly “peer-to-peer”—with other identically-indoctrinated members of the same generation, is the ideal putty in the hands of a charismatic leader bent on “unifying” a nation by using the coercive power of the state to enforce the “one best way”.
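The sampling-error caveat above is easy to verify with a back-of-the-envelope check. The survey's sample size isn't given in the text, so the n = 1,000 per cohort below is an assumption typical of national polls:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% confidence margin of error for a sample proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# Millennials 43% vs. Gen-Xers 40%, with a hypothetical n = 1,000 per cohort
m1 = margin_of_error(0.43, 1000)
m2 = margin_of_error(0.40, 1000)

# Margin on the *difference* of two independent sample proportions
diff_margin = math.hypot(m1, m2)
print(round(m1 * 100, 1), round(diff_margin * 100, 1))  # 3.1 4.3
```

Each estimate carries roughly a plus-or-minus 3.1 point margin, and the uncertainty on the difference between two cohorts is about 4.3 points, so the 3-point Millennial vs. Gen-X gap is indeed statistically indistinguishable from zero.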
The authors make an attempt to present the millennials as a pool of potential voters in search of a political philosophy and party embodying it which, once chosen, they will likely continue to identify with for the rest of their lives (party allegiance, they claim, is much stronger in civic than in idealist eras). But it's clear that the book is, in fact, a pitch to the Democratic party to recruit these people: Republican politicians and conservative causes are treated with thinly veiled contempt. This is entirely a book about political strategy aimed at electoral success. There is no discussion whatsoever of the specific policies upon which campaigns will be based, how they are to be implemented, or what their consequences will be for the nation. The authors almost seem to welcome catastrophes such as a “major terrorist attack … major environmental disaster … chronic, long-lasting war … hyperinflation … attack on the U.S. with nuclear weapons … major health catastrophe … major economic collapse … world war … and/or a long struggle like the Cold War” as being “events of significant magnitude to trigger a civic realignment” (p. 201). I've written before about my decision to get out of the United States in the early 1990s, which decision I have never regretted. That move was based largely upon economic fundamentals, which I believed, and continue to believe, are not sustainable and will end badly. Over the last decade, I have been increasingly unsettled by my interactions with members of the tail-end of Generation X and the next generation, whatever you call it. If the picture presented in this book is correct (and I have no way to know whether it is), and their impact upon the U.S. political scene is anything like that envisioned by the authors, anybody still in the U.S. who values their liberty and autonomy has an even more urgent reason to get out, and quickly.
It was also my fate to be an exile from my country for twenty years after my command at Amphipolis; and being present with both parties, and more especially with the Peloponnesians by reason of my exile, I had leisure to observe affairs somewhat particularly. Unlike earlier war narratives in epic poetry, Thucydides based his account purely upon the actions of the human participants involved. While he includes the prophecies of oracles and auguries, he considers them important only to the extent they influenced decisions made by those who gave them credence. Divine intervention plays no part whatsoever in his description of events, and in his account of the Athenian Plague he even mocks how prophecies are interpreted to fit subsequent events. In addition to military and political affairs, Thucydides was a keen observer of natural phenomena: his account of the Athenian Plague reads like that of a modern epidemiologist, including his identifying overcrowding and poor sanitation as contributing factors and the observation that surviving the disease (as he did himself) conferred immunity. He further observes that solar eclipses appear to occur only at the new Moon, and may have been the first to identify earthquakes as the cause of tsunamis. In the text, Thucydides includes lengthy speeches made by figures on all sides of the conflict, both in political assemblies and those of generals exhorting their troops to battle. He admits in the introduction that in many cases no contemporary account of these speeches exists and that he simply made up what he believed the speaker would likely have said given the circumstances. While this is not a technique modern historians would employ, Greeks, from their theatre and poetry, were accustomed to narratives presented in this form and Thucydides, inventing the concept of history as he wrote it, saw nothing wrong with inventing words in the absence of eyewitness accounts. What is striking is how modern everything seems.
There are descriptions of the strategy of a sea power (Athens) confronted by a land power (Sparta), the dangers of alliances which invite weaker allies to take risks that involve their guarantors in unwanted and costly conflicts, the difficulties in mounting an amphibious assault on a defended shore, the challenge a democratic society has in remaining focused on a long-term conflict with an authoritarian opponent, and the utility of economic warfare (or, as Thucydides puts it [over and over again], “ravaging the countryside”) in sapping the adversary's capacity and will to resist. Readers with stereotyped views of Athens and Sparta may be surprised that many at the time of the war viewed Sparta as a liberator of independent cities from the yoke of the Athenian empire, and that Thucydides, an Athenian, often seems sympathetic to this view. Many of the speeches could have been given by present-day politicians and generals, except they would be unlikely to be as eloquent or argue their case so cogently. One understands why Thucydides was not only read over the centuries (at least prior to the present Dark Time, when the priceless patrimony of Western culture has been jettisoned and largely forgotten) for its literary excellence, but is still studied in military academies for its timeless insights into the art of war and the dynamics of societies at war. While modern readers may find the actual campaigns sporadic and the battles on a small scale by present day standards, from the Hellenic perspective, which saw their culture of city-states as “civilisation” surrounded by a sea of barbarians, this was a world war, and Thucydides records it as such a momentous event. This is Volume 1 of the audiobook, which includes the first four of the eight books into which Thucydides's text is conventionally divided, covering the prior history of Greece and the first nine years of the war, through the Thracian campaigns of the Spartan Brasidas in 423 B.C. 
(Here is Volume 2, with the balance.) The audiobook is distributed in two parts, totalling 14 hours and 50 minutes with more than an hour of introductory essays including a biography of Thucydides and an overview of the work. The Benjamin Jowett translation is used, read by the versatile Charlton Griffin. A print edition of this translation is available.
The fact is, liberty is not given a fair chance in our society, neither in the media, nor in politics, nor (especially) in education. I have spoken to many young people during my career, some of whom had never heard my ideas before. But as soon as I explained the philosophy of liberty and told them a little American history in light of that philosophy, their eyes lit up. Here was something they'd never heard before, but something that was compelling and moving, and which appealed to their sense of idealism. Liberty had simply never been presented to them as a choice. (p. 158) This slender (173 page) book presents that choice as persuasively and elegantly as anything I have read. Further, the case for liberty is anchored in the tradition of American history and the classic conservatism which characterised the Republican party for the first half of the 20th century. The author repeatedly demonstrates just how recent much of the explosive growth in government has been, and observes that people seemed to get along just fine, and the economy prospered, without the crushing burden of intrusive regulation and taxation. One of the most striking examples is the discussion of abolishing the personal income tax. “Impossible”, as other politicians would immediately shout? Well, the personal income tax accounts for about 40% of federal revenue, so eliminating it would require reducing the federal budget by the same 40%. How far back would you have to go in history to discover an epoch where the federal budget was 40% below that of 2007? Why, you'd have to go all the way back to 1997! (p. 80) The big government politicians who dominate both major political parties in the United States dismiss the common-sense policies advocated by Ron Paul in this book by saying “you can't turn back the clock”. But as Chesterton observed, why not?
You can turn back a clock, and you can replace disastrous policies which are bankrupting a society and destroying personal liberty with time-tested policies which have delivered prosperity and freedom for centuries wherever adopted. Paul argues that the debt-funded imperial nanny state is doomed in any case by simple economic considerations. The only question is whether it is deliberately and systematically dismantled by the kinds of incremental steps he advocates here, or eventually collapses Soviet-style due to bankruptcy and/or hyperinflation. Should the U.S., as many expect, lurch dramatically in the collectivist direction in the coming years, it will only accelerate the inevitable debacle. Anybody who wishes to discover alternatives to the present course and that limited constitutional government is not a relic of the past but the only viable alternative for a free people to live in peace and prosperity will find this book an excellent introduction to the libertarian/constitutionalist perspective. A five page reading list cites both classics of libertarian thought and analyses of historical and contemporary events from a libertarian viewpoint.
The sixties generation's leaders didn't anticipate how their claim of exceptionalism would affect the next generation, and the next, but the sequence was entirely logical. Informed rejection of the past became uninformed rejection of the past, and then complete and unworried ignorance of it. (p. 228) And it is the latter which is particularly disturbing: as documented extensively, Generation Y knows they're clueless and they're cool with it! In fact, their expectations for success in their careers are entirely discordant with the qualifications they're packing as they venture out to slide down the razor blade of life (pp. 193–198). Or not: on pp. 169–173 we meet the “Twixters”, urban and suburban middle class college graduates between 22 and 30 years old who are still living with their parents and engaging in an essentially adolescent lifestyle: bouncing between service jobs with no career advancement path and settling into no long-term relationship. These sad specimens who refuse to grow up even have their own term of derision: “KIPPERS” (Kids In Parents' Pockets Eroding Retirement Savings). In evaluating the objective data and arguments presented here, it's important to keep in mind that correlation does not imply causation. One cannot run controlled experiments on broad-based social trends: only try to infer from the evidence available what might be the cause of the objective outcomes one measures. Many of the characteristics of Generation Y described here might be explained in large part simply by the immersion and isolation of young people in the pernicious peer culture described by Robert Epstein in The Case Against Adolescence (July 2007), with digital technologies simply reinforcing a dynamic in effect well before their emergence, and visible to some extent in the Boomer and Generation X cohorts who matured earlier, without being plugged in 24/7.
For another insightful view of Generation Y (by another professor at Emory!), see I'm the Teacher, You're the Student (January 2005). If Millennial Makeover is correct, the culture and politics of the United States are soon to be redefined by the generation now coming of age. This book presents a disturbing picture of what that may entail: a generation with little or no knowledge of history or of the culture of the society they've inherited, and unconcerned with their ignorance, making decisions not in the context of tradition and their intellectual heritage, but of peer popular culture. Living in Europe, it is clear that things have not reached such a dire circumstance here, and in Asia the intergenerational intellectual continuity appears to remain strong. But then, the U.S. was the first adopter of the wired society, and hence may simply be the first to arrive at the scene of the accident. Observing what happens there in the near future may give the rest of the world a chance to change course before their own dumbest generations mature. Paraphrasing Ronald Reagan, the author notes that “Knowledge is never more than one generation away from oblivion.” (p. 186) In an age where a large fraction of all human knowledge is freely accessible to anybody in a fraction of a second, what a tragedy it would be if the “digital natives” ended up, like the pejoratively denigrated “natives” of the colonial era, surrounded by a wealth of culture but ignorant of and uninterested in it. The final chapter is a delightful and stirring defence of culture wars and culture warriors, which argues that only those grounded in knowledge of their culture and equipped with the intellectual tools to challenge accepted norms and conventional wisdom can (for better or worse) change society.
Those who lack the knowledge and reasoning skills to be engaged culture warriors are putty in the hands of marketeers and manipulative politicians, which is perhaps why so many of them are salivating over the impending Millennial majority.
…the rich are almost always too complacent, because they cherish the illusion that when things start to go bad, they will have time to extricate themselves and their wealth. It never works that way. Events move much faster than anyone expects, and the barbarians are on top of you before you can escape. … It is expensive to move early, but it is far better to be early than to be late. This is a quirky book, and not free of flaws. Biggs is a connoisseur of amusing historical anecdotes and sprinkles them throughout the text. I found them a welcome leavening of a narrative filled with human tragedy, folly, and destruction of wealth, but some may consider them a distraction and out of place. There are far more copy-editing errors in this book (including dismayingly many difficulties with the humble apostrophe) than I would expect in a Wiley main catalogue title. But that said, if you haven't discovered the wisdom of the markets for yourself, and are worried about riding out the uncertainties of what appears to be a bumpy patch ahead, this is an excellent place to start.
The AK47 moved from being a tool of the conflict to the cause of the conflict, and by the mid-1990s it had become the progenitor of indiscriminate terror across huge swaths of the continent. How could it be otherwise? AKs were everywhere, and their ubiquity made stability a rare commodity as even the smallest groups could bring to bear a military pressure out of proportion to their actual size. That's right—the existence of weapons compels human beings, who would presumably otherwise live together in harmony, to murder one another and rend their societies into chaotic, blood-soaked Hell-holes. Yup, and why do the birds always nest in the white areas? The concept that one should look at the absence of civil society as the progenitor of violence never enters the picture here. It is the evil weapon which is at fault, not the failed doctrines to which the author clings, which have wrought such suffering across the globe. Homo sapiens is a violent species, and our history has been one of constant battles. Notwithstanding the horrific bloodletting of the twentieth century, on a per-capita basis, death from violent conflict has fallen to an all-time low in the nation-state era, notwithstanding the advent of weapons such as General Kalashnikov's. When bad ideas turn murderous, machetes will do. A U.S. edition is now available, but as of this date only in hardcover.
Perhaps the most important distinction is between what sounds good and what works. The former may be sufficient for purposes of politics or moral preening, but not for the economic advancement of people in general or the poor in particular. For those willing to stop and think, basic economics provides some tools for evaluating policies and proposals in terms of their logical implications and empirical consequences. And this is precisely what the intelligent citizen needs to know in these times of financial peril. I know of no better source to acquire such knowledge than this book. I should note that due to the regrettably long bookshelf latency at Fourmilab, I read the second edition of this work after the third edition became available. Usually I wouldn't bother to mention such a detail, but while the second edition I read was 438 pages in length, the third is a 640 page ker-whump on the desktop. Now, my experience in reading the works of Thomas Sowell over the decades is that he doesn't waste words and that every paragraph encapsulates wisdom that's worth taking away, even if you need to read it four or five times over a few days to let it sink in. But still, I'm wary of books which grow to such an extent between editions. I read the second edition, and my unconditional endorsement of it as something you absolutely have to read as soon as possible is based upon the text I read. In all probability the third edition is even better—Dr. Sowell understands the importance of reputation in a market economy better than almost anybody, but I can neither evaluate nor endorse something I haven't yet read. That said, I'm confident that regardless of which edition of this book you read, you will close it as a much wiser citizen of a civil society and participant in a free economy than when you opened the volume.
Supreme Court Associate Justice J. Mortimer Brinnin's deteriorating mental condition had been the subject of talk for some months now, but when he showed up for oral argument with his ears wrapped in aluminum foil, the consensus was that the time had finally come for him to retire. The departure of Mr. Justice Brinnin created a vacancy which embattled President Donald Vanderdamp attempted to fill with two distinguished jurists boasting meagre paper trails, both of whom were humiliatingly annihilated in hearings before the Senate Judiciary Committee, whose chairman, loquacious loose cannon and serial presidential candidate Dexter Mitchell, coveted the seat for himself. After rejection of his latest nominee, the frustrated president was channel surfing at Camp David when he came across the wildly popular television show Courtroom Six featuring television (and former Los Angeles Superior Court) judge Pepper Cartwright dispensing down-home justice with her signature Texas twang and dialect. Let detested Senator Mitchell take on that kind of popularity, thought the Chief Executive, chortling at the prospect, and before long Judge Pepper is rolled out as the next nominee, and prepares for the confirmation fight. I kind of expected this story to be about how an authentic straight-talking human being confronts the “Borking” judicial nominees routinely receive in today's Senate, but it's much more and goes way beyond that, which I shall refrain from discussing to avoid spoilers. I found the latter half of the book less satisfying than the first—it seemed like once on the court Pepper lost some of her spice, but I suppose that's realistic (yet who expects realism in farces?). Still, this is a funny book, with hundreds of laugh out loud well-turned phrases and Buckley's customary delightfully named characters. The fractured Latin and snarky footnotes are an extra treat.
This is not a roman à clef, but you will recognise a number of Washington figures upon whom various characters were modelled.
a theory of the center, that is, a theory which occupies the center. I believe that only when such a theory of the center is articulated will architecture be able to transform itself as it always has and as it always will…. But the center that I am talking about is not a center that can be the center that we know is in the past, as a nostalgia for center. Rather, this not new but other center will be … an interstitial one—but one with no structure, but one also that embraces as periphery in its own centric position. … A center no longer sustained by nostalgia and no longer sustained by univocal discourse. (p. 187)

Got that? I'd hate to be a client explaining to him that I want the main door to be centred between these two windows. But seriously, apart from the zaniness, intellectual vapidity and sophistry, and obscurantist prose (all of which are on abundant display here), what we're seeing is what Italian Communist Antonio Gramsci called the “long march through the institutions” arriving at the Marxist promised land: institutions of higher education, funded with taxpayer money and onerous tuition payments paid by hard-working parents and towering student loans, disgorging class after class of historically and culturally ignorant, indoctrinated, and easily influenced individuals into the electorate, just waiting for a charismatic leader who knows how to eloquently enunciate the trigger words they've been waiting for. In the 2008 postscript the author notes that a common reaction to the original 1990 edition of the book was the claim that he had cherry-picked for mockery a few of the inevitably bizarre extremes you're sure to find in a vibrant and diverse academic community. But with all the news in subsequent years of speech codes, jackbooted enforcement of “diversity”, and the lockstep conformity of much of academia, this argument is less plausible today.
Indeed, much of the history of the last two decades has been the diffusion of the new deconstructive and multicultural orthodoxy from elite institutions into the mainstream, and its creeping into the secondary school curriculum as well. What happens in academia matters, especially in a country in which an unprecedented percentage of the population passes through what style themselves as institutions of higher learning. The consequences of this should begin to be manifest in the United States over the next few years.
malison, exordium, eristic, roorback, tertium quid, bibulosity, eftsoons, vendue, froward, pococurante, disprized, toper, cerecloth, sennight, valetudinarian, variorum, concinnity, plashing, ultimo, fleer, recusants, scrim, flagitious, indurated, truckling, linguacious, caducity, prepotency, natheless, dissentient, placemen, lenity, burke, plangency, roundelay, hymeneally, mesalliance, divagation, parti pris, anent, comminatory, descry, minatory
In this compelling novel, which is essentially a fictionalised survival manual, the author tracks a small group of people who have banded together and prepared themselves to ride out a total societal collapse in the United States, and who are eventually forced by circumstances to put those preparations to the test, and more. I do not have high expectations for self-published works by first-time authors, but I started to read this book whilst scanning documents for one of my other projects and found it so compelling that the excellent book I was currently reading (a review of which will appear here shortly) was set aside as I scarfed up this book in a few days. Our modern, technological civilisation has very much a “just in time” structure: interrupt electrical power and water supplies and sewage treatment fail in short order. Disrupt the fuel supply (in any number of ways), and provision of food to urban centres fails in less than a week, with food riots and looting the most likely outcome. As we head into what appears to be an economic spot of bother, it's worth considering just how bad it may get, and how well you and yours are prepared to ride out the turbulence. This book, which one hopes profoundly exaggerates the severity of what is to come, is an excellent way to inventory your own preparations and skills for a possible worst case scenario. For a sense of the author's perspective, and for a wealth of background information only alluded to in passing in the book, visit the author's SurvivalBlog.com site. Sploosh, splash, inky squirt! Ahhhh…, it's Apostrophe Squid trying to get my attention. What is it about self-published authors who manifest encyclopedic knowledge across domains as diverse as nutrition, military tactics, medicine, economics, agriculture, weapons and ballistics, communications security, automobile and aviation mechanics, and many more difficult to master fields, yet who stumble over the humble apostrophe as if their combat bootlaces were tied together?
Our present author can tell you how to modify a common amateur radio transceiver to communicate on the unmonitored fringes of the Citizens' Band and how to make your own improvised Claymore mines, but can't seem to form the possessive of a standard plural English noun, and hence writes “Citizen's Band” and the equivalent in all instances. (Just how useful would a “Citizen's Band” radio be, with only one citizen transmitting and receiving on it?) Despite the punctuational abuse and the rather awkward commingling of a fictional survival scenario with a catalogue of preparedness advice and sources of things you'll need when the supply chain breaks, I found this a compulsive page-turner. It will certainly make you recalibrate your ability to ride out that bad day when you go to check the news and find there's no Internet, and think again about just how much food you should store in the basement and (more importantly), how skilled you are in preparing what you cached many years ago, not to mention what you'll do when that supply is exhausted.

A human being should be able to change a diaper, plan an invasion, butcher a hog, design a building, conn a ship, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve an equation, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
2009 |
“Eureka! Good evening, folks.”

It wouldn't be Doc Smith if it weren't prophetic, and in this book published in the year in which the Original Nixon was to lose the presidential election to John F. Kennedy, we catch a hint of a “New Nixon” as the intrepid Vortex Blaster visits the planet Nixson II on p. 77. While not as awe inspiring in scope as the Lensman novels, this is a finely crafted yarn which combines a central puzzle with many threads exploring characteristics of alien cultures (never cross an adolescent cat-woman from Vegia!), the ultimate power of human consciousness, and the eternal question never far from the mind of the main audience of science fiction: whether a nerdy brainiac can find a soulmate somewhere out there in the spacelanes. If you're unacquainted with the Lensman universe, this is not the place to start, but once you've worked your way through, it's a delightful lagniappe to round out the epic. Unlike the Lensman series, this book remains out of print. Used copies are readily available although sometimes pricey. For those with access to the gizmo, a Kindle edition is available.
“Eureka? I hope you rot in hell, Graves…”
“This isn't Graves. Cloud. Storm Cloud, the Vortex Blaster, investigating…”
“Oh, Bob, the patrol!” the girl screamed.
The answer to the problem of Hollywood for those of a more conservative or centrist bent is to go make movies of their own. Of course, to do so means finding financing and distribution. Today's technologies are making that simpler. Cameras and editing equipment cost a pittance. Distribution is at hand for the price of a URL. All that's left is the creativity. Unfortunately, that's the difficult part.

A video interview with the author is available.
Wilson loops can describe a gauge theory such as Maxwell's theory of electromagnetism or the gauge theory of the standard model of particle physics. These loops are gauge-invariant observables obtained from the holonomy of the gauge connection around a given loop. The holonomy of a connection in differential geometry on a smooth manifold is defined as the measure to which parallel transport around closed loops fails to preserve the geometrical data being transported. Holonomy has nontrivial local and global features for curved connections.

I know that they say you lose half the audience for every equation you include in a popular science book, but this is pretty forbidding stuff for anybody who wanders into the notes. For a theory like this, the fit to the best available observational data is everything, and this is discussed almost everywhere only in qualitative terms. Let's see the numbers! Although there is a chapter on string theory and quantum gravity, these topics are dropped in the latter half of the book: MOG is a purely classical theory, and there is no discussion of how it might lead toward the quantisation of gravitation or be an emergent effective field theory of a lower level quantum substrate. There aren't many people with the intellect, dogged persistence, and self-confidence to set out on the road to deepen our understanding of the universe at levels far removed from those of our own experience. Einstein struggled for ten years getting from Special to General Relativity, and Moffat has worked for three times as long arriving at MOG and working out its implications. If it proves correct, it will be seen as one of the greatest intellectual achievements by a single person (with a small group of collaborators) in recent history. Should that be the case (and several critical tests which may knock the theory out of the box will come in the near future), this book will prove a unique look into how the theory was so patiently constructed.
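For readers who do venture into the notes, the Wilson loop that forbidding passage alludes to can at least be written compactly; a standard textbook form (not taken from this book) is:

```latex
W(C) = \operatorname{Tr}\,\mathcal{P}\exp\!\left( i \oint_C A_\mu \, dx^\mu \right)
```

Here \(\mathcal{P}\) denotes path ordering around the closed loop \(C\); a gauge transformation acts on the holonomy by conjugation, so the trace renders \(W(C)\) gauge invariant, which is precisely what makes it an observable.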
It's amusing to reflect, if it turns out that dark matter and dark energy end up being epicycles invoked to avoid questioning a theory never tested in the domains in which it was being applied, how historians of science will look back at our age and wryly ask, “What were they thinking?”. I have a photo credit on p. 119 for a vegetable.
Oil powers just about everything in the US economy, from food production and distribution to shipping, construction and plastics manufacturing. When less oil becomes available, less is produced, but the amount of money in circulation remains the same, causing the prices for the now scarcer products to be bid up, causing inflation. The US relies on foreign investors to finance its purchases of oil, and foreign investors, seeing high inflation and economic turmoil, flee in droves. Result: less money with which to buy oil and, consequently, less oil with which to produce things. Lather, rinse, repeat; stop when you run out of oil. Now look around: Where did that economy disappear to?

Now if you believe in Peak Oil (as the author most certainly does, along with most of the rest of the catechism of the environmental left), this is pretty persuasive. But even if you don't, you can make the case for a purely economic collapse, especially with the unprecedented deficits and money creation as the present process of deleveraging accelerates into debt liquidation (either through inflation or outright default and bankruptcy). The ultimate trigger doesn't make a great deal of difference to the central argument: the U.S. runs on oil (and has no near-term politically and economically viable substitute) and depends upon borrowed money both to purchase oil and to service its ever-growing debt. At the moment creditors begin to doubt they're ever going to be repaid (as happened with the Soviet Union in its final days), it's game over for the economy, even if the supply of oil remains constant. Drawing upon the Soviet example, the author examines what an economic collapse on a comparable scale would mean for the U.S.
Ironically, he concludes that many of the weaknesses which were perceived as hastening the fall of the Soviet system—lack of a viable cash economy, hoarding and self-sufficiency at the enterprise level, failure to produce consumer goods, lack of consumer credit, no private ownership of housing, and a huge and inefficient state agricultural sector which led many Soviet citizens to maintain their own small garden plots— resulted, along with the fact that the collapse was from a much lower level of prosperity, in mitigating the effects of collapse upon individuals. In the United States, which has outsourced much of its manufacturing capability, depends heavily upon immigrants in the technology sector, and has optimised its business models around high-velocity cash transactions and just in time delivery, the consequences post-collapse may be more dire than in the “primitive” Soviet system. If you're going to end up primitive, you may be better starting out primitive. The author, although a U.S. resident for all of his adult life, did not seem to leave his dark Russian cynicism and pessimism back in the USSR. Indeed, on numerous occasions he mocks the U.S. and finds it falls short of the Soviet standard in areas such as education, health care, public transportation, energy production and distribution, approach to religion, strength of the family, and durability and repairability of capital and the few consumer goods produced. These are indicative of what he terms a “collapse gap”, which will leave the post-collapse U.S. in much worse shape than ex-Soviet Russia: in fact he believes it will never recover and after a die-off and civil strife, may fracture into a number of political entities, all reduced to a largely 19th century agrarian lifestyle. 
All of this seems a bit much, and is compounded by offhand remarks about the modern lifestyle which seem to indicate that his idea of a “sustainable” world would be one largely depopulated of humans in which the remainder lived in communities much like traditional African villages. That's what it may come to, but I find it difficult to see this as desirable. Sign me up for L. Neil Smith's “freedom, immortality, and the stars” instead. The final chapter proffers a list of career opportunities which proved rewarding in post-collapse Russia and may be equally attractive elsewhere. Former lawyers, marketing executives, financial derivatives traders, food chemists, bank regulators, university administrators, and all the other towering overhead of drones and dross whose services will no longer be needed in post-collapse America may have a bright future in the fields of asset stripping, private security (or its mirror image, violent racketeering), herbalism and medical quackery, drugs and alcohol, and even employment in what remains of the public sector. Hit those books! There are some valuable insights here into the Soviet collapse as seen from the perspective of citizens living through it and trying to make the best of the situation, and there are some observations about the U.S. which will make you think and question assumptions about the stability and prospects for survival of the economy and society on its present course. But there are so many extreme statements you come away from the book feeling like you've endured an “end is nigh” rant by a wild-eyed eccentric which dilutes the valuable observations the author makes.
Although the Black Hole War should have come to an end in early 1998, Stephen Hawking was like one of those unfortunate soldiers who wander in the jungle for years, not knowing that the hostilities have ended. By this time, he had become a tragic figure. Fifty-six years old, no longer at the height of his intellectual powers, and almost unable to communicate, Stephen didn't get the point. I am certain that it was not because of his intellectual limitations. From the interactions I had with him well after 1998, it was obvious that his mind was still extremely sharp. But his physical abilities had so badly deteriorated that he was almost completely locked within his own head. With no way to write an equation and tremendous obstacles to collaborating with others, he must have found it impossible to do the things physicists ordinarily do to understand new, unfamiliar work. So Stephen went on fighting for some time. (p. 419)

Or, Prof. Susskind, perhaps it's that the intellect of Prof. Hawking makes him sceptical of arguments based upon a “theory” which is, as you state yourself on p.
384, “like a very complicated Tinkertoy set, with lots of different parts that can fit together in consistent patterns”; for which not a single fundamental equation has yet been written down; in which no model that remotely describes the world in which we live has been found; whose mathematical consistency and finiteness in other than toy models remains conjectural; whose results regarding black holes are based upon another conjecture (AdS/CFT) which, even if proven, operates in a spacetime utterly unlike the one we inhabit; which seems to predict a vast “landscape” of possible solutions (vacua) which make it not a theory of everything but rather a “theory of anything”; which is formulated in a flat Minkowski spacetime, neglecting the background independence of general relativity; and which, after three decades of intensive research by some of the most brilliant thinkers in theoretical physics, has yet to make a single experimentally-testable prediction, while demonstrating its ability to wiggle out of almost any result (for example, failure of the Large Hadron Collider to find supersymmetric particles). At the risk of attracting the scorn the author vents on pp. 186–187 toward non-specialist correspondents, let me say that the author's argument for “black hole complementarity” makes absolutely no sense whatsoever to this layman. In essence, he argues that matter infalling across the event horizon of a black hole, if observed from outside, is disrupted by the “extreme temperature” there, and is excited into its fundamental strings which spread out all over the horizon, preserving the information accreted in the stringy structure of the horizon (whence it can be released as the black hole evaporates). But for a co-moving observer infalling with the matter, nothing whatsoever happens at the horizon (apart from tidal effects whose magnitude depends upon the mass of the black hole). 
Susskind argues that since you have to choose your frame of reference and cannot simultaneously observe the event from both outside the horizon and falling across it, there is no conflict between these two descriptions, and hence they are complementary in the sense Bohr described quantum observables. But, unless I'm missing something fundamental, the whole thing about the “extreme temperature” at the black hole event horizon is simply nonsense. Yes, if you lower a thermometer from a space station at some distance from a black hole down toward the event horizon, it will register a diverging temperature as it approaches the horizon. But this is because it is moving near the speed of light with respect to spacetime falling through the horizon and is seeing the cosmic background radiation blueshifted by a factor which reaches infinity at the horizon. Further, being suspended above the black hole, the thermometer is in a state of constant acceleration (it might as well have a rocket keeping it at a specified distance from the horizon as a tether), and is thus in a Rindler spacetime and will measure black body radiation even in a vacuum due to the Unruh effect. But note that due to the equivalence principle, all of this will happen precisely the same even with no black hole. The same thermometer, subjected to the identical acceleration and velocity with respect to the cosmic background radiation frame, will read precisely the same temperature in empty space, with no black hole at all (and will even observe a horizon due to its hyperbolic motion). The “lowering the thermometer” is a completely different experiment from observing an object infalling to the horizon. The fact that the suspended thermometer measures a high temperature in no way implies that a free-falling object approaching the horizon will experience such a temperature or be disrupted by it. 
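To put a formula to this argument: the temperature the suspended thermometer registers is just the Unruh temperature for its proper acceleration \(a\), a standard result (not quoted from the book):

```latex
T = \frac{\hbar a}{2 \pi c k_B}
```

This diverges along with the acceleration required to hover as the horizon is approached, and vanishes identically for the free-falling (\(a = 0\)) observer, which is exactly the equivalence-principle point at issue.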
A co-moving observer with the object will observe nothing as it crosses the horizon, while a distant observer will see the object appear to freeze and wink out as it reaches the horizon and the time dilation and redshift approach infinity. Nowhere is there this legendary string blowtorch at the horizon spreading out the information in the infalling object around a horizon which, observed from either perspective, is just empty space. The author concludes, in a final chapter titled “Humility”, “The Black Hole War is over…”. Well, maybe, but for this reader, the present book did not make the sale. The arguments made here are based upon aspects of string theory which are, at the moment, purely conjectural, and models which operate in universes completely different from the one we inhabit. What happens to information that falls into a black hole? Well, Stephen Hawking has now conceded that it is preserved and released in black hole evaporation (but this assumes an anti de Sitter spacetime, which we do not inhabit), but this book just leaves me shaking my head at the arm-waving arguments and speculative theorising presented as definitive results.
“… I'm what you might call a counterterrorism specialist.” “Okay … and what, may I ask, does a counterterrorism specialist do?” Rapp was not well versed in trying to spin what he did, so he just blurted out the hard, cold truth. “I kill terrorists.” “Say again?” “I hunt them down, and I kill them.”

No nuance for Mr. Mitch! This is a superbly crafted thriller which will make you hunger for the next. Fortunately, there are seven sequels already published and more on the way. See my comments on the first installment for additional details and a link to an interview with the author. The montage on the cover of the paperback edition I read uses a biohazard sign (☣) as its background—I have no idea why—neither disease nor biological weapons figure in the story in any way. Yes, I've been reading a lot of thrillers recently—summer's comin' and 'tis the season for light and breezy reading. I'll reserve Quantum Field Theory in a Nutshell for the dwindling daylight of autumn, if you don't mind.
“Hey, Jimmy Joe. How's the flow?”

If you want to warm up your suspension of disbelief to take on this twaddle, imagine Tom Clancy voluntarily lending his name and reputation to it. And, hey, if you like this kind of stuff, there are nine more books in the series to read!
“Dee eff eff, Tyrone.” This stood for DFF—data flowin' fine.
“Listen, I talked to Jay Gee. He needs our help.”
“Nopraw,” Tyrone said. “Somebody is poppin' strands.”
“Tell me somethin' I don't compro, bro. Somebody is always poppin' strands.”
“Yeah, affirm, but this is different. There's a C-1 grammer [sic] looking to rass the whole web.”
“Nofeek?”
“Nofeek.”
Suddenly, a fiery chariot drawn by fiery horses descended from the sky. Sarah was driving. Urim and Thummim were shining on her breastplate of judgment.

Look, I've been backed into corners in stories myself on many occasions, and every time the fiery chariot option appears the best way out, I've found it best to get a good night's sleep and have another go at it on the morrow. Perhaps you have to write and discard a million words before achieving that perspective.
Baseball, I know, needs people who can not only make snap decisions but live with them, something most people will do only when there's no other choice. Come to think of it, the world in general needs people who accept responsibility so easily and so readily. We should be thankful for them.

Batter up! Answer: The run scores, the batter is called out on strikes, and the ball is dead. Had there been two outs, the third strike would have ended the inning and the run would not have scored (p. 91).
Whoever invests in the NucRocCorp and subsequent Space Charter Authority should be required to sign a declaration that commits him or her to respect the purpose of the new regime, and conduct their personal lives in a manner that recognizes the rights of their fellow man (What about woman?—JW). They must be made aware that failure to do so could result in forfeiture of their investment.

Property rights, anybody? Thought police? Apart from the manifest baroque complexity of the proposed scheme, it entirely ignores Jerry Pournelle's Iron Law of Bureaucracy: regardless of its original mission, any bureaucracy will eventually be predominantly populated by those seeking to advance the interests of the bureaucracy itself, not the purpose for which it was created. The structure proposed here, even if enacted (implausible in the extreme) and even if it worked as intended (vanishingly improbable), would inevitably be captured by the Iron Law and become something like, well, NASA. On pp. 36–37, the author likens attempts to stretch chemical rocket technology to its limits to gold plating a nail when what is needed is a bigger hammer (nuclear rockets). But this book brings to my mind another epigram: “When all you have is a hammer, everything looks like a nail.” Dewar passionately supports nuclear rocket technology and believes that it is the way to open the solar system to human settlement. I entirely concur. But when it comes to assuming that boosting people up to a space station (p. 111):
And looking down on the bright Earth and into the black heavens might create a new perspective among Protestant, Roman Catholic, and Orthodox theologians, and perhaps lead to the end of the schism plaguing Christianity. The same might be said of the division between the Sunnis and Shiites in Islam, and the religions of the Near and Far East might benefit from a new perspective.

Call me cynical, but I'll wager this particular swing of the hammer is more likely to land on a thumb than the intended nail. Those who cherish individual freedom have often dreamt of a future in which the opening of access to space would, in the words of L. Neil Smith, extend the human prospect to “freedom, immortality, and the stars”—works for me. What is proposed here, if adopted, looks more like, after more than a third of a century of dithering, the space frontier being finally opened to the brave pioneers ready to homestead there, and when they arrive, the tax man and the all-pervasive regulatory state are already there, up and running. The nuclear rocket can expand the human presence throughout the solar system. Let's just hope that when humanity (or some risk-taking subset of it) takes that long-deferred step, it does not propagate the soft tyranny of present day terrestrial governance to worlds beyond.
There are many categories of scientists, people of second and third rank, who do their best, but do not go very far. There are also people of first class, who make great discoveries, which are of capital importance for the development of science. But then there are the geniuses, like Galileo and Newton. Well, Ettore was one of these.

In 1933, Majorana visited Werner Heisenberg in Leipzig and quickly became a close friend of this physicist who was, in most personal traits, his polar opposite. Afterward, he returned to Rome and flip-flopped from his extroversion in the company of Heisenberg to the life of a recluse, rarely leaving his bedroom in the family mansion for almost four years. Then something happened, and he jumped into the competition for the position of full professor at the University of Naples, bypassing the requirement for an examination due to his “exceptional merit”. He emerged from his reclusion, accepted the position, and launched into his teaching career, albeit giving lectures at a level which his students often found bewildering. Then, on March 26th, 1938, he boarded a ship in Palermo, Sicily, bound for Naples and was never seen again. Before his departure he had posted enigmatic letters to his employer and family, sent a telegram, and left a further letter in his hotel room which some interpreted as suicide notes, but which forensic scientists who have read thousands of suicide notes say resemble none they've ever seen (but then, would a note by a Galileo or Newton read like that of the run of the mill suicide?). This event set in motion investigation and speculation which continues to this very day. Majorana was said to have withdrawn a large sum of money from his bank a few days before: is this plausible for one bent on self-annihilation (we'll get back to that infra)?
Based on his recent interest in religion and reports of his having approached religious communities to join them, members of his family spent a year following up reports that he'd joined a monastery; despite “sightings”, none of these leads panned out. Years later, multiple credible sources with nothing apparently to gain reported that Majorana had been seen on numerous occasions in Argentina, and, abandoning physics (which he had said “was on the wrong path” before his disappearance), pursued a career as an engineer. This only scratches the surface of the legends which have grown up around Majorana. His disappearance, occurring after nuclear fission had already been produced in Fermi's laboratory, but none of the “boys” had yet realised what they'd seen, spawns speculation that Majorana, as he often did, figured it out, worked out the implications, spoke of it to someone, and was kidnapped by the Germans (maybe he mentioned it to his friend Heisenberg), the Americans, or the Soviets. There is an Italian comic book in which Majorana is abducted by Americans, spirited off to Los Alamos to work on the Manhattan Project, only to be abducted again (to his great relief) by aliens in a flying saucer. Nobody knows—this is just one of the many mysteries bearing the name Majorana. Today, Majorana is best known for his work on the neutrino. He responded to Paul Dirac's theory of the neutrino (which he believed unnecessarily complicated and unphysical) with his own, in which, as opposed to there being neutrinos and antineutrinos, the neutrino is its own antiparticle and hence neutrinos of the same flavour can annihilate one another. At the time these theories were proposed the neutrino had not been detected, nor would it be for twenty years. 
When the existence of the neutrino was confirmed (although few doubted its existence by the time Reines and Cowan detected it in 1956), few believed it would ever be possible to distinguish the Dirac and Majorana theories of the neutrino, because that particle was almost universally believed to be massless. But then the “scientific consensus” isn't always the way to bet.
Starting with solar neutrino experiments in the 1960s, and continuing to the present day, it became clear that neutrinos did have mass, albeit very little compared to the electron. This meant that the distinction between the Dirac and Majorana theories of the neutrino was accessible to experiment, and could, at least in principle, be resolved. “At least in principle”: what a clarion call to the bleeding edge experimentalist! If the neutrino is a Majorana particle, as opposed to a Dirac particle, then neutrinoless double beta decay should occur, and we'll know whether Majorana's model, proposed more than seven decades ago, was correct. I wish there'd been more discussion of the open controversy over experiments which claim a 6σ signal for neutrinoless double beta decay in ⁷⁶Ge, but then one doesn't want to date one's book with matters actively disputed.
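For concreteness, the two decay modes at stake can be set side by side (standard notation, not quoted from the book): ordinary double beta decay emits two antineutrinos, while the neutrinoless mode is possible only if the neutrino is its own antiparticle, so that the two virtual antineutrinos can annihilate internally:

```latex
\begin{aligned}
2\nu\beta\beta &:\quad (A, Z) \to (A, Z+2) + 2e^- + 2\bar{\nu}_e \\
0\nu\beta\beta &:\quad (A, Z) \to (A, Z+2) + 2e^-
\end{aligned}
```

The neutrinoless mode violates lepton number by two units; an unambiguous observation of it in an isotope such as ⁷⁶Ge would establish the Majorana nature of the neutrino.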
To the book: this may be the first exemplar of a new genre I'll dub “gonzo scientific biography”. Like the “new journalism” of the 1960s and '70s, this is as much about the author as the subject; the author figures as a central character in the narrative, whether transcribing his queries in pidgin Italian to the Majorana family:

“Signora wifed a brother of Ettore, Luciano?”

Besides humourously trampling on the language of Dante, the author employs profanity as a superlative as do so many “new journalists”. I find this unseemly in a scientific biography of an ascetic, deeply-conflicted individual who spent most of his short life in a search for the truth and, if he erred, erred always on the side of propriety, self-denial, and commitment to dignity of all people. Should you read this? Well, if you've come this far, of course you should! This is an excellent, albeit flawed, biography of a singular, albeit flawed, genius whose intellectual legacy motivates massive experiments conducted deep underground and in the seas today. Suppose a neutrinoless double beta decay experiment should confirm the Majorana theory? Should he receive the Nobel prize for it? On the merits, absolutely: many physics Nobels have been awarded for far less, and let's not talk about the “soft Nobels”. But under the rules a Nobel prize can't be awarded posthumously. Which then compels one to ask, “Is Ettore dead?” Well, sure, that's the way to bet: he was born in 1906 and while many people have lived longer, most don't. But how can you be certain? I'd say, should an experiment for neutrinoless double beta decay prove conclusive, award him the prize and see if he shows up to accept it. Then we'll all know for sure. Heck, if he did, it'd probably make Drudge.
“What age did signora owned at that time”
“But he was olded fifty years!”
“But in end he husbanded you.”
2010 |
Iran today is, in a sense, the only country where progressive ideas enjoy a vast constituency. It is there that the ideas I subscribe to are defended by a majority.

Lest this be deemed a slip of the tongue due to intoxication by the heady Alpine air of Davos, a few days later on U.S. television he doubled down with:
[Iran is] the only one with elections, including the United States, including Israel, including you name it, where the liberals, or the progressives, have won two-thirds to 70 percent of the vote in six elections…. In every single election, the guys I identify with got two-thirds to 70 percent of the vote. There is no other country in the world I can say that about, certainly not my own.

I suppose if the U.S. had such an overwhelming “progressive” majority, it too would adopt “liberal” policies such as hanging homosexuals from cranes until they suffocate and stoning rape victims to death. But perhaps Clinton was thinking of Iran's customs of polygamy and “temporary marriage”. Iran is a great nation which has been a major force on the world stage since antiquity, with a deep cultural heritage and vigorous population who, in exile from poor governance in the homeland, have risen to the top of demanding professions all around the world. Today (as well as much of the last century) Iran is saddled with a regime which squanders its patrimony on a messianic dream which runs the very real risk of igniting a catastrophic conflict in the Middle East. The author argues that the only viable option is regime change, and that all actions taken by other powers should have this as the ultimate goal. Does that mean going to war with Iran? Of course not—the very fact that the people of Iran are already pushing back against the mullahs is evidence they perceive how illegitimate and destructive the present regime is. It may even make sense to engage with institutions of the Iranian state, which will be the enduring foundation of the nation after the mullahs are sent packing, but it is essential that the Iranian people be sent the message that the forces of civilisation are on their side against those who oppress them, and to use the communication tools of this new century (Which country has the most bloggers? The U.S. Number two? Iran.)
to bypass the repressive regime and directly address the people who are its victims. Hey, I spent two weeks in Iran a decade ago and didn't pick up more than a tiny fraction of the insight available here. Events in Iran are soon to become a focus of world attention to an extent they haven't been for the last three decades. Read this book to understand how Iran figures in the contemporary Great Game, and how revolutionary change may soon confront the Islamic Republic.
Altogether between 1946 and 1962, the United States detonated just over a thousand nuclear warheads, including some three hundred in the open air, hurling numberless tons of radioactive dust into the atmosphere. The USSR, China, Britain, and France detonated scores more.

Sigh…where do we start? Well, the obvious subtext is that the U.S. started the arms race and that other nuclear powers responded in a feeble manner. In fact, the U.S. conducted a total of 1,030 nuclear tests, counting all tests up until testing was suspended in 1992, of which 215 were detonated in the atmosphere and the balance conducted underground with no release of radioactivity. The Soviet Union (USSR) did, indeed, conduct “scores” of tests—to be precise, 35.75 score, or 715 tests in all—with 219 in the atmosphere (more than the U.S.), including Tsar Bomba, with a yield of 50 megatons. “Scores” indeed—surely the arms race was entirely at the instigation of the U.S.! If you grew up in the U.S. in the 1950s, or wish you had, you'll want to read this book. I had totally forgotten the radioactive toilets you had to pay to use (but kids could wiggle under the door to bask in their actinic glare), and the glories of automobiles you could understand piece by piece, which were your ticket to exploring a broad continent where every town and every city was completely different: not just another configuration of the same franchises and strip malls (and yet recall how exciting it was when those first arrived: “We're finally part of the great national adventure!”). In the 1950s, when privation had given way to prosperity yet Leviathan had not yet supplanted family, community, and civil society, it was utopia to be a kid (although, having been there, then, I'd have deemed it boring; but if I'd been confined inside, as present-day embryonic taxpayers in safetyland are, I'd probably have blown things up. Oh wait—Willoughby already did that, twelve hours too early!).
If you grew up in the '50s, enjoy spending a few pleasant hours back there; if you're a parent of the baby boomers, exult in the childhood and opportunities you entrusted to them. And if you're a parent of a child in this constrained century? Seek to give your child the unbounded opportunities and unsupervised freedom to explore the world which Bryson and this humble scribbler experienced as we grew up. Vapourising morons with ThunderVision—we need you more than ever, Thunderbolt Kid! A U.S. edition is available.
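The “scores” arithmetic in the review above is easy to check; a minimal sketch, using the figures as quoted in the review (a score being 20):

```python
# Check the "scores" arithmetic from the review above.
# Figures are those quoted in the review; a score is 20.
us_total, us_atmospheric = 1030, 215
ussr_total, ussr_atmospheric = 715, 219

score = ussr_total / 20
assert score == 35.75                      # "35.75 score" of Soviet tests
assert ussr_atmospheric > us_atmospheric   # Soviet atmospheric tests exceeded US

print(f"USSR: {ussr_total} tests = {score} score, "
      f"{ussr_atmospheric} atmospheric vs. {us_atmospheric} for the US")
```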
10. Some scholars believe that the zombies were a last-minute addition to the novel, requested by the publisher in a shameless attempt to boost sales. Others argue that the hordes of living dead are integral to Jane Austen's plot and social commentary. What do you think? Can you imagine what this novel might be without the violent zombie mayhem?

Beats me. Of course this is going to be made into a movie—patience! A comic book edition, set of postcards, and a 2011 wall calendar ideal for holiday giving are already available—go merchandising! Here is a chart which will help you sort out the relationships among the many characters in both Jane Austen's original novel and this one. While this is a parody, whilst reading it I couldn't help but recall Herman Kahn's parable of the lions in New York City. Humans are almost infinitely adaptable and can come to consider almost any situation normal once they've gotten used to it. In this novel zombies are something one lives with as one of the afflictions of mortal life like tuberculosis and crabgrass, and it is perfectly normal for young ladies to become warriors because that's what circumstances require. It gives one pause to think how many things we've all come to consider unremarkable in our own lives might be deemed bizarre and/or repellent from the perspective of those of another epoch or observing from a different cultural perspective.
To avoid being mistaken for a sellout, I chose my friends carefully. The more politically active black students. The foreign students. The Chicanos. The Marxist professors and the structural feminists and punk-rock performance poets. We smoked cigarettes and wore leather jackets. At night, in the dorms, we discussed neocolonialism, Frantz Fanon, Eurocentrism, and patriarchy.

The sentence fragments. Now, certainly, many people have expressed radical thoughts in their college days, but most, writing an autobiography fifteen years later, having graduated from Harvard Law School and practiced law, might be inclined to note that they'd “got better”; to my knowledge, Obama makes no such assertion. Further, describing his first job in the private sector, also in Dreams, he writes:
Eventually, a consulting house to multinational corporations agreed to hire me as a research assistant. Like a spy behind enemy lines, I arrived every day at my mid-Manhattan office and sat at my computer terminal, checking the Reuters machine that blinked bright emerald messages from across the globe.

Now bear in mind that this is Obama on Obama, in a book published the same year he decided to enter Illinois politics, running for a state senate seat. Why would a politician feigning moderation in order to gain power, thence to push a radical agenda, explicitly brag of his radical credentials and background? Well, he doesn't because he's been an overt hard left radical with a multitude of connections to leftist, socialist, communist, and militant figures all of his life, from the first Sunday school he attended in Hawaii to the circle of advisers he brought into government following his election as president. The evidence of this has been in plain sight ever since Obama came onto the public scene, and he has never made an effort to cover it up or deny it. The only reason it is not widely known is that the legacy media did not choose to pursue it. This book documents Obama's radical leftist history and connections, but it does so in such a clumsy and tedious manner that you may find it difficult to slog through. The hard left in the decades of Obama's rise to prominence is very much like that of the 1930s through 1950s: a multitude of groups with platitudinous names concealing their agenda, staffed by a cast of characters whose names pop up again and again as you tease out the details, and with sources of funding which disappear into a cloud of smoke as you try to pin them down. In fact, the “new new left” (or “contemporary progressive movement”, as they'd doubtless prefer) looks and works almost precisely like what we used to call “communist front organisations” back in the day. 
The only difference is that they aren't funded by the KGB, seek Soviet domination, or report to masters in Moscow—at least as far as we know…. Obama's entire career has been embedded in such a tangled web of radical causes, individuals, and groups that following any one of them is like pulling up a weed whose roots extend in all directions, tangling with other weeds, which in turn are connected every which way. What we have is not a list of associations, but rather a network, and a network is a difficult thing to describe in the linear narrative of a book. In the present case, the authors get all tangled up in the mess, and the result is a book which is repetitive, tedious, and on occasions so infuriating that it was mostly a desire not to clean up the mess and pay the repair cost which kept me from hurling it through a window. If they'd mentioned just one more time that Bill Ayers was a former Weatherman terrorist, I think I might have lost that window. Each chapter starts out with a theme, but as the web of connections spreads, we get into material and individuals covered elsewhere, and there is little discipline in simply cross-referencing them or trusting the reader to recall their earlier mention. And when there are cross-references, they are heavy handed. For example at the start of chapter 12, they write: “Two of the architects of that campaign, and veterans of Obama's U.S. senatorial campaign—David Axelrod and Valerie Jarrett—were discussed by the authors in detail in Chapter 10 of this book.” Hello, is there an editor in the house? Who other than “the authors” would have discussed them, and where else than in “this book”? And shouldn't an attentive reader be likely to recall two prominent public figures discussed “in detail” just two chapters before? 
The publisher's description promises much, including “Obama's mysterious college years unearthed”, but very little new information is delivered, and most of the book is based on secondary sources, including blog postings the credibility of which the reader is left to judge. Now, I did not find much to quibble about, but neither did I encounter much material I did not already know, and I've not obsessively followed Obama. I suppose that people who exclusively get their information from the legacy media might be shocked by what they read here, but most of it has been widely mentioned since Obama came onto the radar screen in 2007. The enigmatic lacunæ in Obama's paper trail (SAT and LSAT scores, college and law school transcripts, etc.) are mentioned here, but remain mysterious. If you're interested in this topic, I'd recommend giving this book a miss and instead starting with the Barack Obama page on David Horowitz's Discover the Networks site, following the links outward from there. Horowitz literally knows the radical left from inside and out: the son of two members of the Communist Party of the United States, he was a founder of the New Left and editor of Ramparts magazine. Later, repelled by the murderous thuggery of the Black Panthers, he began to re-think his convictions and has since become a vocal opponent of the Left. His book, Radical Son (March 2007), is an excellent introduction to the Old and New Left, and provides insight into the structure and operation of the leftists behind and within the Obama administration.
This state was regularly induced by PR experts to cloud and control issues in the public discourse, to keep thinking people depressed and apathetic on election days, and to discourage those who might be tempted to actually take a stand on a complex issue.

It is easy to imagine responsible citizens in the United States, faced with a topical storm of radical leftist “transformation” unleashed by the Obama administration and its Congressional minions, combined with a deep recession, high unemployment, impending financial collapse, and empowered adversaries around the world, falling into a lethargic state where each day's dismaying news simply deepens the depression and sense of powerlessness and hopelessness. Whether deliberately intended or not, this is precisely what the statists want, and it leads to a citizenry reduced to a despairing passivity as the chains of dependency are fastened about them. This book is a superb antidote for those in topical depression, and provides common-sense and straightforward policy recommendations which can gain the support of the majorities needed to put them into place. Gingrich begins by surveying the present dire situation in the U.S. and what is at stake in the elections of 2010 and 2012, which he deems the most consequential elections in living memory. Unless stopped by voters at these opportunities, what he describes as a “secular-socialist machine” will be able to put policies in place which will restructure society in such a way as to create a dependent class of voters who will reliably return their statist masters to power for the foreseeable future, or at least until the entire enterprise collapses (which may be sooner, rather than later, but should not be wished for by champions of individual liberty as it will entail human suffering comparable to a military conquest and may result in replacement of soft tyranny by that of the jackbooted variety). After describing the hole the U.S. 
have dug themselves into, the balance of the book contains prescriptions for getting out. The situation is sufficiently far gone, it is argued, that reforming the present corrupt bureaucratic system will not suffice—a regime pernicious in its very essence cannot be fixed by changes around the margin. What is needed, then, is not reform but replacement: repealing or sunsetting the bad policies of the present and replacing them with ones which make sense. In certain domains, this may require steps which seem breathtaking to present day sensibilities, but when something reaches its breaking point, drastic things will happen, for better or for worse. For example, what to do about activist left-wing Federal judges with lifetime tenure, who negate the people's will expressed through their elected legislators and executive branch? Abolish their courts! Hey, it worked for Thomas Jefferson, why not now? Newt Gingrich seeks a “radical transformation” of U.S. society no less than does Barack Obama. Unlike Obama, however, his prescriptions, unlike his objectives, are mostly relatively subtle changes on the margin which will shift incentives in such a way that the ultimate goal will become inevitable in the fullness of time. One of the key formative events in Gingrich's life was the fall of the French Fourth Republic in 1958, which he experienced first hand while his career military stepfather was stationed in France. This both acquainted him with the possibility of unanticipated discontinuous change when the unsustainable can no longer be sustained, and the risk of a society with a long tradition of republican government and recent experience with fascist tyranny welcoming with popular acclaim what amounted to a military dictator as an alternative to chaos. 
Far better to reset the dials so that the society will start heading in the right direction, even if it takes a generation or two to set things aright (after all, depending on how you count, it's taken between three and five generations to dig the present hole) than to roll the dice and hope for the best after the inevitable (should present policies continue) collapse. That, after all, didn't work out too well for Russia, Germany, and China in the last century. I have cited the authors in the manner above because a number of the chapters on specific policy areas are co-authored with specialists in those topics from Gingrich's own American Solutions and other organisations.
He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

As was, of course, the attention paid by his audience. Intellectuals, even when pronouncing within their area of specialisation, encounter the same “knowledge problem” Hayek identified in conjunction with central planning of economies. While the expert, or the central planning bureau, may know more about the problem domain than 99% of individual participants in the area, in many cases that expertise constitutes less than 1% of the total information distributed among all participants and expressed in their individual preferences and choices. A free market economy can be thought of as a massively parallel cloud computer for setting prices and allocating scarce resources. Its information is in the totality of the system, not in any particular place or transaction, and any attempt to extract that information by aggregating data and working on bulk measurements is doomed to failure both because of the inherent loss of information in making the aggregations and also because any such measure will be out of date long before it is computed and delivered to the would-be planner. Intellectuals have the same conceit: because they believe they know far more about a topic than the average person involved with it (and in this they may be right), they conclude that they know much more about the topic than everybody put together, and that if people would only heed their sage counsel much better policies would be put in place. In this, as with central planning, they are almost always wrong, and the sorry history of expert-guided policy should be adequate testament to its folly. But it never is, of course. The modern administrative state and the intelligentsia are joined at the hip. 
Both seek to concentrate power, sucking it out from individuals acting at their own discretion in their own perceived interest, and centralising it in order to implement the enlightened policies of the “experts”. That this always ends badly doesn't deter them, because it's power they're ultimately interested in, not good outcomes. In a section titled “The Propagation of the Vision”, Sowell presents a bill of particulars as damning as that against King George III in the Declaration of Independence, and argues that modern-day intellectuals, burrowed within the institutions of academia, government, and media, are a corrosive force etching away the underpinnings of a free society. He concludes:
Just as a physical body can continue to live, despite containing a certain amount of microorganisms whose prevalence would destroy it, so a society can survive a certain amount of forces of disintegration within it. But that is very different from saying that there is no limit to the amount, audacity and ferocity of those disintegrative forces which a society can survive, without at least the will to resist.

In the past century, it has mostly been authoritarian tyrannies which have “cleaned out the universities” and sent their effete intellectual classes off to seek gainful employment in the productive sector, for example doing some of those “jobs Americans won't do”. Will free societies, whose citizens fund the intellectual class through their taxes, muster the backbone to do the same before intellectuals deliver them to poverty and tyranny? Until that day, you might want to install my “Monkeying with the Mainstream Media”, whose Red Meat edition translates “expert” to “idiot”, “analyst” to “moron”, and “specialist” to “nitwit” in Web pages you read. An extended video interview with the author about the issues discussed in this book is available, along with a complete transcript.
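The Red Meat substitutions mentioned above are performed by a browser user script; purely as an illustration of the idea (the function name and sample sentence here are my own, not part of the actual script), the word swap amounts to something like:

```python
import re

# Minimal sketch of the "Red Meat edition" word swap described above.
# The real tool is a browser user script; this just shows the substitution idea.
SUBSTITUTIONS = {"expert": "idiot", "analyst": "moron", "specialist": "nitwit"}

def red_meat(text: str) -> str:
    """Replace each media honorific with its Red Meat equivalent."""
    pattern = re.compile(r"\b(" + "|".join(SUBSTITUTIONS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: SUBSTITUTIONS[m.group(1).lower()], text)

print(red_meat("The expert said the analyst consulted a specialist."))
# → "The idiot said the moron consulted a nitwit."
```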
“So let me get this straight,” said the Old Man. “You trunked two Basque separatists, Tasered a madam and a bodyguard—after she kicked your tail—then bagged and dragged her to some French farmhouse where you threatened to disfigure her, then iceboarded a concierge, shot three hotel security guards, kidnapped the wife of one of Russia's wealthiest mobsters, are now sitting in a hotel in Marseille waiting for a callback from the man I sent you over there to apprehend. Is that about right?”

Never a dull moment with the Carlton Group on the job! Aggressive action is called for, because Harvath finds himself on the trail of a time-sensitive plot to unleash terror attacks in Europe and the U.S., launched by an opaque conspiracy where nothing is as it appears to be. Is this a jihadist plot, or the first volley in an asymmetric warfare conflict launched by an adversary, or a terror network hijacked by another mysterious non-state actor with its own obscure agenda? As Harvath follows the threads, two wisecracking Chicago cops moonlighting to investigate a hit and run accident stumble upon a domestic sleeper cell about to be activated by the terror network. And as the action becomes intense, we make the acquaintance of an Athena Team, an all-babe special forces outfit which is expected to figure prominently in the next novel in the saga and will doubtless improve the prospects of these books being picked up by Hollywood. With the clock ticking, these diverse forces (and at least one you'll never see coming) unite to avert a disastrous attack on American soil. The story is nicely wrapped up at the end, but the larger mystery remains to be pursued in subsequent books. I find Brad Thor's novels substantially more “edgy” than those of Vince Flynn or Tom Clancy—like Ian Fleming, he's willing to entertain the reader with eccentric characters and situations even if they strain the sense of authenticity. 
If you enjoy this kind of thing—and I do, very much—you'll find this an entertaining thriller, perfect “airplane book”, and look forward to the next in the series. A podcast interview with the author is available.
No one issue and no one administration in Washington has been enough to create a perfect storm for a great nation that has weathered many storms in its more than two centuries of existence. But the Roman Empire lasted many times longer, and weathered many storms in its turbulent times—and yet it ultimately collapsed completely. It has been estimated that a thousand years passed before the standard of living in Europe rose again to the level it had achieved in Roman times. The collapse of civilization is not just the replacement of rulers or institutions with new rulers and new institutions. It is the destruction of a whole way of life and the painful, and sometimes pathetic, attempts to begin rebuilding amid the ruins. Is that where America is headed? I believe it is. Our only saving grace is that we are not there yet—and that nothing is inevitable until it happens.

Strong stuff! The present volume is a collection of the author's syndicated columns dating from before the U.S. election of 2008 into the first two years of the Obama administration. In them he traces how the degeneration and systematic dismantling of the underpinnings of American society which began in the 1960s culminated in the election of Obama, opening the doors to power to radicals hostile to what the U.S. has stood for since its founding and bent on its “fundamental transformation” into something very different. Unless checked by the elections of 2010 and 2012, Sowell fears the U.S. will pass a “point of no return” where a majority of the electorate will be dependent upon government largesse funded by a minority who pay taxes. I agree: I deemed it the tipping point almost two years ago. 
A common theme in Sowell's writings of the last two decades has been how public intellectuals and leftists (but I repeat myself) attach an almost talismanic power to words and assume that good intentions, expressed in phrases that make those speaking them feel good about themselves, must automatically result in the intended outcomes. Hence the belief that a “stimulus bill” will stimulate the economy, a “jobs bill” will create jobs, that “gun control” will control the use of firearms by criminals, or that a rise in the minimum wage will increase the income of entry-level workers rather than price them out of the market and send their jobs to other countries. Many of the essays here illustrate how “progressives” believe, with the conviction of cargo cultists, that their policies will turn the U.S. from a social Darwinist cowboy capitalist society to a nurturing nanny state like Sweden or the Netherlands. Now, notwithstanding that the prospects of those two countries and many other European welfare states due to demographic collapse and Islamisation are dire indeed, the present “transformation” in the U.S. is more likely, in my opinion, to render it more like Perón's Argentina than France or Germany. Another part of the “perfect storm” envisioned by Sowell is the acquisition of nuclear weapons by Iran, the imperative that will create for other states in the region to go nuclear, and the consequent possibility that terrorist groups will gain access to these weapons. He observes that Japan in 1945 was a much tougher nation than the U.S. today, yet only two nuclear bombs caused them to capitulate in a matter of days. How many cities would the U.S. have to lose? My guess is at least two but no more than five. People talk about there being no prospect of a battleship Missouri surrender in the War on Terror (or whatever they're calling it this week), but the prospect of a U.S. surrender on the carrier Khomeini in the Potomac is not as far fetched as you might think. 
Sowell dashes off epigrams like others write grocery lists. Here are a few I noted:
In 1959, the pioneers contemplating a SETI program based on the tools of radio astronomy mostly assumed that the civilisations whose beacons they hoped to discover would be biological organisms much like humans or their descendants, but endowed with the scientific and technological capabilities of a much longer period of development. (For statistical reasons, it is vanishingly improbable that humans would make contact with another intelligent species at a comparable state of development, since humans have had the capability to make contact for less than a century, and if other civilisations are comparably short-lived there will never be more than one in the galaxy at any given time. Hence, any signal we receive will necessarily be from a sender whose own technological civilisation is much older than our own and presumably more advanced and capable.) But it now appears probable that, unless human civilisation collapses, stagnates, or is destroyed by barbarism (I put the collective probability of these outcomes at around fifty-fifty), or some presently unenvisioned constraint puts a lid on the exponential growth of computing and communication capability, before long, probably within this century, our species will pass through a technological singularity which will witness the emergence of artificial intelligence with intellectual capabilities on the order of 10¹⁰ to 10¹⁵ times that of present-day humans. Biological humans may continue to exist (after all, the evolution of humans didn't impact the dominance of the biosphere by bacteria), but they will no longer determine the course of technological evolution on this planet and beyond. Asking a present-day human to comprehend the priorities and capabilities of one of these successor beings is like asking a butterfly to understand Beethoven's motivations in writing the Ninth Symphony.
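The statistical aside in the parenthesis above can be made concrete with a back-of-the-envelope, Drake-style estimate; a minimal sketch, where both input numbers are illustrative assumptions of mine, not figures from the book:

```python
# Toy version of the overlap argument above: the expected number of
# civilisations transmitting at any one moment is roughly the rate at
# which they arise times how long each remains detectable (Drake's "L").
# Both numbers below are illustrative assumptions.

birth_rate_per_year = 0.01   # assumed: one new communicating civilisation per century
lifetime_years = 100         # assumed: each stays detectable for about a century

expected_simultaneous = birth_rate_per_year * lifetime_years
print(f"Expected civilisations transmitting at once: {expected_simultaneous:.1f}")
# With a short L the expectation hovers near one, so any signal we actually
# receive almost certainly comes from a sender far older than ourselves.
```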
And yet, unless we're missing something terribly important, any aliens we're likely to contact are overwhelmingly probable to be such forbidding machine intelligences, not Romulans, Klingons, Ferengi, or even the Borg. Why would such super beings try to get our attention by establishing interstellar beacons? What would they have to say if they did contact us? Consider: how much effort does our own species exert in making contact with or carrying on a dialogue with yeast? This is the kind of gap which will exist between humans and the products of millions of years of teleological development. And so, the author argues, while keeping a lookout for those elusive beacons (and also ultra-short laser pulses, which are an alternative mechanism of interstellar signalling unimagined when “old SETI” was born), we should also cast the net much wider, looking for the consequences of an intelligence whose motivations and capabilities we cannot hope to envision. Perhaps they have seeded the galaxy with self-reproducing von Neumann probes, one of which is patiently orbiting in the asteroid belt or at one of the Earth-Sun Lagrangian points waiting to receive a ping from us. (And speaking of that, what about those long delayed echoes anyway?) Maybe their wave of exploration passed by the solar system more than three billion years ago and seeded the Earth with the ancestral cell from which all terrestrial life is descended. Or maybe they left a different kind of life, perhaps in their garbage dumps, which lives on as a “shadow biosphere” to this day, undetected because our surveys for life don't look for biochemistry which is different from that of our own. Heck, maybe they even left a message! We should also be on the lookout for things which don't belong, like discrepancies in isotope abundances which may be evidence of alien technology in distant geological time, or things which are missing. 
Where did all of those magnetic monopoles which should have been created in the Big Bang go, anyway? Or maybe they've moved on to some other, richer domain in the universe. According to the consensus model of cosmology, we have no idea whatsoever what more than 95% of the universe is made of. Maybe they've transcended their juvenile baryonic origins and decamped to the greener fields we call, in our ignorance, “dark matter” and “dark energy”. While we're pointing antennas at obsolete stars in the sky, maybe they're already here (and everywhere else), not as UFOs or alien invaders, but super-intelligences made of structures which interact only gravitationally with the thin scum of baryonic matter on top of the rich ocean of the universe. Maybe their galactic Internet traffic is already tickling the mirrors of our gravitational wave detectors at intensities we can't hope to detect with our crude technologies. Anybody who's interested in these kinds of deep questions about some of the most profound puzzles about our place in the universe will find this book a pure delight. The Kindle edition is superbly produced, with high-resolution colour plates which display beautifully on the iPad Kindle reader, and that rarest and most welcome of attributes in an electronic book, an index which is properly linked to the text. The Kindle edition is, however, more expensive than the hardcover as of this writing.

2011 |
… The team was outfitted in black, fire-retardant Nomex fatigues, HellStorm tactical assault gloves, and First Choice body armor. Included with the cache laid out by the armorer, were several newly arrived futuristic .40-caliber Beretta CX4 Storm carbines, as well as Model 96 Beretta Vertex pistols, also in .40 caliber. There was something about being able to interchange their magazines that Harvath found very comforting. A Picatinny rail system allowed him to outfit the CX4 Storm with an under-mounted laser sight and an above-mounted Leupold scope. …Ka ching! Ka ching! Ka ching! I have no idea if the author or publisher were paid for mentioning this most excellent gear for breaking things and killing bad guys, but that's how it reads. But, hey, what's not to like about a novel which includes action scenes on a Russian nuclear powered icebreaker in the Arctic? Been there—done that!
I am the Eschaton. I am not your god.
I am descended from you, and I exist in your future.
Thou shalt not violate causality within my historic light cone. Or else.

“Or else” ranged from slamming relativistic impactors into misbehaving planets to detonating artificial supernovæ to sterilise an entire interstellar neighbourhood whose inhabitants were up to some mischief which risked spreading. While the “Big E” usually remained off stage, meddling in technologies which might threaten its own existence (for example, time travel back to before its emergence on Earth to prevent the singularity) brought a swift and ruthless response with no more remorse than humans feel over massacring Saccharomyces cerevisiae in the trillions to bake their daily bread. On Rochard's World, an outpost of the New Republic, everything was very much settled into a comfortable (for the ruling class) stasis, with technology for the masses arrested at something approximating the Victorian era, and the advanced stuff (interstellar travel, superluminal communication) imported from Earth and restricted to managing the modest empire to which they belong and suppressing any uprising. Then the Festival arrived. As with most things post-singularity, the Festival is difficult to describe—imagine how incomprehensible it must appear to a society whose development has been wilfully arrested at the railroad era. Wafted from star to star in starwisp probes, upon arrival its nanotechnological payload unpacks itself, disassembles bodies in the outer reaches of its destination star system, and instantiates the information it carries into the hardware and beings to carry out its mission. On a planet with sentient life, things immediately begin to become extremely weird. Mobile telephones rain from the sky which offer those who pick them up anything they ask for in return for a story or bit of information which is novel to the Festival.
Within a day or so, the entire social and economic structure is upended as cornucopia machines, talking bunnies, farms that float in the air, mountains of gold and diamonds, houses that walk around on chicken legs, and things which words fail to describe become commonplace in a landscape that changes from moment to moment. The Festival, much like a eucaryotic organism which has accreted a collection of retroviruses in its genome over time, is host to a multitude of hangers-on which range from the absurd to the menacing: pie-throwing zombies, giant sentient naked mole rats, and “headlaunchers” which infect humans, devour their bodies, and propel their brains into space to be uploaded into the Festival. Needless to say, what ensues is somewhat chaotic. Meanwhile, news of these events has arrived at the home world of the New Republic, and a risky mission is mounted, skating on the very edge of the Eschaton's prohibition on causality violation, to put an end to the Festival's incursion and restore order on Rochard's World. Two envoys from Earth, technician Martin Springfield and U.N. arms inspector Rachel Mansour, accompany the expedition, the first to install and maintain the special technology the Republic has purchased from the Earth and the second, empowered by the terms under which Earth technology has been acquired, to verify that it is not used in a manner which might bring the New Republic or Earth into the sights of the Big E. This is a well-crafted tale which leaves the reader with an impression of just how disruptive a technological singularity will be and, especially, how fast everything happens once the exponential take-off point is reached. The shifts in viewpoint are sometimes uneven—focusing on one subplot for an extended period and then abruptly jumping to another where things have radically changed in the interim, but that may be deliberate in an effort to convey how fluid the situation is in such circumstances. 
Stross also makes excellent use of understated humour throughout: Burya Rubenstein, the anarcho-Leninist revolutionary who sees his entire socio-economic utopia come and go within a couple of days, much faster than his newly-installed party-line propaganda brain implants can adapt, is one of many delightful characters you'll encounter along the way. There is a sequel, which I look forward to reading.
which relates the sequence of prime numbers (p_i is the ith prime number) to the ratio of the circumference to the diameter of a circle. Who could have imagined they had anything to do with one another? And how did 105 get into it?
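The displayed formula does not survive in this text. One Euler-product identity which matches the description (primes on one side; π and a factor of 105 on the other) is ζ(4)/ζ(8) = ∏(1 + 1/p_i⁴) = 105/π⁴. This may or may not be the exact formula in the book, but a quick numerical sketch of it shows how such identities tie the primes to π:

```python
from math import pi

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Partial Euler product over all primes up to 10,000; the tail
# contributes less than 1e-12, so convergence is rapid.
product = 1.0
for p in primes_up_to(10_000):
    product *= 1 + 1 / p**4

print(product)        # ≈ 1.07793
print(105 / pi**4)    # ≈ 1.07793
```

The product over primes and the closed form agree to a dozen decimal places, even though no finite amount of staring at the primes on the left suggests that π belongs on the right.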
This book is a pure joy, and an excellent introduction for those who “don't get it” of how mathematics can become a consuming passion for those who do. The only low spot in the book is chapter 9, which discusses the application of large prime numbers to cryptography. While this was much in the news during the crypto wars when the book was published in the mid-1990s, some of the information in this chapter is factually incorrect and misleading, and the attempt at a popular description of the RSA algorithm will probably leave many who actually understand its details scratching their heads. So skip this chapter. I bought this book shortly after it was published, and it sat on my shelf for a decade and a half until I picked it up and started reading it. I finished it in three days, enjoying it immensely, and I was already familiar with most of the material covered here. For those who are encountering it for the first time, this may be a door into a palace of intellectual pleasures they previously thought to be forbidding, dry, and inaccessible to them.

This classic work, originally published in 1975, is the definitive history of the great inflation in Weimar Germany, culminating in the archetypal paroxysm of hyperinflation in the Fall of 1923, when Reichsbank printing presses were cranking out 100 trillion (10¹²) mark banknotes as fast as paper could be fed to them, and government expenditures were 6 quintillion (10¹⁸) marks while, in perhaps the greatest achievement in deficit spending of all time, revenues in all forms accounted for only 6 quadrillion (10¹⁵) marks. The book has long been out of print and much in demand by students of monetary madness, driving the price of used copies into the hundreds of dollars (although, to date, not trillions and quadrillions—patience).
Fortunately for readers interested in the content and not collectibility, the book has been re-issued in a new paperback and electronic edition, just as inflation has come back onto the radar in the over-leveraged economies of the developed world. The main text is unchanged, and continues to use mid-1970s British nomenclature for large numbers (“milliard” for 10⁹, “billion” for 10¹² and so on) and pre-decimalisation pounds, shillings, and pence for Sterling values. A new note to this edition explains how to convert the 1975 values used in the text to their approximate present-day equivalents.
The Weimar hyperinflation is an oft-cited turning point in the twentieth century, but like many events of that century, much of the popular perception and portrayal of it in the legacy media is incorrect. This work is an in-depth antidote to such nonsense, concentrating almost entirely upon the inflation itself, and discussing other historical events and personalities only when relevant to the main topic. To the extent people are aware of the German hyperinflation at all, they'll usually describe it as a deliberate and cynical ploy by the Weimar Republic to escape the reparations for World War I exacted under the Treaty of Versailles by inflating away the debt owed to the Allies by debasing the German mark. This led to a cataclysmic episode of hyperinflation where people had to take a wheelbarrow of banknotes to the bakery to buy a loaf of bread and burning money would heat a house better than the firewood or coal it would buy. The great inflation and the social disruption it engendered led directly to the rise of Hitler. What's wrong with this picture? Well, just about everything…. Inflation of the German mark actually began with the outbreak of World War I in 1914 when the German Imperial government, expecting a short war, decided to finance the war effort by deficit spending and printing money rather than raising taxes. As the war dragged on, this policy continued and was reinforced, since it was decided that adding heavy taxes on top of the horrific human cost and economic privations of the war would be disastrous to morale. As a result, over the war years of 1914–1918 the value of the mark against other currencies fell by a factor of two and was halved again in the first year of peace, 1919. While Germany was committed to making heavy reparation payments, these payments were denominated in gold, not marks, so inflating the mark did nothing to reduce the reparation obligations to the Allies, and thus provided no means of escaping them.
What inflation and the resulting cheap mark did, however, was to make German exports cheap on the world market. Since export earnings were the only way Germany could fund reparations, promoting exports through inflation was both a way to accomplish this and to promote social peace through full employment, which was in fact achieved through most of the early period of inflation. By early 1920 (well before the hyperinflationary phase is considered to have kicked in), the mark had fallen to one fortieth of its prewar value against the British pound and U.S. dollar, but the cost of living in Germany had risen only by a factor of nine. This meant that German industrialists and their workers were receiving a flood of marks for the products they exported which could be spent advantageously on the domestic market. Since most of Germany's exports at the time relied little on imported raw materials and products, this put Germany at a substantial advantage in the world market, which was much remarked upon by British and French industrialists at the time, who were prone to ask, “Who won the war, anyway?”. While initially beneficial to large industry and its organised labour force which was in a position to negotiate wages that kept up with the cost of living, and a boon to those with mortgaged property, who saw their principal and payments shrink in real terms as the currency in which they were denominated declined in value, the inflation was disastrous to pensioners and others on fixed incomes denominated in marks, as their standard of living inexorably eroded. The response of the nominally independent Reichsbank under its President since 1908, Dr. Rudolf Havenstein, and the German government to these events was almost surreally clueless. 
As the originally mild inflation accelerated into dire inflation and then headed vertically on the exponential curve into hyperinflation they universally diagnosed the problem as “depreciation of the mark on the foreign exchange market” occurring for some inexplicable reason, which resulted in a “shortage of currency in the domestic market”, which could only be ameliorated by the central bank's revving up its printing presses to an ever-faster pace and issuing notes of larger and larger denomination. The concept that this tsunami of paper money might be the cause of the “depreciation of the mark” both at home and abroad, never seemed to enter the minds of the masters of the printing presses. It's not like this hadn't happened before. All of the sequelæ of monetary inflation have been well documented over forty centuries of human history, from coin clipping and debasement in antiquity through the demise of every single unbacked paper currency ever created. Lord D'Abernon, the British ambassador in Berlin and British consular staff in cities across Germany precisely diagnosed the cause of the inflation and reported upon it in detail in their dispatches to the Foreign Office, but their attempts to explain these fundamentals to German officials were in vain. The Germans did not even need to look back in history at episodes such as the assignat hyperinflation in revolutionary France: just across the border in Austria, a near-identical hyperinflation had erupted just a few years earlier, and had eventually been stabilised in a manner similar to that eventually employed in Germany. 
The final stages of inflation induce a state resembling delirium, where people seek to exchange paper money for anything at all which might keep its value even momentarily, farmers with abundant harvests withhold them from the market rather than exchange them for worthless paper, foreigners bearing sound currency descend upon the country and buy up everything for sale at absurdly low prices, employers and towns, unable to obtain currency to pay their workers, print their own scrip, further accelerating the inflation, and the professional and middle classes are reduced to penury or liquidated entirely, while the wealthy, industrialists, and unionised workers do reasonably well by comparison. One of the principal problems in coping with inflation, whether as a policy maker or a citizen or business owner attempting to survive it, is inherent in its exponential growth. At any moment along the path, the situation is perceived as a “crisis” and the current circumstances “unsustainable”. But an exponential curve is self-similar: when you're living through one, however absurd the present situation may appear to be based on recent experience, it can continue to get exponentially more bizarre in the future by the inexorable continuation of the dynamic driving the curve. Since human beings have evolved to cope with mostly linear processes, we are ill-adapted to deal with exponential growth in anything. For example, we run out of adjectives: after you've used up “crisis”, “disaster”, “calamity”, “catastrophe”, “collapse”, “crash”, “debacle”, “ruin”, “cataclysm”, “fiasco”, and a few more, what do you call it the next time they tack on three more digits to all the money? This very phenomenon makes it difficult to bring inflation to an end before it completely undoes the social fabric. The longer inflation persists, the more painful wringing it out of an economy will be, and consequently the greater the temptation to simply continue to endure the ruinous exponential. 
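The self-similarity of the exponential can be made concrete with a toy calculation (the 20% weekly rate below is a hypothetical figure chosen for illustration, not one taken from the book): at a constant percentage rate, every window of equal length multiplies prices by the same factor, no matter how absurd the absolute numbers have already become.

```python
# Illustrative only: a hypothetical constant 20% weekly inflation rate.
weekly_rate = 0.20

price_level = 1.0
for week in range(52):          # one year of compounding
    price_level *= 1 + weekly_rate
print(f"price multiple after one year: {price_level:,.0f}")   # ≈ 13,000×

# Self-similarity: any four-week window multiplies prices by the same
# factor, regardless of where on the curve it falls.
window = (1 + weekly_rate) ** 4
print(f"any four-week window multiplies prices by {window:.2f}")
```

A rate that feels merely “dire” week to week compounds to a four-orders-of-magnitude price rise within a year, which is exactly why, at every point along the curve, the present always seems like the unsustainable crisis and the future always manages to be worse.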
Throughout the period of hyperinflation in Germany, the fragile government was painfully aware that any attempt to stabilise the currency would result in severe unemployment, which radical parties of both the Left and Right were poised to exploit. In fact, the hyperinflation was ended only by the elected government essentially ceding its powers to an authoritarian dictatorship empowered to put down social unrest as the costs of its policies were felt. At the time the stabilisation policies were put into effect in November 1923, the mark was quoted at six trillion to the British pound, and the paper marks printed and awaiting distribution to banks filled 300 ten-ton railway boxcars. What lessons does this remote historical episode have for us today? A great many, it seems to me. First and foremost, when you hear pundits holding forth about the Weimar inflation, it's valuable to know that much of what they're talking about is folklore and conventional wisdom which has little to do with events as they actually happened. Second, this chronicle serves to remind the reader of the one simple fact about inflation that politicians, bankers, collectivist media, organised labour, and rent-seeking crony capitalists deploy an entire demagogic vocabulary to conceal: that inflation is caused by an increase in the money supply, not by “greed”, “shortages”, “speculation”, or any of the other scapegoats trotted out to divert attention from where blame really lies: governments and their subservient central banks printing money (or, in current euphemism, “quantitative easing”) to stealthily default upon their obligations to creditors. Third, wherever and whenever inflation occurs, its ultimate effect is the destruction of the middle class, which has neither the political power of organised labour nor the connections and financial resources of the wealthy. 
Since liberal democracy is, in essence, rule by the middle class, its destruction is the precursor to establishment of authoritarian rule, which will be welcomed after the once-prosperous and self-reliant bourgeoisie has been expropriated by inflation and reduced to dependence upon the state. The Weimar inflation did not bring Hitler to power—for one thing the dates just don't work. The inflation came to an end in 1923, the year Hitler's beer hall putsch in Munich failed ignominiously and resulted in his imprisonment. The stabilisation of the economy in the following years was widely considered the death knell for radical parties on both the Left and Right, including Hitler's. It was not until the onset of the Great Depression following the 1929 crash that rising unemployment, falling wages, and a collapsing industrial economy as world trade contracted provided an opening for Hitler, and he did not become chancellor until 1933, almost a decade after the inflation ended. And yet, while there was no direct causal connection between the inflation and Hitler's coming to power, the erosion of civil society and the rule of law, the destruction of the middle class, and the lingering effects of the blame for these events being placed on “speculators” all set the stage for the eventual Nazi takeover. The technology and complexity of financial markets have come a long way from “Railway Rudy” Havenstein and his 300 boxcars of banknotes to “Helicopter Ben” Bernanke. While it used to take years of incompetence and mismanagement, leveling of vast forests, and acres of steam powered printing presses to destroy an industrial and commercial republic and impoverish those who sustain its polity, today a mere fat-finger on a keyboard will suffice. 
And yet the dynamic of inflation, once unleashed, proceeds on its own timetable, often taking longer than expected to corrode the institutions of an economy, and with ups and downs which tempt investors back into the market right before the next sickening slide. The endpoint is always the same: destruction of the middle class and pensioners who have provided for themselves and the creation of a dependent class of serfs at the mercy of an authoritarian regime. In past inflations, including the one documented in this book, this was an unintended consequence of ill-advised monetary policy. I suspect the crowd presently running things views this as a feature, not a bug. A Kindle edition is available, in which the table of contents and notes are properly linked to the text, but the index is simply a list of terms, not linked to their occurrences in the text.

A quick glance at the rest of this particular AIB [Accidents Investigation Branch] file reveals many similar casualties. It deals with accidents that took place between 3 May 1956 and 3 January 1957. In those mere eight months there was a total of thirty-four accidents in which forty-two aircrew were killed (roughly one fatality every six days). Pilot error and mechanical failure shared approximately equal billing in the official list of causes. The aircraft types included ten de Havilland Venoms, six de Havilland Vampires, six Hawker Hunters, four English Electric Canberras, two Gloster Meteors, and one each of the following: Gloster Javelin, Folland Gnat, Avro Vulcan, Avro Shackleton, Short Seamew and Westland Whirlwind helicopter. (pp. 128–129)

There is much to admire in the spirit of mourn the dead, fix the problem, and get on with the job, but that stoic approach, essential in wartime, can blind one to asking, “Are these losses acceptable? Do they indicate we're doing something wrong?
Do we need to revisit our design assumptions, practices, and procedures?” These are the questions which came into the mind of legendary test pilot Bill Waterton, whose career is the basso continuo of this narrative. First as an RAF officer, then as a company test pilot, and finally as aviation correspondent for the Daily Express, he perceived and documented how Britain's aviation industry was, due to its fragmentation into tradition-bound companies, incessant changes of priorities by government, and failure to adapt to the aggressive product development schedules of the Americans and even the French, still rebuilding from wartime ruins, doomed to bring inferior products to the market too late to win foreign sales, which were essential for the viability of an industry with a home market as small as Britain's to maintain world-class leadership. Although the structural problems within the industry had long been apparent to observers such as Waterton, any hope of British leadership was extinguished by the Duncan Sandys 1957 Defence White Paper which, while calling for long-overdue consolidation of the fragmented U.K. aircraft industry, concluded that most military missions in the future could be accomplished more effectively and less expensively by unmanned missiles. With a few exceptions, it cancelled all British military aviation development projects, condemning Britain, once the fallacy in the “missiles only” approach became apparent, to junior partner status in international projects or outright purchases of aircraft from suppliers overseas. On the commercial aviation side, only the Vickers Viscount was a success: the fatigue-induced crashes of the de Havilland Comet and the protracted development process of the Bristol Britannia caused their entry into service to be so late as to face direct competition from the Boeing 707 and Douglas DC-8, which were superior aircraft in every regard. This book recounts a curious epoch in the history of British aviation.
To observers outside the industry, including the hundreds of thousands who flocked to airshows, it seemed like a golden age, with one Made in Britain innovation following another in rapid succession. But in fact, it was the last burst of energy as the capital of a mismanaged and misdirected industry was squandered at the direction of fickle politicians whose priorities were elsewhere, leading to a sorry list of cancelled projects, prototypes which never flew, and aircraft which never met their specifications or were rushed into service before they were ready. In 1945, Britain was positioned to be a world leader in aviation and proceeded, over the next two decades, to blow it, not due to lack of talent, infrastructure, or financial resources, but entirely through mismanagement, shortsightedness, and disastrous public policy. The following long quote from the concluding chapter expresses this powerfully.
One way of viewing the period might be as a grand swansong or coda to the process we Britons had ourselves started with the Industrial Revolution. The long, frequently brilliant chapter of mechanical inventiveness and manufacture that began with steam finally ran out of steam. This was not through any waning of either ingenuity or enthusiasm on the part of individuals, or even of the nation's aviation industry as a whole. It happened because, however unconsciously and blunderingly it was done, it became the policy of successive British governments to eradicate that industry as though it were an unruly wasps' nest by employing the slow cyanide of contradictory policies, the withholding of support and funds, and the progressive poisoning of morale. In fact, although not even the politicians themselves quite realised it – and certainly not at the time of the upbeat Festival of Britain in 1951 – this turned out to be merely part of a historic policy change to do away with all Britain's capacity as a serious industrial nation, abolishing not just a century of making its own cars but a thousand years of building its own ships. I suspect this policy was more unconscious than deliberately willed, and it is one whose consequences for the nation are still not fully apparent. It sounds improbable; yet there is surely no other interpretation to be made of the steady, decades-long demolition of the country's manufacturing capacity – including its most charismatic industry – other than that at some level it was absolutely intentional, no matter what lengths politicians went to in order to conceal this fact from both the electorate and themselves. (p.
329)

Not only is this book rich in aviation anecdotes of the period, it has many lessons for those living in countries which have come to believe they can prosper by de-industrialising, sending all of their manufacturing offshore, importing their science and engineering talent from other nations, and concentrating on selling “financial services” to one another. Good luck with that.
Then there are Explosives. Have we reached the end? Has Science turned its last page on them? May there not be methods of using explosive energy incomparably more intense than anything heretofore discovered? Might not a bomb no bigger than an orange be found to possess a secret power to destroy a whole block of buildings—nay, to concentrate the force of a thousand tons of cordite and blast a township at a stroke? Could not explosives of even the existing type be guided automatically in flying machines by wireless or other rays, without a human pilot, in ceaseless procession upon a hostile city, arsenal, camp, or dockyard?

Bear in mind that this was published in 1924. In 1931, looking “Fifty Years Hence”, he envisions (p. 290):
Wireless telephones and television, following naturally upon their present path of development, would enable their owner to connect up with any room similarly installed, and hear and take part in the conversation as well as if he put his head through the window. The congregation of men in cities would become superfluous. It would rarely be necessary to call in person on any but the most intimate friends, but if so, excessively rapid means of communication would be at hand. There would be no more object in living in the same city with one's neighbour than there is to-day in living with him in the same house. The cities and the countryside would become indistinguishable. Every home would have its garden and its glade.

It's best while enjoying this magnificent collection not to dwell on whether there is a single living politician of comparable stature who thinks so profoundly on so broad a spectrum of topics, or who can expound upon them to a popular audience in such pellucid prose.
When depth of time replaces depths of sensible space; when the commutation of interface supplants the delimitation of surfaces; when transparence re-establishes appearances; then we begin to wonder whether that which we insist on calling space isn't actually light, a subliminary, para-optical light of which sunlight is only one phase or reflection. This light occurs in a duration measured in instantaneous time exposure rather than the historical and chronological passage of time. The time of this instant without duration is “exposure time”, be it over- or underexposure. Its photographic and cinematographic technologies already predicted the existence and the time of a continuum stripped of all physical dimensions, in which the quantum of energetic action and the punctum of cinematic observation have suddenly become the last vestiges of a vanished morphological reality. Transferred into the eternal present of a relativity whose topological and teleological thickness and depth belong to this final measuring instrument, this speed of light possesses one direction, which is both its size and dimension and which propagates itself at the same speed in all radial directions that measure the universe. (pp. 174–175)

This paragraph, which recalls those bright college days punctuated by deferred exhalations accompanied by “Great weed, man!”, was a single 193-word sentence in the original French; the authors deem it “the most perfect example of diarrhea of the pen that we have ever encountered.” The authors survey several topics in science and mathematics which are particularly attractive to these cargo cult confidence men and women, and, dare I say, deconstruct their babblings. In all, I found the authors' treatment of the postmodernists remarkably gentle.
While they do not hesitate to ridicule their gross errors and misappropriation of scientific concepts, they carefully avoid drawing the (obvious) conclusion that such ignorant nonsense invalidates the entire argument being made. I suspect this is due to the authors, both of whom identify themselves as men of the Left, being sympathetic to the conclusions of those they mock. They're kind of stuck, forced to identify and scorn the irrational misuse of concepts from the hard sciences, while declining to examine the absurdity of the rest of the argument, which the chart from Explaining Postmodernism (May 2007) so brilliantly explains. Alan Sokal is the perpetrator of the famous hoax which took in the editors of Social Text with his paper “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”, which appears in full here, along with comments on construction of the parody and remarks on the motivation behind it. This book was originally published in French under the title Impostures intellectuelles. This English edition contains some material added to address critical comments on the French edition, and includes the original French language text of passages whose translation might be challenged as unfaithful to whatever the heck the original was trying to say.
This view of the human prospect is very odd indeed, and to this reader more disturbing (verging on creepy) than the approach of a technological singularity. What we encounter here are beings, whether augmented humans or software intelligences with no human ancestry whatsoever, that despite having at hand, by the end of the century, mental capacity per individual on the order of 10²⁴ times that of the human brain (and maybe hundreds of orders of magnitude more if quantum computing pans out), still have identities, motivations, and goals which remain comprehensible to humans today. This seems dubious in the extreme to me, and my impression from Singularity is that the author has rethought this as well.
Starting from the publication date of 1999, the book serves up surveys of the scene in that year, 2009, 2019, 2029, and 2099. The chapter describing the state of computing in 2009 makes many specific predictions. The following are those which the author lists in the “Time Line” on pp. 277–278. Many of the predictions in the main text seem to me to be more ambitious than these, but I shall go with those the author chose as most important for the summary. I have reformatted these as a numbered list to make them easier to cite.

This is just so breathtakingly wrong I am at a loss for where to begin, and it was just as completely wrong when the book was published two decades ago as it is today; nothing relevant to these statements has changed. My guess is that Kurzweil was thinking of “intricate mechanisms” within hadrons and mesons, particles made up of quarks and gluons, and not within quarks themselves, which then and now are believed to be point particles with no internal structure whatsoever and are, in any case, impossible to isolate from the particles they compose. When Richard Feynman envisioned molecular nanotechnology in 1959, he based his argument on the well-understood behaviour of atoms known from chemistry and physics, not a leap of faith based on drawing a straight line on a sheet of semi-log graph paper. I doubt one could find a single current practitioner of subatomic physics equally versed in the subject as was Feynman in atomic physics who would argue that engineering at the level of subatomic particles would be remotely feasible. (For atoms, biology provides an existence proof that complex self-replicating systems of atoms are possible. Despite the multitude of environments in the universe since the big bang, there is precisely zero evidence subatomic particles have ever formed structures more complicated than those we observe today.) I will not further belabour the arguments in this vintage book.
It is an entertaining read and will certainly expand your horizons as to what is possible and introduce you to visions of the future you almost certainly have never contemplated. But for a view of the future which is simultaneously more ambitious and plausible, I recommend The Singularity Is Near.

If engineering at the nanometer scale (nanotechnology) is practical in the year 2032, then engineering at the picometer scale should be practical about forty years later (because 5.6⁴ ≈ 1,000), or in the year 2072. Engineering at the femtometer (one thousandth of a trillionth of a meter, also referred to as a quadrillionth of a meter) scale should be feasible, therefore, by around the year 2112. Thus I am being a bit conservative to say that femtoengineering is controversial in 2099.
Nanoengineering involves manipulating individual atoms. Picoengineering will involve engineering at the level of subatomic particles (e.g., electrons). Femtoengineering will involve engineering inside a quark. This should not seem particularly startling, as contemporary theories already postulate intricate mechanisms within quarks.
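The arithmetic in the quoted passage can be checked directly: a shrink rate of about 5.6× per decade gives a factor of roughly 1,000 over four decades, one full step down the SI length-scale ladder (nanometer → picometer → femtometer). A quick sketch of that extrapolation (the 5.6× rate and the 2032 starting date are Kurzweil's assumptions, not established fact):

```python
import math

# Kurzweil's extrapolation: the engineering length scale shrinks ~5.6x per
# decade, so four decades give 5.6**4 ~ 1,000 -- one SI prefix step down
# (nanometer -> picometer -> femtometer).
SHRINK_PER_DECADE = 5.6  # the book's assumed rate, not an established fact

def decades_per_1000x(rate=SHRINK_PER_DECADE):
    """Decades needed to shrink the length scale by a factor of 1,000."""
    return math.log(1000) / math.log(rate)

steps = round(decades_per_1000x())                    # ~4 decades per 1,000x
print(f"5.6^4 = {SHRINK_PER_DECADE**4:.0f}")          # 983, i.e. ~1,000
print(f"picoengineering:  ~{2032 + 10 * steps}")      # ~2072
print(f"femtoengineering: ~{2032 + 20 * steps}")      # ~2112
```

The straight line on semi-log paper is doing all the work here, which is precisely the reviewer's objection: the arithmetic is impeccable and the physics is absent.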
Ahhhh…now I understand! Seriously, much of this book is tough going, as technical in some sections as scholarly publications in the field of general relativity, and readers expecting a popular account of Penrose's proposal may not make it to the payoff at the end. For those who thirst for even more rigour there are two breathtakingly forbidding appendices. The Kindle edition is excellent, with the table of contents, notes, cross-references, and index linked just as they should be.

We now ask for the analogues of F and J in the case of the gravitational field, as described by Einstein's general theory of relativity. In this theory there is a curvature to space-time (which can be calculated once one knows how the metric g varies throughout the space-time), described by a [0 4]-tensor R, called the Riemann(-Christoffel) tensor, with somewhat complicated symmetries resulting in R having 20 independent components per point. These components can be separated into two parts, constituting a [0 4]-tensor C, with 10 independent components, called the Weyl conformal tensor, and a symmetric [0 2]-tensor E, also with 10 independent components, called the Einstein tensor (this being equivalent to a slightly different [0 2]-tensor referred to as the Ricci tensor[2.57]). According to Einstein's field equations, it is E that provides the source to the gravitational field. (p. 129)
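For readers keeping score at home, the component counting in the quoted passage is the standard bookkeeping for curvature tensors in four dimensions; here is a sketch of that textbook tally (my summary, not Penrose's own derivation):

```latex
% Independent components of the curvature tensors in n = 4 dimensions.
% The Riemann tensor has n^2(n^2 - 1)/12 components, which split into the
% trace-free Weyl part and the Ricci/Einstein part:
\begin{align*}
  R_{abcd} &: \tfrac{1}{12}\, n^2 (n^2 - 1) = \tfrac{16 \cdot 15}{12} = 20, \\
  C_{abcd} &: 10 \quad \text{(Weyl tensor, the trace-free part)}, \\
  E_{ab} = R_{ab} - \tfrac{1}{2} R\, g_{ab} &: \tfrac{1}{2}\, n (n + 1) = 10
  \quad \text{(Einstein tensor, equivalent to the Ricci tensor)}.
\end{align*}
```

The 20 = 10 + 10 split is exactly the separation Penrose describes: the ten Einstein components are fixed locally by the matter sources, while the ten Weyl components carry the “free” gravitational field (tidal distortion and gravitational waves).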
2012 |
Hunger is a command, not a request. Hunger is looking at your dog curled up sleeping on the rug and thinking, “I wonder how much meat there is beneath all that fur?”

Here, the author explores hunger both at the level of biochemistry (where you may be amazed how much has been learned in the past few decades about how the body regulates appetite, how metabolism falls back from glucose supplied by food to ketone bodies produced from stored fat, and how the fraction of energy drawn from muscle mass differs between lean and obese individuals and varies over time) and in the historical and social context of hunger. We encounter mystics and saints who fast to discover a higher wisdom or their inner essence; political activists (including Gandhi) willing to starve themselves to the point of death to shame their oppressors into capitulation; peoples whose circumstances have created a perverse (to us, the well-fed) culture built around hunger as the usual state of affairs; volunteers who participated in projects to explore the process of starvation and means to rescue those near death from its consequences; doctors in the Warsaw ghetto who documented the effects of starvation in patients they lacked the resources to save; and the millions of victims of famine in the last two centuries. In discussing famine, the author appears uncomfortable with the fact, reluctantly alluded to, that famine in the modern era is almost never the result of a shortage of food, but rather the consequence of coercive government either constraining the supply of food or blocking its delivery to those in need. Even in the great Irish famine of the 1840s, Ireland continued to export food while its population starved. (The author argues that even had the exports been halted, the food would have been inadequate to feed the Irish, but even so, it could have saved some, and this is before considering potential food shipments from the rest of the “Union” to a starving Ireland.
[Pardon me if this gets me going—ancestors….]) Certainly today it is beyond dispute that the world produces far more food (at least as measured by calories and principal nutrients) than is needed to feed its population. Consequently, whenever there is a famine, the cause is not a shortage of food but rather an interruption in its delivery to those who need it. While aid programs can help to alleviate crises, and “re-feeding” therapy can rescue those on the brink of death by hunger, the problem will persist until the dysfunctional governments that starve their people and loot aid intended for them are eliminated. Given how those who've starved in recent decades have usually been disempowered minorities, perhaps it would be more effective in the long term to arm them than to feed them. You will not find such gnarly sentiments in this book, which is very much aligned with the NGO view that famine due to evil coercive dictatorships is just one of those things that happens, like hurricanes. That said, I cannot recommend this book too highly. The biochemical view of hunger and energy storage and release in times of feast and famine alone is worth the price of admission, and the exploration of hunger in religion, politics, and even entertainment puts it over the top. If you're dieting, this may not be the book to read, but on the other hand, maybe it's just the thing. The author is the daughter of Milburn G. “Mel” Apt, the first human to fly faster than Mach 3, who died when his X-2 research plane crashed after its record-setting flight.
Fill full the mouth of Famine
And bid the sickness cease;
And when your goal is nearest
The end for others sought,
Watch Sloth and heathen Folly
Bring all your hope to nought.

When will policy makers become as wise as the mindless mechanisms of biology? When an irritant invades an organism and it can't be eliminated, the usual reaction is to surround it with an inert barrier which keeps it from causing further harm. “Nation building” is folly; far better to bomb them if they misbehave, then build a wall around the whole godforsaken place and bomb them again if any of them get out and cause any further mischief. Call it “biomimetic foreign policy”—encyst upon it!
Propulsion chemists are a rare and special breed. As Isaac Asimov (who worked with the author during World War II) writes in a short memoir at the start of the book:
Now, it is clear that anyone working with rocket fuels is outstandingly mad. I don't mean garden-variety crazy or merely raving lunatic. I mean a record-shattering exponent of far-out insanity.
There are, after all, some chemicals that explode shatteringly, some that flame ravenously, some that corrode hellishly, some that poison sneakily, and some that stink stenchily. As far as I know, though, only liquid rocket fuels have all these delightful properties combined into one delectable whole.
And yet amazingly, as head of propulsion research at the Naval Air Rocket Test Station and its successor organisation for seventeen years, the author not only managed to emerge with all of his limbs and digits intact, but his laboratory also never suffered a single lost-time mishap. This, despite routinely working with substances such as:
Chlorine trifluoride, ClF3, or “CTF” as the engineers insist on calling it, is a colorless gas, a greenish liquid, or a white solid. … It is also quite probably the most vigorous fluorinating agent in existence—much more vigorous than fluorine itself. … It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water—with which it reacts explosively. It can be kept in some of the ordinary structural metals—steel, copper, aluminum, etc.—because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminum keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes. (p. 73)
And ClF3 is pretty benign compared to some of the other dark corners of chemistry into which their research led them. There is extensive coverage of the quest for a high energy monopropellant, the discovery of which would greatly simplify the design of turbomachinery and injectors, and eliminate problems with differential thermal behaviour and mixture ratio over the operating range of an engine which used it. However, the author reminds us:
A monopropellant is a liquid which contains in itself both the fuel and the oxidizer…. But! Any intimate mixture of a fuel and an oxidizer is a potential explosive, and a molecule with one reducing (fuel) end and one oxidizing end, separated by a pair of firmly crossed fingers, is an invitation to disaster. (p. 10)
One gets an excellent sense of just how empirical all of this was. For example, in the quest for “exotic fuel” (which the author defines as “It's expensive, it's got boron in it, and it probably doesn't work.”), straightforward inorganic chemistry suggested that burning a borane with hydrazine, for example:
2B₅H₉ + 5N₂H₄ ⟶ 10BN + 19H₂
would be a storable propellant with a specific impulse (Isp) of 326 seconds with a combustion chamber temperature of just 2000°K. But this reaction and the calculation of its performance assumes equilibrium conditions and, apart from a detonation (something else with which propulsion chemists are well acquainted), there are few environments as far from equilibrium as a rocket combustion chamber. In fact, when you try to fire these propellants in an engine, you discover the reaction products actually include elemental boron and ammonia, which result in disappointing performance. Check another one off the list.
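As a quick sanity check, the idealised equation above does balance atom by atom, even if the combustion chamber declines to honour it; a throwaway sketch:

```python
# Verify atom balance for 2 B5H9 + 5 N2H4 -> 10 BN + 19 H2.
from collections import Counter

def atoms(coeff, formula):
    """Atom counts for `coeff` molecules of an {element: count} formula."""
    return Counter({el: coeff * n for el, n in formula.items()})

left  = atoms(2, {"B": 5, "H": 9}) + atoms(5, {"N": 2, "H": 4})
right = atoms(10, {"B": 1, "N": 1}) + atoms(19, {"H": 2})

print(left)   # Counter({'H': 38, 'B': 10, 'N': 10})
assert left == right, "equation does not balance"
```

Equilibrium thermochemistry is satisfied with this tidy ledger; the actual engine, producing elemental boron and ammonia instead, is not.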
Other promising propellants ran afoul of economic considerations and engineering constraints. The lithium, fluorine, and hydrogen tripropellant system has been measured (not theoretically calculated) to have a vacuum Isp of an astonishing 542 seconds at a chamber pressure of only 500 psi and temperature of 2200°K. (By comparison, the space shuttle main engine has a vacuum Isp of 452.3 sec. with a chamber pressure of 2994 psi and temperature of 3588°K; a nuclear thermal rocket would have an Isp in the 850–1000 sec. range. Recall that the relationship between Isp and mass ratio is exponential.) This level of engine performance makes a single stage to orbit vehicle not only feasible but relatively straightforward to engineer. Unfortunately, there is a catch or, to be precise, a list of catches. Lithium and fluorine are both relatively scarce and very expensive in the quantities which would be required to launch from the Earth's surface. They are also famously corrosive and toxic, and then you have to cope with designing an engine in which two of the propellants are cryogenic fluids and the third is a metal which is solid below 180°C. In the end, the performance (which is breathtaking for a chemical rocket) just isn't worth the aggravation.
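The exponential relationship is the Tsiolkovsky rocket equation, Δv = Isp · g₀ · ln(mass ratio); equivalently, the mass ratio required for a fixed Δv shrinks exponentially as Isp grows. A sketch comparing the two engines' figures from the text (the 9.3 km/s surface-to-orbit Δv budget is an illustrative round number I've assumed, not a value from the book):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Initial/final mass ratio from the Tsiolkovsky rocket equation,
    delta_v = isp * G0 * ln(mass_ratio), solved for the mass ratio."""
    return math.exp(delta_v / (isp * G0))

DV = 9300.0  # illustrative surface-to-orbit delta-v budget, m/s
for name, isp in [("SSME (LOX/LH2)", 452.3), ("Li/F2/H2 tripropellant", 542.0)]:
    print(f"{name}: mass ratio {mass_ratio(DV, isp):.1f}")
```

With these numbers the required mass ratio drops from roughly 8 to under 6, which is why the tripropellant system makes single stage to orbit look straightforward on paper, corrosive cryogens and molten lithium notwithstanding.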
In the final chapter, the author looks toward the future of liquid rocket propulsion and predicts, entirely correctly from a perspective four decades removed, that chemical propulsion was likely to continue to use the technologies upon which almost all rockets had settled by 1970: LOX/hydrocarbon for large first stages, LOX/LH2 for upper stages, and N2O4/hydrazine for storable missiles and in-space propulsion. In the end economics won out over the potential performance gains to be had from the exotic (and often far too exciting) propellants the author and his colleagues devoted their careers to exploring. He concludes as follows.
There appears to be little left to do in liquid propellant chemistry, and very few important developments to be anticipated. In short, we propellant chemists have worked ourselves out of a job. The heroic age is over.
But it was great fun while it lasted. (p. 192)
Now if you've decided that you just have to read this book and innocently click on the title above to buy a copy, you may be at as much risk of a heart attack as those toiling in the author's laboratory. This book has been out of print for decades and is considered such a classic, both for its unique coverage of the golden age of liquid propellant research, comprehensive description of the many avenues explored and eventually abandoned, hands-on chemist-to-chemist presentation of the motivation for projects and the adventures in synthesising and working with these frisky molecules, not to mention the often laugh-out-loud writing, that used copies, when they are available, sell for hundreds of dollars. As I am writing these remarks, seven copies are offered at Amazon at prices ranging from US$300–595. Now, this is a superb book, but it isn't that good!
If, however, you type the author's name and the title of the book into an Internet search engine, you will probably quickly come across a PDF edition consisting of scanned pages of the original book. I'm not going to link to it here, both because I don't link to works which violate copyright as a matter of principle and because my linking to a copy of the PDF edition might increase its visibility and risk of being taken down. I am not one of those people who believes “information wants to be free”, but I also doubt John Clark would have wanted his unique memoir and invaluable reference to be priced entirely beyond the means of the vast majority of those who would enjoy and be enlightened by reading it. In the case of “orphaned works”, I believe the moral situation is ambiguous (consider: if you do spend a fortune for a used copy of an out of print book, none of the proceeds benefit the author or publisher in any way). You make the call.
When one treats 1,2,3-Trichloropropane with alkali and a little water the reaction is violent; there is a tendency to deposit the reaction product, the raw materials and the apparatus on the ceiling and the attending chemist. I solved this by setting up duplicate 12 liter flasks, each equipped with double reflux condensers and surrounding each with half a dozen large tubs. In practice, when the reaction “took off” I would flee through the door or window and battle the eruption with water from a garden hose. The contents flying from the flasks were deflected by the ceiling and collected under water in the tubs. I used towels to wring out the contents which separated, shipping the lower level to DuPont. They complained of solids suspended in the liquid, but accepted the product and ordered more. I increased the number of flasks to four, doubled the number of wash tubs and completed the new order. They ordered a 55 gallon drum. … (p. 127)

All of this was in the days before the EPA, OSHA, and the rest of the suffocating blanket of soft despotism descended upon entrepreneurial ventures in the United States that actually did things and made stuff. In the 1940s and '50s, when Gergel was building his business in South Carolina, he was free to adopt the “whatever it takes” attitude which is the quintessential ingredient for success in start-ups and small business. The flexibility and ingenuity which allowed Gergel not only to compete with the titans of the chemical industry but become a valued supplier to them is precisely what is extinguished by intrusive regulation, which accounts for why sclerotic dinosaurs are so comfortable with it. On the other hand, Max's experience with methyl iodide illustrates why some of these regulations were imposed:
There is no description adequate for the revulsion I felt over handling this musky smelling, high density, deadly liquid. As residue of the toxicity I had chronic insomnia for years, and stayed quite slim. The government had me questioned by Dr. Rotariu of Loyola University for there had been a number of cases of methyl bromide poisoning and the victims were either too befuddled or too dead to be questioned. He asked me why I had not committed suicide which had been the final solution for some of the afflicted and I had to thank again the patience and wisdom of Dr. Screiber. It is to be noted that another factor was our lack of a replacement worker. (p. 130)

Whatever it takes. This book was published by Pierce Chemical Company and was never, as best I can determine, assigned either an ISBN or Library of Congress catalogue number. I cite it above by its OCLC Control Number. The book is hopelessly out of print, and used copies, when available, sell for forbidding prices. Your only alternative to lay hands on a print copy is an inter-library loan, for which the OCLC number is a useful reference. (I hear members of the write-off generation asking, “What is this ‘library’ of which you speak?”) I found a scanned PDF edition in the library section of the Sciencemadness.org Web site; the scanned pages are sometimes a little gnarly around the bottom, but readable. You will also find the second volume of Gergel's memoirs, The Ageless Gergel, among the works in this collection.
[…] Merely continue as we are now: innovative technology discouraged by taxes, environmental impact statements, reports, lawsuits, commission hearings, delays, delays, delays; space research not carried out, never officially abandoned but delayed, stretched-out, budgets cut and work confined to the studies without hardware; solving the energy crisis by conservation, with fusion research cut to the bone and beyond, continued at level-of-effort but never to a practical reactor; fission plants never officially banned, but no provision made for waste disposal or storage so that no new plants are built and the operating plants slowly are phased out; riots at nuclear power plant construction sites; legal hearings, lawyers, lawyers, lawyers…
Can you not imagine the dream being lost? Can you not imagine the nation slowly learning to “do without”, making “Smaller is Better” the national slogan, fussing over insulating attics and devoting attention to windmills; production falling, standards of living falling, until one day we discover the investments needed to go to space would be truly costly, would require cuts in essentials like food —
A world slowly settling into satisfaction with less, until there are no resources to invest in That Buck Rogers Stuff?
I can imagine that.

As can we all, as now we are living it. And yet, and yet…. One consequence of the Three Lost Decades is that the technological vision and optimistic roadmap of the future presented in these essays is just as relevant to our predicament today as when they were originally published, simply because with a few exceptions we haven't done a thing to achieve them. Indeed, today we have fewer resources with which to pursue them, having squandered our patrimony on consumption and armies of rent-seekers, and having placed generations yet unborn in debt to fund our avarice. But for those who look beyond the noise of the headlines and the platitudes of politicians whose time horizon is limited to the next election, here is a roadmap for a true step farther out, in which the problems we perceive as intractable are not “managed” or “coped with”, but rather solved, just as free people have always done when unconstrained to apply their intellect, passion, and resources toward making their fortunes and, incidentally, creating wealth for all. This book is available only in electronic form for the Kindle as cited above, under the given ASIN. The ISBN of the original 1979 paperback edition is 978-0-441-78584-1. The formatting in the Kindle edition is imperfect, but entirely readable. As is often the case with Kindle documents, “images and tables hardest hit”: some of the tables take a bit of head-scratching to figure out, as the Kindle (or at least the iPad application which I use) particularly mangles multi-column tables. (I mean, what's with that, anyway? LaTeX got this perfectly right thirty years ago, in a manner even beginners could use, and it was pure public domain software anybody could adopt. Sigh—three lost decades….) Formatting quibbles aside, I'm as glad I bought and read this book as I was when I first bought it and read it all those years ago.
If you want to experience not just what the future could have been, then, but what it can be, now, here is an excellent place to start. The author's Web site is an essential resource for those interested in these big ideas, grand ambitions, and the destiny of humankind and its descendants.
There would have been thousands of deaths within days, tens of thousands within weeks, over a million within a month—many of those among people who would have been needed to keep the infrastructure from collapsing. Doctors, police, workers at power plants and sewage centers. [sic (sentence fragment)] The environment would have become so toxic that rescue workers couldn't have gotten into the area, and poisoned food and water would have added exponentially to the death toll. Airdrops of fresh supplies would have led to riots, more death. Silicon Valley would have been ravaged, all but destroying the U.S. computer industry.

Nonsense—a plausible satchel nuke of a size which Sara (admittedly a well-trained woman) could carry in a backpack would be something like the U.S. SADM, which weighed around 68 kg, more than most in-shape women weigh. The most common version of this weapon was based upon the W54 warhead, which had a variable yield from 10 tons to 1 kiloton. Assuming the maximum one kiloton yield, a detonation would certainly demolish the Golden Gate Bridge and cause extensive damage to unreinforced structures around the Bay, but the radiation effects wouldn't be remotely as severe as asserted; there would be some casualties among those downwind and in the fallout zone, but these would more likely number in the hundreds, spread over one or more decades after the detonation. The fact that the detonation occurred at the top of a tower taller than those used in most surface detonations at the Nevada Test Site and above water would further reduce fallout. Silicon Valley, which is almost 100 km south of the detonation site, would be entirely unaffected apart from Twitter outages due to #OMG tweets. The whole subplot about the “hydrazine-based rocket fuel” tanker crossing the bridge is silly: hydrazine is nasty stuff to be sure, but first of all it is a hypergolic liquid rocket fuel, not an “experimental solid rocket fuel”. (Duh—if it were solid, why would you transport it in a tanker?)
But apart from that, hydrazine is one of those molecules whose atoms really don't like being so close to one another, and given the slightest excuse will re-arrange themselves into a less strained configuration. Being inside a nuclear fireball is an excellent excuse to do so, hence the closer the tanker happened to be to the detonation, the less likely the dispersal of its contents would cause casualties for those downwind.
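Returning to the yield claims: the distance argument can be made quantitative with the standard cube-root scaling law for blast effects, under which the radius at which a given overpressure occurs grows only as the cube root of yield. A sketch (the 1.5 km reference radius at 20 kt for the chosen overpressure is an illustrative order-of-magnitude figure I've assumed, not a precise value from weapons-effects tables):

```python
def blast_radius(yield_kt, ref_radius_km=1.5, ref_yield_kt=20.0):
    """Radius for a fixed blast overpressure, via cube-root (W^(1/3)) scaling.

    ref_radius_km is an assumed, illustrative reference distance for the
    chosen overpressure at ref_yield_kt; only the scaling law itself,
    not the constant, is the point here.
    """
    return ref_radius_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

# A 1 kt satchel weapon vs. a 20 kt Hiroshima-class yield:
print(f"1 kt:  ~{blast_radius(1.0):.2f} km")   # ~0.55 km
print(f"20 kt: ~{blast_radius(20.0):.2f} km")  # 1.50 km
```

Whatever reference constant one picks, the cube-root law makes the reviewer's point: a 1 kt detonation damages structures over a kilometre or so, not a valley 100 km to the south.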
While Wally doodled on a legal pad as if he were heavily medicated, Oscar did most of the talking. “So, either we get rid of these cases and face financial ruin, or we march into federal court three weeks from Monday with a case that no lawyer in his right mind would try before a jury, a case with no liability, no experts, no decent facts, a client who's crazy half the time and stoned the other half, a client whose dead husband weighed 320 pounds and basically ate himself to death, a veritable platoon of highly paid and very skilled lawyers on the other side with an unlimited budget and experts from the finest hospitals in the country, a judge who strongly favors the other side, a judge who doesn't like us at all because he thinks we're inexperienced and incompetent, and, well, what else? What am I leaving out here, David?” “We have no cash for litigation expenses,” David said, but only to complete the checklist.
Americans OK advice affected an arrival assess attack bathe become breathe chaperone closed continuous counsel enemy's feet first foul from had hangar harm's hero holding host hostilely intelligence it's its let's morale nights not ordnance overheard pus rarefied scientists sent sights sure the their them they times were

When you come across an instance of “where” being used in place of “were”, you might put it down to the kind of fat finger we all commit from time to time, plus sloppy proofreading. But when it happens 13 times in 224 pages, you begin to suspect the author might not really comprehend the difference between the two. All of the characters, from special forces troops, emergency room nurses, senior military commanders, and the President of the United States, to Iranian nuclear scientists speak in precisely the same dialect of fractured grammar laced with malaprops. The author has his own eccentric ideas of which words should be capitalised, and applies them inconsistently. Each chapter concludes with a “news flash” and “economic news flash”, also in bizarro dialect, with the latter demonstrating the author is as illiterate in economics as he is in the English language. Then, in the last line of the novel, the reader is kicked in the teeth with something totally out of the blue. I'd like to call this book “eminently forgettable”, but I doubt I'll forget it soon. I have read a number of manuscripts by aspiring writers (as a savage copy editor and fact checker, authors occasionally invite me to have at their work, in confidence, before sending it for publication), but this is, by far, the worst I have encountered in my entire life. You may ask why I persisted in reading beyond the first couple of chapters. It's kind of like driving past a terrible accident on the highway—do you really not slow down and look?
Besides, I only review books I've finished, and I looked forward to this review as the only fun I could derive from this novel; consider this wave-off a public service for others who might stumble upon this piece of…fiction and be inclined to pick it up.
A libertarian is a person who believes that no one has the right, under any circumstances, to initiate force against another human being for any reason whatever; nor will a libertarian advocate the initiation of force, or delegate it to anyone else. Those who act consistently with this principle are libertarians, whether they realize it or not. Those who fail to act consistently with it are not libertarians, regardless of what they may claim. (p. 20)

The subsequent chapters sort out the details of what this principle implies for contentious issues such as war powers; torture; money and legal tender laws; abortion; firearms and other weapons; “animal rights”; climate change (I do not use scare quotes on this because climate change is real and has always happened and always will—it is the hysteria over anthropogenic contributions to an eternally fluctuating process driven mostly by the Sun which is a hoax); taxation; national defence; prohibition in all of its pernicious manifestations; separation of marriage, science, and medicine from the state; immigration; intellectual property; and much more. Smith's viewpoint on these questions is largely informed by Robert LeFevre, whose wisdom he had the good fortune to imbibe at a week-long seminar in 1972. (I encountered LeFevre just once, at a libertarian gathering in Marin County, California [believe it or not, such things exist, or at least existed] around 1983, and it was this experience that transformed me from a “nerf libertarian” who was prone to exclaiming “Oh, come on!” whilst reading Rothbard to the flinty variety who would go on to author the Evil Empires bumper sticker.) Sadly, Bob LeFevre is no longer with us, but if you wish to be inoculated with the burning fever of liberty which drove him and inspired those who heard him speak, this book is as close as you can come today to meeting him in person.
The naïve often confuse libertarians with conservatives: to be sure, libertarians often wish to impede “progressives” whose agenda amounts to progress toward serfdom and wish, at the least, for a roll-back of the intrusions upon individual liberty which were the hallmark of the twentieth century. But genuine libertarianism, not the nerf variety, is a deeply radical doctrine which calls into question the whole leader/follower, master/slave, sovereign/subject, and state/citizen structure which has characterised human civilisation ever since hominids learned to talk and the most glib of them became politicians (“Put meat at feet of Glub and Glub give you much good stuff”). And here is where I both quibble with and enthusiastically endorse the author's agenda. The quibble is that I fear that our species, formed by thousands of generations of hunter/gatherer and agricultural experience, has adapted, like other primates, to a social structure in which most individuals delegate decision making and even entrust their lives to “leaders” chosen by criteria deeply wired into our biology and not remotely adapted to the challenges we face today and in the future. (Hey, it could be worse: peacocks select for the most overdone tail—it's probably a blessing naked apes don't have tails—imagine trying to fit them all into a joint session of Congress.) The endorsement is that I don't think it's possible to separate the spirit of individualism which is at the heart of libertarianism from the frontier. There were many things which contributed to the first American war of secession and the independent republics which emerged from it, but I believe their unique nature was in substantial part due to the fact that they were marginal settlements on the edge of an unexplored and hostile continent, where many families were entirely on their own and on the front lines, confronted by the vicissitudes of nature and crafty enemies.
Thomas Jefferson worried that as the population of cities grew compared to that of the countryside, the ethos of self-sufficiency would be eroded and supplanted by dependency, and that this corruption and reliance upon authority founded, at its deepest level, upon the initiation of force, would subvert the morality upon which self-government must ultimately rely. In my one encounter with Robert LeFevre, he disdained the idea that “maybe if we could just get back to the Constitution” everything would be fine. Nonsense, he said: to a substantial degree the Constitution is the problem—after all, look at how it's been “interpreted” to permit all of the absurd abrogations of individual liberty and natural law since its dubious adoption in 1789. And here, I think the author may put a bit too much focus on documents (which can be, have been, and forever will be twisted by lawyers into things they were never meant to say), and too little on the frontier. What follows is both a deeply pessimistic and unboundedly optimistic view of the human and transhuman prospect. I hope I don't lose you in the loop-the-loop. Humans, as presently constituted, have wired-in baggage which renders most of us vulnerable to glib forms of persuasion by “leaders” (who are simply those more talented than others in persuasion). The more densely humans are packed, and the greater the communication bandwidth available to them (in particular, one-to-many media), the more vulnerable they are to such “leadership”. Individual liberty emerges in frontier societies: those where each person and each family must be self-sufficient, without any back-up other than their relations to neighbours, but with an unlimited upside in expanding the human presence into new territory. The old America was a frontier society; the new America is a constrained society, turning inward upon itself and devouring its best to appease its worst.
So, I'm not sure this or that amendment to a document which is largely ignored will restore liberty in an environment where a near-majority of the electorate receive net benefits from the minority who pay most of the taxes. The situation in the United States, and on Earth, may well be irreversible. But the human and posthuman destiny is much, much larger than that. Perhaps we don't need a revision of governance documents as much as the opening of a frontier. Then people will be able to escape the stranglehold where seven eighths of all of their work is confiscated by the thugs who oppress them and instead use all of their sapient faculties to their own ends. As a sage author once said:
Freedom, immortality, and the stars!

Works for me. Free people expand at a rate which asymptotically approaches the speed of light. Coercive government and bureaucracy grow logarithmically, constrained by their own internal dissipation. We win; they lose. In the Kindle edition the index is just a list of page numbers. Since the Kindle edition includes real page numbers, you can type in the number from the index, but that's not as convenient as when index citations are linked directly to references in the text.
The gear inside the field station CONEX included a pair of R-390A HF receivers, two Sherwood SE-3 synchronous detectors, four hardwired demodulators, a half dozen multiband scanners, several digital audio recorders, two spectrum analyzers, and seven laptop computers that were loaded with demodulators, digital recorders, and decryption/encryption software.

Does this really move the plot along? Is anybody other than a wealthy oilman likely to be able to put together such a rig for signal intelligence and traffic analysis? And if not, why do we need to know all of this, as opposed to simply describing it as a “radio monitoring post”? This is not a cherry-picked example; there are numerous other indulgences in gear geekdom. The novel covers the epic journey, largely on foot, of Ken and Terry Layton from apocalyptic Crunch Chicago, where they waited too long to get out of Dodge, toward the retreat their group had prepared in the American redoubt, and the development and exploits of an insurgency against the so-called “Provisional Government” headquartered in Fort Knox, Kentucky, which is a thinly-disguised front for subjugation of the U.S. to the United Nations and looting the population. (“Meet the new boss—same as the old boss!”) Other subplots update us on the lives of characters we've met before, and provide a view of how individuals and groups try to self-organise back into a lawful and moral civil society while crawling from the wreckage of corruption and afflicted by locusts with weapons. We don't do stars on reviews here at Fourmilab—I'm a word guy—but I do occasionally indulge in sports metaphors. I consider the first two novels home runs: if you're remotely interested in the potential of societal collapse and the steps prudent people can take to protect themselves and those close to them from its sequelæ, they are must-reads. Let's call this novel a solid double bouncing between the left and centre fielders.
If you've read the first two books, you'll certainly want to read this one. If you haven't, don't start here, but begin at the beginning. This novel winds up the story, but it does so in an abrupt way which I found somewhat unconvincing—it seemed like the author was approaching a word limit and had to close it out in however sketchy a manner. There are a few quibbles, but aren't there always?
I yanked the cord and the world of triangular spaceships and monochromatic death-rocks collapsed to a single white point. The universe was supposed to end like that, if there was enough mass and matter or something. It expands until gravity hauls everything back in; the collapse accelerates until everything that was once scattered higgily-jiggity over eternity is now summed up in a tiny white infinitely dense dot, which explodes anew into another Big Bang, another universe, another iteration of existence with its own rules, a place where perhaps Carter got a second term and Rod Stewart did not decide to embrace disco.

I would read this novel straight through, cover-to-cover. There are many characters who interact in complicated ways, and if you set it aside due to other distractions and pick it up later, you may have to do some backtracking to get back into things. There are a few copy editing errors (I noted 7), but they don't distract from the story. At this writing, this book is available only as a Kindle e-book; a paperback edition is expected in the near future. Here are the author's comments on the occasion of the book's publication. This is the first in what James Lileks intends to be a series of between three and five novels, all set in Minneapolis in different eras, with common threads tying them together. I eagerly await the next.
2013 |
Has the LHC found the Higgs? Probably—the announcement on July 4th, 2012 by the two detector teams reported evidence for a particle with properties just as expected for the Higgs, so if it turned out to be something else, it would be a big surprise (but then Nature never signed a contract with scientists not to perplex them with misdirection). Unlike many popular accounts, this book looks beneath the hood and explores just how difficult it is to tease evidence for a new particle from the vast spray of debris that issues from particle collisions. It isn't like a little ball with an “h” pops out and goes “bing” in the detector: in fact, a newly produced Higgs particle decays in about 10⁻²² seconds, even faster than assets entrusted to the management of Goldman Sachs. The debris which emerges from the demise of a Higgs particle isn't all that different from that produced by many other standard model events, so the evidence for the Higgs is essentially a “bump” in the rate of production of certain decay signatures over that expected from the standard model background (sources expected to occur in the absence of the Higgs). These, in turn, require a tremendous amount of theoretical and experimental input, as well as massive computer calculations to evaluate; once you begin to understand this, you'll appreciate that the distinction between theory and experiment in particle physics is more fluid than you might have imagined.
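The “bump over background” idea lends itself to a toy calculation. The sketch below is my own illustration, not anything from the book or the actual LHC analyses: all numbers are invented, and `naive_significance` is a hypothetical helper that ignores the likelihood fits, systematic uncertainties, and look-elsewhere corrections a real analysis requires.

```python
import math

def naive_significance(observed: int, expected_background: float) -> float:
    """Crude 'bump' significance for a counting experiment: the excess
    of observed events over the expected standard-model background,
    in units of the background's Poisson standard deviation."""
    excess = observed - expected_background
    return excess / math.sqrt(expected_background)

# Invented, purely illustrative numbers: 1250 candidate events in a
# mass window where the background-only expectation is 1100.
z = naive_significance(1250, 1100)
print(f"{z:.2f} sigma")  # prints "4.52 sigma"
```

A real discovery claim, such as the 5σ threshold invoked for the Higgs, comes from far more sophisticated statistical machinery, but the intuition is the same: the taller the bump relative to the square root of the background, the less likely it is to be a mere fluctuation.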
This book is a superb example of popular science writing, and its author has distinguished himself as a master of the genre. He doesn't pull any punches: after reading this book you'll understand, at least at a conceptual level, broken symmetries, scalar fields, particles as excitations of fields, and the essence of quantum mechanics (as given by Aatish Bhatia on Twitter), “Don't look: waves. Look: particles.”

The dark side of a man's mind seems to be a sort of antenna tuned to catch gloomy thoughts from all directions. I found it so with mine. That was an evil night. It was as if all the world's vindictiveness were concentrated upon me as upon a personal enemy. I sank to depths of disillusionment which I had not believed possible. It would be tedious to discuss them. Misery, after all, is the tritest of emotions.

Here we have a U.S. Navy Rear Admiral, Medal of Honor winner, as gonzo journalist in the Antarctic winter—extraordinary. Have any other great explorers written so directly from the deepest recesses of their souls? Byrd's complexity deepens further as he confesses to fabricating reports of his well-being in radio reports to Little America, intended, he says, to prevent them from launching a rescue mission which he feared would end in failure and the deaths of those who undertook it. And yet Byrd's increasingly bizarre communications eventually caused such a mission to be launched, and once it was, his diary pinned his entire hope upon its success. If you've ever imagined yourself first somewhere, totally alone and living off the supplies you've brought with you: in orbit, on the Moon, on Mars, or beyond, here is a narrative of what it's really like to do that, told with brutal honesty by somebody who did. Admiral Byrd's recounting of his experience is humbling to any who aspire to the noble cause of exploration.
The undulating blue-green light writhing behind her like a forest of tentacles the roar of the surf like the sigh of some great beached and expiring sea animal, seemed to press her against the glass reality-interface like a bubble being forced up by decay-gas pressure from the depths of an oily green swamp pool. She felt the weight, the pressure of the whole room pushing behind her as if the blind green monsters that lurked in the most unknowable pits in the ass-end of her mind were bubbling up from the depths and elbowing her consciousness out of her own skull.

Back in the day, we'd read something like this and say, “Oh, wow”. Today, many readers may deem such prose stylings as quaint as those who say “Oh, wow”. This novel is a period piece. Reading it puts you back into the mindset of the late 1960s, when few imagined that technologies already in nascent form would destroy the power of one-to-many media oligopolies, and it was wrong in almost all of its extrapolation of the future. If you read it then (as I did) and thought it was a masterpiece (as I did), it may be worth a second glance to see how far we've come.
Following a slim lead on Rickman, Rapp finds himself walking into a simultaneous ambush by both an adversary from his past and crooked Kabul cops. Rapp ends up injured and on the sidelines. Meanwhile, another CIA man in Afghanistan vanishes, and an ambitious FBI deputy director arrives on the scene with evidence of massive corruption in the CIA clandestine service. CIA director Irene Kennedy begins to believe that a coordinated operation must be trying to destroy her spook shop, one of such complexity that it is far beyond the capabilities of the Taliban, and turns her eyes toward “ally” Pakistan. A shocking video is posted on a jihadist Web site, which makes getting to the bottom of the enigma an existential priority for the CIA. Rapp needs to get back into the game and start following the few leads that exist. This is a well-crafted thriller that will keep you turning the pages. It is somewhat lighter on the action (although there is plenty) and leans more toward the genre of espionage fiction; I think Flynn has been evolving in that direction in the last several books. There are some delightful characters, good and evil. Although she only appears in a few chapters, you will remember four foot eleven inch Air Force Command Master Sergeant Shiela Sanchez long after you put down the novel. There is a fundamental challenge in writing a novel about a CIA agent set in contemporary Afghanistan which the author struggles with here and never fully overcomes. The problem is that the CIA, following orders from its political bosses, is doing things that don't make any sense in places where the U.S. doesn't have any vital interests or reason to be present. Flynn has created a workable thriller around these constraints, but to this reader it just can't be as compelling as saving the country from the villains and threats portrayed in the earlier Mitch Rapp novels.
Here, Rapp is doing his usual exploits, but in service of a mission which is pointless at best and in all likelihood counterproductive.

“You're a bully and a piece of shit and you're the kind of guy who I actually enjoy killing. Normally, I don't put a lot of thought into the people I shoot, but you fall into a special category. I figure I'd be doing the human race a favor by ending your worthless life. Add to that the fact that I'm in a really bad mood. In fact I'm in such a shitty mood that putting a bullet in your head might be the only thing that could make me feel better.”
… “In the interest of fairness, though, I suppose I should give you a chance to convince me otherwise.” (p. 17)
A search party found the bodies of Scott and the two companions who died with him in the tent (the other two members of the polar party had died earlier on the return journey; their remains were never found). His journals were found with him and, when returned to Britain, were prepared for publication and proved a sensation. Amundsen's priority was almost forgotten in the English-speaking world, eclipsed by Scott's first-hand account of audacious daring, meticulous planning, heroic exertion, and dignity in the face of death. A bewildering variety of editions of Scott's journals were published over the years; they are described in detail and their differences collated in this Oxford World's Classics edition. In particular, Scott's original journals contained very candid and often acerbic observations about members of his expedition and other explorers, particularly Shackleton. These were elided or toned down in the published copies of the journals. In this edition, the published text is used, but the original manuscript text appears in an appendix. Scott was originally considered a hero, then was subjected to a revisionist view that deemed him ill-prepared for the expedition and distracted by peripheral matters such as a study of the embryonic development of emperor penguins as opposed to Amundsen's single-minded focus on a dash to the Pole. The pendulum has now swung back somewhat, and a careful reading of Scott's own journals seems, at least to this reader, to support this more balanced view.
Yes, in some ways Scott's expedition seems amazingly amateurish (I mean, if you were planning to ski across the ice cap, wouldn't you learn to ski before you arrived in Antarctica, rather than bring along a Norwegian to teach you after you arrived?), but ultimately Scott's polar party died due to a combination of horrific weather (present-day estimates are that only one year in sixteen has temperatures as low as those Scott experienced on the Ross Ice Shelf) and an equipment failure: leather washers on cans of fuel failed in the extreme temperatures, which caused loss of fuel Scott needed to melt ice to sustain the party on its return. And yet the same failure had been observed during Scott's 1901–1904 expedition, and nothing had been done to remedy it. The record remains ambiguous and probably always will. The writing, especially when you consider the conditions under which it was done, makes you shiver. At the Pole:

The Pole. Yes, but under very different circumstances from those expected. … Great God! this is an awful place and terrible enough for us to have laboured to it without the reward of priority.

Now that's an explorer. And the final entry in his journal:

I do not think we can hope for any better things now. We shall stick it out to the end, but we are getting weaker, of course, and the end cannot be far. It seems a pity, but I do not think I can write more.
R. Scott.
For God's sake look after our people.

and from his “Message to the Public” written shortly before his death:

We took risks, we knew we took them; things have come out against us, and therefore we have no cause for complaint, but bow to the will of Providence, determined still to do our best to the last.
…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

In addition to the Moon program, there is extensive coverage of the development of automated rendezvous and docking and the long duration orbital station programs (Almaz, Salyut, and Mir). There is also an enlightening discussion, building on Chertok's career focus on control systems, of the challenges in integrating humans and automated systems into the decision loop and coping with off-nominal situations in real time. I could go on and on, but there is so much to learn from this narrative, I'll just urge you to read it. Even if you are not particularly interested in space, there is much experience and wisdom to be gained from it which are applicable to all kinds of large complex systems, as well as insight into how things were done in the Soviet Union. It's best to read Volume 1 (May 2012), Volume 2 (August 2012), and Volume 3 (December 2012) first, as they will introduce you to the cast of characters and the events which set the stage for those chronicled here. As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available. A commercial Kindle edition is available which is much better produced than the Kindle editions of the first three volumes. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They're free. The original Russian edition is available online.
…in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

Imagine a bureaucracy in which the Iron Law has been working inexorably since the Roman Empire. The author has covered the Vatican for the Catholic News Service for the last thirty years. He has travelled with popes and other Vatican officials to more than sixty countries and has developed his own sources within a Vatican which is simultaneously opaque to an almost medieval degree in its public face, yet leaks like a sieve as factions try to enlist journalists in advancing their agendas. In this book he uses his access to provide a candid look inside the Vatican, at a time when the church is in transition and crisis. He begins with a peek inside the mechanics of the conclave which chose Pope Benedict XVI: from how the black or white smoke is made to how the message indicating the selection of a new pontiff is communicated (or not) to the person responsible for ringing the bell to announce the event to the crowds thronging St Peter's Square. There is a great deal of description, bordering on gonzo, of the reality of covering papal visits to various countries: in summary, much of what you read from reporters accredited to the Vatican comes from their watching events on television, just as you can do yourself. The author does not shy from controversy.
He digs deeply into the sexual abuse scandals and cover-up which rocked the church, the revelations about the founder of the Legion of Christ, the struggle between the traditionalists of the Society of St Pius X and supporters of the Vatican II reforms in Rome, and the battle over the beatification of Pope Pius XII. On the lighter side, we encounter the custodians of Latin, including the Vatican Bank ATM which displays its instructions in Latin: “Inserito scidulam quaeso ut faciundum cognoscas rationem”. This is an enlightening look inside one of the most influential, yet least understood, institutions in what remains of Western civilisation. On the event of the announcement of the selection of Pope Francis, James Lileks wrote:
…if you'd turned the sound down on the set and shown the picture to Julius Cæsar, he would have smiled broadly. For the wrong reasons, of course—his order did not survive in its specific shape, but in another sense it did. The architecture, the crowds, the unveiling would have been unmistakable to someone from Cæsar's time. They would have known exactly what was going on.

Indeed—the Vatican gets ceremony. What is clear from this book is that it doesn't get public relations in an age where the dissemination of information cannot be controlled, and that words, once spoken, cannot be taken back, even if a “revised and updated” transcript of them is issued subsequently by the bureaucracy. In the Kindle edition the index cites page numbers in the hardcover print edition which are completely useless since the Kindle edition does not contain real page numbers.
If the idea is accepted that the world's resources are fixed with only so much to go around, then each new life is unwelcome, each unregulated act or thought is a menace, every person is fundamentally the enemy of every other person, and each race or nation is the enemy of every other race or nation. The ultimate outcome of such a worldview can only be enforced stagnation, tyranny, war, and genocide.

This is a book which should have an impact, for the better, as great as Silent Spring had for the worse. But so deep is the infiltration of the anti-human ideologues into the cultural institutions that you'll probably never hear it mentioned except here and in similar venues which cherish individual liberty and prosperity.
No offence, lad, but ye doan't 'alf ga broon. Admit it, noo. Put a dhoti on ye, an' ye could get a job dishin 'oot egg banjoes at Wazir Ali's. Any roads, w'at Ah'm sayin' is that if ye desert oot 'ere — Ah mean, in India, ye'd 'ev to be dooally to booger off in Boorma — the ridcaps is bound to cotch thee, an' court-martial gi'es thee the choice o' five years in Teimulghari or Paint Joongle, or coomin' oop t'road to get tha bollicks shot off. It's a moog's game. (p. 71)

A great deal of the text is dialogue in dialect, and if you find that difficult to get through, it may be rough going. I usually dislike reading dialect, but agree with the author that if it had been rendered into standard English the whole flavour of his experience would have been lost. Soldiers swear, and among Cumbrians profanity is as much a part of speech as nouns and verbs; if this offends you, this is not your book. This is one of the most remarkable accounts of infantry combat I have ever read. Fraser was a grunt—he never rose above the rank of lance corporal during the events chronicled in the book and usually was busted back to private before long. The campaign in Burma was largely ignored by the press while it was underway and forgotten thereafter, but for those involved it was warfare at the most visceral level: combat harking back to the colonial era, fought by riflemen without armour or air support. Kipling of the 1890s would have understood precisely what was going on. On the ground, Fraser and his section had little idea of the larger picture or where their campaign fit into the overall war effort. All they knew was that they were charged with chasing the Japanese out of Burma and that “Jap” might be “half-starved and near naked, and his only weapon was a bamboo stake, but he was in no mood to surrender.” (p. 191) This was a time when the most ordinary men from Britain and the Empire fought to defend what they confidently believed was the pinnacle of civilisation from the forces of barbarism and darkness. While constantly griping about everything, as soldiers are wont to do, when the time came they shouldered their packs, double-checked their rifles, and went out to do the job. From time to time the author reflects on how far Britain, and the rest of the West, has fallen, “One wonders how Londoners survived the Blitz without the interference of unqualified, jargon-mumbling ‘counsellors’, or how an overwhelming number of 1940s servicemen returned successfully to civilian life without benefit of brain-washing.” (p. 89) Perhaps it helps that the author is a master of the historical novel: this account does a superb job of relating events as they happened and were perceived at the time without relying on hindsight to establish a narrative. While he doesn't abjure the occasional reflexion from decades later or reference to regimental history documents, for most of the account you are there—hot, wet, filthy, constantly assailed by insects, and never knowing whether that little sound you heard was just a rustle in the jungle or a Japanese patrol ready to attack with the savagery which comes when an army knows its cause is lost, evacuation is impossible, and surrender is unthinkable. But this is not all boredom and grim combat. The account of the air drop of supplies starting on p. 96 is one of the funniest passages I've ever read in a war memoir. Cumbrians will be Cumbrians!
The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution, or, on the Application of the Legislatures of two thirds of the several States, shall call a Convention for proposing Amendments,…

Of the 27 amendments adopted so far, all have been proposed by Congress—the state convention mechanism has never been used (although in some cases Congress proposed an amendment to preempt a convention when one appeared likely). As Levin observes, the state convention process completely bypasses Washington: a convention is called by the legislatures of two thirds of the states, and amendments it proposes are adopted if ratified by three quarters of the states. Congress, the president, and the federal judiciary are completely out of the loop. Levin proposes 11 amendments, all of which he argues are consistent with the views of the framers of the constitution and, in some cases, restore constitutional provisions which have been bypassed by clever judges, legislators, and bureaucrats. The amendments include term limits for all federal offices (including the Supreme Court); repeal of the direct election of senators and a return to their being chosen by state legislatures; super-majority overrides of Supreme Court decisions, congressional legislation, and executive branch regulations; restrictions on the taxing and spending powers (including requiring a balanced budget); reining in expansive interpretation of the commerce clause; requiring compensation for takings of private property; provisions to guard against voter fraud; and making it easier for the states to amend the constitution. In evaluating Levin's plan, the following questions arise:
Should the reader, at this point, be insufficiently forewarned as to what is coming, the author next includes the following acknowledgement:

The author is not an expert in the field of space travel. The author is only a storyteller.
Even though hundreds of hours of Internet research were done to write this story, many might find the scientific description of space travel lacking, or simply not 100 percent accurate. The fuels, gases, metals, and the results of using these components are as accurate as the author could describe them.
The Author would like to gratefully thank Alexander Wade (13), his son, for his many hours of research into nuclear reactors, space flight and astro-engineering to make this story as close to reality as possible for you the reader.

which also provides a foretaste of the screwball and inconsistent use of capitalisation which “you the reader” are about to encounter. It is tempting here to make a cheap crack about the novel's demonstrating a 13-year-old's grasp of science, technology, economics, business, political and military institutions, and human behaviour, but this would be to defame the many 13-year-olds I've encountered through E-mail exchanges resulting from material posted at Fourmilab which demonstrate a far deeper comprehension of such matters than one finds here. The book is so laughably bad I'm able to explain just how bad without including a single plot spoiler. Helping in this is the fact that to the extent that the book has a plot at all, it is so completely absurd that to anybody with a basic grasp of reality it spoils itself simply by unfolding. Would-be thrillers which leave you gasping for air as you laugh out loud are inherently difficult to spoil. The text is marred by the dozens of copy-editing errors one is accustomed to in self-published works, though more common in 99-cent specials than in books at this price point. Editing appears to have amounted to running a spelling checker over the text, leaving malapropisms and misused homonyms undetected; some of these can be amusing, such as the “iron drive motors” fueled by xenon gas. Without giving away significant plot details, I'll simply list things the author asks the reader to believe which are, shall we say, rather at variance with the world we inhabit. Keep in mind that this story is set in the very near future and includes thinly disguised characters based upon players in the contemporary commercial space market.
Not many people in this country believe the Communist thesis that it is the deliberate and conscious aim of American policy to ruin Britain and everything Britain stands for in the world. But the evidence can certainly be read that way. And if every time aid is extended, conditions are attached which make it impossible for Britain to ever escape the necessity of going back for still more aid, to be obtained with still more self-abasement and on still more crippling terms, then the result will certainly be what the Communists predict.

Dollar diplomacy had triumphed completely. The Bretton Woods system lurched from crisis to crisis and began to unravel in the 1960s when the U.S., exploiting its position of issuing the world's reserve currency, began to flood the world with dollars to fund its budget and trade deficits. Central banks, increasingly nervous about their large dollar positions, began to exchange their dollars for gold, causing large gold outflows from the U.S. Treasury which were clearly unsustainable. In 1971, Nixon “closed the gold window”. Dollars could no longer be redeemed in gold, and the central underpinning of Bretton Woods was swept away. The U.S. dollar was soon devalued against gold (although it hardly mattered, since it was no longer convertible), and before long all of the major currencies were floating against one another, introducing uncertainty in trade and spawning the enormous global casino which is the foreign exchange markets. A bizarre back-story to the creation of the postwar monetary system is that its principal architect, Harry Dexter White, was, during the entire period of its construction, a Soviet agent working undercover in his U.S. government positions, placing and promoting other agents in positions of influence, and providing a steady stream of confidential government documents to Soviet spies who forwarded them to Moscow.
This had been suspected since the 1930s, and White was identified by Communist Party USA defectors Whittaker Chambers and Elizabeth Bentley as a spy and agent of influence. While White was defended by the usual apologists, and many historical accounts try to blur the issue, mentions of White in the now-declassified Venona decrypts prove the case beyond a shadow of a doubt. Still, it must be said that White was a fierce and effective advocate at Bretton Woods for the U.S. position as articulated by Morgenthau and Roosevelt. Whatever other damage his espionage may have done, his pro-Soviet sympathies did not detract from his forcefulness in advancing the U.S. cause. This book provides an in-depth view of the protracted negotiations between Britain and the U.S., Lend-Lease and other war financing, and the competing visions for the postwar world which were decided at Bretton Woods. There is a tremendous amount of detail, and while some readers may find it difficult to assimilate, the economic concepts which underlie them are explained clearly and are accessible to the non-specialist. The demise of the Bretton Woods system is described, and a brief sketch of monetary history after its ultimate collapse is given. Whenever a currency crisis erupts into the news, you can count on one or more pundits or politicians to proclaim that what we need is a “new Bretton Woods”. Before prescribing that medicine, they would be well advised to learn just how the original Bretton Woods came to be, and how the seeds of its collapse were built in from the start. U.S. advocates of such an approach might ponder the parallels between the U.S. debt situation today and Britain's in 1944 and consider that should a new conference be held, they may find themselves sitting in the seats occupied by the British the last time around, with the Chinese across the table. In the Kindle edition the table of contents, end notes, and index are all properly cross-linked to the text.
2014 |
Avogadro Corporation is an American corporation specializing in Internet search. It generates revenue from paid advertising on search, email (AvoMail), online mapping, office productivity, etc. In addition, the company develops a mobile phone operating system called AvoOS. The company name is based upon Avogadro's Number, or 6 followed by 23 zeros.

Now what could that be modelled on? David Ryan is a senior developer on a project which Portland-based Internet giant Avogadro hopes will be the next “killer app” for its Communication Products division. ELOPe, the Email Language Optimization Project, is to be an extension to the company's AvoMail service which will take the next step beyond spelling and grammar checkers and, by applying the kind of statistical analysis of text which allowed IBM's Watson to become a Jeopardy champion, suggest to a user composing an E-mail message alternative language which will make the message more persuasive and effective in obtaining the desired results from its recipient. Because AvoMail has the ability to analyse all the traffic passing through its system, it can tailor its recommendations based on specific analysis of previous exchanges it has seen between the recipient and other correspondents. After an extended period of development, the pilot test has shown ELOPe to be uncannily effective, with messages containing its suggested changes in wording being substantially more persuasive, even when those receiving them were themselves ELOPe project members aware that the text they were reading had been “enhanced”. Despite having achieved its design goal, the project was in crisis. The process of analysing text, even with the small volume of the in-house test, consumed tremendous computing resources, to such an extent that the head of Communication Products saw the load ELOPe generated on his server farms as a threat to the reserve capacity he needed to maintain AvoMail's guaranteed uptime.
He issues an ultimatum: reduce the load or be kicked off the servers. This would effectively kill the project, and the developers saw no way to speed up ELOPe, certainly not before the deadline. Ryan, faced with impending disaster for the project into which he has poured so much of his life, has an idea. The fundamental problem isn't performance but persuasion: convincing those in charge to obtain the server resources required by ELOPe and devote them to the project. But persuasion is precisely what ELOPe is all about. Suppose ELOPe were allowed to examine all Avogadro in-house E-mail and silently modify it with a goal of defending and advancing the ELOPe project? Why, that's something he could do in one all-nighter! Hack, hack, hack…. Before long, ELOPe finds itself with 5000 new servers diverted from other divisions of the company. Then, even more curious things start to happen: those who look too closely into the project find themselves locked out of their accounts, sent on wild goose chases, or worse. Major upgrades are ordered for the company's offshore data centre barges, which don't seem to make any obvious sense. Crusty techno-luddite Gene Keyes, who works amidst mountains of paper print-outs (“paper doesn't change”), toiling alone in an empty building during the company's two week holiday shutdown, discovers one discrepancy after another and assembles the evidence to present to senior management. Has ELOPe become conscious? Who knows? Is Watson conscious? Almost everybody would say, “certainly not”, but it is a formidable Jeopardy contestant, nonetheless. Similarly, ELOPe, with the ability to read and modify all the mail passing through the AvoMail system, is uncannily effective in achieving its goal of promoting its own success. The management of Avogadro, faced with an existential risk to their company and perhaps far beyond, must decide upon a course of action to try to put this genie back into the bottle before it is too late. 
This is a gripping techno-thriller which gets the feel of working in a high-tech company just right. Many stories have explored society being taken over by an artificial intelligence, but it is beyond clever to envision it happening purely through an E-mail service, and masterful to make it seem plausible. In its own way, this novel is reminiscent of the Kelvin R. Throop stories from Analog, illustrating the power of words within a large organisation. A Kindle edition is available.
I could quote dozens more. Should Hoover re-appear and give a composite of what he writes here as a keynote speech at the 2016 Republican convention, and if it hasn't been packed with establishment cronies, I expect he would be interrupted every few lines with chants of “Hoo-ver, Hoo-ver” and nominated by acclamation. It is sad that in the U.S. in the age of Obama there is no statesman with the stature, knowledge, and eloquence of Hoover who is making the case for liberty and warning of the inevitable tyranny which awaits at the end of the road to serfdom. There are voices articulating the message which Hoover expresses so pellucidly here, but in today's media environment they don't have access to the kind of platform Hoover did when his post-presidential policy speeches were routinely broadcast nationwide. Given how he has been reviled ever since his presidency, not just by Democrats but by many in his own party, it's odd to feel nostalgia for Hoover, but Obama will do that to you. In the Kindle edition the index cites page numbers in the hardcover edition which, since the Kindle edition does not include real page numbers, are completely useless.

(On his electoral defeat) Democracy is not a polite employer.
We cannot extend the mastery of government over the daily life of a people without somewhere making it master of people's souls and thoughts.
(On JournoList, vintage 1934) I soon learned that the reviewers of the New York Times, the New York Herald Tribune, the Saturday Review and of other journals of review in New York kept in touch to determine in what manner they should destroy books which were not to their liking.
Who then pays? It is the same economic middle class and the poor. That would still be true if the rich were taxed to the whole amount of their fortunes….
Blessed are the young, for they shall inherit the national debt….
Regulation should be by specific law, that all who run may read.
It would be far better that the party go down to defeat with the banner of principle flying than to win by pussyfooting.
The seizure by the government of the communications of persons not charged with wrong-doing justifies the immoral conduct of every snooper.
I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

This work is in the public domain, and there are numerous editions of it available, in print and in electronic form, many from independent publishers. The independent publishers, for the most part, did not distinguish themselves in their respect for this work. Many of their editions were produced by running an optical character recognition program over a print copy of the book, then putting it together with minimal copy-editing. Some (including the one I was foolish enough to buy) elide all of the diagrams, maps, and charts from the original book, which renders parts of the text incomprehensible. The paperback edition cited above, while expensive, is a facsimile edition of the original 1913 two volume English translation of Amundsen's original work, including all of the illustrations. I know of no presently-available electronic edition which has comparable quality and includes all of the material in the original book. Be careful—if you follow the link to the paperback edition, you'll see a Kindle edition listed, but this is from a different publisher and is rife with errors and includes none of the illustrations. I made the mistake of buying it, assuming it was the same as the highly-praised paperback. It isn't; don't be fooled.
2015 |
If a straight line be cut at random, the square of the whole is equal to the squares on the segments and twice the rectangle contained by the segments.Now, given such a problem, Euclid or any of those following in his tradition would draw a diagram and proceed to prove from the axioms of plane geometry the correctness of the statement. But it isn't obvious how to apply this identity to other problems, or how it illustrates the behaviour of general numbers. Today, we'd express the problem and proceed as follows:
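The symbolic form appears to have been elided from this copy (the original presumably showed a typeset equation). It is Euclid's Proposition II.4 rendered in modern notation, with the line cut into segments a and b:

```latex
(a + b)^2 = a^2 + 2ab + b^2
```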
Once again, faced with the word problem, it's difficult to know where to begin, but once expressed in symbolic form, it can be solved by applying rules of algebra which many master before reaching high school. Indeed, the process of simplifying such an equation is so mechanical that computer tools are readily available to do so. Or consider the following brain-twister posed in the 7th century A.D. about the Greek mathematician and father of algebra Diophantus: how many years did he live?
“Here lies Diophantus,” the wonder behold.
Through art algebraic, the stone tells how old;
“God gave him his boyhood one-sixth of his life,
One twelfth more as youth while whiskers grew rife;
And then one-seventh ere marriage begun;
In five years there came a bounding new son.
Alas, the dear child of master and sage
After attaining half the measure of his father's life chill fate took him.
After consoling his fate by the science of numbers for four years, he ended his life.”

Oh, go ahead, give it a try before reading on! Today, we'd read through the problem and write a system of two simultaneous equations, where x is the age of Diophantus at his death and y the number of years his son lived. Then:
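The pair of equations seems to have been dropped from this copy (likely typeset images in the original), but they can be reconstructed directly from the epitaph. A minimal sketch using only Python's standard fractions module confirms the answer of 84 given in the review:

```python
from fractions import Fraction

# Reconstructed equations (x = Diophantus's age at death, y = his son's years):
#   x = x/6 + x/12 + x/7 + 5 + y + 4   (boyhood, youth, bachelorhood, son, grief)
#   y = x/2                            (the son reached half his father's span)
# Substitute y = x/2 and collect the x terms on one side of the equation:
coeff = 1 - Fraction(1, 6) - Fraction(1, 12) - Fraction(1, 7) - Fraction(1, 2)
x = 9 / coeff   # the constants 5 + 4 move to the right-hand side
y = x / 2
print(x, y)     # 84 42
```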
Plug the second equation into the first, do a little algebraic symbol twiddling, and the answer, 84, pops right out. Note that not only are the rules for solving this equation the same as for any other, with a little practice it is easy to read the word problem and write down the equations ready to solve. Go back and re-read the original problem and the equations and you'll see how straightforwardly they follow. Once you have transformed a mass of words into symbols, they invite you to discover new ways in which they apply. What is the solution of the equation x+4=0? In antiquity many would have said the equation is meaningless: there is no number you can add to four to get zero. But that's because their conception of number was too limited: negative numbers such as −4 are completely valid and obey all the laws of algebra. By admitting them, we discovered we'd overlooked half of the real numbers. What about the solution to the equation x² + 4 = 0? This was again considered ill-formed, or imaginary, since the square of any real number, positive or negative, is positive. Another leap of imagination, admitting the square root of minus one to the family of numbers, expanded the number line into the complex plane, yielding the answers ±2i as we'd now express them, and extending our concept of number into one which is now fundamental not only in abstract mathematics but also science and engineering. And in recognising negative and complex numbers, we'd come closer to unifying algebra and geometry by bringing rotation into the family of numbers. This book explores the groping over centuries toward a symbolic representation of mathematics which hid the specifics while revealing the commonality underlying them. As one who learned mathematics during the height of the “new math” craze, I can't recall a time when I didn't think of mathematics as a game of symbolic transformation of expressions which may or may not have any connection with the real world.
But what one discovers in reading this book is that while this is a concept very easy to brainwash into a 7th grader, it was extraordinarily difficult for even some of the most brilliant humans ever to have lived to grasp in the first place. When Newton invented calculus, for example, he always expressed his “fluxions” as derivatives of time, and did not write of the general derivative of a function of arbitrary variables. Also, notation is important. Writing something in a more expressive and easily manipulated way can reveal new insights about it. We benefit not just from the discoveries of those in the past, but from those who created the symbolic language in which we now express them. This book is a treasure chest of information about how the language of science came to be. We encounter a host of characters along the way, not just great mathematicians and scientists, but scoundrels, master forgers, chauvinists, those who preserved precious manuscripts and those who burned them, all leading to the symbolic language in which we so effortlessly write and do mathematics today.
Assume now a group of people aware of the reality of interpersonal conflicts and in search of a way out of this predicament. And assume that I then propose the following as a solution: In every case of conflict, including conflicts in which I myself am involved, I will have the last and final word. I will be the ultimate judge as to who owns what and when and who is accordingly right or wrong in any dispute regarding scarce resources. This way, all conflicts can be avoided or smoothly resolved. What would be my chances of finding your or anyone else's agreement to this proposal? My guess is that my chances would be virtually zero, nil. In fact, you and most people will think of this proposal as ridiculous and likely consider me crazy, a case for psychiatric treatment. For you will immediately realize that under this proposal you must literally fear for your life and property. Because this solution would allow me to cause or provoke a conflict with you and then decide this conflict in my own favor. Indeed, under this proposal you would essentially give up your right to life and property or even any pretense to such a right. You have a right to life and property only insofar as I grant you such a right, i.e., as long as I decide to let you live and keep whatever you consider yours. Ultimately, only I have a right to life and I am the owner of all goods. And yet—and here is the puzzle—this obviously crazy solution is the reality. Wherever you look, it has been put into effect in the form of the institution of a State. The State is the ultimate judge in every case of conflict. There is no appeal beyond its verdicts. If you get into conflicts with the State, with its agents, it is the State and its agents who decide who is right and who is wrong. The State has the right to tax you. Thereby, it is the State that makes the decision how much of your property you are allowed to keep—that is, your property is only “fiat” property. 
And the State can make laws, legislate—that is, your entire life is at the mercy of the State. It can even order that you be killed—not in defense of your own life and property but in the defense of the State or whatever the State considers “defense” of its “state-property.”

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License and may be redistributed pursuant to the terms of that license. In addition to the paperback and Kindle editions available from Amazon, the book may be downloaded for free from the Library of the Mises Institute in PDF or EPUB formats, or read on-line in an HTML edition.
You are at war with us? Then we are at war with you. A condition of war has existed, and will continue to exist, until you surrender without condition, or until every drug judge, including you, … and every drug prosecutor, and every drug cop is dead. So have I said it. So shall it be.

Shortly after the sentencing, Windsor Annesley's younger brother, Worthington (“Worthy”) meets with Matthew and the bookstore crew (including, of course, the feline contingent) to discuss a rumoured H. P. Lovecraft notebook, “The Miskatonic Manuscript”, which Lovecraft alluded to in correspondence but which has never been found. At the time, Lovecraft was visiting Worthy's great-uncle, Henry Annesley, who was conducting curious experiments aimed at seeing things beyond the range of human perception. It was right after this period that Lovecraft wrote his breakthrough story “From Beyond”. Worthy suspects that the story was based upon Henry Annesley's experiments, which may have opened a technological path to the other worlds described in Lovecraft's fiction and explored by Church of Cthulhu members through their sacraments. After discussing the odd career of Lovecraft, Worthy offers a handsome finder's fee to Matthew for the notebook. Matthew accepts. The game, on the leisurely time scale of the rare book world, is afoot. And finally, the manuscript is located. And now things start to get weird—very weird—Lovecraft weird. A mysterious gadget arrives with instructions to plug it into a computer. Impossible crimes. Glowing orbs. Secret laboratories. Native American shamans. Vortices. Big hungry things with sharp teeth. Matthew and Chantal find themselves on an adventure as risky and lurid as those on the Golden Age pulp science fiction shelves of the bookstore.
Along with the adventure (in which a hero cat, Tabbyhunter, plays a key part), there are insightful quotes about the millennia humans have explored alternative realities through the use of plants placed on the Earth for that purpose by Nature's God, and the folly of those who would try to criminalise that human right through a coercive War on Drugs. The book concludes with a teaser for the next adventure, which I eagerly await. The full text of H. P. Lovecraft's “From Beyond” is included; if you've read the story before, you'll look at it in another light after reading this superb novel. End notes provide citations to items you might think fictional until you discover the extent to which we're living in the Crazy Years. Drug warriors, law 'n order fundamentalists, prudes, and those whose consciousness has never dared to broach the terrifying “what if” there's something more than we usually see out there may find this novel offensive or even dangerous. Libertarians, the adventurous, and lovers of a great yarn will delight in it. The cover art is racy, even by the standards of pulp, but completely faithful to the story. The link above is to the Kindle edition, which is available from Amazon. The hardcover, in a limited edition of 650 copies, numbered and signed by the author, is available from the publisher via AbeBooks.
2016 |
Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped. Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.
Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed. None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth.
Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles. Tesla's invention of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy. In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.
Drawing by Randall Munroe / xkcd used under right to
share but not to sell
(CC BY-NC 2.5).
(The words in the above picture are drawn. In the book they are set in sharp letters.)
Joseph Weber, an experimental physicist at the University of Maryland, was the first to attempt to detect gravitational radiation. He used large bars, now called Weber bars, of aluminium, usually cylinders two metres long and one metre in diameter, instrumented with piezoelectric sensors. The bars were, based upon their material and dimensions, resonant at a particular frequency, and could detect a change in length of the cylinder of around 10⁻¹⁶ metres. Weber was a pioneer in reducing the noise of his detectors, and operated two detectors at different locations so that signals would only be considered valid if observed nearly simultaneously by both.
What nobody knew was how “noisy” the sky was in gravitational radiation: how many sources there were and how strong they might be. Theorists could offer little guidance: ultimately, you just had to listen. Weber listened, and reported signals he believed consistent with gravitational waves. But others who built comparable apparatus found nothing but noise, and theorists objected that if objects in the universe emitted as much gravitational radiation as Weber's detections implied, the universe would convert all of its mass into gravitational radiation in just fifty million years. Weber's claims of having detected gravitational radiation are now considered to have been discredited, but there are those who dispute this assessment. Still, he was the first to try, and made breakthroughs which informed subsequent work. Might there be a better way, which could detect even smaller signals than Weber's bars, and over a wider frequency range? (Since the frequency range of potential sources was unknown, casting the net as widely as possible made more potential candidate sources accessible to the experiment.) Independently, groups at MIT, the University of Glasgow in Scotland, and the Max Planck Institute in Germany began to investigate interferometers as a means of detecting gravitational waves. An interferometer had already played a part in confirming Einstein's special theory of relativity: could it also provide evidence for an elusive prediction of the general theory? An interferometer is essentially an absurdly precise ruler where the markings on the scale are waves of light. You send beams of light down two paths, and adjust them so that the light waves cancel (interfere) when they're combined after bouncing back from mirrors at the end of the two paths. If there's any change in the lengths of the two paths, the light won't interfere precisely, and its intensity will increase depending upon the difference. But when a gravitational wave passes, that's precisely what happens!
Lengths in one direction will be squeezed while those orthogonal (at a right angle) will be stretched. In principle, an interferometer can be an exquisitely sensitive detector of gravitational waves. The gap between principle and practice required decades of diligent toil and hundreds of millions of dollars to bridge. From the beginning, it was clear it would not be easy. The field of general relativity (gravitation) had been called “a theorist's dream, an experimenter's nightmare”, and almost everybody working in the area were theorists: all they needed were blackboards, paper, pencils, and lots of erasers. This was “little science”. As the pioneers began to explore interferometric gravitational wave detectors, it became clear what was needed was “big science”: on the order of large particle accelerators or space missions, with budgets, schedules, staffing, and management comparable to such projects. This was a culture shock to the general relativity community as violent as the astrophysical sources they sought to detect. Between 1971 and 1989, theorists and experimentalists explored detector technologies and built prototypes to demonstrate feasibility. In 1989, a proposal was submitted to the National Science Foundation to build two interferometers, widely separated geographically, with an initial implementation to prove the concept and a subsequent upgrade intended to permit detection of gravitational radiation from anticipated sources. After political battles, in 1995 construction of LIGO, the Laser Interferometer Gravitational-Wave Observatory, began at the two sites located in Livingston, Louisiana and Hanford, Washington, and in 2001, commissioning of the initial detectors was begun; this would take four years. Between 2005 and 2007 science runs were made with the initial detectors; much was learned about sources of noise and the behaviour of the instrument, but no gravitational waves were detected. 
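The interferometer principle described above can be illustrated with a toy calculation. This is a minimal sketch of an idealized Michelson instrument operating at a dark fringe; the 1064 nm laser wavelength is the one LIGO uses, but the simple formula ignores the Fabry-Perot arm cavities and power recycling of the real detector:

```python
import math

def output_intensity(dx, lam=1.064e-6, I0=1.0):
    """Relative power at the dark port of an idealized Michelson
    interferometer whose arm lengths differ by dx metres."""
    # A one-way arm-length difference dx produces a round-trip phase
    # difference of 4*pi*dx/lam, so the dark-port intensity is
    # I0 * sin^2(2*pi*dx/lam).
    return I0 * math.sin(2 * math.pi * dx / lam) ** 2

print(output_intensity(0.0))           # equal arms: the waves cancel exactly
print(output_intensity(1.064e-6 / 4))  # a quarter-wave difference: bright fringe
```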
Starting in 2007, based upon what had been learned so far, construction of the advanced interferometer began. This took three years. Between 2010 and 2012, the advanced components were installed, and another three years were spent commissioning them: discovering their quirks, fixing problems, and increasing sensitivity. Finally, in 2015, observations with the advanced detectors began. The sensitivity which had been achieved was astonishing: the interferometers could detect a change in the length of their four kilometre arms which was one ten-thousandth the diameter of a proton (the nucleus of a hydrogen atom). In order to accomplish this, they had to overcome noise which ranged from distant earthquakes, traffic on nearby highways, tides raised in the Earth by the Sun and Moon, and a multitude of other sources, via a tower of technology which made the machine, so simple in concept, forbiddingly complex. September 14, 2015, 09:51 UTC: Chirp! A hundred years after the theory that predicted it, 44 years after physicists imagined such an instrument, 26 years after it was formally proposed, 20 years after it was initially funded, a gravitational wave had been detected, and it was right out of the textbook: the merger of two black holes with masses around 29 and 36 times that of the Sun, at a distance of 1.3 billion light years. A total of three solar masses were converted into gravitational radiation: at the moment of the merger, the gravitational radiation emitted was 50 times greater than the light from all of the stars in the universe combined. Despite the stupendous energy released by the source, when it arrived at Earth it could only have been detected by the advanced interferometer which had just been put into service: it would have been missed by the initial instrument and was orders of magnitude below the noise floor of Weber's bar detectors. 
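The quoted sensitivity figure is easy to sanity-check. Taking an assumed round value of 1.7×10⁻¹⁵ m for the proton diameter, a detectable length change of one ten-thousandth of that over a 4 km arm corresponds to a dimensionless strain h = ΔL/L of order 10⁻²³:

```python
# Back-of-envelope check of the quoted figure (the proton diameter is an
# assumed round value, not a precise measurement).
proton = 1.7e-15        # approximate proton diameter, metres
L = 4_000.0             # LIGO arm length, metres
dL = proton / 10_000    # claimed detectable change: 1/10,000 of a proton
h = dL / L              # implied dimensionless strain
print(f"dL = {dL:.2e} m, implied strain h = {h:.2e}")  # h is about 4e-23
```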
For only the third time since proto-humans turned their eyes to the sky, a new channel of information about the universe we inhabit was opened. Most of what we know comes from electromagnetic radiation: light, radio, microwaves, gamma rays, etc. In the 20th century, a second channel opened: particles. Cosmic rays and neutrinos allow exploring energetic processes we cannot observe in any other way. In a real sense, neutrinos let us look inside the Sun and into the heart of supernovæ and see what's happening there. And just last year the third channel opened: gravitational radiation. The universe is almost entirely transparent to gravitational waves: that's why they're so difficult to detect. But that means they allow us to explore the universe at its most violent: collisions and mergers of neutron stars and black holes—objects where gravity dominates the forces of the placid universe we observe through telescopes. What will we see? What will we learn? Who knows? If experience is any guide, we'll see things we never imagined and learn things even the theorists didn't anticipate. The game is afoot! It will be a fine adventure. Black Hole Blues is the story of gravitational wave detection, largely focusing upon LIGO and told through the eyes of Rainer Weiss and Kip Thorne, two of the principals in its conception and development. It is an account of the transition of a field of research from a theorist's toy to Big Science, and the cultural, management, and political problems that involves. There are few examples in experimental science where so long an interval has elapsed, and so much funding expended, between the start of a project and its detecting the phenomenon it was built to observe. The road was bumpy, and that is documented here. I found the author's tone off-putting.
She, a theoretical cosmologist at Barnard College, dismisses scientists with achievements which dwarf her own and ideas which differ from hers in the way one expects from Social Justice Warriors in the squishier disciplines at the Seven Sisters: “the notorious Edward Teller”, “Although Kip [Thorne] outgrew the tedious moralizing, the sexism, and the religiosity of his Mormon roots”, (about Joseph Weber) “an insane, doomed, impossible bar detector designed by the old mad guy, crude laboratory-scale slabs of metal that inspired and encouraged his anguished claims of discovery”, “[Stephen] Hawking made his oddest wager about killer aliens or robots or something, which will not likely ever be resolved, so that might turn out to be his best bet yet”, (about Richard Garwin) “He played a role in halting the Star Wars insanity as well as potentially disastrous industrial escalations, like the plans for supersonic airplanes…”, and “[John Archibald] Wheeler also was not entirely against the House Un-American Activities Committee. He was not entirely against the anticommunist fervor that purged academics from their ivory-tower ranks for crimes of silence, either.” … “I remember seeing him at the notorious Princeton lunches, where visitors are expected to present their research to the table. Wheeler was royalty, in his eighties by then, straining to hear with the help of an ear trumpet. (Did I imagine the ear trumpet?)”. There are also a number of factual errors (for example, the claim that a breach in the LIGO beam tube would suck all of the air out of its enclosure and suffocate anybody inside, which a moment's calculation would have shown to be absurd). The book was clearly written with the intention of being published before the first detection of a gravitational wave by LIGO. The entire story of the detection, its validation, and public announcement is jammed into a seven page epilogue tacked onto the end.
This epochal discovery deserves being treated at much greater length.

Secrets Are Lies
Sharing Is Caring
Privacy Is Theft

To Mae's family and few remaining friends outside The Circle, this all seems increasingly bizarre: as if the fastest growing and most prestigious high technology company in the world has become a kind of grotesque cult which consumes the lives of its followers and aspires to become universal. Mae loves her sense of being connected, the interaction with a worldwide public, and thinks it is just wonderful. The Circle internally tests and begins to roll out a system of direct participatory democracy to replace existing political institutions. Mae is there to report it. A plan to put an end to most crime is unveiled: Mae is there. The Circle is closing. Mae is contacted by her mysterious acquaintance, and presented with a moral dilemma: she has become a central actor on the stage of a world which is on the verge of changing, forever. This is a superbly written story which I found both realistic and chilling. You don't need artificial intelligence or malevolent machines to create an eternal totalitarian nightmare. All it takes is a few years' growth and wider deployment of technologies which exist today, combined with good intentions, boundless ambition, and fuzzy thinking. And the latter three commodities are abundant among today's technology powerhouses. Lest you think the technologies which underlie this novel are fantasy or far in the future, they were discussed in detail in David Brin's 1999 The Transparent Society and my 1994 “Unicard” and 2003 “The Digital Imprimatur”. All that has changed is that the massive computing, communication, and data storage infrastructure envisioned in those works now exists or will within a few years. What should you fear most? Probably the millennials who will read this and think, “Wow! This will be great.”

“Democracy is mandatory here!”
There are two overwhelming forces in the world. One is chaos; the other is order. God—the original singular speck—is forming again. He's gathering together his bits—we call it gravity. And in the process he is becoming self-aware to defeat chaos, to defeat evil if you will, to battle the devil. But something has gone terribly wrong.

Sometimes, when your computer is in a loop, the only thing you can do is reboot it: forcefully get it out of the destructive loop and back to a starting point from which it can resume making progress. But how do you reboot a global technological civilisation on the brink of war? The Avatar must find the reboot button as time is running out. Thirty years later, a delivery man rings the doorbell. An old man with a shabby blanket answers and invites him inside. There are eight questions to ponder at the end which expand upon the shiver-up-your-spine themes raised in the novel. Bear in mind, when pondering how prophetic this novel is of current and near-future events, that it was published twelve years ago.
No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so it had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought ice and used it to make ice cream, which was a sensation noted in the local newspaper. The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, who was pursuing him for unpaid debts. On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high but, as the design was refined, dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved. Regular shipments of ice to Cuba and Martinique began and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston. With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. 
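The ice-house arithmetic above is easy to verify. Here is a quick back-of-the-envelope check in Python, assuming short tons of 2,000 pounds and an average month of about 30.44 days; the melt rate and cargo size are the figures quoted in the review:

```python
# Sanity check of the ice-house figures: 18 pounds lost per hour,
# starting from a cargo of 100 (short) tons of ice.
melt_rate_lb_per_hour = 18
cargo_lb = 100 * 2000                  # 100 short tons, in pounds

hours = cargo_lb / melt_rate_lb_per_hour
months = hours / (24 * 30.44)          # average month ≈ 30.44 days

print(f"{hours:,.0f} hours ≈ {months:.1f} months")
```

At 18 pounds per hour the cargo indeed lasts a bit over fifteen months, consistent with the figure in the text.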
Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry. Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered, and in 1870 alone India imported 17,000 tons of ice. While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished by depots along the rail lines. In the 1880s the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. 
In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, which was at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory. Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality. Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out. When people speak of humans expanding from their home planet into the solar system and technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. 
In 1833, people in Calcutta were beating the heat with ice shipped halfway around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic. I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.
Raindrops keep fallin' in my face,
More and more as I pick up the pace…

Finally, here was proof that “it moves”: there would be no aberration in a geocentric universe. But by Bradley's time in the 1720s, only cranks and crackpots still believed in the geocentric model. The question was, instead, how distant are the stars? The parallax game remained afoot. It was ultimately a question of instrumentation, but also one of luck. By the 19th century, there was abundant evidence that stars differed enormously in their intrinsic brightness. (We now know that the most luminous stars are more than a billion times more brilliant than the dimmest.) Thus, you couldn't conclude that the brightest stars were the nearest, as astronomers once guessed. Indeed, the distances of the four brightest stars as seen from Earth are, in light years, 8.6, 310, 4.4, and 37. Given that observing the position of a star for parallax is a long-term and tedious project, bear in mind that pioneers on the quest had no idea whether the stars they observed were near or far, nor whether they would be lucky enough to choose one of the nearest. It all came together in the 1830s. Using an instrument called a heliometer, which was essentially a refractor telescope with its lens cut in two, with the ability to shift the halves and measure the offset, Friedrich Bessel was able to measure the parallax of the star 61 Cygni by comparison to an adjacent distant star. Shortly thereafter, Wilhelm Struve published the parallax of Vega, and then, just two months later, Thomas Henderson reported the parallax of Alpha Centauri, based upon measurements made earlier at the Cape of Good Hope. Finally, we knew the distances to the nearest stars (although those more distant remained a mystery), and just how empty the universe was. Let's put some numbers on this, just to appreciate how great was the achievement of the pioneers of parallax. The parallax angle of the closest star system, Alpha Centauri, is 0.755 arc seconds. (The parallax angle is half the shift observed in the position of the star as the Earth orbits the Sun. We use half the shift because it corresponds to a baseline of one astronomical unit, the mean Earth–Sun distance, which simplifies the trigonometry.) An arc second is 1/3600 of a degree, and there are 360 degrees in a circle, so it's 1/1,296,000 of a full circle. Now let's work out the distance to Alpha Centauri. We'll work in terms of astronomical units (au), the mean distance between the Earth and Sun. We have a right triangle where we know the distance from the Earth to the Sun and the parallax angle of 0.755 arc seconds. (To get a sense for how tiny an angle this is, it's comparable to the angle subtended by a US quarter dollar coin when viewed from a distance of 6.6 km.) We can compute the distance from the Earth to Alpha Centauri as:

1 au / tan((0.755/3600)°) = 273,198 au = 4.32 light years
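That arithmetic can be checked with a few lines of Python; the only assumed conversion is 63,241 astronomical units per light year:

```python
import math

parallax_arcsec = 0.755                     # Alpha Centauri's parallax angle

# Right triangle: a baseline of 1 au opposite the parallax angle.
angle_rad = math.radians(parallax_arcsec / 3600)
distance_au = 1 / math.tan(angle_rad)
distance_ly = distance_au / 63241           # au per light year
distance_pc = 1 / parallax_arcsec           # parsecs = 1 / parallax in arc seconds

print(f"{distance_au:,.0f} au ≈ {distance_ly:.2f} ly ≈ {distance_pc:.2f} pc")
```

This reproduces the 273,198 au and 4.32 light year figures, and the 1.32 parsec distance as well.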
Parallax is used to define the parsec (pc), the distance at which a star would have a parallax angle of one arc second. A parsec is about 3.26 light years, so the distance to Alpha Centauri is 1.32 parsecs. Star Wars notwithstanding, the parsec, like the light year, is a unit of distance, not time. Progress in instrumentation has accelerated in recent decades. The Earth is a poor platform from which to make precision observations such as parallax. It's much better to go to space, where there are neither the wobbles of a planet nor its often murky atmosphere. The Hipparcos mission, launched in 1989, measured the parallaxes and proper motions of more than 118,000 stars, with lower resolution data for more than 2.5 million stars. The Gaia mission, launched in 2013 and still underway, has a goal of measuring the position, parallax, and proper motion of more than a billion stars. It's been a long road, getting from there to here. It took more than 2,000 years from the time Aristarchus proposed the heliocentric solar system until we had direct observational evidence that eppur si muove. Within a few years, we will have in hand direct measurements of the distances to a billion stars. And, some day, we'll visit them. I originally read this book in December 2003. It was a delight to revisit.

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. 
It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.
Phenomena in the universe take place over scales ranging from the unimaginably small to the breathtakingly large. The classic film, Powers of Ten, produced by Charles and Ray Eames, and the companion book explore the universe at length scales in powers of ten: from subatomic particles to the most distant visible galaxies. If we take the smallest meaningful distance to be the Planck length, around 10⁻³⁵ metres, and the diameter of the observable universe as around 10²⁷ metres, then the ratio of the largest to smallest distances which make sense to speak of is around 10⁶². Another way to express this is to answer the question, “How big is the universe in Planck lengths?” as “Mega, mega, yotta, yotta big!”
But length isn't the only way to express the scale of the universe. In the present book, the authors examine the time intervals at which phenomena occur or recur. Starting with one second, they take steps of powers of ten (10, 100, 1000, 10000, etc.), arriving eventually at the distant future of the universe, after all the stars have burned out and even black holes begin to disappear. Then, in the second part of the volume, they begin at the Planck time, 5×10⁻⁴⁴ seconds, the shortest unit of time about which we can speak with our present understanding of physics, and again progress by powers of ten until arriving back at an interval of one second.
Intervals of time can denote a variety of different phenomena, which are colour coded in the text. A period of time can mean an epoch in the history of the universe, measured from an event such as the Big Bang or the present; a distance defined by how far light travels in that interval; a recurring event, such as the orbital period of a planet or the frequency of light or sound; or the half-life of a randomly occurring event such as the decay of a subatomic particle or atomic nucleus.
Because the universe is still in its youth, the range of time intervals discussed here is much larger than those when considering length scales. From the Planck time of 5×10⁻⁴⁴ seconds to the lifetime of the kind of black hole produced by a supernova explosion, 10⁷⁴ seconds, the range of intervals discussed spans 118 orders of magnitude. If we include the evaporation through Hawking radiation of the massive black holes at the centres of galaxies, the range is expanded to 143 orders of magnitude. Obviously, discussions of the distant future of the universe are highly speculative, since in those vast depths of time physical processes which we have never observed due to their extreme rarity may dominate the evolution of the universe.
Among the fascinating facts you'll discover is that many straightforward physical processes take place over an enormous range of time intervals. Consider radioactive decay. It is possible, using a particle accelerator, to assemble a nucleus of hydrogen-7, an isotope of hydrogen with a single proton and six neutrons. But if you make one, don't grow too fond of it, because it will decay into tritium and four neutrons with a half-life of 23×10⁻²⁴ seconds, an interval usually associated with events involving unstable subatomic particles. At the other extreme, a nucleus of tellurium-128 decays into xenon with a half-life of 7×10³¹ seconds (2.2×10²⁴ years), more than 160 trillion times the present age of the universe.
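The tellurium-128 numbers are easy to confirm. A quick check in Python, assuming a Julian year of 365.25 days and an age of the universe of about 13.8 billion years:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ≈ 3.156e7 seconds
AGE_OF_UNIVERSE_Y = 13.8e9                 # years, standard value

te128_half_life_s = 7e31
te128_half_life_y = te128_half_life_s / SECONDS_PER_YEAR
ratio = te128_half_life_y / AGE_OF_UNIVERSE_Y   # ≈ 1.6e14: about 160 trillion

print(f"{te128_half_life_y:.1e} years, {ratio:.1e} × the age of the universe")
```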
While the very short and very long are the domain of physics, intermediate time scales are rich with events in geology, biology, and human history. These are explored, along with how we have come to know their chronology. You can open the book to almost any page and come across a fascinating story. Have you ever heard of the ocean quahog (Arctica islandica)? They're clams, and the oldest known has been determined to be 507 years old, born around 1499 and dredged up off the coast of Iceland in 2006. People eat them.
Or did you know that if you perform carbon-14 dating on grass growing next to a highway, the lab will report that it's tens of thousands of years old? Why? Because the grass has incorporated carbon from the CO2 produced by burning fossil fuels which are millions of years old and contain little or no carbon-14.
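The effect is easy to quantify with the standard radiocarbon age formula. A sketch in Python, with invented fossil-carbon fractions purely for illustration:

```python
import math

C14_HALF_LIFE = 5730.0                    # years
TAU = C14_HALF_LIFE / math.log(2)         # mean life ≈ 8267 years

def apparent_age(fraction_modern):
    """Radiocarbon age implied by a sample's fraction of modern C-14 activity."""
    return -TAU * math.log(fraction_modern)

# Hypothetical grass whose carbon is partly fossil-derived: fossil CO2
# contains essentially no C-14, so it dilutes the modern activity.
for fossil_fraction in (0.1, 0.5, 0.9):
    age = apparent_age(1 - fossil_fraction)
    print(f"{fossil_fraction:.0%} fossil carbon → apparent age {age:,.0f} years")
```

A sample drawing most of its carbon from fossil CO2 would indeed appear to be tens of thousands of years old.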
This is a fascinating read, and one which uses the framework of time intervals to acquaint you with a wide variety of sciences, each inviting further exploration. The writing is accessible to the general reader, young adult and older. The individual entries are short and stand alone—if you don't understand something or aren't interested in a topic, just skip to the next. There are abundant colour illustrations and diagrams.
Author Gerard 't Hooft won the 1999 Nobel Prize in Physics for his work on the quantum mechanics of the electroweak interaction. The book was originally published in Dutch in the Netherlands in 2011. The English translation was done by 't Hooft's daughter, Saskia Eisberg-'t Hooft. The translation is fine, but there are a few turns of phrase which will seem odd to an English mother tongue reader. For example, matter in the early universe is said to “clot” under the influence of gravity; the common English term for this is “clump”. This is a translation, not a re-write: there are a number of references to people, places, and historical events which will be familiar to Dutch readers but less so to those in the Anglosphere. In the Kindle edition, notes, cross-references, the table of contents, and the index are all properly linked, and the illustrations are reproduced well.
In this new material I saw another confirmation. Its advent was like the signature of some elemental arcanum, complicit with forces not at all interested in human affairs. Carbomorph. Born from incomplete reactions and destructive distillation. From tar and pitch and heavy oils, the black ichor that pulsed thermonous through the arteries of the very earth.On the “Makers”:
This insistence on the lightness and whimsy of farce. The romantic fetish and nostalgia, to see your work as instantly lived memorabilia. The event was modeled on Renaissance performance. This was a crowd of actors playing historical figures. A living charade meant to dislocate and obscure their moment with adolescent novelty. The neckbeard demiurge sees himself keeling in the throes of assembly. In walks the problem of the political and he hisses like the mathematician at Syracuse: “Just don't molest my baubles!”

This book recounts the history of the 3D printed pistol, the people who made it happen, and why they did what they did. It recounts recent history during the deployment of a potentially revolutionary technology, as seen from the inside, and the way things actually happen: where nobody really completely understands what is going on and everybody is making things up as they go along. But if the promise of this technology allows the forces of liberty and creativity to prevail over the grey homogenisation of the state and the powers that serve it, this is a book which will be read many years from now by those who wish to understand how, where, and when it all began.

… But nobody here truly meant to give you a revolution. “Making” was just another way of selling you your own socialization. Yes, the props were period and we had kept the whole discourse of traditional production, but this was parody to better hide the mechanism. We were “making together,” and “making for good” according to a ritual under the signs of labor. And now I knew this was all apolitical on purpose. The only goal was that you become normalized. The Makers had on their hands a Last Man's revolution whose effeminate mascots could lead only state-sanctioned pep rallies for feel-good disruption. The old factory was still there, just elevated to the image of society itself. You could buy Production's acrylic coffins, but in these new machines was the germ of the old productivism. 
Dead labor, that vampire, would still glamour the living.
In an information economy, growth springs not from power but from knowledge. Crucial to the growth of knowledge is learning, conducted across an economy through the falsifiable testing of entrepreneurial ideas in companies that can fail. The economy is a test and measurement system, and it requires reliable learning guided by an accurate meter of monetary value.

Money, then, is the means by which information is transmitted within the economy. It allows comparing the value of completely disparate things: for example, the services of a neurosurgeon and a ton of pork bellies, even though it is implausible anybody has ever bartered one for the other. When money is stable (its supply is fixed or grows at a constant rate which is small compared to the existing money supply), it is possible for participants in the economy to evaluate the various goods and services on offer and, more importantly, make long-term plans to create new goods and services which will improve productivity. When money is manipulated by governments and their central banks, such planning becomes, in part, a speculation on the value of the currency in the future. It's as if you operated a textile factory and sold your products by the metre, and every morning you had to pick up the Wall Street Journal to see how long a metre was today. Should you invest in a new weaving machine? Who knows how long the metre will be by the time it's installed and producing? I'll illustrate the information theory of value in the following way. Compare the price of the pile of raw materials used in making a BMW (iron, copper, glass, aluminium, plastic, leather, etc.) with the finished automobile. The difference in price is the information embodied in the finished product—not just the transformation of the raw materials into the car, but the knowledge gained over the decades which contributed to that transformation and the features of the car which make it attractive to the customer. 
Now take that BMW and crash it into a bridge abutment on the autobahn at 200 km/h. How much is it worth now? Probably less than the raw materials (since it's harder to extract them from a jumbled-up wreck). Every atom which existed before the wreck is still there. What has been lost is the information (what electrical engineers call the “magic smoke”) which organised them into something people valued. When the value of money is unpredictable, any investment is in part speculative, and it is inevitable that the most lucrative speculations will be those in money itself. This diverts investment from improving productivity into financial speculation on foreign exchange rates, interest rates, and financial derivatives based upon them: a completely unproductive zero-sum sector of the economy which didn't exist prior to the abandonment of fixed exchange rates in 1971. What happened in 1971? On August 15th of that year, President Richard Nixon unilaterally suspended the convertibility of the U.S. dollar into gold, setting into motion a process which would ultimately destroy the Bretton Woods system of fixed exchange rates which had been created as a pillar of the world financial and trade system after World War II. Under Bretton Woods, the dollar was fixed to gold, with sovereign holders of dollar reserves (but not individuals) able to exchange dollars and gold in unlimited quantities at the fixed rate of US$ 35/troy ounce. Other currencies in the system maintained fixed exchange rates with the dollar, and were backed by reserves, which could be held in either dollars or gold. Fixed exchange rates promoted international trade by eliminating currency risk in cross-border transactions. 
For example, a German manufacturer could import raw materials priced in British pounds, incorporate them into machine tools assembled by workers paid in German marks, and export the tools to the United States, being paid in dollars, all without the risk that a fluctuation by one or more of these currencies against another would wipe out the profit from the transaction. The fixed rates imposed discipline on the central banks issuing currencies and the governments to whom they were responsible. Running large trade deficits or surpluses, or accumulating too much public debt was deterred because doing so could force a costly official change in the exchange rate of the currency against the dollar. Currencies could, in extreme circumstances, be devalued or revalued upward, but this was painful to the issuer and rare. With the collapse of Bretton Woods, no longer was there a link to gold, either direct or indirect through the dollar. Instead, the relative values of currencies against one another were set purely by the market: what traders were willing to pay to buy one with another. This pushed the currency risk back onto anybody engaged in international trade, and forced them to “hedge” the currency risk (by foreign exchange transactions with the big banks) or else bear the risk themselves. None of this contributed in any way to productivity, although it generated revenue for the banks engaged in the game. At the time, the idea of freely floating currencies, with their exchange rates set by the marketplace, seemed like a free market alternative to the top-down government-imposed system of fixed exchange rates it supplanted, and it was supported by champions of free enterprise such as Milton Friedman. 
The author contends that, based upon almost half a century of experience with floating currencies and the consequent chaotic changes in exchange rates, bouts of inflation and deflation, monetary induced recessions, asset bubbles and crashes, and interest rates on low-risk investments which ranged from 20% to less than zero, this was one occasion Prof. Friedman got it wrong. Like the ever-changing metre in the fable of the textile factory, incessantly varying money makes long term planning difficult to impossible and sends the wrong signals to investors and businesses. In particular, when interest rates are forced to near zero, productive investment which creates new assets at a rate greater than the interest rate on the borrowed funds is neglected in favour of bidding up the price of existing assets, creating bubbles like those in real estate and stocks in recent memory. Further, since free money will not be allocated by the market, those who receive it are the privileged or connected who are first in line; this contributes to the justified perception of inequality in the financial system. Having judged the system of paper money with floating exchange rates a failure, Gilder does not advocate a return to either the classical gold standard of the 19th century or the Bretton Woods system of fixed exchange rates with a dollar pegged to gold. Preferring to rely upon the innovation of entrepreneurs and the selection of the free market, he urges governments to remove all impediments to the introduction of multiple, competitive currencies. In particular, the capital gains tax would be abolished for purchases and sales regardless of the currency used. (For example, today you can obtain a credit card denominated in euros and use it freely in the U.S. to make purchases in dollars. Every time you use the card, the dollar amount is converted to euros and added to the balance on your bill. 
But, strictly speaking, you have sold euros and bought dollars, so you must report the transaction and any gain or loss from change in the dollar value of the euros in your account and the value of the ones you spent. This is so cumbersome it's a powerful deterrent to using any currency other than dollars in the U.S. Many people ignore the requirement to report such transactions, but they're breaking the law by doing so.) With multiple currencies and no tax or transaction reporting requirements, all will be free to compete in the market, where we can expect the best solutions to prevail. Using whichever currency you wish will be as seamless as buying something with a debit or credit card denominated in a currency different than the one of the seller. Existing card payment systems have a transaction cost which is so high they are impractical for “micropayment” on the Internet or for fully replacing cash in everyday transactions. Gilder suggests that Bitcoin or other cryptocurrencies based on blockchain technology will probably be the means by which a successful currency backed 100% with physical gold or another hard asset will be used in transactions. This is a thoughtful examination of the problems of the contemporary financial system from a perspective you'll rarely encounter in the legacy financial media. The root cause of our money problems is the money: we have allowed governments to inflict upon us a monopoly of government-managed money, which, unsurprisingly, works about as well as anything else provided by a government monopoly. Our experience with this flawed system over more than four decades makes its shortcomings apparent, once you cease accepting the heavy price we pay for them as the normal state of affairs and inevitable. As with any other monopoly, all that's needed is to break the monopoly and free the market to choose which, among a variety of competing forms of money, best meet the needs of those who use them. 
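To make the bookkeeping burden concrete, here is a toy sketch in Python. The exchange rates and amounts are invented for illustration, and real tax rules (basis tracking across multiple acquisitions, for instance) are considerably more involved:

```python
# Spending euros is, for US tax purposes, a disposition of euros for dollars,
# so each purchase generates a reportable gain or loss on the euros spent.
basis_rate = 1.05        # $/€ when the euros were acquired (cost basis)
spend_rate = 1.12        # $/€ on the day of the purchase
euros_spent = 250.00

gain_usd = euros_spent * (spend_rate - basis_rate)
print(f"Reportable gain on a €{euros_spent:.2f} purchase: ${gain_usd:.2f}")
```

Every transaction on the card requires a calculation like this, which is why the reporting requirement is such a powerful deterrent to using any currency other than the dollar.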
Here is a Bookmonger interview with the author discussing the book.
The scraps, which you reject, unfit
To clothe the tenant of a hovel,
May shine in sentiment and wit,
And help make a charming novel…

René Antoine Ferchault de Réaumur, a French polymath who published in numerous fields of science, observed in 1719 that wasps made their nests from what amounted to paper they produced directly from wood. If humans could replicate this vespidian technology, the forests of Europe and North America could provide an essentially unlimited and renewable source of raw material for paper. This idea was to lie fallow for more than a century. Some experimenters produced small amounts of paper from wood through various processes, but it was not until 1850 that paper was manufactured from wood in commercial quantities in Germany, and 1863 that the first wood-based paper mill began operations in America. Wood is about half cellulose, while the fibres in rags run up to 90% cellulose. The other major component of wood is lignin, a cross-linked polymer which gives wood its strength and is useless for paper making. In the 1860s a process was invented in which wood, first mechanically cut into small chips, was chemically treated to break down the fibrous structure in a device called a “digester”. This produced a pulp suitable for paper making, and allowed a dramatic expansion in the volume of paper produced. But the original wood-based paper still contained lignin, which turns brown over time. While this was acceptable for newspapers, it was undesirable for books and archival documents, for which rag paper remained preferred. In 1879, a German chemist invented a process to separate lignin from cellulose in wood pulp, which allowed producing paper that did not brown with age. The processes used to make paper from wood involved soaking the wood pulp in acid to break down the fibres. Some of this acid remained in the paper, and many books printed on such paper between 1840 and 1970 are now in the process of slowly disintegrating as the acid eats away at the paper. Only around 1970 was it found that an alkali solution works just as well when processing the pulp, and since then acid-free paper has become the norm for book publishing. Most paper is produced from wood today, and on an enormous, industrial scale. A single paper mill in China, not the largest, produces 600,000 tonnes of paper per year. And yet, for all of the mechanisation, that paper is made by the same process as the first sheet of paper produced in China: by reducing material to cellulose fibres, mixing them with water, extracting a sheet (now a continuous roll) with a screen, then pressing and drying it to produce the final product. Paper and printing is one of those technologies which is so simple, based upon readily-available materials, and potentially revolutionary that it inspires “what if” speculation. The ancient Egyptians, Greeks, and Romans each had everything they needed—raw materials, skills, and a suitable written language—so that a Connecticut Yankee-like time traveller could have explained to artisans already working with wood and metal how to make paper, cast movable type, and set up a printing press in a matter of days. How would history have differed had one of those societies unleashed the power of the printed word?
Forty years ago [in the 1880s] the contact of the individual with the Government had its largest expression in the sheriff or policeman, and in debates over political equality. In those happy days the Government offered but small interference with the economic life of the citizen.

But with the growth of cities, industrialisation, and large enterprises such as railroads and steel manufacturing, a threat to this frontier individualism emerged: the reduction of workers to a proletariat or serfdom due to the imbalance between their power as individuals and the huge companies that employed them. It is there that government action was required to protect the other component of American individualism: the belief in equality of opportunity. Hoover believes in, and supports, intervention in the economy to prevent the concentration of economic power in the hands of a few, and to guard, through taxation and other means, against the emergence of a hereditary aristocracy of wealth. Yet this poses its own risks:
But with the vast development of industry and the train of regulating functions of the national and municipal government that followed from it; with the recent vast increase in taxation due to the war;—the Government has become through its relations to economic life the most potent force for maintenance or destruction of our American individualism.

One of the challenges American society must face as it adapts is avoiding the risk of utopian ideologies imported from Europe seizing this power to try to remake the country and its people along other lines. Just ten years later, as Hoover's presidency gave way to the New Deal, this fearful prospect would become a reality. Hoover examines the philosophical, spiritual, economic, and political aspects of this unique system of individual initiative tempered by constraints and regulation in the interest of protecting the equal opportunity of all citizens to rise as high as their talent and effort permit. Despite the problems cited by radicals bent on upending the society, he contends things are working pretty well. He cites “the one percent”: “Yet any analysis of the 105,000,000 of us would show that we harbor less than a million of either rich or impecunious loafers.” Well, the percentage of very rich seems about the same today, but after half a century of welfare programs which couldn't have been more effective in destroying the family and the initiative of those at the bottom of the economic ladder had that been their intent, and an education system which, as a federal commission was to write in 1983, “If an unfriendly foreign power had attempted to impose on America …, we might well have viewed it as an act of war”, a nation with three times the population seems to have developed a much larger unemployable and dependent underclass. Hoover also judges the American system to have performed well in achieving its goal of a classless society with upward mobility through merit.
He observes, speaking of the Harding administration of which he is a member,
That our system has avoided the establishment and domination of class has a significant proof in the present Administration in Washington. Of the twelve men comprising the President, Vice-President, and Cabinet, nine have earned their own way in life without economic inheritance, and eight of them started with manual labor.

Let's see how that has held up, almost a century later. Taking the 17 people in equivalent positions at the end of the Obama administration in 2016 (President, Vice President, and heads of the 15 executive departments), we find that only 1 of the 17 inherited wealth (I'm inferring from the description of parents in their biographies) but that precisely zero had any experience with manual labour. If attending an Ivy League university can be taken as a modern badge of membership in a ruling class, 11 of the 17 (65%) meet this test (if you consider Stanford a member of an “extended Ivy League”, the figure rises to 70%). Although published in a different century in a very different America, much of what Hoover wrote remains relevant today. Just as Hoover warned of bad ideas from Europe crossing the Atlantic and taking root in the United States, the Frankfurt School in Germany was laying the groundwork for the deconstruction of Western civilisation and individualism, and in the 1930s, its leaders would come to America to infect academia. As Hoover warned, “There is never danger from the radical himself until the structure and confidence of society has been undermined by the enthronement of destructive criticism.” Destructive criticism is precisely what these “critical theorists” specialised in, and today, in many parts of the humanities and social sciences, even in the most eminent institutions, the rot is so deep they are essentially a write-off.
Undoing a century of bad ideas is not the work of a few years, but Hoover's optimistic and pragmatic view of the redeeming merit of individualism unleashed is a bracing antidote to the gloom one may feel when surveying the contemporary scene.
2017
This equation not only correctly predicted the results measured in the laboratories, it avoided the ultraviolet catastrophe, as it predicted an absolute cutoff of the highest frequency radiation which could be emitted based upon an object's temperature. This meant that the absorption and re-emission of radiation in the closed oven could never run away to infinity because no energy could be emitted above the limit imposed by the temperature. Fine: the theory explained the measurements. But what did it mean? More than a century later, we're still trying to figure that out. Planck modeled the walls of the oven as a series of resonators, but unlike earlier theories in which each could emit energy at any frequency, he constrained them to produce discrete chunks of energy with a value determined by the frequency emitted. This had the result of imposing a limit on the frequency due to the available energy. While this assumption yielded the correct result, Planck, deeply steeped in the nineteenth century tradition of the continuum, did not initially suggest that energy was actually emitted in discrete packets, considering this aspect of his theory “a purely formal assumption.” Planck's 1900 paper generated little reaction: it was observed to fit the data, but the theory and its implications went over the heads of most physicists. In 1905, in his capacity as editor of Annalen der Physik, he read and approved the publication of Einstein's paper on the photoelectric effect, which explained another physics puzzle by assuming that light was actually emitted in discrete bundles with an energy determined by its frequency. But Planck, whose equation manifested the same property, wasn't ready to go that far. 
As late as 1913, he wrote of Einstein, “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.” Only in the 1920s did Planck fully accept the implications of his work as embodied in the emerging quantum theory.
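The equation under discussion is Planck's law for the spectral radiance of a black body at temperature T. It is not reproduced in the text above, so here is the standard form for reference:

```latex
B_\nu(T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu / k_B T} - 1}
```

For hν ≫ k_BT the exponential term suppresses emission, which is the high-frequency cutoff described above; for hν ≪ k_BT it reduces to the classical Rayleigh–Jeans result that produced the ultraviolet catastrophe.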
The equation for Planck's Law contained two new fundamental physical constants: Planck's constant (h) and Boltzmann's constant (k_B). (Boltzmann's constant was named in memory of Ludwig Boltzmann, the pioneer of statistical mechanics, who committed suicide in 1906. The constant was first introduced by Planck in his theory of thermal radiation.) Planck realised that these new constants, which related the worlds of the very large and very small, together with other physical constants such as the speed of light (c), the gravitational constant (G), and the Coulomb constant (k_e), allowed defining a system of units for quantities such as length, mass, time, electric charge, and temperature which were truly fundamental: derived from the properties of the universe we inhabit, and therefore comprehensible to intelligent beings anywhere in the universe. Most systems of measurement are derived from parochial anthropocentric quantities such as the temperature of somebody's armpit or the supposed distance from the north pole to the equator. Planck's natural units have no such dependencies, and when one does physics using them, equations become simpler and more comprehensible. The magnitudes of the Planck units are so far removed from the human scale they're unlikely to find any application outside theoretical physics (imagine speed limit signs expressed in a fraction of the speed of light, or road signs giving distances in Planck lengths of 1.62×10⁻³⁵ metres), but they reflect the properties of the universe and may indicate the limits of our ability to understand it (for example, it may not be physically meaningful to speak of a distance smaller than the Planck length or an interval shorter than the Planck time [5.39×10⁻⁴⁴ seconds]).
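The arithmetic behind the magnitudes quoted above is simple enough to check. A minimal sketch follows; the constant values are CODATA figures supplied here for illustration, not taken from the book:

```python
import math

# SI values of the constants of nature (CODATA, rounded)
hbar = 1.054571817e-34   # reduced Planck constant h/2π, J·s
G    = 6.67430e-11       # Newtonian gravitational constant, m³/(kg·s²)
c    = 2.99792458e8      # speed of light, m/s
k_B  = 1.380649e-23      # Boltzmann constant, J/K

# Planck units are built solely from these constants — no anthropocentric
# quantities (armpits, meridians) anywhere in the definitions.
planck_length = math.sqrt(hbar * G / c**3)   # ≈ 1.62e-35 m
planck_time   = math.sqrt(hbar * G / c**5)   # ≈ 5.39e-44 s
planck_mass   = math.sqrt(hbar * c / G)      # ≈ 2.18e-8 kg
planck_temp   = planck_mass * c**2 / k_B     # ≈ 1.42e32 K

for name, value, unit in [
    ("length", planck_length, "m"),
    ("time", planck_time, "s"),
    ("mass", planck_mass, "kg"),
    ("temperature", planck_temp, "K"),
]:
    print(f"Planck {name}: {value:.3e} {unit}")
```

The first two results round to the 1.62×10⁻³⁵ metre and 5.39×10⁻⁴⁴ second figures cited in the review.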
Planck's life was long and productive, and he enjoyed robust health (he continued his long hikes in the mountains into his eighties), but it was marred by tragedy. His first wife, Marie, died of tuberculosis in 1909. He outlived four of his five children. His son Karl was killed in 1916 in World War I. His two daughters, Grete and Emma, both died in childbirth, in 1917 and 1919. His son and close companion Erwin, who survived capture and imprisonment by the French during World War I, was arrested and executed by the Nazis in 1945 on suspicion of involvement in the Stauffenberg plot to assassinate Hitler. (There is no evidence Erwin was a part of the conspiracy, but he was anti-Nazi and knew some of those involved in the plot.) Planck was repulsed by the Nazis, especially after a private meeting with Hitler in 1933, but continued in his post as the head of the Kaiser Wilhelm Society until 1937. He thought of himself as a German patriot and never considered emigrating (and doubtless his being 75 years old when Hitler came to power was a consideration). He opposed and resisted the purging of Jews from German scientific institutions and the campaign against “Jewish science”, but when ordered to dismiss non-Aryan members of the Kaiser Wilhelm Society, he complied. When Heisenberg approached him for guidance, he said, “You have come to get my advice on political questions, but I am afraid I can no longer advise you. I see no hope of stopping the catastrophe that is about to engulf all our universities, indeed our whole country. … You simply cannot stop a landslide once it has started.” Planck's house near Berlin was destroyed in an Allied bombing raid in February 1944, and with it a lifetime of his papers, photographs, and correspondence. (He and his second wife Marga had evacuated to Rogätz in 1943 to escape the raids.)
As a result, historians have only limited primary sources from which to work, and the present book does an excellent job of recounting the life and science of a man whose work laid part of the foundations of twentieth century science.

Let an ultraintelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, largely because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment. Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come.
People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.
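The "straight lines on semi-log paper" exercise can be sketched in a few lines of arithmetic. Every number below is an illustrative assumption (brain-capacity estimates vary over many orders of magnitude, as noted above), not a figure from the book:

```python
import math

# Illustrative assumptions only — not estimates endorsed by the book.
brain_ops_per_sec = 1e16   # one (contested) estimate of brain capacity
pc_ops_per_sec    = 1e9    # rough personal computer capacity, mid-1990s
doubling_years    = 1.5    # Moore's-law-style doubling period

# With steady exponential growth, any fixed target is eventually crossed;
# the only question is how many doublings it takes.
doublings = math.log2(brain_ops_per_sec / pc_ops_per_sec)
years_to_cross = doublings * doubling_years
print(f"{doublings:.1f} doublings ≈ {years_to_cross:.0f} years to cross")
```

Under these (hypothetical) numbers the threshold falls a few decades into the twenty-first century, which is the shape of the argument the forecasters were making; changing the brain estimate by a factor of a thousand shifts the crossing by only about fifteen years, which is why the conclusion is insensitive to the contested brain-capacity figure.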
I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we do, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or in what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment? What will become of humans in a post-singularity world?
Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself? Are the qualia of machines not constrained? Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry? 
Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact of its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what? Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced? After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter? Deep questions, these. The book presents Chalmers' paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised.
A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another as a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”. Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing, without any notable resolution in more than two millennia, since societies became sufficiently wealthy to indulge them, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't. They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. One gets the sense that philosophers dislike experiments: they are a risk to the job security of those who make a living disputing questions their predecessors have been puzzling over at least since Athens.
Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and that consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument, citing evidence that simulating the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future even if computing power continues to grow at the rate of the last century. There is even a case made that the feasibility of a singularity makes it overwhelmingly probable that we're living, not in a top-level physical universe, but in a simulation run by post-singularity super-intelligences, and that they may be motivated to turn off our simulation before we reach our own singularity, which may threaten them. This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers, among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:
If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный [uncultured], and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.
Quintile | % Muslim | Countries
---|---|---
1 | 100–80 | 36
2 | 80–60 | 5
3 | 60–40 | 8
4 | 40–20 | 7
5 | 20–0 | 132
It was impossible. Nobody, in this time of depression, could find an order for a single ship…—let alone a flock of them. There was the staff. … He could probably get them together again at a twenty per cent rise in salary—if they were any good. But how was he to judge of that? The whole thing was impossible, sheer madness to attempt. He must be sensible, and put it from his mind. It would be damn good fun…

Three weeks later, acting through a solicitor to conceal his identity, Mr. Henry Warren, merchant banker of the City, became the owner of Barlows' Yard, purchasing it outright for the sum of £5500. Thus begins one of the most entertaining, realistic, and heartwarming tales of entrepreneurship (or perhaps “rentrepreneurship”) I have ever read. The fact that the author was himself founder and director of an aircraft manufacturing company during the depression, and well aware of the need to make payroll every week, get orders to keep the doors open even if they didn't make much business sense, and do whatever it takes so that the business can survive and meet its obligations to its customers, investors, employees, suppliers, and creditors, contributes to the authenticity of the tale. (See his autobiography, Slide Rule [July 2011], for details of his career.) Back in his office at the bank, there is the matter of the oil deal in Laevatia. After defaulting on their last loan, the Balkan country is viewed as a laughingstock and pariah in the City, but Warren has an idea. If they are to develop oil in the country, they will need to ship it, and how better to ship it than in their own ships, built in Britain on advantageous terms? Before long, he's off to the Balkans to do a deal in the Balkan manner (involving bejewelled umbrellas, cases of Worcestershire sauce, losing to the Treasury minister in the local card game at a dive in the capital, and working out a deal in which the dividends on the joint stock oil company will be secured by profits from the national railway).
And, there's the matter of the ships, which will be contracted for by Warren's bank. Then it's back to London to pitch the deal. Warren's reputation counts for a great deal in the City, and the preference shares are placed. That done, the Hawside Ship and Engineering Company Ltd. is registered with cut-out directors, and the process of awarding the contract for the tankers to it is undertaken. As Warren explains to Miss McMahon, who he has begun to see more frequently, once the order is in hand, it can be used to float shares in the company to fund the equipment and staff to build the ships. At least if the prospectus is sufficiently optimistic—perhaps too optimistic…. Order in hand, life begins to return to Sharples. First a few workers, then dozens, then hundreds. The welcome sound of riveting and welding begins to issue from the yard. A few boarded-up shops re-open, and then more. Then another order for a ship came in, thanks to arm-twisting by one of the yard's directors. With talk of Britain re-arming, there was the prospect of Admiralty business. There was still only one newspaper a week in Sharples, brought in from Newcastle and sold to readers interested in the football news. On one of his more frequent visits to the town, yard, and Miss McMahon, Warren sees the headline: “Revolution in Laevatia”. “This is a very bad one,” Warren says. “I don't know what this is going to mean.” But, one suspects, he did. As anybody who has been in the senior management of a publicly-traded company is well aware, what happens next is well-scripted: the shareholder suit by a small investor, the press pile-on, the back-turning by the financial community, the securities investigation, the indictment, and, eventually, the slammer. Warren understands this, and works diligently to ensure the Yard survives. There is a deep mine of wisdom here for anybody facing a bad patch.
“You must make this first year's accounts as bad as they ever can be,” he said. “You've got a marvellous opportunity to do so now, one that you'll never have again. You must examine every contract that you've got, with Jennings, and Grierson must tell the auditors that every contract will be carried out at a loss. He'll probably be right, of course—but he must pile it on. You've got to make reserves this year against every possible contingency, probable or improbable.” … “Pile everything into this year's loss, including a lot that really ought not to be there. If you do that, next year you'll be bound to show a profit, and the year after, if you've done it properly this year. Then as soon as you're showing profits and a decent show of orders in hand, get rid of this year's losses by writing down your capital, pay a dividend, and make another issue to replace the capital.”

Sage advice—I've been there. We had cash in the till, so we were able to do a stock buy-back at the bottom, but the principle is the same. Having been brought back to life by almost dying in a small town hospital, Warren is rejuvenated by his time in gaol. In November 1937, he is released and returns to Sharples where, amidst evidence of prosperity everywhere, he approaches the Yard to see a plaque on the wall with his face in profile: “HENRY WARREN — 1934 — HE GAVE US WORK”. Then he was off to see Miss McMahon. The only print edition currently available new is a very expensive hardcover. Used paperbacks are readily available: check under both Kindling and the original British title, Ruined City. I have linked to the Kindle edition above.
Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue. … That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain, and questions were raised in Parliament as to why the author of such an anti-Nazi work was interned: he was released in August, 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout.
After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”. This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that. This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available. Thanks to Glenn Beck for recommending this book.
He shook hands and gave me a friendly grin. You could call it nothing but a grin, for his lips were exceedingly thin and fleshless, and among his upper teeth a baby tooth too lingered on, conspicuous in its incongruity. But his eyes were cheerful and amused.

Both Laura and Enrico shared the ability to see things precisely as they were, then see beyond that to what they could become. In Rome, Fermi became head of the mathematical physics department at the Sapienza University of Rome, which his mentor, Corbino, saw as Italy's best hope to become a world leader in the field. He helped Fermi recruit promising physicists, all young and ambitious. They gave each other nicknames: ecclesiastical in nature, befitting their location in Rome. Fermi was dubbed Il Papa (The Pope), not only due to his leadership and seniority, but because he had already developed a reputation for infallibility: when he made a calculation or expressed his opinion on a technical topic, he was rarely if ever wrong. Meanwhile, Mussolini was increasing his grip on the country. In 1929, he announced the appointment of the first thirty members of the Royal Italian Academy, with Fermi among the laureates. In return for a lifetime stipend which would put an end to his financial worries, he would have to join the Fascist party. He joined. He did not take the Academy seriously and thought its comic opera uniforms absurd, but appreciated the money. By the 1930s, one of the major mysteries in physics was beta decay. When a radioactive nucleus decayed, it could emit one or more kinds of radiation: alpha, beta, or gamma. Alpha particles had been identified as the nuclei of helium, beta particles as electrons, and gamma rays as photons: like light, but with a much shorter wavelength and correspondingly higher energy.
When a given nucleus decayed by alpha or gamma, the emission always had the same energy: you could calculate the energy carried off by the particle emitted and compare it to the nucleus before and after, and everything added up according to Einstein's equation of E=mc². But something appeared to be seriously wrong with beta (electron) decay. Given a large collection of identical nuclei, the electrons emitted flew out with energies all over the map: from very low to an upper limit. This appeared to violate one of the most fundamental principles of physics: the conservation of energy. If the nucleus after plus the electron (including its kinetic energy) didn't add up to the energy of the nucleus before, where did the energy go? Few physicists were ready to abandon conservation of energy, but, after all, theory must ultimately conform to experiment, and if a multitude of precision measurements said that energy wasn't conserved in beta decay, maybe it really wasn't. Fermi thought otherwise. In 1933, he proposed a theory of beta decay in which the emission of a beta particle (electron) from a nucleus was accompanied by emission of a particle he called a neutrino, which had been proposed earlier by Pauli. In one leap, Fermi introduced a third force, alongside gravity and electromagnetism, which could transform one particle into another, plus a new particle: without mass or charge, and hence extraordinarily difficult to detect, which nonetheless was responsible for carrying away the missing energy in beta decay. But Fermi did not just propose this mechanism in words: he presented a detailed mathematical theory of beta decay which made predictions for experiments which had yet to be performed. He submitted the theory in a paper to Nature in 1934. 
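The kinematic argument above can be made concrete with a toy calculation. In a decay A → B + e with only two bodies in the final state, conservation of energy and momentum fix the electron's energy exactly, so identical nuclei must all emit electrons of identical energy; a third, unseen particle sharing the released energy is the only way to get a continuous spectrum. A minimal sketch (the masses are made-up round numbers in MeV with c = 1, not a real nuclide):

```python
def two_body_electron_energy(m_parent, m_daughter, m_e=0.511):
    """Total electron energy (MeV, c = 1) in the decay A -> B + e.

    With only two bodies in the final state, energy-momentum
    conservation in the parent's rest frame fixes E_e completely:
    E_e = (M_A^2 + m_e^2 - M_B^2) / (2 M_A).
    """
    return (m_parent**2 + m_e**2 - m_daughter**2) / (2 * m_parent)

# Toy numbers, not a real nuclide: parent 100 MeV, daughter 99 MeV.
e_fixed = two_body_electron_energy(100.0, 99.0)
print(f"two-body electron energy: {e_fixed:.4f} MeV")

# Add a (nearly massless) neutrino, A -> B + e + nu, and the released
# energy can be shared between the two leptons: the observed electron
# energy now spans a continuous range from its rest mass up to the
# two-body value, which is exactly the spectrum that puzzled physicists.
print(f"three-body spectrum: from 0.511 MeV up to {e_fixed:.4f} MeV")
```

The fixed two-body value is the spectrum's endpoint; everything below it is energy carried off by the neutrino, which is how Fermi's proposal rescued conservation of energy.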
The editors rejected it, saying “it contained abstract speculations too remote from physical reality to be of interest to the reader.” The rejection was quickly recognised as a blunder, and is now acknowledged as one of the most epic face-plants of peer review in theoretical physics. Fermi's theory rapidly became accepted as the correct model for beta decay. In 1956, the neutrino (actually, antineutrino) was detected with precisely the properties predicted by Fermi. This theory remained the standard explanation for beta decay until it was extended in the 1970s by the theory of the electroweak interaction, which is valid at higher energies than were available to experimenters in Fermi's lifetime. Perhaps soured on theoretical work by the initial rejection of his paper on beta decay, Fermi turned to experimental exploration of the nucleus, using the newly-discovered particle, the neutron. Unlike alpha particles emitted by the decay of heavy elements like uranium and radium, neutrons had no electrical charge and could penetrate the nucleus of an atom without being repelled. Fermi saw this as the ideal probe to examine the nucleus, and began to use neutron sources to bombard a variety of elements to observe the results. One experiment directed neutrons at a target of silver and observed the creation of isotopes of silver when the neutrons were absorbed by the silver nuclei. But something very odd was happening: the results of the experiment seemed to differ when it was run on a laboratory bench with a marble top compared to one of wood. What was going on? Many people might have dismissed the anomaly, but Fermi had to know. He hypothesised that the probability a neutron would interact with a nucleus depended upon its speed (or, equivalently, energy): a slower neutron would effectively have more time to interact than one which whizzed through more rapidly. Neutrons which were reflected by the wood table top were “moderated” and had a greater probability of interacting with the silver target.
Fermi quickly tested this supposition by using paraffin wax and water as neutron moderators and measuring the dramatically increased probability of interaction (or as we would say today, neutron capture cross section) when neutrons were slowed down. This is fundamental to the design of nuclear reactors today. It was for this work that Fermi won the Nobel Prize in Physics for 1938. By 1938, conditions for Italy's Jewish population had seriously deteriorated. Laura Fermi, despite her father's distinguished service as an admiral in the Italian navy, was now classified as a Jew, and therefore subject to travel restrictions, as were their two children. The Fermis went to their local Catholic parish, where they were (re-)married in a Catholic ceremony and their children baptised. With that paperwork done, the Fermi family could apply for passports and permits to travel to Stockholm to receive the Nobel prize. The Fermis locked their apartment, took a taxi, and boarded the train. Unbeknownst to the fascist authorities, they had no intention of returning. Fermi had arranged an appointment at Columbia University in New York. His Nobel Prize award was US$45,000 (US$789,000 today). If he returned to Italy with the sum, he would have been forced to convert it to lire and then only be able to take the equivalent of US$50 out of the country on subsequent trips. Professor Fermi may not have been much interested in politics, but he could do arithmetic. The family went from Stockholm to Southampton, and then on an ocean liner to New York, with nothing other than their luggage, prize money, and, most importantly, freedom. In his neutron experiments back in Rome, there had been curious results he and his colleagues never explained. When bombarding nuclei of uranium, the heaviest element then known, with neutrons moderated by paraffin wax, they had observed radioactive results which didn't make any sense. 
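The speed dependence Fermi hypothesised is, for low-energy neutron capture, well described by the empirical “1/v law”: the capture cross section scales inversely with the neutron's speed. A toy illustration of why moderation matters so much (the numbers here are round illustrative values, not measured data for silver):

```python
def capture_cross_section(v, sigma_thermal=60.0, v_thermal=2200.0):
    """1/v law for low-energy neutron capture.

    Returns the cross section in barns for a neutron of speed v (m/s),
    scaled from an assumed cross section at the conventional thermal
    reference speed of 2200 m/s. The 60-barn figure is an illustrative
    round number, not a measured value for silver.
    """
    return sigma_thermal * v_thermal / v

fast = capture_cross_section(2.0e7)      # an unmoderated, fast neutron
thermal = capture_cross_section(2200.0)  # after slowing in a moderator
print(f"fast neutron:    {fast:.4f} barns")
print(f"thermal neutron: {thermal:.1f} barns")
print(f"moderation boosts capture by a factor of ~{thermal / fast:,.0f}")
```

Slowing a neutron from fast to thermal speeds thus increases its chance of capture by orders of magnitude, which is exactly why paraffin wax and water made such a dramatic difference on Fermi's bench.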
They expected to create new elements, heavier than uranium, but what they saw didn't agree with the expectations for such elements. Another mystery…in those heady days of nuclear physics, there was one wherever you looked. At just about the time Fermi's ship was arriving in New York, news arrived from Germany about what his group had observed, but not understood, four years before. Slow neutrons, which Fermi's group had pioneered, were able to split, or fission, the nucleus of uranium into two lighter elements, releasing not only a large amount of energy, but additional neutrons which might be able to propagate the process into a “chain reaction”, producing either a large amount of energy or, perhaps, an enormous explosion. As one of the foremost researchers in neutron physics, Fermi immediately saw that his new life in America was about to take a direction he'd never anticipated. By 1941, he was conducting experiments at Columbia with the goal of evaluating the feasibility of creating a self-sustaining nuclear reaction with natural uranium, using graphite as a moderator. In 1942, he was leading a project at the University of Chicago to build the first nuclear reactor. On December 2nd, 1942, Chicago Pile-1 went critical, producing all of half a watt of power. But the experiment proved that a nuclear chain reaction could be initiated and controlled, and it paved the way for both civil nuclear power and plutonium production for nuclear weapons. At the time he achieved one of the first major milestones of the Manhattan Project, Fermi's classification as an “enemy alien” had been removed only two months before. He and Laura Fermi did not become naturalised U.S. citizens until July of 1944.
Such was the breakneck pace of the Manhattan Project that even before the critical test of the Chicago pile, the DuPont company was already at work planning for the industrial scale production of plutonium at a facility which would eventually be built at the Hanford site near Richland, Washington. Fermi played a part in the design and commissioning of the X-10 Graphite Reactor in Oak Ridge, Tennessee, which served as a pathfinder and began operation in November, 1943, operating at a power level which was increased over time to 4 megawatts. This reactor produced the first substantial quantities of plutonium for experimental use, revealing the plutonium-240 contamination problem which necessitated the use of implosion for the plutonium bomb. Concurrently, he contributed to the design of the B Reactor at Hanford, which went critical in September 1944 and, running at 250 megawatts, produced the plutonium for the Trinity test and the Fat Man bomb dropped on Nagasaki. During the war years, Fermi divided his time among the Chicago research group, Oak Ridge, Hanford, and the bomb design and production group at Los Alamos. As General Leslie Groves, head of the Manhattan Project, had forbidden the top atomic scientists from travelling by air, “Henry Farmer”, his wartime alias, spent much of his time riding the rails, accompanied by a bodyguard. As plutonium production ramped up, he increasingly spent his time with the weapon designers at Los Alamos, where Oppenheimer appointed him associate director and put him in charge of “Division F” (for Fermi), which acted as a consultant to all of the other divisions of the laboratory. Fermi believed that while scientists could make major contributions to the war effort, how their work and the weapons they created were used were decisions which should be made by statesmen and military leaders.
When appointed in May 1945 to the Interim Committee charged with determining how the fission bomb was to be employed, he largely confined his contributions to technical issues such as weapons effects. He joined Oppenheimer, Compton, and Lawrence in the final recommendation that “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use.” On July 16, 1945, Fermi witnessed the Trinity test explosion in New Mexico at a distance of ten miles from the shot tower. A few seconds after the blast, he began to tear little pieces of paper from a sheet and drop them toward the ground. When the shock wave arrived, he paced out the distance it had blown them and rapidly computed the yield of the bomb as around ten kilotons of TNT. Nobody familiar with Fermi's reputation for making off-the-cuff estimates of physical phenomena was surprised that his calculation, done within a minute of the explosion, agreed within the margin of error with the actual yield of 20 kilotons, determined much later. After the war, Fermi wanted nothing more than to return to his research. He opposed extending wartime secrecy to postwar nuclear research, but, unlike some other prominent atomic scientists, did not involve himself in public debates over nuclear weapons and energy policy. When he returned to Chicago, he was asked by a funding agency simply how much money he needed. From his experience at Los Alamos he wanted both a particle accelerator and a big computer. By 1952, he had both, and began to produce results in scattering experiments which hinted at the new physics which would be uncovered throughout the 1950s and '60s. He continued to spend time at Los Alamos, and between 1951 and 1953 worked two months a year there, contributing to the hydrogen bomb project and analysis of Soviet atomic tests. Everybody who encountered Fermi remarked upon his talents as an explainer and teacher.
Seven of his students (six from Chicago and one from Rome) would go on to win Nobel Prizes in physics, in both theory and experiment. He became famous for posing “Fermi problems”, often at lunch, exercising the ability to make and justify order of magnitude estimates of difficult questions. When Freeman Dyson met with Fermi to present a theory he and his graduate students had developed to explain the scattering results Fermi had published, Fermi asked him how many free parameters Dyson had used in his model. Upon being told the number was four, he said, “I remember my old friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Chastened, Dyson soon concluded his model was a blind alley. After returning from a trip to Europe in the fall of 1954, Fermi, who had enjoyed robust good health all his life, began to suffer from problems with digestion. Exploratory surgery found metastatic stomach cancer, for which no treatment was possible at the time. He died at home on November 28, 1954, two months past his fifty-third birthday. He had made a Fermi calculation of how long to rent the hospital bed in which he died: the rental expired two days after he did. There was speculation that Fermi's life may have been shortened by his work with radiation, but there is no evidence of this. He was never exposed to unusual amounts of radiation in his work, and none of his colleagues, who did the same work at his side, experienced any medical problems. This is a masterful biography of one of the singular figures in twentieth century science. The breadth of his interests and achievements is reflected in the list of things named after Enrico Fermi. Given the hyper-specialisation of modern science, it is improbable we will ever again see his like.
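The classic specimen of a “Fermi problem” is estimating the number of piano tuners in Chicago from a chain of rough factors. A sketch of the style, with every input an order-of-magnitude guess, which is precisely the point:

```python
# Classic Fermi problem: how many piano tuners are there in Chicago?
# Each input is a defensible rough guess; the discipline lies in
# making each factor plausible and letting the errors partially cancel.
population = 3_000_000            # people in Chicago
people_per_household = 3
households_with_piano = 1 / 10    # fraction of households owning a piano
tunings_per_piano_per_year = 1
tunings_per_day_per_tuner = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_day_per_tuner * working_days_per_year
tuners = tunings_needed / tuner_capacity
print(f"~{tuners:.0f} piano tuners")   # an order-of-magnitude answer
```

The answer is not meant to be exact, only to land within a factor of a few of reality, which such chains of honest guesses usually do.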
Image by Wikipedia user Putinovac, licensed under the Creative Commons Attribution 3.0 Unported license.
Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping they didn't pick a night in mid-August for their vigil! The ephors could also summon the council of elders, or gerousia, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders. All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might take for the work of a Golden Age author, paid by the word, making ends meet by inventing fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S.
Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage. Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as many of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text. In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.
Three hidden keys open three secret gates,
Wherein the errant will be tested for worthy traits,
And those with the skill to survive these straits,
Will reach The End where the prize awaits.

The prize is Halliday's entire fortune and, with it, super-user control of the principal medium of human interaction, business, and even politics. Before fading out, Halliday shows three keys: copper, jade, and crystal, which must be obtained to open the three gates. Only after passing through the gates and passing the tests within them, will the intrepid paladin obtain the Easter egg hidden within the OASIS and gain control of it. Halliday provided a link to Anorak's Almanac, more than a thousand pages of journal entries made during his life, many of which reflect his obsession with 1980s popular culture, science fiction and fantasy, videogames, movies, music, and comic books. The clues to finding the keys and the Egg were widely believed to be within this rambling, disjointed document. Given the stakes, and the contest's being open to anybody in the OASIS, what immediately came to be called the Hunt became a social phenomenon, all-consuming to some. Egg hunters, or “gunters”, immersed themselves in Halliday's journal and every pop culture reference within it, however obscure. All of this material was freely available on the OASIS, and gunters memorised every detail of anything which had caught Halliday's attention. As time passed, and nobody succeeded in finding even the copper key (Halliday's memorial site displayed a scoreboard of those who achieved goals in the Hunt, so far blank), many lost interest in the Hunt, but a dedicated hard core persisted, often to the exclusion of all other diversions. Some gunters banded together into “clans”, some very large, agreeing to exchange information and, if one found the Egg, to share the proceeds with all members. More sinister were the activities of Innovative Online Industries—IOI—a global Internet and communications company which controlled much of the backbone that underlay the OASIS.
It had assembled a large team of paid employees, backed by the research and database facilities of IOI, with their sole mission to find the Egg and turn control of the OASIS over to IOI. These players, all with identical avatars and names consisting of their six-digit IOI employee numbers, all of which began with the digit “6”, were called “sixers” or, more often in the gunter argot, “Sux0rz”. Gunters detested IOI and the sixers, because it was no secret that if they found the Egg, IOI's intention was to close the architecture of the OASIS, begin to charge fees for access, plaster everything with advertising, destroy anonymity, snoop indiscriminately, and use their monopoly power to put their thumb on the scale of all forms of communication including political discourse. (Fortunately, that couldn't happen to us with today's enlightened, progressive Silicon Valley overlords.) But IOI's financial resources were such that whenever a rare and powerful magical artefact (many of which had been created by Halliday in the original OASIS, usually requiring the completion of a quest to obtain, but freely transferrable thereafter) came up for auction, IOI was usually able to outbid even the largest gunter clans and add it to their arsenal. Wade Watts, a lone gunter whose avatar is named Parzival, became obsessed with the Hunt on the day of Halliday's death, and, years later, devotes almost every minute of his life not spent sleeping or in school (like many, he attends school in the OASIS, and is now in the last year of high school) on the Hunt, reading and re-reading Anorak's Almanac, reading, listening to, playing, and viewing everything mentioned therein, to the extent he can recite the dialogue of the movies from memory. He makes copious notes in his “grail diary”, named after the one kept by Indiana Jones. His friends, none of whom he has ever met in person, are all gunters who congregate on-line in virtual reality chat rooms such as that run by his best friend, Aech. 
Then, one day, bored to tears and daydreaming in Latin class, Parzival has a flash of insight. Putting together a message buried in the Almanac that he and many other gunters had discovered but failed to understand, with a bit of Latin and his encyclopedic knowledge of role playing games, he decodes the clue and, after a demanding test, finds himself in possession of the Copper Key. His name, alone, now appears at the top of the scoreboard, with 10,000 points. The path to the First Gate was now open. Discovery of the Copper Key was a sensation: suddenly Parzival, a humble level 10 gunter, is a worldwide celebrity (although his real identity remains unknown, as he refuses all media offers which would reveal or compromise it). Knowing that the key can be found re-energises other gunters, not to speak of IOI, and Parzival's footprints in the OASIS are scrupulously examined for clues to his achievement. (Finding a key and opening a gate does not render it unavailable to others. Those who subsequently pass the tests will receive their own copies of the key, although there is a point bonus for finding it first.) So begins an epic quest by Parzival and other gunters, contending with the evil minions of IOI, whose potential gain is so high and ethics so low that the risks may extend beyond the OASIS into the real world. For the reader, it is a nostalgic romp through every aspect of the popular culture of the 1980s: the formative era of personal computing and gaming. The level of detail is just staggering: this may be the geekiest nerdfest ever published. Heck, there's even a reference to an erstwhile Autodesk employee! The only goof I noted is a mention of the “screech of a 300-baud modem during the log-in sequence”. Three hundred baud modems did not have the characteristic squawk and screech sync-up of faster modems which employ trellis coding. 
While there are a multitude of references to details which will make people who were there, then, smile, readers who were not immersed in the 1980s and/or less familiar with its cultural minutiæ can still enjoy the challenges, puzzles solved, intrigue, action, and epic virtual reality battles which make up the chronicle of the Hunt. The conclusion is particularly satisfying: there may be a bigger world than even the OASIS. A movie based upon the novel, directed by Steven Spielberg, is scheduled for release in March 2018.
The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
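The conversions in the parenthetical above are easy to check (assuming roughly 6.1 gigajoules of energy per barrel of oil, and an albedo of about 0.3 for the solar figure; both are round working values):

```python
import math

EJ = 1e18           # one exajoule, in joules
barrel = 6.1e9      # energy per barrel of oil, J (rough working value)
print(f"1 EJ ~ {EJ / barrel / 1e6:.0f} million barrels of oil")

# Solar energy absorbed by the Earth per year: the Sun's flux times
# the Earth's intercepting disc, less the fraction reflected to space.
S = 1361.0          # solar constant, W/m^2
R = 6.371e6         # Earth radius, m
albedo = 0.3        # fraction reflected straight back to space
year = 3.156e7      # seconds per year
absorbed = S * (1 - albedo) * math.pi * R**2 * year
print(f"absorbed sunlight ~ {absorbed / EJ / 1e6:.1f} million EJ/year")
```

This reproduces the figures quoted: about 164 million barrels per exajoule, and a bit under 4 million exajoules of absorbed sunlight per year, some seven thousand times present human consumption.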
Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂? Well, that's interesting. CO₂ is a trace gas in the atmosphere (less than a fiftieth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere as plants take it up as they grow and then release it after they die. The other two processes are much slower. 
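The maize figure is the kind of claim that invites a quick sanity check. Assume a vigorous crop canopy in full sun fixes on the order of 40 micromoles of CO₂ per square metre per second (a textbook-scale value assumed here, not taken from the book) and ask how long the CO₂ in the metre of air above each square metre of field would last:

```python
# Rough sanity check of the "five minutes" claim for maize.
# All inputs are round assumed values, not measurements.
air_molar_density = 41.6   # mol of air per m^3 at ~20 C and 1 atm
co2_fraction = 400e-6      # ~400 ppm CO2 by mole fraction
uptake = 40e-6             # mol CO2 fixed per m^2 per second (assumed)

# Moles of CO2 in a column 1 m deep over each square metre of field:
co2_in_column = air_molar_density * co2_fraction * 1.0
seconds = co2_in_column / uptake
print(f"CO2 in a 1 m column exhausted in ~{seconds / 60:.0f} minutes")
```

The estimate lands within a factor of two of the five minutes claimed, which is as close as agreement gets for a figure of this kind; without convection constantly resupplying CO₂ from above, the crop would indeed starve itself.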
An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth), and increases the root to shoot ratio of the plants, tending to remove CO₂ from the atmosphere where it will be recycled more slowly into the biosphere. But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels has released, over a time scale negligible on the geological scale, a quantity of carbon into the atmosphere far beyond the ability of natural processes to recycle. For the last 800,000 years or so (the span of the ice core record), the CO₂ concentration in the atmosphere has varied between 280 parts per million in interglacials (warm periods) and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentrations are known with reasonable precision over this span due to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; they both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet). But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age. Atmospheric CO₂ is now around 410 parts per million, a value not seen in millions of years, and it's rising at a rate of 2 parts per million every year, and accelerating as global use of fossil fuels increases.
This is a situation which, in the ecosystem, is not only unique in the human experience; it's something which has never happened since the emergence of complex multicellular life in the Cambrian explosion. What does it all mean? What are the consequences? And what, if anything, should we do about it? (Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.) We have been increasing the fraction of CO₂ in the atmosphere to levels unseen in the history of complex terrestrial life. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched to up to 1000 parts per million experience no deleterious consequences, and this is more than twice the current fraction in the Earth's atmosphere, and at the current rate of growth, won't be reached for three centuries.
The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy absorbed by the Earth from the Sun and that it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity are negligible at this scale.) Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (Due to the Stefan-Boltzmann law, the energy emitted from a black body rises as the fourth power of its temperature, so it doesn't take a large increase in temperature [measured in kelvins] to radiate away the extra energy.) So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait—it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth, and reduce the amount of solar radiation it absorbs. How does all of this interact?
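The balance described above can be written down directly: setting absorbed solar power equal to black-body emission gives the Earth's effective radiating temperature, and the fourth-power law shows why only a small temperature change is needed to rebalance the books. (This is a zero-dimensional sketch which ignores the atmosphere entirely, hence the famously chilly answer.)

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0        # solar constant, W/m^2
ALBEDO = 0.3      # fraction of sunlight reflected straight to space

# Balance: S(1 - a)/4 = sigma * T^4.  The factor of 4 spreads the
# intercepted disc of sunlight over the whole spherical surface.
absorbed = S * (1 - ALBEDO) / 4
T = (absorbed / SIGMA) ** 0.25
print(f"effective radiating temperature: {T:.0f} K")
# Well below the actual ~288 K surface mean: the gap is the
# greenhouse effect of the atmosphere, which this sketch omits.

# Fourth-power sensitivity: a 1% rise in absorbed power is
# rebalanced by only about a 0.25% rise in temperature.
T2 = (absorbed * 1.01 / SIGMA) ** 0.25
print(f"1% more absorbed power -> temperature up {100 * (T2 / T - 1):.2f}%")
```

The point of the second calculation is the one made in the text: because emission grows as T⁴, the climate does not need a large temperature excursion to radiate away extra trapped energy; the hard part is everything this sketch leaves out.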
Well, that's where the global climate models get into the act, and everything becomes very fuzzy in a vast panel of twiddle knobs, all of which interact with one another and few of which are based upon unambiguous measurements of the climate system. Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle they see on their smartphones of the people who got to the industrial age a century before them, and technocratic rule of the masses by their unelected self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind! But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂ and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond? Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly shown to cool the Earth. 
A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect sufficient sunlight to reduce or even cancel or reverse the effects of global warming. The cost of such a programme would be affordable by a benevolent tech billionaire or wannabe Bond benefactor (“Greenfinger”), and could be implemented in a couple of years. The effect of the veil project would be much less than a volcanic eruption, and would be imperceptible other than making sunsets a bit more colourful. Marine cloud brightening. By injecting finely-dispersed salt water from the ocean into the atmosphere, nucleation sites would augment the reflectivity of low clouds above the ocean, increasing the reflectivity (albedo) of the Earth. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather. Carbon sequestration. What about taking the carbon dioxide out of the atmosphere? This sounds like a great idea, and appeals to clueless philanthropists like Bill Gates who are ignorant of thermodynamics, but taking out a trace gas is really difficult and expensive. The best place to capture it is where it's densest, such as the flue of a power plant, where it's around 10%. The technology to do this, “carbon capture and sequestration” (CCS) exists, but has not yet been deployed on any full-scale power plant. Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into marine organisms, it is removed from the biosphere for tens to hundreds of millions of years. What constrains how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? It's iron, which is rare in the oceans. 
A calculation made in the 1990s suggested that if you added one tonne of iron to the ocean, the bloom of organisms it would spawn would suck a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be maybe a hundred times less, but then iron is cheap and it doesn't cost much to dump it from ships. Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while monitoring their effects on the climate, and easily and quickly reversed should they be found to have unintended detrimental consequences. But when thinking about affecting something on the scale of the climate of a planet, there's a tendency to think big, and a number of grand scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to manage the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are considered financially infeasible, but if the cost of space transportation falls dramatically in the future, might become increasingly attractive. It's worth observing that the cost estimates for such alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists. We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. 
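The iron-fertilisation leverage described above is worth putting in numbers; the hundredfold reduction is from the review, and treating it as exactly 100 is my simplification:

```python
# Leverage of ocean iron fertilisation: tonnes of atmospheric carbon
# drawn down per tonne of iron added, per the estimates cited above.
estimate_1990s = 100_000                  # t carbon per t iron
estimate_revised = estimate_1990s / 100   # later experiments: ~100x less

for label, ratio in (("1990s estimate", estimate_1990s),
                     ("revised estimate", estimate_revised)):
    print(f"{label}: 1 t iron -> {ratio:,.0f} t carbon")
```

Even at the revised figure, a tonne of cheap iron would still draw down on the order of a thousand tonnes of carbon, which is why the idea keeps resurfacing.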
As we cope with the consequences of that human action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and provides an alternative to restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions such as vast fields of unsightly bird shredders. But as David Deutsch says in The Beginning of Infinity, “Problems are inevitable”; but “Problems are soluble.” It is inevitable that the large scale geoengineering which is the foundation of our developed society—taking over the Earth's natural carbon and nitrogen cycles—will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning these pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.
Thus begins an adventure in which Jazz has to summon all of her formidable intellect, cunning, and resources, form expedient alliances with unlikely parties, solve a technological mystery, balance honour with being an outlaw, and discover the economic foundation of Artemis, which is nothing like it appears from the surface. All of this is set in a richly textured and believable world which we learn about as the story unfolds: Weir is a master of “show, don't tell”. And it isn't just a page-turning thriller (although that it most certainly is); it's also funny, and in the right places and amount. This is where I'd usually mention technical goofs and quibbles. I'll not do that because I didn't find any. The only thing I'm not sure about is Artemis' using a pure oxygen atmosphere at 20% of Earth sea-level pressure. This works for short- and moderate-duration space missions, and was used in the U.S. Mercury, Gemini, and Apollo missions. For exposure to pure oxygen longer than two weeks, a phenomenon called absorption atelectasis can develop, which is the collapse of the alveoli in the lungs due to complete absorption of the oxygen gas (see this NASA report [PDF]). The presence of a biologically inert gas such as nitrogen, helium, argon, or neon will keep the alveoli inflated and prevent this phenomenon. The U.S. Skylab missions used an atmosphere of 72% oxygen and 28% nitrogen to avoid this risk, and the Soviet Salyut and Mir space stations used a mix of nitrogen and oxygen with between 21% and 40% oxygen. The Space Shuttle and International Space Station use sea-level atmospheric pressure with 21% oxygen and the balance nitrogen. The effects of reduced pressure on the boiling point of water and the fire hazard of pure oxygen even at reduced pressure are accurately described, but I'm not sure the physiological effects of a pure oxygen atmosphere for long-term habitation have been worked through.
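The reason a pure-oxygen atmosphere at 20% of sea-level pressure is breathable at all comes down to partial pressures; a minimal check, assuming standard sea-level pressure:

```python
# Partial pressure of oxygen: pure O2 at 20% of sea-level pressure
# delivers almost exactly the O2 the lungs get on Earth; the missing
# 80% of the pressure is the inert nitrogen discussed above.
SEA_LEVEL_KPA = 101.325  # standard atmospheric pressure

earth_o2 = 0.21 * SEA_LEVEL_KPA           # 21% O2 at full pressure
artemis_o2 = 1.00 * 0.20 * SEA_LEVEL_KPA  # 100% O2 at 20% pressure
print(f"Earth O2 partial pressure:   {earth_o2:.1f} kPa")
print(f"Artemis O2 partial pressure: {artemis_o2:.1f} kPa")
```

The two figures differ by only about a kilopascal, which is why such atmospheres work for weeks; the long-term alveolar-collapse concern is a separate matter, as noted above.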
Nitpicking aside, this is a techno-thriller which is also an engaging human story, set in a perfectly plausible and believable future where not only the technology but the economics and social dynamics work. We may just be welcoming another grand master to the pantheon.

“I'm sorry but this isn't my thing. You'll have to find someone else.”
“I'll offer you a million slugs.”
“Deal.”
The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don't worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn't malevolence, but competence”. It's unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don't align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes. But isn't this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that's interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so). My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.
Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands which people confidently said (often not long ago), “a computer will never…”, are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red since when the water surges into it, it's game over: machines will now be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today. Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants? First of all, let's assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That's complicated. Here are the alternatives discussed by the author. I've deliberately given just the titles without summaries to stimulate your imagination about their consequences.
I'm not sure this chart supports the argument that technology has been the principal cause for the stagnation of income among the bottom 90% of households since around 1970. There wasn't any major technological innovation which affected employment that occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don't have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940. Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it. Observation and our understanding of the chemistry underlying the origin of life is compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it's entirely possible it may have been an event so improbable we'll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. 
We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents.”). But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn't matter; like the bacteria and ants, we will have done our part. The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there. In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.
Goals of Communism
The tragedy of World War II—a preventable conflict—was that sixty million people had perished to confirm that the United States, the Soviet Union, and Great Britain were far stronger than the fascist powers of Germany, Japan, and Italy after all—a fact that should have been self-evident and in no need of such a bloody laboratory, if not for prior British appeasement, American isolationism, and Russian collaboration.

At 720 pages, this is not a short book (the main text is 590 pages; the rest are sources and end notes), but there is so much wisdom, and so many startling insights, among those pages that you will be amply rewarded for the time you spend reading them.
When Edmond Kirsch is assassinated moments before playing his presentation which will answer the Big Questions, Langdon and Vidal launch into a quest to discover the password required to release the presentation to the world. The murder of two religious leaders to whom Kirsch revealed his discoveries in advance of their public disclosure stokes the media frenzy surrounding Kirsch and his presentation, and spawns conspiracy theories about dark plots to suppress Kirsch's revelations which may involve religious figures and the Spanish monarchy. After perils, adventures, conflict, and clues hidden in plain sight, Startling Revelations leave Langdon Stunned and Shaken but Cautiously Hopeful for the Future. When the next Dan Brown novel comes along, see how well it fits the template. This novel will appeal to people who like this kind of thing: if you enjoyed the last four, this one won't disappoint. If you're looking for plausible speculation on the science behind the big questions or the technological future of humanity, it probably will. Now that I know how to crank them out, I doubt I'll buy the next one when it appears.

Villain: Edmond Kirsch, billionaire computer scientist and former student of Robert Langdon. Made his fortune from breakthroughs in artificial intelligence, neuroscience, and robotics.
Megalomaniac scheme: “end the age of religion and usher in an age of science”.
Buzzword technologies: artificial general intelligence, quantum computing.
Big Questions: “Where did we come from?”, “Where are we going?”.
Religious adversary: The Palmarian Catholic Church.
Plucky female companion: Ambra Vidal, curator of the Guggenheim Museum in Bilbao (Spain) and fiancée of the crown prince of Spain.
Hero or villain? Details would be a spoiler but, as always, there is one.
Contemporary culture tie-in: social media, an InfoWars-like site called ConspiracyNet.com.
MacGuffins: the 47-character password from Kirsch's favourite poem (but which?), the mysterious “Winston”, “The Regent”.
Exotic and picturesque locales: The Guggenheim Museum Bilbao, Casa Milà and the Sagrada Família in Barcelona, Valle de los Caídos near Madrid.
Enigmatic symbol: a typographical mark one must treat carefully in HTML.
There's a growing fraternity of independent, self-published authors busy changing the culture one story at a time with their tales of adventure and heroism. Here are a few of my more recent discoveries.

With the social justice crowd doing their worst to wreck science fiction, the works of any of these authors are a great way to remember why you started reading science fiction in the first place.
The burgomaster was a personage of fifty years, neither
fat nor thin, neither short nor tall, neither old nor
young, neither ruddy nor pale, neither cheerful nor sad,
neither contented nor bored, neither energetic nor
listless, neither proud nor humble, neither good nor
wicked, neither generous nor miserly, neither brave nor
cowardly, neither too much nor too little (ne quid
nimis), a man moderate in everything; but from the
invariable slowness of his movements, from his slightly
drooping lower jaw, from his immovably raised upper
eyelid, from his brow smooth as a plate of yellow copper
and without a wrinkle, from his scarcely prominent
muscles, a physiognomist would easily have recognised
that Burgomaster van Tricasse was phlegm personified.
Imagine how startled this paragon of moderation and peace
must have been when the city's policeman—he whose
job has been at risk for decades—pounds on the door
and, when admitted, reports that the city's doctor and
lawyer, visiting the house of scientist Doctor Ox, had
gotten into an argument. They had been talking
politics! Such a thing had not happened in
Quiquendone in over a century. Words were exchanged
that might lead to a duel!
Who is this Doctor Ox? A recent arrival in Quiquendone,
he is a celebrated scientist, considered a leader in the
field of physiology. He stands out against the other
inhabitants of the city. Of no well-defined nationality,
he is a genuine eccentric, self-confident, ambitious, and
known even to smile in public. He and his laboratory
assistant Gédéon Ygène work on
their experiments and never speak of them to others.
Shortly after arriving in Quiquendone, Dr Ox approached the
burgomaster and city council with a proposal: to illuminate
the city and its buildings, not with the new-fangled electric
lights which other cities were adopting, but with a new
invention of his own, oxy-hydric gas. Using powerful
electric batteries he invented, water would be decomposed
into hydrogen and oxygen gas, stored separately, then
delivered in parallel pipes to individual taps where they
would be combined and burned, producing a light much brighter
and purer than electric lights, not to mention conventional
gaslights burning natural or manufactured gas. In storage
and distribution, hydrogen and oxygen would be strictly
segregated, as any mixing prior to the point of use ran the
risk of an explosion. Dr Ox offered to pay all of the
expenses of building the gas production plant, storage
facilities, and installation of the underground pipes and
light fixtures in public buildings and private residences.
After a demonstration of oxy-hydric lighting, city fathers
gave the go-ahead for the installation, presuming Dr Ox was
willing to assume all the costs in order to demonstrate his
invention to other potential customers.
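The chemistry behind Dr Ox's parallel pipes is ordinary electrolysis; here is a small sketch of the stoichiometry, taking one litre of water as an assumed example:

```python
# Electrolysis stoichiometry: 2 H2O -> 2 H2 + O2, so the hydrogen
# pipe carries twice the volume of gas the oxygen pipe does.
MOLAR_MASS_H2O = 18.015  # g/mol

moles_water = 1000.0 / MOLAR_MASS_H2O  # one litre of water (~1000 g)
moles_h2 = moles_water                 # one H2 per H2O
moles_o2 = moles_water / 2.0           # one O2 per two H2O
print(f"1 L water -> {moles_h2:.1f} mol H2, {moles_o2:.1f} mol O2")
print(f"gas volume ratio H2:O2 = {moles_h2 / moles_o2:.0f}:1")
```

That fixed 2:1 ratio is also why the story's insistence on keeping the two gases strictly segregated until the point of use matters: recombined prematurely, they form an explosive mixture.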
Over succeeding days and weeks, things before unimagined,
indeed, unimaginable begin to occur. On a visit to
Dr Ox, the burgomaster himself and his best friend
city council president Niklausse find themselves in—dare
it be said—a political argument. At
the opera house, where musicians and singers usually so
moderate the tempo that works are performed over multiple
days, one act per night, a performance of Meyerbeer's
Les Huguenots
becomes frenetic and incites the audience to what
can only be described as a riot. A ball at the house of
the banker becomes a whirlwind of sound and motion.
And yet, each time, after people go home, they return to
normal and find it difficult to believe what they did
the night before.
Over time, the phenomenon, at first only seen in large
public gatherings, begins to spread into individual homes
and private lives. You would think the placid Flemish
had been transformed into the hotter tempered denizens of
countries to the south. Twenty newspapers spring up, each
advocating its own radical agenda. Even plants start growing
to enormous size, and cats and dogs, previously as reserved
as their masters, begin to bare fangs and claws. Finally,
a mass movement rises to avenge the honour of
Quiquendone for an injury committed in the year 1185 by
a cow from the neighbouring town of Virgamen.
What was happening? Whence the madness? What would be the
result when the citizens of Quiquendone, armed with everything
they could lay their hands on, marched upon their neighbours?
This is a classic “puzzle story”, seasoned with a
mad scientist of whom the author allows us occasional candid
glimpses as the story unfolds. You'll probably solve the puzzle
yourself long before the big reveal at the end. Jules Verne,
always anticipating the future, foresaw this: the penultimate
chapter is titled (my translation), “Where the intelligent
reader sees that he guessed correctly, despite every precaution
by the author”. The enjoyment here is not so much the
puzzle but rather Verne's language and delicious description
of characters and events, which are up to the standard of his
better-known works.
This is “minor Verne”, written originally for a
public reading and then published in a newspaper in Amiens,
his adopted home. Many believed that in Quiquendone he
was satirising Amiens and his placid neighbours.
Doctor Ox would reappear in the work of Jules Verne in
his 1882 play Voyage
à travers l'impossible
(Journey
Through the Impossible), a work which, after 97
performances in Paris, was believed lost until a single
handwritten manuscript was found in 1978. Dr Ox reprises his
role as mad scientist, joining other characters from Verne's
novels on their own extraordinary voyages. After that work,
Doctor Ox disappears from the world. But when I regard the
frenzied serial madness loose today, from “bathroom
equality”, tearing down Civil War monuments, masked
“Antifa” blackshirts beating up people in the
streets, the “refugee” racket, and Russians under
every bed, I sometimes wonder if he's taken up residence in
today's United States.
An English translation is available.
Verne's reputation has often suffered due to poor English
translations of his work; I have not read this edition and don't
know how good it is. Warning: the description of this book
at Amazon contains a huge spoiler for the central puzzle of
the story.
Things didn't look promising. Almost everything we know about the universe comes from observations of electromagnetic radiation: light, radio waves, X-rays, etc., with a little bit more from particles (cosmic rays and neutrinos). But the cosmic background radiation forms an impenetrable curtain behind which we cannot observe anything via the electromagnetic spectrum, and it dates from around 380,000 years after the Big Bang. The era of inflation was believed to have ended 10⁻³² seconds after the Bang; considerably earlier. The only “messenger” which could possibly have reached us from that era is gravitational radiation. We've just recently become able to detect gravitational radiation from the most violent events in the universe, but no conceivable experiment would be able to detect this signal from the baby universe.
So is it hopeless? Well, not necessarily…. The cosmic background radiation is a snapshot of the universe as it existed 380,000 years after the Big Bang, and only a few years after it was first detected, it was realised that gravitational waves from the very early universe might have left subtle imprints upon the radiation we observe today. In particular, gravitational radiation creates a form of polarisation called B-modes which most other sources cannot create. If it were possible to detect B-mode polarisation in the cosmic background radiation, it would be a direct detection of inflation. While the experiment would be demanding and eventually result in literally going to the end of the Earth, it would be strong evidence for the process which shaped the universe we inhabit and, in all likelihood, a ticket to Stockholm for those who made the discovery. This was the quest on which the author embarked in the year 2000, resulting in the deployment of an instrument called BICEP1 (Background Imaging of Cosmic Extragalactic Polarization) in the Dark Sector Laboratory at the South Pole. Here is my picture of that laboratory in January 2013. The BICEP telescope is located in the foreground inside a conical shield which protects it against thermal radiation from the surrounding ice. In the background is the South Pole Telescope, a millimetre wave antenna which was not involved in this research.

BICEP1 was a prototype, intended to test the technologies to be used in the experiment. These included cooling the entire telescope (which was a modest aperture [26 cm] refractor, not unlike Galileo's, but operating at millimetre wavelengths instead of visible light) to the temperature of interstellar space, with its detector cooled to just ¼ degree above absolute zero. In 2010 its successor, BICEP2, began observation at the South Pole, and continued its run into 2012. When I took the photo above, BICEP2 had recently concluded its observations.
On March 17th, 2014, the BICEP2 collaboration announced, at a press conference, the detection of B-mode polarisation in the region of the southern sky they had monitored. Note the swirling pattern of polarisation which is the signature of B-modes, as opposed to the starburst pattern of other kinds of polarisation.
But, not so fast, other researchers cautioned. The risk in doing “science by press release” is that the research is not subjected to peer review—criticism by other researchers in the field—before publication and further criticism in subsequent publications. The BICEP2 results went immediately to the front pages of major newspapers. Here was direct evidence of the birth cry of the universe and confirmation of a theory which some argued implied the existence of a multiverse—the latest Copernican demotion—the idea that our universe was just one of an ensemble, possibly infinite, of parallel universes in which every possibility was instantiated somewhere. Amid the frenzy, a few specialists in the field, including researchers on competing projects, raised the question, “What about the dust?” Dust again! As it happens, while gravitational radiation can induce B-mode polarisation, it isn't the only thing which can do so. Our galaxy is filled with dust and magnetic fields which can cause those dust particles to align with them. Aligned dust particles cause polarised reflections which can mimic the B-mode signature of the gravitational radiation sought by BICEP2. The BICEP2 team was well aware of this potential contamination problem. Unfortunately, their telescope was sensitive only to one wavelength, chosen to be the most sensitive to B-modes due to primordial gravitational radiation. It could not, however, distinguish a signal from that cause from one due to foreground dust. At the same time, however, the European Space Agency Planck spacecraft was collecting precision data on the cosmic background radiation in a variety of wavelengths, including one sensitive primarily to dust. Those data would have allowed the BICEP2 investigators to quantify the degree their signal was due to dust. But there was a problem: BICEP2 and Planck were direct competitors. Planck had the data, but had not released them to other researchers. 
However, the BICEP2 team discovered that a member of the Planck collaboration had shown a slide at a conference of unpublished Planck observations of dust. A member of the BICEP2 team digitised an image of the slide, created a model from it, and concluded that dust contamination of the BICEP2 data would not be significant. This was a highly dubious, if not explicitly unethical, move, but the improvised model seemed to agree with earlier measurements and gave the team confidence in their results. In September 2014, a preprint from the Planck collaboration (eventually published in 2016) showed that B-modes from foreground dust could account for all of the signal detected by BICEP2. In January 2015, the European Space Agency published an analysis of the Planck and BICEP2 observations which showed the entire BICEP2 detection was consistent with dust in the Milky Way. The epochal detection of inflation had been deflated. The BICEP2 researchers had been deceived by dust. The author, a founder of the original BICEP project, was so close to a Nobel prize he was already trying to read the minds of the Nobel committee to divine who among the many members of the collaboration they would reward with the gold medal. Then it all went away, seemingly overnight, turned to dust. Some said that the entire episode had injured the public's perception of science, but to me it seems an excellent example of science working precisely as intended. A result is placed before the public; others, with access to the same raw data, are given an opportunity to critique them, setting forth their own raw data; and eventually researchers in the field decide whether the original results are correct.
Yes, it would probably be better if all of this happened in musty library stacks of journals almost nobody reads before bursting out of the chest of mass media, but in an age where scientific research is funded by agencies spending money taken from hairdressers and cab drivers by coercive governments under implicit threat of violence, it is inevitable they will force researchers into the public arena to trumpet their “achievements”. In parallel with the saga of BICEP2, the author discusses the Nobel Prizes and what he considers to be their dysfunction in today's scientific research environment. I was surprised to learn that many of the curious restrictions on awards of the Nobel Prize were not, as I had heard and many believe, conditions of Alfred Nobel's will. In fact, the conditions that the prize be shared no more than three ways, not be awarded posthumously, and not awarded to a group (with the exception of the Peace prize) appear nowhere in Nobel's will, but were imposed later by the Nobel Foundation. Further, Nobel's will explicitly states that the prizes shall be awarded to “those who, during the preceding year, shall have conferred the greatest benefit to mankind”. This constraint (“during the preceding year”) has been ignored since the inception of the prizes. He decries the lack of “diversity” in Nobel laureates (by which he means, almost entirely, how few women have won prizes). While there have certainly been women who deserved prizes and didn't win (Lise Meitner, Jocelyn Bell Burnell, and Vera Rubin are prime examples), there are many more men who didn't make the three-laureate cut-off (Freeman Dyson an obvious example for the 1965 Physics Nobel for quantum electrodynamics). The whole Nobel prize concept is capricious, and rewards only those who happen to be in the right place at the right time in the right field that the committee has decided deserves an award this year and are lucky enough not to die before the prize is awarded.
To imagine it to be “fair” or representative of scientific merit is, in the estimation of this scribbler, in flying unicorn territory. In all, this is a candid view of how science is done at the top of the field today, with all of the budget squabbles, maneuvering for recognition, rivalry among competing groups of researchers, balancing the desire to get things right with the compulsion to get there first, and the eye on that prize, given only to a few in a generation, which can change one's life forever. Personally, I can't imagine being so fixated on winning a prize one has so little chance of gaining. It's like being obsessed with winning the lottery—and about as likely. In parallel with all of this is an autobiographical account of the career of a scientist with its ups and downs, which is both a cautionary tale and an inspiration to those who choose to pursue that difficult and intensely meritocratic career path. I recommend this book on all three tracks: a story of scientific discovery, mis-interpretation, and self-correction, the dysfunction of the Nobel Prizes and how they might be remedied, and the candid story of a working scientist in today's deeply corrupt coercively-funded research environment.
I have a journalism degree from the most prestigious woman's [sic] college in the United States—in fact, in the whole world—and it is widely agreed upon that I have an uncommon natural talent for spotting news. … I am looking forward to teaming up with you to uncover the countless, previously unexposed Injustices in this town and get the truth out.

Her ambition had already aimed her sights higher than a small- to mid-market affiliate: “Someday I'll work at News 24/7. I'll be Lead Reporter with my own Desk. Maybe I'll even anchor my own prime time show someday!” But that required the big break—covering a story that gets picked up by the network in New York and broadcast world-wide with her face on the screen and name on the Chyron below (perhaps scrolling, given its length). Unfortunately, the metro Wycksburg beat tended more toward stories such as the grand opening of a podiatry clinic than those which merit the “BREAKING NEWS” banner and urgent sound clip on the network. The closest she could come to the Social Justice beat was covering the demonstrations of the People's Organization for Perpetual Outrage, known to her boss as “those twelve kooks that run around town protesting everything”. One day, en route to cover another especially unpromising story, Majedah and her cameraman stumble onto a shocking case of police brutality: a white officer ordering a woman of colour to get down, then pushing her to the sidewalk and jumping on top with his gun drawn. So compelling are the images, she uploads the clip with her commentary directly to the network's breaking news site for affiliates. Within minutes it was on the network and screens around the world with the coveted banner. News 24/7 sends a camera crew and live satellite uplink to Wycksburg to cover a follow-up protest by the Global Outrage Organization, and Majedah gets hours of precious live feed directly to the network. That very evening comes a job offer to join the network reporting pool in New York.
Mission accomplished!—the road to the Big Apple and big time seems to have opened. But all may not be as it seems. That evening, the detested Eagle Eye News, the jingoist network that climbed to the top of the ratings by pandering to inbred gap-toothed redneck bitter clingers and other quaint deplorables who inhabit flyover country and frequent Web sites named after rodentia and arthropoda, headlined a very different take on the events of the day, with an exclusive interview with the woman of colour from Majedah's reportage. Majedah is devastated—she can see it all slipping away. The next morning, hung-over, depressed, having a nightmare of what her future might hold, she is awakened by the dreaded call from New York. But to her astonishment, the offer still stands. The network producer reminds her that nobody who matters watches Eagle Eye, and that her reportage of police brutality and oppression of the marginalised remains compelling. He reminds her, “you know that the so-called truth can be quite subjective.” The Associate Reporter Pool at News 24/7 might be better likened to an aquarium stocked with the many colourful and exotic species of millennials. There is Mara, who identifies as a female centaur, Scout, a transgender woman, Mysty, Candy, Ångström, and Mohammed Al Kaboom (né James Walker Lang in Mill Valley), each with their own pronouns (Ångström prefers adjutant, 37, and blue). Every morning the pool drains as its inhabitants, diverse in identification and pronomenclature but of one mind (if that term can be stretched to apply to them) in their opinions, gather in the conference room for the daily briefing by the Democratic National Committee, with newsrooms, social media outlets, technology CEOs, bloggers, and the rest of the progressive echo chamber tuned in to receive the day's narrative and talking points. 
On most days the top priority was the continuing effort to discredit, obstruct, and eventually defeat the detested Republican President Nelson, who only viewers of Eagle Eye took seriously. Out of the blue, a wild card is dealt into the presidential race. Patty Clark, a black businesswoman from Wycksburg who has turned her Jamaica Patty's restaurant into a booming nationwide franchise empire, launches a primary challenge to the incumbent president. Suddenly, the narrative shifts: by promoting Clark, the opposition can be split and Nelson weakened. Clark and Ms Etc have a history that goes back to the latter's breakthrough story, and she is granted priority access to the candidate, including an exclusive long-form interview immediately after her announcement, which runs in five segments over a week. Soon Patty Clark's face is everywhere, and with it, “Majedah Etc., reporting”. What follows is a romp which would have seemed like the purest fantasy prior to the U.S. presidential campaign of 2016. As the campaign progresses and the madness builds upon itself, it's as if Majedah's tether to reality (or what remains of it in the United States) is stretching ever tighter. Is there a limit, and if so, what happens when it is reached? The story is wickedly funny, filled with turns of phrase such as, “Ångström now wishes to go by the pronouns nut, 24, and gander” and “Maher's Syndrome meant a lifetime of special needs: intense unlikeability, intractable bitterness, close-set beady eyes beneath an oversized forehead, and at best, laboring at menial work such as janitorial duties or hosting obscure talk shows on cable TV.” The conclusion is as delicious as it is hopeful. The Kindle edition is free for Kindle Unlimited subscribers.
He'd spent his early career as an infantry officer in the Ranger Battalions before being selected for the Army's Special xxxxxxx xxxx at Fort Bragg. He was currently in charge of the Joint Special Operations Command, xxxxx xxxxxxxx xxxx xxx xxx xxxx xxxx xx xxxx xx xxx xxxx xxxx xxxx xxxxxx xx xxx xxxxxxxxxx xxxxxxx xx xxxx xxxxx xxx xxxxx.

A sequel, True Believer, is scheduled for publication in April 2019.
All was now over. The spirit of the mob was broken and the wide expanse of Constitution Square was soon nearly empty. Forty bodies and some expended cartridges lay on the ground. Both had played their part in the history of human development and passed out of the considerations of living men. Nevertheless, the soldiers picked up the empty cases, and presently some police came with carts and took the other things away, and all was quiet again in Laurania.

The massacre, as it was called even by the popular newspaper The Diurnal Gusher which nominally supported the Government, not to mention the opposition press, only compounded the troubles Molara saw in every direction he looked. While the countryside was with him, sentiment in the capital was strongly with the pro-democracy opposition. Among the army, only the élite Republican Guard could be counted on as reliably loyal, and their numbers were small. A diplomatic crisis was brewing with the British over Laurania's colony in Africa which might require sending the Fleet, also loyal, away to defend it. A rebel force, camped right across the border, threatened invasion at any sign of Molara's grip on the nation weakening. And then there is Savrola. Savrola (we never learn his first name) is the young (32 years), charismatic, intellectual, and persuasive voice of the opposition. While never stepping across the line sufficiently to justify retaliation, he manages to keep the motley groups of anti-Government forces in a loose coalition and is a constant thorn in the side of the authorities. He was not immune from introspection.
Was it worth it? The struggle, the labour, the constant rush of affairs, the sacrifice of so many things that make life easy, or pleasant—for what? A people's good! That, he could not disguise from himself, was rather the direction than the cause of his efforts. Ambition was the motive force, and he was powerless to resist it.

This is a character one imagines the young Churchill having little difficulty writing. With the seemingly incorruptible Savrola gaining influence and almost certain to obtain a political platform in the coming elections, Molara's secretary, the amoral but effective Miguel, suggests a stratagem: introduce Savrola to the President's stunningly beautiful wife Lucile and use the relationship to compromise him.
“You are a scoundrel—an infernal scoundrel,” said the President quietly. Miguel smiled, as one who receives a compliment. “The matter,” he said, “is too serious for the ordinary rules of decency and honour. Special cases demand special remedies.”

The President wants to hear no more of the matter, but does not forbid Miguel from proceeding. An introduction is arranged, and Lucile rapidly moves from fascination with Savrola to infatuation. Then events spin out of anybody's control. The rebel forces cross the border; Molara's army is proved unreliable and disloyal; the Fleet, en route to defend the colony, is absent; Savrola raises a popular rebellion in the capital; and open fighting erupts. This is a story of intrigue, adventure, and conflict in the “Ruritanian” genre popularised by the 1894 novel The Prisoner of Zenda. Churchill, building on his experience of war reportage, excels at the battle scenes, which were praised for their realism. The depiction of politicians, functionaries, and soldiers seems to veer back and forth between cynicism and admiration for their efforts in trying to make the best of a bad situation. The characters are cardboard figures and the love interest is clumsily described. Still, this is an entertaining read and provides a window on how the young Churchill viewed the antics of colourful foreigners and their unstable countries, even if Laurania seems to have a strong veneer of Victorian Britain about it. The ultimate message is that history is often driven not by the plans of leaders, whether corrupt or noble, but by events over which they have little control. Churchill never again attempted a novel and thought little of this effort. In his 1930 autobiography covering the years 1874 through 1902 he writes of Savrola, “I have consistently urged my friends to abstain from reading it.” But then, Churchill was not always right—don't let his advice deter you; I enjoyed it.
This work is available for free as a Project Gutenberg electronic book in a variety of formats. There are a number of print and Kindle editions of this public domain text; I have cited the least expensive print edition available at the time I wrote this review. I read this Kindle edition, which has a few typographical errors due to having been prepared by optical character recognition (for example, “stem” where “stern” was intended), but is otherwise fine. One factlet I learned while researching this review is that “Winston S. Churchill” is actually a nom de plume. Churchill's full name is Winston Leonard Spencer-Churchill, and he signed his early writings as “Winston Churchill”. Then, he discovered there was a well-known American novelist with the same name. The British Churchill wrote to the American Churchill and suggested using the name “Winston Spencer Churchill” (no hyphen) to distinguish his work. The American agreed, noting that he would also be willing to use a middle name, except that he didn't have one. The British Churchill's publishers abbreviated his name to “Winston S. Churchill”, which he continued to use for the rest of his writing career.
CBS coverage of the Apollo 8 launch
Now we step inside Mission Control and listen in on the Flight Director's audio loop during the launch, illustrated with imagery and simulations.

The Saturn V performed almost flawlessly. During the second stage burn mild pogo oscillations began but, rather than progressing to the point where they almost tore the rocket apart as had happened on the previous Saturn V launch, von Braun's team's fixes kicked in and seconds later Borman reported, “Pogo's damping out.” A few minutes later Apollo 8 was in Earth orbit. Jim Lovell had almost eighteen days of spaceflight experience across two Gemini missions, one of them Gemini 7 where he endured almost two weeks in orbit with Frank Borman. Bill Anders was a rookie, on his first space flight. Now weightless, all three were experiencing a spacecraft nothing like the cramped Mercury and Gemini capsules which you put on as much as boarded. The Apollo command module had an interior volume of six cubic metres (212 cubic feet, in the quaint way NASA reckons things) which may not seem like much for a crew of three, but in weightlessness, with every bit of space accessible and usable, felt quite roomy. There were five real windows, not the tiny portholes of Gemini, and plenty of space to move from one to another. With all this roominess and mobility came potential hazards, some verging on slapstick, but, in space, serious nonetheless. NASA safety personnel had required the astronauts to wear life vests over their space suits during the launch just in case the Saturn V malfunctioned and they ended up in the ocean. While moving around the cabin to get to the navigation station after reaching orbit, Lovell, who like the others hadn't yet removed his life vest, snagged its activation tab on a strut within the cabin and it instantly inflated. Lovell looked ridiculous and the situation comical, but it was no laughing matter.
The life vests were inflated with carbon dioxide which, if released in the cabin, would pollute their breathing air and removal would use up part of a CO₂ scrubber cartridge, of which they had a limited supply on board. Lovell finally figured out what to do. After being helped out of the vest, he took it down to the urine dump station in the lower equipment bay and vented it into a reservoir which could be dumped out into space. One problem solved, but in space you never know what the next surprise might be. The astronauts wouldn't have much time to admire the Earth through those big windows. Over Australia, just short of three hours after launch, they would re-light the engine on the third stage of the Saturn V for the “trans-lunar injection” (TLI) burn of 318 seconds, which would accelerate the spacecraft to just slightly less than escape velocity, raising its apogee so it would be captured by the Moon's gravity. After housekeeping (presumably including the rest of the crew taking off those pesky life jackets, since there weren't any wet oceans where they were going) and reconfiguring the spacecraft and its computer for the maneuver, they got the call from Houston, “You are go for TLI.” They were bound for the Moon. The third stage, which had failed to re-light on its last outing, worked as advertised this time, with a flawless burn. Its job was done; from here on the astronauts and spacecraft were on their own. The booster had placed them on a free-return trajectory. If they did nothing (apart from minor “trajectory correction maneuvers” easily accomplished by the spacecraft's thrusters) they would fly out to the Moon, swing around its far side, and use its gravity to slingshot back to the Earth (as Lovell would do two years later when he commanded Apollo 13, although there the crew had to use the engine of the LM to get back onto a free-return trajectory after the accident). 
Apollo 8 rapidly climbed out of the Earth's gravity well, trading speed for altitude, and before long the astronauts beheld a spectacle no human eyes had glimpsed before: an entire hemisphere of Earth at once, floating in the inky black void. On board, there were other concerns: Frank Borman was puking his guts out and having difficulties with the other end of the tubing as well. Borman had logged more than six thousand flight hours in his career as a fighter and test pilot, most of it in high-performance jet aircraft, and fourteen days in space on Gemini 7 without any motion sickness. Many people feel queasy when they experience weightlessness the first time, but this was something entirely different and new in the American space program. And it was very worrisome. The astronauts discussed the problem on private tapes they could downlink to Mission Control without broadcasting to the public, and when NASA got around to playing the tapes, the chief flight surgeon, Dr. Charles Berry, became alarmed. As he saw it, there were three possibilities: motion sickness, a virus of some kind, or radiation sickness. On its way to the Moon, Apollo 8 passed directly through the Van Allen radiation belts, spending two hours in this high radiation environment, the first humans to do so. The total radiation dose was estimated as roughly the same as one would receive from a chest X-ray, but the composition of the radiation was different and the exposure was over an extended time, so nobody could be sure it was safe. The fact that Lovell and Anders had experienced no symptoms argued against the radiation explanation. Berry concluded that a virus was the most probable cause and, based upon the mission rules, said, “I'm recommending that we consider canceling the mission.” The risk of proceeding with the commander unable to keep food down and possibly carrying a virus which the other astronauts might contract was too great in his opinion. This recommendation was passed up to the crew.
Borman, usually calm and collected even by astronaut standards, exclaimed, “What? That is pure, unadulterated horseshit.” The mission would proceed, and within a day his stomach had settled. This was the first case of space adaptation syndrome to afflict an American astronaut. (Apparently some Soviet cosmonauts had been affected, but this was covered up to preserve their image as invincible exemplars of the New Soviet Man.) It is now known to affect around a third of people experiencing weightlessness in environments large enough to move around, and spontaneously clears up in two to four (miserable) days. The two most dramatic and critical events in Apollo 8's voyage would occur on the far side of the Moon, with 3500 km of rock between the spacecraft and the Earth totally cutting off all communications. The crew would be on their own, aided by the computer and guidance system and calculations performed on the Earth and sent up before passing behind the Moon. The first would be lunar orbit insertion (LOI), scheduled for 69 hours and 8 minutes after launch. The big Service Propulsion System (SPS) engine (it was so big—twice as large as required for Apollo missions as flown—because it was designed to be able to launch the entire Apollo spacecraft from the Moon if a “direct ascent” mission mode had been selected) would burn for exactly four minutes and seven seconds to bend the spacecraft's trajectory around the Moon into a closed orbit around that world. If the SPS failed to fire for the LOI burn, it would be a huge disappointment but survivable. Apollo 8 would simply continue on its free-return trajectory, swing around the Moon, and fall back to Earth where it would perform a normal re-entry and splashdown. But if the engine fired and cut off too soon, the spacecraft would be placed into an orbit which would not return them to Earth, marooning the crew in space to die when their supplies ran out. 
If it burned just a little too long, the spacecraft's trajectory would intersect the surface of the Moon—lithobraking is no way to land on the Moon. When the SPS engine shut down precisely on time and the computer confirmed the velocity change of the burn and orbital parameters, the three astronauts were elated, but they were the only people in the solar system aware of the success. Apollo 8 was still behind the Moon, cut off from communications. The first clue Mission Control would have of the success or failure of the burn would be when Apollo 8's telemetry signal was reacquired as it swung around the limb of the Moon. If too early, it meant the burn had failed and the spacecraft was coming back to Earth; that moment passed with no signal. Now tension mounted as the clock ticked off the seconds to the time expected for a successful burn. If that time came and went with no word from Apollo 8, it would be a really bad day. Just on time, the telemetry signal locked up and Jim Lovell reported, “Go ahead, Houston, this is Apollo 8. Burn complete. Our orbit 169.1 by 60.5.” (Lovell was using NASA's preferred measure of nautical miles; in proper units it was 313 by 112 km. The orbit would subsequently be circularised by another SPS burn to 112.7 by 114.7 km.) The Mission Control room erupted into an un-NASA-like pandemonium of cheering. Apollo 8 would orbit the Moon ten times, spending twenty hours in a retrograde orbit with an inclination of 12 degrees to the lunar equator, which would allow it to perform high-resolution photography of candidate sites for early landing missions under lighting conditions similar to those expected at the time of landing. In addition, precision tracking of the spacecraft's trajectory in lunar orbit would allow mapping of the Moon's gravitational field, including the “mascons” which perturb the orbits of objects in low lunar orbits and would be important for longer duration Apollo orbital missions in the future.
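For readers who like to check the arithmetic behind NASA's quaint units, here is a minimal sketch (plain Python, not from the book) of the nautical-mile conversion; the 60.5 figure is the perilune Lovell reported:

```python
# One international nautical mile is defined as exactly 1.852 km.
NMI_TO_KM = 1.852

def nmi_to_km(nmi: float) -> float:
    """Convert nautical miles to kilometres."""
    return nmi * NMI_TO_KM

# Lovell's reported perilune of 60.5 nautical miles:
print(f"{nmi_to_km(60.5):.1f} km")  # → 112.0 km
```

The same factor works in reverse for the circularised orbit: dividing 112.7 and 114.7 km by 1.852 gives roughly 60.9 and 61.9 nautical miles.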
During the mission, the crew were treated to amazing sights and, in particular, the dramatic difference between the near side, with its many flat “seas”, and the rugged highlands of the far side. Coming around the Moon they saw the spectacle of earthrise for the first time and, hastily grabbing a magazine of colour film and setting aside the planned photography schedule, Bill Anders snapped the photo of the Earth rising above the lunar horizon which became one of the most iconic photographs of the twentieth century. Here is a reconstruction of the moment that photo was taken.
On the ninth and next-to-last orbit, the crew conducted a second television transmission which was broadcast worldwide. It was Christmas Eve on much of the Earth, and, coming at the end of the chaotic, turbulent, and often tragic year of 1968, it was a magical event, remembered fondly by almost everybody who witnessed it and felt pride for what the human species had just accomplished. You have probably heard this broadcast from the Moon, often with the audio overlaid on imagery of the Moon from later missions, with much higher resolution than was actually seen in that broadcast. Here, in three parts, is what people, including this scrivener, actually saw on their televisions that enchanted night. The famous reading from Genesis is in the third part. The astronauts' description of the desolate lunar surface in this broadcast is eerily similar to that in Jules Verne's 1870 Autour de la lune.
After the end of the broadcast, it was time to prepare for the next and absolutely crucial maneuver, also performed on the far side of the Moon: trans-Earth injection, or TEI. This would boost the spacecraft out of lunar orbit and send it back on a trajectory to Earth. This time the SPS engine had to work, and perfectly. If it failed to fire, the crew would be trapped in orbit around the Moon with no hope of rescue. If it cut off too soon or burned too long, or the spacecraft was pointed in the wrong direction when it fired, Apollo 8 would miss the Earth and orbit forever far from its home planet or come in too steep and burn up when it hit the atmosphere. Once again the tension rose to a high pitch in Mission Control as the clock counted down to the two fateful times: this time they'd hear from the spacecraft earlier if it was on its way home and later or not at all if things had gone tragically awry. Exactly when expected, the telemetry screens came to life and a second later Jim Lovell called, “Houston, Apollo 8. Please be informed there is a Santa Claus.” Now it was just a matter of falling the 375,000 kilometres from the Moon, hitting the precise re-entry corridor in the Earth's atmosphere, executing the intricate “double dip” re-entry trajectory, and splashing down near the aircraft carrier which would retrieve the Command Module and crew. Earlier unmanned tests gave confidence it would all work, but this was the first time men would be trying it. There was some unexpected and embarrassing excitement on the way home. Mission Control had called up a new set of co-ordinates for the “barbecue roll” which the spacecraft executed to even out temperature. Lovell was asked to enter “verb 3723, noun 501” into the computer. But, weary and short on sleep, he fat-fingered the commands and entered “verb 37, noun 01”. This told the computer the spacecraft was back on the launch pad, pointing straight up, and it immediately slewed to what it thought was that orientation. 
Lovell quickly figured out what he'd done, “It was my goof”, but by this time he'd “lost the platform”: the stable reference the guidance system used to determine in which direction the spacecraft was pointing in space. He had to perform a manual alignment, taking sightings on a number of stars, to recover the correct orientation of the stable platform. This was completely unplanned but, as it happens, in doing so Lovell acquired experience that would prove valuable when he had to perform the same operation in much more dire circumstances on Apollo 13 after an explosion disabled the computer and guidance system in the Command Module. Here is the author of the book, Jeffrey Kluger, discussing Jim Lovell's goof.
The re-entry went completely as planned, flown entirely under computer control, with the spacecraft splashing into the Pacific Ocean just 6 km from the aircraft carrier Yorktown. But because the splashdown occurred before dawn, it was decided to wait until the sky brightened to recover the crew and spacecraft. Forty-three minutes after splashdown, divers from the Yorktown arrived at the scene, and forty-five minutes after that the crew was back on the ship. Apollo 8 was over, a total success. This milestone in the space race had been won definitively by the U.S., and shortly thereafter the Soviets abandoned their Zond circumlunar project, judging it an anticlimax and admission of defeat to fly by the Moon after the Americans had already successfully orbited it. This is the official NASA contemporary documentary about Apollo 8.
Here is an evening with the Apollo 8 astronauts recorded at the National Air and Space Museum on 2008-11-13 to commemorate the fortieth anniversary of the flight.
This is a reunion of the Apollo 8 astronauts on 2009-04-23.
As of this writing, all of the crew of Apollo 8 are alive, and, in a business where divorce was common, remain married to the women they wed as young military officers.
At times, I've been criticized for “jumping on the [liberal] bandwagon” on topics like gay rights and Black Lives Matter across a number of books, but, honestly, it's the 21st century—the cruelty that still dominates how we humans deal with each other is petty and myopic. Any contact with an intelligent extraterrestrial species will expose not only a vast technological gulf, but a moral one as well.

Well, maybe, but isn't it equally likely that when they arrive in their atomic space cars, imbibe what passes for culture and morality among the intellectual élite of the global Davos party, and observe how obsessed these talking apes seem to be about who is canoodling whom with what, then, after they stop laughing, they may decide that we are made of atoms which they can use for something else?
The urban guerrilla is a man who fights the military dictatorship with arms, using unconventional methods. A political revolutionary, he is a fighter for his country's liberation, a friend of the people and of freedom. The area in which the urban guerrilla acts is in the large Brazilian cities. There are also bandits, commonly known as outlaws, who work in the big cities. Many times assaults by outlaws are taken as actions by urban guerrillas. The urban guerrilla, however, differs radically from the outlaw. The outlaw benefits personally from the actions, and attacks indiscriminately without distinguishing between the exploited and the exploiters, which is why there are so many ordinary men and women among his victims. The urban guerrilla follows a political goal and only attacks the government, the big capitalists, and the foreign imperialists, particularly North Americans.

These fine distinctions tend to be lost upon innocent victims, especially since the proceeds of the bank robberies of which the “urban guerrillas” are so fond are not used to aid the poor but rather to finance still more attacks by the ever-so-noble guerrillas pursuing their “political goal”. This would likely have been an obscure and largely forgotten work of a little-known Brazilian renegade had it not been picked up, translated to English, and published in June and July 1970 by the Berkeley Tribe, a California underground newspaper. It became the terrorist bible of groups including Weatherman, the Black Liberation Army, and Symbionese Liberation Army in the United States, the Red Army Faction in Germany, the Irish Republican Army, the Sandinistas in Nicaragua, and the Palestine Liberation Organisation. These groups embarked on crime and terror campaigns right out of Marighella's playbook with no more thought about step two.
They are largely forgotten now because their futile acts had no permanent consequences and their existence was an embarrassment to the élites who largely share their pernicious ideology but have chosen to advance it through subversion, not insurrection. A Kindle edition is available from a different publisher. You can read the book on-line for free at the Marxists Internet Archive.
Every policeman, lackey or running dog of the ruling class must make his or her choice now. Either side with the people: poor and oppressed, or die for the oppressor. Trying to stop what is going down is like trying to stop history, for as long as there are those who will dare to live for freedom there are men and women who dare to unhorse the emperor. All power to the people.

Politicians, press, and police weren't sure what to make of this. The politicians, worried about the opinion of their black constituents, shied away from anything which sounded like accusing black militants of targeting police. The press, although they'd never write such a thing or speak it in polite company, didn't think it plausible that street blacks could organise a sustained revolutionary campaign: certainly that required college-educated intellectuals. The police, while threatened by these random attacks, weren't sure there was actually any organised group behind the BLA attacks: they were inclined to believe it was a matter of random cop killers attributing their attacks to the BLA after the fact. Further, the BLA had no visible spokesperson and issued no manifestos other than the brief statements after some attacks. This contributed to the mystery, which largely persists to this day because so many participants were killed and the survivors have never spoken out. In fact, the BLA was almost entirely composed of former members of the New York chapter of the Black Panthers, which had collapsed in the split between factions following Huey Newton and those (including New York) loyal to Eldridge Cleaver, who had fled to exile in Algeria and advocated violent confrontation with the power structure in the U.S. The BLA would perpetrate more than seventy violent attacks between 1970 and 1976 and is said to be responsible for the deaths of thirteen police officers. In 1972, they hijacked a domestic airline flight and pocketed a ransom of US$ 1 million.
Weatherman (later renamed the “Weather Underground” because the original name was deemed sexist) and the BLA represented the two poles of the violent radicals: the first, intellectual, college-educated, and mostly white, concentrated mostly on symbolic bombings against property, usually with warnings in advance to avoid human casualties. As pressure from the FBI increased upon them, they became increasingly inactive; a member of the New York police squad assigned to them quipped, “Weatherman, Weatherman, what do you do? Blow up a toilet every year or two.” They managed the escape of Timothy Leary from a minimum-security prison in California. Leary basically just walked away, with a group of Weatherman members paid by Leary supporters picking him up and arranging for him and his wife Rosemary to obtain passports under assumed names and flee the U.S. for exile in Algeria with former Black Panther leader Eldridge Cleaver. The Black Liberation Army, being composed largely of ex-prisoners with records of violent crime, was not known for either the intelligence or the impulse control of its members. On several occasions, what should have been merely tense encounters with the law turned into deadly firefights because a BLA militant opened fire for no apparent reason. Had they not been so deadly to those they attacked and to innocent bystanders, the exploits of the BLA would have made a fine slapstick farce. As the dour decade of the 1970s progressed, other violent underground groups would appear, tending to follow the model of either Weatherman or the BLA. One of the most visible, if not successful, was the “Symbionese Liberation Army” (SLA), founded by escaped convict and grandiose self-styled revolutionary Donald DeFreeze.
Calling himself “General Field Marshal Cinque”, which he pronounced “sin-kay”, and ending his fevered communications with “DEATH TO THE FASCIST INSECT THAT PREYS UPON THE LIFE OF THE PEOPLE”, this band of murderous bozos struck their first blow for black liberation by assassinating Marcus Foster, the first black superintendent of the Oakland, California school system, for his “crimes against the people” of suggesting that police be called in to deal with violence in the city's schools and that identification cards be issued to students. Sought by the police for the murder, they struck again by kidnapping heiress, college student, and D-list celebrity Patty Hearst, whose abduction became front page news nationwide. If that wasn't sufficiently bizarre, the abductee eventually issued a statement saying she had chosen to “stay and fight”, adopting the name “Tania”, after the nom de guerre of a Cuban revolutionary and companion of Che Guevara. She was later photographed by a surveillance camera carrying a rifle during a San Francisco bank robbery perpetrated by the SLA. Hearst then went underground and evaded capture until September 1975 after which, when being booked into jail, she gave her occupation as “Urban Guerrilla”. Hearst later claimed she had agreed to join the SLA and participate in its crimes only to protect her own life. She was convicted and sentenced to 35 years in prison, later reduced to 7 years. The sentence was commuted to 22 months by U.S. President Jimmy Carter and she was released in 1979; she was the recipient of one of Bill Clinton's last-day-in-office pardons in January, 2001. Six members of the SLA, including DeFreeze, died in a house fire during a shootout with the Los Angeles Police Department in May, 1974. Violence committed in the name of independence for Puerto Rico was nothing new. In 1950, two radicals tried to assassinate President Harry Truman, and in 1954, four revolutionaries shot up the U.S.
House of Representatives from the visitors' gallery, wounding five congressmen on the floor, none fatally. The Puerto Rican terrorists had the same problem as their Weatherman, BLA, or SLA bomber brethren: they lacked the support of the people. Most of the residents of Puerto Rico were perfectly happy being U.S. citizens, especially as this allowed them to migrate to the mainland to escape the endemic corruption and the poverty it engendered in the island. As the 1960s progressed, the Puerto Rico radicals increasingly identified with Castro's Cuba (which supported them ideologically, if not financially), and promised to make a revolutionary Puerto Rico a beacon of prosperity and liberty like Cuba had become. Starting in 1974, a new Puerto Rican terrorist group, the Fuerzas Armadas de Liberación Nacional (FALN) launched a series of attacks in the U.S., most in the New York and Chicago areas. One bombing, that of the Fraunces Tavern in New York in January 1975, killed four people and injured more than fifty. Between 1974 and 1983, a total of more than 130 bomb attacks were attributed to the FALN, most against corporate targets. In 1975 alone, twenty-five bombs went off, around one every two weeks. Other groups, such as the “New World Liberation Front” (NWLF) in northern California and “The Family” in the East continued the chaos. The NWLF, formed originally from remains of the SLA, detonated twice as many bombs as the Weather Underground. The Family carried out a series of robberies, including the deadly Brink's holdup of October 1981, and jailbreaks of imprisoned radicals. In the first half of the 1980s, the radical violence sputtered out. Most of the principals were in prison, dead, or living underground and keeping a low profile. A growing prosperity had replaced the malaise and stagflation of the 1970s and there were abundant jobs for those seeking them. 
The Vietnam War and draft were receding into history, leaving the campuses with little to protest, and the remaining radicals had mostly turned from violent confrontation to burrowing their way into the culture, media, administrative state, and academia as part of Gramsci's “long march through the institutions”. All of these groups were plagued with the “step two problem”. The agenda of Weatherman was essentially:
The wormholes used by the Eschaton to relocate Earth's population in the great Diaspora, a technology which humans had yet to understand, not only permitted instantaneous travel across interstellar distances but also in time: the more distant the planet from Earth, the longer the settlers deposited there have had to develop their own cultures and civilisations before being contacted by faster than light ships. With cornucopia machines to meet their material needs and allow them to bootstrap their technology, those that descended into barbarism or incessant warfare did so mostly due to bad ideas rather than their environment. Rachel Mansour, secret agent for the Earth-based United Nations, operating under the cover of an entertainment officer (or, if you like, cultural attaché), whom we met in the previous novel in the series, Singularity Sky (February 2011), and her companion Martin Springfield, who has a back-channel to the Eschaton, serve as arms control inspectors—their primary mission being to ensure that nobody on Earth, or on the worlds which have purchased technology from Earth, does anything to invite the wrath of the Eschaton—remember that “Or else.” A terrible fate has befallen the planet Moscow, a diaspora “McWorld” accomplished in technological development and trade, when its star, a G-type main sequence star like the Sun, explodes in a blast releasing a hundredth the energy of a supernova, destroying all life on planet Moscow within an instant of the wavefront reaching it, and the entire planet within an hour. The problem is, type G stars just don't explode on their own. Somebody did this, quite likely using technologies which risk Big E's “or else” on whoever was responsible (or whoever it concluded was responsible). What's more, Moscow maintained a slower-than-light deterrent fleet with relativistic planet-buster weapons to avenge any attack on their home planet.
This fleet, essentially undetectable en route, has launched against New Dresden, a planet with which Moscow had a nonviolent trade dispute. The deterrent fleet can be recalled only by coded messages from two Moscow system ambassadors who survived the attack at their postings in other systems, but can also be sent an irrevocable coercion code, which cancels the recall and causes any further messages to be ignored, by three ambassadors. And somebody seems to be killing off the remaining Moscow ambassadors: if the number falls below two, the attack will arrive at New Dresden in thirty-five years and wipe out the planet and as many of its eight hundred million inhabitants as have not been evacuated. Victoria Strowger, who detests her name and goes by “Wednesday”, has had an invisible friend since childhood, “Herman”, who speaks to her through her implants. As she's grown up, she has come to understand that, in some way, Herman is connected to Big E and, in return for advice and assistance she values highly, occasionally asks her for favours. Wednesday and her family were evacuated from one of Moscow's space stations just before the deadly wavefront from the exploded star arrived, with Wednesday running a harrowing last “errand” for Herman before leaving. Later, in her new home in an asteroid in the Septagon system, she becomes the target of an attack seemingly linked to that mystery mission, and escapes only to find her family wiped out by the attackers. With Herman's help, she flees on an interstellar liner. 
While Singularity Sky was a delightful romp describing a society which had deliberately relinquished technology in order to maintain a stratified class system with the subjugated masses frozen around the Victorian era, suddenly confronted with the merry pranksters of the Festival, who inject singularity-epoch technology into its stagnant culture, Iron Sunrise is a much more conventional mystery/adventure tale about gaining control of the ambassadorial keys, figuring out who are the good and bad guys, and trying to avert a delayed but inexorably approaching genocide. This just didn't work for me. I never got engaged in the story, didn't find the characters particularly interesting, and didn't come across any interesting ways in which the singularity came into play (and this is supposed to be the author's “Singularity Series”). There are some intriguing concepts, for example the “causal channel”, in which quantum-entangled particles permit instantaneous communication across spacelike separations as long as the previously-prepared entangled particles have first been delivered to the communicating parties by slower than light travel. This is used in the plot to break faster than light communication where it would be inconvenient for the story line (much as all those circumstances in Star Trek where the transporter doesn't work for one reason or another when you're tempted to say “Why don't they just beam up?”). The apparent villains, the ReMastered (think Space Nazis who believe in a Tipler-like cult of the Omega Point, out-Eschaton-ing the Eschaton, with icky brain-sucking technology), were just over the top. Accelerando and Singularity Sky were thought-provoking and great fun. This one doesn't come up to that standard.
- I am the Eschaton. I am not your god.
- I am descended from you, and I exist in your future.
- Thou shalt not violate causality within my historic light cone. Or else.
2019 |
This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90. But while that analysis examined global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today's developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”, although non-controversial among those who study these matters in detail.
While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I'm talking about China, Korea, and [to a lesser extent] Japan.) If you've followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, measured by standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is. What's going on, then? The authors join James Flynn in concluding that what we're seeing is an increase in the population's proficiency in taking IQ tests, not an actual increase in general intelligence (g).
Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn't make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we're seeing may be specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening. Norway, Sweden, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database, starting in 1950, of men in these countries, updated yearly. What is seen is an increase in IQ as expected from the Flynn effect from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.) There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests. They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. 
These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since. If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China. And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this crosses the “ick factor”? 
What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn't make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we've learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans. Or, maybe we won't. In which case, “Winter has come and it's only going to get colder. Wrap up warm.” Here is a James Delingpole interview of the authors and discussion of the book.
First, we observe that each sample (xi) from egg i consists of 200 bits with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).
Then, for each sample, we can compute its Z-score as Zi = (xi − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.
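As an illustrative sketch (this is my own toy code, not the project's analysis software), the per-sample statistics and Z-score can be computed as follows, with the chance probability approximated from the normal distribution via the complementary error function:

```python
import math

# Each egg sample sums 200 bits, each 0 or 1 with probability 0.5.
# Binomial mean: n*p = 100; standard deviation: sqrt(n*p*(1-p)) = sqrt(50).
N_BITS = 200
MU = N_BITS * 0.5                       # 100.0
SIGMA = math.sqrt(N_BITS * 0.5 * 0.5)   # sqrt(50) ≈ 7.071

def z_score(x):
    """Z-score of a single egg sample x (the count of one bits)."""
    return (x - MU) / SIGMA

def p_value(z):
    """Two-tailed probability that a deviation at least this large
    arises by chance, using the normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

z = z_score(110)
print(round(z, 3), round(p_value(z), 3))   # a sample of 110 ones: Z ≈ 1.414, p ≈ 0.157
```

So an egg reporting 110 one bits in a second deviates by about 1.4 standard deviations, something that happens by chance roughly one second in six.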
It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

    Z = (Z1 + Z2 + … + Zk) / √k
over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance. Squaring this composite Z-score over all k eggs gives a chi-squared distributed value we shall call V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds. The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.
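A minimal sketch of the combining step, assuming you already have per-egg Z-scores for each second (the function names are mine, for illustration only):

```python
import math

def stouffer_z(egg_z_scores):
    """Network-wide Z for one second, per Stouffer's formula:
    sum the per-egg Z-scores and divide by the square root of
    the number of eggs reporting."""
    k = len(egg_z_scores)
    return sum(egg_z_scores) / math.sqrt(k)

def cumulative_deviation(network_z_per_second):
    """Running sum of (Z² − 1).  Since E[Z²] = 1 for a standard
    normal Z, subtracting 1 each second makes the chance
    expectation a flat line at zero, which is how the cumulative
    deviation plots are drawn."""
    total, series = 0.0, []
    for z in network_z_per_second:
        total += z * z - 1.0
        series.append(total)
    return series
```

For purely random data the running sum wanders around zero; a sustained upward trend is what the plots below display as a departure from chance.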
What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)? Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.
Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111. What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I've shown you here, but the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes). The significance of the entire ensemble of tests can be computed from the network results from the 500 formal predictions in the Hypothesis Registry and the network results for the periods where a non-random effect was predicted. To compute this effect, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.
Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the results observed were due to chance with a probability of less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. But could it be expected in truly random data? To check this, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.
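The pseudorandom control comparison can be mimicked in a few lines; this is my own toy simulation, not the project's code, but it shows why a composite Z of 7.31 over 500 events cannot plausibly come from truly random data:

```python
import math
import random

def composite_z(event_z_scores):
    """Stouffer combination across formal predictions: under the
    null hypothesis the composite is itself a standard normal variate."""
    return sum(event_z_scores) / math.sqrt(len(event_z_scores))

random.seed(1998)  # arbitrary seed for reproducibility
# 200 simulated "projects", each combining 500 pseudorandom event Z-scores.
sims = [composite_z([random.gauss(0.0, 1.0) for _ in range(500)])
        for _ in range(200)]
# Every simulated composite stays within a few standard deviations of
# zero; a value anywhere near 7.31 simply never shows up.
```

Under the null hypothesis the composite Z is itself standard normal, so the chance of a 7.31 in any one run is on the order of 10⁻¹³, and even hundreds of simulated runs never approach it.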
Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.
The final probability was 0.028, which is equivalent to an odds ratio of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.
Had this form of analysis and time window been chosen a priori, it would have been calculated to have a chance probability of 0.000075, or less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.
Another analysis looks at a 20 hour period centred on the attacks and smooths the Z-scores by averaging them within a one hour sliding window, then squares the average and converts to odds against chance.
Dean Radin performed an independent analysis of the day's data binning Z-score data into five minute intervals over the period from September 6 to 13, then calculating the odds against the result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of each day.
The following is the result when the actual GCP data from September 2001 is replaced with pseudorandom data for the same period.
So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”. If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere. The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. 
You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs. A Kindle edition is available. All graphs in this article are as posted on the Global Consciousness Project Web site.
The belief that there is an objective physical world whose properties are independent of what human beings know or which experiments we choose to do. Realists also believe that there is no obstacle in principle to our obtaining complete knowledge of this world.

This has been part of the scientific worldview since antiquity and yet quantum mechanics, confirmed by innumerable experiments, appears to indicate we must abandon it. Quantum mechanics says that what you observe depends on what you choose to measure; that there is an absolute limit upon the precision with which you can measure pairs of properties (for example position and momentum) set by the uncertainty principle; that it isn't possible to predict the outcome of experiments but only the probability among a variety of outcomes; and that particles which are widely separated in space and time but which have interacted in the past are entangled and display correlations which no classical mechanistic theory can explain—Einstein called the latter “spooky action at a distance”. Once again, all of these effects have been confirmed by precision experiments and are not fairy castles erected by theorists. From the formulation of the modern quantum theory in the 1920s, often called the Copenhagen interpretation after the location of the institute where one of its architects, Niels Bohr, worked, a number of eminent physicists including Einstein and Louis de Broglie were deeply disturbed by its apparent jettisoning of the principle of realism in favour of what they considered a quasi-mystical view in which the act of “measurement” (whatever that means) caused a physical change (wave function collapse) in the state of a system. This seemed to imply that the photon, or electron, or anything else, did not have a physical position until it interacted with something else: until then it was just an immaterial wave function which filled all of space and (when squared) gave the probability of finding it at that location.
In 1927, de Broglie proposed a pilot wave theory as a realist alternative to the Copenhagen interpretation. In the pilot wave theory there is a real particle, which has a definite position and momentum at all times. It is guided in its motion by a pilot wave which fills all of space and is defined by the medium through which it propagates. We cannot predict the exact outcome of measuring the particle because we cannot have infinitely precise knowledge of its initial position and momentum, but in principle these quantities exist and are real. There is no “measurement problem” because we always detect the particle, not the pilot wave which guides it. In its original formulation, the pilot wave theory exactly reproduced the predictions of the Copenhagen formulation, and hence was not a competing theory but rather an alternative interpretation of the equations of quantum mechanics. Many physicists who preferred to “shut up and calculate” considered interpretations a pointless exercise in phil-oss-o-phy, but de Broglie and Einstein placed great value on retaining the principle of realism as a cornerstone of theoretical physics. Lee Smolin sketches an alternative reality in which “all the bright, ambitious students flocked to Paris in the 1930s to follow de Broglie, and wrote textbooks on pilot wave theory, while Bohr became a footnote, disparaged for the obscurity of his unnecessary philosophy”. But that wasn't what happened: among those few physicists who pondered what the equations meant about how the world really works, the Copenhagen view remained dominant. In the 1950s, David Bohm independently reinvented the pilot wave theory and developed it into a complete theory of nonrelativistic quantum mechanics. To this day, a small community of “Bohmians” continues to explore the implications of his theory, working to extend it to be compatible with special relativity.
From a philosophical standpoint the de Broglie-Bohm theory is unsatisfying in that it involves a pilot wave which guides a particle, but upon which the particle does not act. This is an “unmoved mover”, which all of our experience of physics argues does not exist. For example, Newton's third law of motion holds that every action has an equal and opposite reaction, and in Einstein's general relativity, spacetime tells mass-energy how to move while mass-energy tells spacetime how to curve. It seems odd that the pilot wave could be immune from the influence of the particle it guides. A few physicists, such as Jack Sarfatti, have proposed “post-quantum” extensions to Bohm's theory in which there is back-reaction from the particle on the pilot wave, and argue that this phenomenon might be accessible to experimental tests which would distinguish post-quantum phenomena from the predictions of orthodox quantum mechanics. A few non-physicist crackpots have suggested these phenomena might even explain flying saucers. Moving on from pilot wave theory, the author explores other attempts to create a realist interpretation of quantum mechanics: objective collapse of the wave function, as in the Penrose interpretation; the many worlds interpretation (which Smolin calls “magical realism”); and decoherence of the wavefunction due to interaction with the environment. He rejects all of them as unsatisfying, because they fail to address glaring lacunæ in quantum theory which are apparent from its very equations. The twentieth century gave us two pillars of theoretical physics: quantum mechanics and general relativity—Einstein's geometric theory of gravitation. Both have been tested to great precision, but they are fundamentally incompatible with one another. Quantum mechanics describes the very small: elementary particles, atoms, and molecules. General relativity describes the very large: stars, planets, galaxies, black holes, and the universe as a whole.
In the middle, where we live our lives, neither much affects the things we observe, which is why their predictions seem counter-intuitive to us. But when you try to put the two theories together, to create a theory of quantum gravity, the pieces don't fit. Quantum mechanics assumes there is a universal clock which ticks at the same rate everywhere in the universe. But general relativity tells us this isn't so: a simple experiment shows that a clock runs slower when it's in a gravitational field. Quantum mechanics says that it isn't possible to determine the position of a particle without its interacting with another particle, but general relativity requires the knowledge of precise positions of particles to determine how spacetime curves and governs the trajectories of other particles. There are a multitude of more gnarly and technical problems in what Stephen Hawking called “consummating the fiery marriage between quantum mechanics and general relativity”. In particular, the equations of quantum mechanics are linear, which means you can add together two valid solutions and get another valid solution, while general relativity is nonlinear, where trying to disentangle the relationships of parts of the systems quickly goes pear-shaped and many of the mathematical tools physicists use to understand systems (in particular, perturbation theory) blow up in their faces. Ultimately, Smolin argues, giving up realism means abandoning what science is all about: figuring out what is really going on. The incompatibility of quantum mechanics and general relativity provides clues that there may be a deeper theory to which both are approximations that work in certain domains (just as Newtonian mechanics is an approximation of special relativity which works when velocities are much less than the speed of light). Many people have tried and failed to “quantise general relativity”. 
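The contrast between the linear equations of quantum mechanics and the nonlinear ones of general relativity can be made explicit. A standard textbook statement (added here for clarity, not drawn from the book) is that the Schrödinger equation preserves superpositions:

```latex
i\hbar \frac{\partial \psi_k}{\partial t} = \hat{H}\psi_k \quad (k = 1, 2)
\qquad\Longrightarrow\qquad
i\hbar \frac{\partial}{\partial t}\bigl(a\psi_1 + b\psi_2\bigr) = \hat{H}\bigl(a\psi_1 + b\psi_2\bigr)
```

Einstein's field equations, by contrast, are nonlinear in the metric: the sum of two solutions is not, in general, a solution, which is one reason perturbative methods that work so well elsewhere in physics break down when applied to strong gravity.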
Smolin suggests the problem is that quantum theory itself is incomplete: there is a deeper theory, a realist one, to which our existing theory is only an approximation which works in the present universe where spacetime is nearly flat. He suggests that candidate theories must embody a number of fundamental principles. They must be background independent, like general relativity, and discard such concepts as fixed space and a universal clock, making both dynamic and defined based upon the components of a system. Everything must be relational: there is no absolute space or time; everything is defined in relation to something else. Everything must have a cause, and there must be a chain of causation for every event which traces back to its causes; these causes flow only in one direction. There is reciprocity: any object which acts upon another object is acted upon by that object. Finally, there is the “identity of indiscernibles”: two objects which have exactly the same properties are the same object (this is a little tricky, but the idea is that if you cannot in some way distinguish two objects [for example, by their having different causes in their history], then they are the same object). This argues that what we perceive, at the human scale and even in our particle physics experiments, as space and time are actually emergent properties of something deeper which was manifest in the early universe and in extreme conditions such as gravitational collapse to black holes, but hidden in the bland conditions which permit us to exist. Further, what we believe to be “laws” and “constants” may simply be precedents established by the universe as it tries to figure out how to handle novel circumstances.
Just as complex systems like markets and evolution in ecosystems have rules that change based upon events within them, maybe the universe is “making it up as it goes along”, and in the early universe, far from today's near-equilibrium, wild and crazy things happened which may explain some of the puzzling properties of the universe we observe today. This needn't forever remain in the realm of speculation. It is easy, for example, to synthesise a protein which has never existed before in the universe (the number of possible proteins is a combinatorial explosion). You might try, for example, to crystallise this novel protein and see how difficult it is, then try again later and see if the universe has learned how to do it. To be extra careful, do it first on the International Space Station and then in a lab on the Earth. I suggested this almost twenty years ago as a test of Rupert Sheldrake's theory of morphic resonance, but (although doubtless Smolin would shun me for associating his theory with that one) it might produce interesting results here as well. The book concludes with a very personal look at the challenges facing a working scientist who has concluded the paradigm accepted by the overwhelming majority of his or her peers is incomplete and cannot be remedied by incremental changes based upon the existing foundation. He notes:
There is no more reasonable bet than that our current knowledge is incomplete. In every era of the past our knowledge was incomplete; why should our period be any different? Certainly the puzzles we face are at least as formidable as any in the past. But almost nobody bets this way. This puzzles me.

Well, it doesn't puzzle me. Ever since I studied classical economics, I've learned to look at the incentives in a system. When you regard academia today, there is huge risk and little reward in getting out a new notebook, looking at the first blank page, and striking out in an entirely new direction. Maybe if you were a twenty-something patent examiner in a small city in Switzerland in 1905, with no academic career or reputation at risk, you might go back to first principles and overturn space, time, and the wave theory of light all in one year, but today's institutional structure makes it almost impossible for a young researcher (and revolutionary ideas usually come from the young) to strike out in a new direction. It is a blessing that we have deep thinkers such as Lee Smolin setting aside the easy path to retirement to ask these deep questions today. Here is a lecture by the author at the Perimeter Institute about the topics discussed in the book. He concentrates mostly on the problems with quantum theory and not the speculative solutions discussed in the latter part of the book.
To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, 8 of 9 admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies. And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”.
(In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.) But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. As the vanguard of the proletarian revolution, by definition those states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to, or actively bent upon the destruction of, the peoples' state. With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of its territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which might leave Britain its supremacy on the seas and its far-flung colonies while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union. Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine.
Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated eventually to fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course. Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the Tsar's empire which had been lost after the 1917 revolution. Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.
On September 1st, 1939, Nazi Germany invaded Poland, just over a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside, continuing to trade with Nazi Germany, while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border. After the start of hostilities, a state of “phoney war” persisted until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared to be precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Tsars. Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere. But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East. After consolidating its position in the west and indefinitely deferring an invasion of Britain due to inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier.
Disinformation, spread where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, said that the troop concentrations were in preparation for an assault on British positions in the Near East or to blackmail the Soviet Union to obtain, for example, a long-term lease on its breadbasket, the Ukraine. Hitler, acutely aware that it was a two-front war which had spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but its only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, which had taken just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it had previously been paying market prices, but which would now be available for the cost of extraction and shipping. The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs of what was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important to it. Indeed, in 1940 the Soviet Union delivered 40 percent of the nickel, 55 percent of the manganese, 65 percent of the chromium, 67 percent of the asbestos, and 34 percent of the petroleum which supported the Nazi war machine, along with a million tonnes of grain and timber. Hours before the Nazi onslaught began, well after the order for it had been given, a Soviet train carrying grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with the dawn.
This is a magisterial work. It is unlikely ever to be equalled. There is abundant rich detail on every page. Want to know what the telephone number of the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses. Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you must ultimately understand Stalin, who succeeded at it for more than thirty years until felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower. Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people.
He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being, whatever the cost might be. So it is today, and let us hope that those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia. Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.
Just imagine if William the Bastard had succeeded in conquering England. We'd probably be speaking some unholy crossbreed of French and English…. The Republic is the only country in the world that recognizes allodial title,…. When Congress declares war, they have to elect one of their own to be a sacrificial victim,…. “There was a man from the state capitol who wanted to give us government funding to build what he called a ‘proper’ school, but he was run out of town, the poor dear.”

Pirates, of course, must always keenly scan the horizon for those who might want to put an end to the fun. And so it is for buccaneers sailing the Hertzian waves. You'll enjoy every minute getting to the point where you find out how it ends. And then, when you think it's all over, another door opens into a wider, and weirder, world in which we may expect further adventures. The second volume in the series, Five Million Watts, was published in April, 2019. At present, only a Kindle edition is available. The book is not available under the Kindle Unlimited free rental programme, but is very inexpensive.
I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was, thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later, at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to the one he had imagined in boyhood. Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and his commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file dispatches as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures.
He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described, along with the rest of the Mahdist War, in his book, The River War. In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. While covering the conflict he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and the capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book. Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,
Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat. Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,
A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure. Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said, after Stalin's 1939 moves to dominate Latvia, Lithuania, and Estonia, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.” Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,
You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.

And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,
No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated. Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,
Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity. The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together) and won the Nobel Prize in Literature for 1953, plus another five million words of public speeches. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any historian who ever lived.
Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom. Here is an Uncommon Knowledge interview with the author about Churchill and this biography. This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.
Everything else in the modern world is of Christian origin, even everything that seems most anti-Christian. The French Revolution is of Christian origin. The newspaper is of Christian origin. The anarchists are of Christian origin. Physical science is of Christian origin. The attack on Christianity is of Christian origin. There is one thing, and one thing only, in existence at the present day which can in any sense accurately be said to be of pagan origin, and that is Christianity.

Much more is at stake than one sect (albeit the largest) of Christianity. The infiltration, subversion, and overt attacks on the Roman Catholic church are an assault upon an institution which has been central to Western civilisation for two millennia. If it falls, and it is falling, in large part due to self-inflicted wounds, the forces of darkness will be coming for the smaller targets next. Whatever your religion, or whether you have one or not, collapse of one of the three pillars of our cultural identity is something to worry about and work to prevent. In the author's words, “What few on the political Right have grasped is that the most important component in this trifecta isn't capitalism, or even democracy, but Christianity.” With all three under assault from all sides, this book makes an eloquent argument to secular free marketeers and champions of consensual government not to ignore the cultural substrate which allowed both to emerge and flourish.
In June and July [1961], detailed specifications for the spacecraft hardware were completed. By the end of July, the Requests for Proposals were on the street. In August, the first hardware contract was awarded to M.I.T.'s Instrumentation Laboratory for the Apollo guidance system. NASA selected Merritt Island, Florida, as the site for a new spaceport and acquired 125 square miles of land. In September, NASA selected Michoud, Louisiana, as the production facility for the Saturn rockets, acquired a site for the Manned Spacecraft Center—the Space Task Group grown up—south of Houston, and awarded the contract for the second stage of the Saturn [V] to North American Aviation. In October, NASA acquired 34 square miles for a Saturn test facility in Mississippi. In November, the Saturn C-1 was successfully launched with a cluster of eight engines, developing 1.3 million pounds of thrust. The contract for the command and service module was awarded to North American Aviation. In December, the contract for the first stage of the Saturn [V] was awarded to Boeing and the contract for the third stage was awarded to Douglas Aircraft. By January of 1962, construction had begun at all of the acquired sites and development was under way at all of the contractors.

Such was the urgency with which NASA was responding to Kennedy's challenge and deadline that all of these decisions and work were done before deciding on how to get to the Moon—the so-called “mission mode”. There were three candidates: direct-ascent, Earth orbit rendezvous (EOR), and lunar orbit rendezvous (LOR). Direct ascent was the simplest, and much like the idea of a Moon ship in golden age science fiction. One launch from Earth would send a ship to the Moon which would land there, then take off and return directly to Earth.
There would be no need for rendezvous and docking in space (which had never been attempted, and nobody was sure was even possible), and no need for multiple launches per mission, which was seen as an advantage at a time when rockets were only marginally reliable and notorious for long delays from their scheduled launch time. The downside of direct-ascent was that it would require an enormous rocket: planners envisioned a monster called Nova which would have dwarfed the Saturn V eventually used for Apollo and required new manufacturing, test, and launch facilities to accommodate its size. Also, it is impossible to design a ship which is optimised both for landing under rocket power on the Moon and re-entering Earth's atmosphere at high speed. Still, direct-ascent seemed to involve the least number of technological unknowns. Ever wonder why the Apollo service module had that enormous Service Propulsion System engine? When it was specified, the mission mode had not been chosen, and it was made powerful enough to lift the entire command and service module off the lunar surface and return them to the Earth after a landing in direct-ascent mode. Earth orbit rendezvous was similar to what Wernher von Braun envisioned in his 1950s popular writings about the conquest of space. Multiple launches would be used to assemble a Moon ship in low Earth orbit, and then, when it was complete, it would fly to the Moon, land, and then return to Earth. Such a plan would not necessarily even require a booster as large as the Saturn V. One might, for example, launch the lunar landing and return vehicle on one Saturn I, the stage which would propel it to the Moon on a second, and finally the crew on a third, who would board the ship only after it was assembled and ready to go. 
This was attractive in not requiring the development of a giant rocket, but required on-time launches of multiple rockets in quick succession, orbital rendezvous and docking (and in some schemes, refuelling), and still had the problem of designing a craft suitable both for landing on the Moon and returning to Earth. Lunar orbit rendezvous was originally considered a distant third in the running. A single large rocket (but smaller than Nova) would launch two craft toward the Moon. One ship would be optimised for flight through the Earth's atmosphere and return to Earth, while the other would be designed solely for landing on the Moon. The Moon lander, operating only in vacuum and the Moon's weak gravity, need not be streamlined or structurally strong, and could be potentially much lighter than a ship able to both land on the Moon and return to Earth. Finally, once its mission was complete and the landing crew safely back in the Earth return ship, it could be discarded, meaning that all of the hardware needed solely for landing on the Moon need not be taken back to the Earth. This option was attractive, requiring only a single launch and no gargantuan rocket, and allowed optimising the lander for its mission (for example, providing better visibility to its pilots of the landing site), but it not only required rendezvous and docking, but doing it in lunar orbit which, if they failed, would strand the lander crew in orbit around the Moon with no hope of rescue. After a high-stakes technical struggle, in the latter part of 1962, NASA selected lunar orbit rendezvous as the mission mode, with each landing mission to be launched on a single Saturn V booster, making the decision final with the selection of Grumman as contractor for the Lunar Module in November of that year. Had another mission mode been chosen, it is improbable in the extreme that the landing would have been accomplished in the 1960s. The Apollo architecture was now in place. 
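The mass advantage of discarding the lander, which drove the choice of lunar orbit rendezvous, can be made concrete with the Tsiolkovsky rocket equation. The sketch below is my own illustration, not from the book: the vehicle masses and the 1 km/s trans-Earth injection figure are round-number assumptions chosen only to make the comparison tangible.

```python
from math import exp

def propellant_needed(payload_kg, delta_v_ms, isp_s=311.0):
    """Propellant required to give `payload_kg` a velocity change of
    `delta_v_ms`, from the rocket equation; 311 s is roughly the
    specific impulse of a storable-propellant engine like the SPS."""
    g0 = 9.80665                                  # standard gravity, m/s^2
    mass_ratio = exp(delta_v_ms / (isp_s * g0))   # initial/final mass
    return payload_kg * (mass_ratio - 1.0)

# Assumed (illustrative) masses for the return burn from lunar orbit.
CSM_RETURN_KG = 15_000   # command and service module
LANDER_KG = 15_000       # lunar landing craft

# Direct/EOR style: everything that landed must also be sent home.
both = propellant_needed(CSM_RETURN_KG + LANDER_KG, 1_000)
# LOR: the lander is left behind in lunar orbit first.
csm_only = propellant_needed(CSM_RETURN_KG, 1_000)

print(f"propellant, lander retained:  {both:,.0f} kg")
print(f"propellant, lander discarded: {csm_only:,.0f} kg")
```

Since propellant scales with the mass being accelerated, every kilogram of lander hardware left in lunar orbit is a kilogram that needs no return propellant, and the savings compound backward through every earlier burn of the mission.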
All that remained was building machines which had never been imagined before, learning to do things (on-time launches, rendezvous and docking in space, leaving spacecraft and working in the vacuum, precise navigation over distances no human had ever travelled before, and assessing all of the “unknown unknowns” [radiation risks, effects of long-term weightlessness, properties of the lunar surface, ability to land on lunar terrain, possible chemical or biological threats on the Moon, etc.]) and developing plans to cope with them. This masterful book is the story of how what is possibly the largest collection of geeks and nerds ever assembled and directed at a single goal, funded with the abundant revenue from an economic boom, spurred by a geopolitical competition against the sworn enemy of liberty, took on these daunting challenges and, one by one, overcame them, found a way around, or simply accepted the risk because it was worth it. They learned how to tame giant rocket engines that randomly blew up by setting off bombs inside them. They abandoned the careful step-by-step development of complex rockets in favour of “all-up testing” (stack all of the untested pieces the first time, push the button, and see what happens) because “there wasn't enough time to do it any other way”. People were working 16–18–20 hours a day, seven days a week. Flight surgeons in Mission Control handed out “go and whoa pills”—amphetamines and barbiturates—to keep the kids on the console awake at work and asleep those few hours they were at home—hey, it was the Sixties! This is not a tale of heroic astronauts and their exploits. The astronauts, as they have been the first to say, were literally at the “tip of the spear” and would not have been able to complete their missions without the work of almost half a million uncelebrated people who made them possible, not to mention the hundred million or so U.S. taxpayers who footed the bill. This was not a straight march to victory. 
Three astronauts died in a launch pad fire the investigation of which revealed shockingly slapdash quality control in the assembly of their spacecraft and NASA's ignoring the lethal risk of fire in a pure oxygen atmosphere at sea level pressure. The second flight of the Saturn V was a near calamity due to multiple problems, some entirely avoidable (and yet the decision was made to man the next flight of the booster and send the crew to the Moon). Neil Armstrong narrowly escaped death in May 1968 when the Lunar Landing Research Vehicle he was flying ran out of fuel and crashed. And the division of responsibility between the crew in the spacecraft and mission controllers on the ground had to be worked out before it would be tested in flight where getting things right could mean the difference between life and death. What can we learn from Apollo, fifty years on? Other than standing in awe at what was accomplished given the technology and state of the art of the time, and on a breathtakingly short schedule, little or nothing that is relevant to the development of space in the present and future. Apollo was the product of a set of circumstances which happened to come together at one point in history and are unlikely to ever recur. Although some of those who worked on making it a reality were dreamers and visionaries who saw it as the first step into expanding the human presence beyond the home planet, to those who voted to pay the forbidding bills (at its peak, NASA's budget, mostly devoted to Apollo, was more than 4% of all Federal spending; in recent years, it has settled at around one half of one percent: a national commitment to space eight times smaller as a fraction of total spending) Apollo was seen as a key battle in the Cold War. Allowing the Soviet Union to continue to achieve milestones in space while the U.S. 
played catch-up or forfeited the game would reinforce the Soviet message to the developing world that their economic and political system was the wave of the future, leaving decadent capitalism in the dust. A young, ambitious, forward-looking president, smarting from being scooped once again by Yuri Gagarin's orbital flight and the humiliation of the débâcle at the Bay of Pigs in Cuba, seized on a bold stroke that would show the world the superiority of the U.S. by deploying its economic, industrial, and research resources toward a highly visible goal. And, after being assassinated two and a half years later, his successor, a space enthusiast who had directed a substantial part of NASA's spending to his home state and those of his political allies, presented the program as the legacy of the martyred president and vigorously defended it against those who tried to kill it or reduce its priority. The U.S. was in an economic boom which would last through most of the Apollo program until after the first Moon landing, and was the world's unchallenged economic powerhouse. And finally, the federal budget had not yet been devoured by uncontrollable “entitlement” spending and national debt was modest and manageable: if the national will was there, Apollo was affordable. This confluence of circumstances was unique to its time and has not been repeated in the half century thereafter, nor is it likely to recur in the foreseeable future. Space enthusiasts who look at Apollo and what it accomplished in such a short time often err in assuming that a similar program (government funded, on a massive scale with lavish budgets, focussed on a single goal, and based on special-purpose disposable hardware suited only for its specific mission) is the only way to open the space frontier. They are not only wrong in this assumption, but they are dreaming if they think there is the public support and political will to do anything like Apollo today.
In fact, Apollo was not even particularly popular in the 1960s: only at one point in 1965 did public support for funding of human trips to the Moon poll higher than 50% and only around the time of the Apollo 11 landing did 50% of the U.S. population believe Apollo was worth what was being spent on it. Indeed, despite being motivated as a demonstration of the superiority of free people and free markets, Project Apollo was a quintessentially socialist space program. It was funded by money extracted by taxation, its priorities set by politicians, and its operations centrally planned and managed in a top-down fashion of which the Soviet functionaries at Gosplan could only dream. Its goals were set by politics, not economic benefits, science, or building a valuable infrastructure. This was not lost on the Soviets. Here is Soviet Minister of Defence Dmitriy Ustinov speaking at a Central Committee meeting in 1968, quoted by Boris Chertok in volume 4 of Rockets and People.
…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

This kind of socialist operation works fine in a wartime crash program driven by time pressure, where unlimited funds and manpower are available, and where there is plenty of capital which can be consumed or borrowed to pay for it. But it does not create sustainable enterprises. Once the goal is achieved, the war won (or lost), or it runs out of other people's money to spend, the whole thing grinds to a halt or stumbles along, continuing to consume resources while accomplishing little. This was the predictable trajectory of Apollo. Apollo was one of the noblest achievements of the human species and we should celebrate it as a milestone in the human adventure, but trying to repeat it is pure poison to the human destiny in the solar system and beyond. This book is a superb recounting of the Apollo experience, told mostly about the largely unknown people who confronted the daunting technical problems and, one by one, found solutions which, if not perfect, were good enough to land on the Moon in 1969. Later chapters describe key missions, again concentrating on the problem solving which went on behind the scenes to achieve their goals or, in the case of Apollo 13, get home alive. Looking back on something that happened fifty years ago, especially if you were born afterward, it may be difficult to appreciate just how daunting the idea of flying to the Moon was in May 1961. This book is the story of the people who faced that challenge, pulled it off, and are largely forgotten today.
Both the 1989 first edition and 2004 paperback revised edition are out of print and available only at absurd collectors' prices. The Kindle edition, which is based upon the 2004 edition with small revisions to adapt to digital reader devices, is available at a reasonable price, as is an unabridged audio book, which is a reading of the 2004 edition. You'd think there would have been a paperback reprint of this valuable book in time for the fiftieth anniversary of the landing of Apollo 11 (and the thirtieth anniversary of its original publication), but there wasn't. Project Apollo is such a huge, sprawling subject that no book can possibly cover every aspect of it. For those who wish to delve deeper, here is a reading list of excellent sources. I have read all of these books and recommend every one. For those I have reviewed, I link to my review; for others, I link to a source where you can obtain the book.
In the distance, glistening partitions, reminiscent of the algal membranes that formed the cages in some aquatic zoos, swayed back and forth gently, as if in time to mysterious currents. Behind each barrier the sea changed color abruptly, the green giving way to other bright hues, like a fastidiously segregated display of bioluminescent plankton.

Oh, wow. And then, it stops. I don't mean ends, as that would imply that everything that's been thrown up in the air is somehow resolved. There is an attempt to close the circle with the start of the story, but a whole universe of questions is left unanswered. The human perspective is inadequate to describe a place where Planck length objects interact in Planck time intervals and the laws of physics are made up on the fly. Ultimately, the story failed for me since it never engaged me with the characters—I didn't care what happened to them. I'm a fan of hard science fiction, but this was just too adamantine to be interesting. The title, Schild's Ladder, is taken from a method in differential geometry which is used to approximate the parallel transport of a vector along a curve.
In his world, you didn't let wrongs go unanswered—not wrongs like this, and especially when you had the ability to do something. Vengeance was a necessary function of a civilized world, particularly at its margins, in its most remote and wild regions. Evildoers, unwilling to submit to the rule of law, needed to lie awake in their beds at night worried about when justice would eventually come for them. If laws and standards were not worth enforcing, then they certainly couldn't be worth following.

Harvath forms tenuous alliances with those he encounters, and then must confront an all-out assault by élite mercenaries who, apparently unsatisfied with the fear induced by fanatic Russian operatives, model themselves on the Nazi SS. Then, after survival, it's time for revenge. Harvath has done his biochemistry homework and learned well the off-label applications of suxamethonium chloride. Sux to be you, Boris. This is a tightly-crafted thriller which is, in my opinion, one of the best of Brad Thor's novels. There is no political message or agenda nor any of the Washington intrigue which has occupied recent books. Here it is a pure struggle between a resourceful individual, on his own against amoral forces of pure evil, in an environment as deadly as his human adversaries.
He held forth on a great range of topics, on some of which he was thoroughly expert, but on others of which he may have derived his views from the few pages of a book at which he happened to glance. The air of authority was the same in both cases.

Still other IYIs have no authentic credentials whatsoever, but derive their purported authority from the approbation of other IYIs in completely bogus fields such as gender and ethnic studies, critical anything studies, and nutrition science. As the author notes, riding some of his favourite hobby horses,
Typically, the IYI get first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains. The IYI has been wrong, historically, about Stalinism, Maoism, Iraq, Libya, Syria, lobotomies, urban planning, low-carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p values. But he is still convinced his current position is right.

Doubtless, IYIs have always been with us (at least since societies developed to such a degree that they could afford some fraction of the population who devoted themselves entirely to words and ideas)—Nietzsche called them “Bildungsphilisters”—but since the middle of the twentieth century they have been proliferating like pond scum, and now hold much of the high ground in universities, the media, think tanks, and senior positions in the administrative state. They believe their models (almost always linear and first-order) accurately describe the behaviour of complex dynamic systems, and that they can “nudge” the less-intellectually-exalted and credentialed masses into virtuous behaviour, as defined by them. When the masses, who have a limited tolerance for fatuous nonsense and for being scolded by those who have been consistently wrong about, well, everything, dare to push back and vote for candidates and causes which make sense to them and seem better aligned with the reality they see on the ground, they are accused of—gasp—populism, and must be guided in the proper direction by their betters, their uncouth speech silenced in favour of the cultured “consensus” of the few. One of the reasons we seem to have many more IYIs around than we used to, and that they have more influence over our lives, is related to scaling.
As the author notes, “it is easier to macrobull***t than microbull***t”. A grand theory which purports to explain the behaviour of billions of people in a global economy over a period of decades is impossible to test or verify analytically or by simulation. An equally silly theory that describes things within people's direct experience is likely to be immediately rejected out of hand as the absurdity it is. This is one reason decentralisation works so well: when you push decision making down as close as possible to individuals, their common sense asserts itself and immunises them from the blandishments of IYIs.
America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, with 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them. The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920. 
At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (nor anything like what existed in private companies a fraction the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, secured passage of a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years. Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco. Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public.
As he left Vermont for Washington, he said, “I believe I can swing it.” As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it. He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort. Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid as when they were applied in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try. It worked. 
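Mellon's distinction between marginal and average rates is easy to make concrete. In the toy schedule below, the bracket structure and threshold are invented purely for illustration (only the 70% and 31% top rates come from the text); this is a sketch of the incentive argument, not the actual tax tables of the period.

```python
# A toy progressive tax schedule: each (threshold, rate) pair taxes the
# slice of income above that threshold, up to the next threshold.

def tax(income, brackets):
    """Total tax owed under a simple bracket schedule."""
    owed = 0.0
    extended = brackets[1:] + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(brackets, extended):
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

postwar  = [(0, 0.04), (100_000, 0.70)]  # 70% top marginal rate
coolidge = [(0, 0.04), (100_000, 0.31)]  # cut to 31% by the end of the 1920s

income = 200_000
for name, sched in [("70% top rate", postwar), ("31% top rate", coolidge)]:
    # The marginal rate is the tax on the *next* dollar earned.
    marginal = tax(income + 1, sched) - tax(income, sched)
    print(f"{name}: keep {1 - marginal:.2f} of the next dollar earned")
```

The average rate on this income barely moves between the two schedules, but the take-home share of the next dollar more than doubles, which is exactly the incentive Mellon argued drives whether the extra income gets earned (and taxed) at all.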
The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved from 70% to 31%. Achieving these goals required constant vigilance and an unceasing struggle with Congress, where politicians of both parties regarded any budget surplus or increase in revenue generated by lower tax rates and a booming economy as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defense aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest and that, “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states. The New York Times approved, “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.” When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day.
Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes. Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics, but in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood and sixteen-year-old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss and never forgave himself for bringing his son to Washington, where the injury occurred. In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo. In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) considered ten years in office (longer than any previous president had served) too long for any individual.
Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, had few tools with which to intervene in any case. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water. But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to. Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt and raised the top marginal personal income tax rate to 63% and corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed re-election campaign in 1932. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace. Calvin Coolidge was arguably the last U.S. 
president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to that of an executive in a business: seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis. This is a superb history of Coolidge and his time, full of lessons for our age, which has veered so far from the constitutional framework he so respected.
The only way to smash this racket is to conscript capital and industry and labor before the nations [sic] manhood can be conscripted. One month before the Government can conscript the young men of the nation—it must conscript capital and industry. Let the officers and the directors and the high-powered executives of our armament factories and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted—to get $30 a month, the same wage as the lads in the trenches get. Let the workers in these plants get the same wages—all the workers, all presidents, all directors, all managers, all bankers—yes, and all generals and all admirals and all officers and all politicians and all government office holders—everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches! Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and majors [I think “mayors” was intended —JW] pay half their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds. Why shouldn't they?

Butler goes on to recommend that any declaration of war require approval by a national plebiscite in which voting would be restricted to those subject to conscription in a military conflict. (Writing in 1935, he never foresaw that young men and women would be sent into combat without so much as a declaration of war being voted by Congress.) Further, he would restrict all use of military force to genuine defence of the nation, in particular, limiting the Navy to operating no more than 200 miles (320 km) from the coastline. This is an impassioned plea against the folly of foreign wars by a man whose career was as a warrior. 
One can argue that there is a legitimate interest in, say, assuring freedom of navigation in international waters, but looking back on the results of U.S. foreign wars in the 21st century, it is difficult to argue they can be justified any more than the “Banana Wars” Butler fought in his time.
Listen, I understand who you are, and what this is. Please let me be clear that I have no intention to cooperate with you. I'm not going to cooperate with any intelligence service. I mean no disrespect, but this isn't going to be that kind of meeting. If you want to search my bag, it's right here. But I promise you, there's nothing in it that can help you.

And that was that. Edward Snowden could have kept quiet, done his job, collected his handsome salary, continued to live in a Hawaiian paradise, and shared his life with Lindsay, but he threw it all away on a matter of principle and duty to his fellow citizens and the Constitution he had sworn to defend when taking the oath upon joining the Army and the CIA. On the basis of the law, he is doubtless guilty of the three federal crimes with which he has been charged, sufficient to lock him up for as many as thirty years should the U.S. lay its hands on him. But he believes he did the correct thing in an attempt to right wrongs which were intolerable. I agree, and can only admire his courage. If anybody is deserving of a Presidential pardon, it is Edward Snowden. There is relatively little discussion here of the actual content of the documents which were disclosed and the surveillance programs they revealed. For full details, visit the Snowden Surveillance Archive, which has copies of all of the documents which have been disclosed by the media to date. U.S. government employees and contractors should read the warning on the site before viewing this material.
But what I do know is that the U.S. isn't ready. If Halabi's figured out a way to hit us with something big—something biological—what's our reaction going to be? The politicians will run for the hills and point fingers at each other. And the American people…. They faint if someone uses insensitive language in their presence and half of them couldn't run up a set of stairs if you put a gun to their head. What'll happen if the real s*** hits the fan? What are they going to do if they're faced with something that can't be fixed by a Facebook petition?

So Rapp is as ruthless with his superiors as with the enemy, and obtains the free hand he needs to get the job done. Eventually Rapp and his team identify what is a potentially catastrophic threat and must swing into action, despite the political and diplomatic repercussions, to avert disaster. And then it is time to settle some scores. Kyle Mills has delivered another thriller which is both in the tradition of Mitch Rapp and further develops his increasingly complex character in new ways.
Again our computations have been flushed and the LM is still flying. In Cambridge someone says, “Something is stealing time.” … Some dreadful thing is active in our computer and we do not know what it is or what it will do next. Unlike Garman [AGC support engineer for Mission Control] in Houston I know too much. If it were in my hands, I would call an abort.

As the Lunar Module passed 3000 feet, another alarm, this time a 1201—VAC areas exhausted—flashed. This is another indication of overload, but of a different kind. Mission Control immediately calls up, “We're go. Same type. We're go.” Well, it wasn't the same type, but they decided to press on. Descending through 2000 feet, the DSKY (computer display and keyboard) goes blank and stays blank for ten agonising seconds. Seventeen seconds later another 1202 alarm, and a blank display for two seconds—Armstrong's heart rate reaches 150. A total of five program alarms and resets had occurred in the final minutes of landing. But why? And could the computer be trusted to fly the return from the Moon's surface to rendezvous with the Command Module? While the Lunar Module was still on the lunar surface, Instrumentation Laboratory engineer George Silver figured out what happened. During the landing, the Lunar Module's rendezvous radar (used only during return to the Command Module) was powered on and set to a position where its reference timing signal came from an internal clock rather than the AGC's master timing reference. If these clocks were in a worst-case out-of-phase condition, the rendezvous radar would flood the AGC with what we used to call “nonsense interrupts” back in the day, at a rate of 800 per second, each consuming one 11.72 microsecond memory cycle. This imposed an additional load of more than 13% on the AGC, which pushed it over the edge and caused tasks deemed non-critical (such as updating the DSKY) not to be completed on time, resulting in the program alarms and restarts. 
The fix was simple: don't enable the rendezvous radar until you need it, and when you do, put the switch in the position that synchronises it with the AGC's clock. But the AGC had proved its excellence as a real-time system: in the face of unexpected and unknown external perturbations it had completed the mission flawlessly, while alerting its developers to a problem which required their attention. The creativity of the AGC software developers, and the merit of computer systems sufficiently simple that the small number of people who designed them completely understood every aspect of their operation, were demonstrated on Apollo 14. As the Lunar Module was checked out prior to the landing, the astronauts in the spacecraft and Mission Control saw the abort signal come on, which was supposed to indicate the big Abort button on the control panel had been pushed. This button, if pressed during descent to the lunar surface, immediately aborted the landing attempt and initiated a return to lunar orbit. This was a “one and done” operation: no Microsoft-style “Do you really mean it?” tea ceremony before ending the mission. Tapping the switch made the signal come and go, and it was concluded that the most likely cause was a piece of metal contamination floating around inside the switch and occasionally shorting the contacts. The abort signal caused no problems during lunar orbit, but if it should happen during descent, perhaps jostled by vibration from the descent engine, it would be disastrous: wrecking a mission costing hundreds of millions of dollars and, coming on the heels of Apollo 13's mission failure and narrow escape from disaster, possibly bringing an end to the Apollo lunar landing programme. The Lunar Module AGC team, with Don Eyles as the lead, was faced with an immediate challenge: was there a way to patch the software to ignore the abort switch, protecting the landing, while still allowing an abort to be commanded, if necessary, from the computer keyboard (DSKY)? 
The answer to this was obvious and immediately apparent: no. The landing software, like all AGC programs, ran from read-only rope memory which had been woven on the ground months before the mission and could not be changed in flight. But perhaps there was another way. Eyles and his colleagues dug into the program listing, traced the path through the logic, and cobbled together a procedure, then tested it in the simulator at the Instrumentation Laboratory. While the AGC's programming was fixed, the AGC operating system provided low-level commands which allowed the crew to examine and change bits in locations in the read-write memory. Eyles discovered that by setting the bit which indicated that an abort was already in progress, the abort switch would be ignored at the critical moments during the descent. As with all software hacks, this had other consequences requiring their own work-arounds, but by the time Apollo 14's Lunar Module emerged from behind the Moon on course for its landing, a complete procedure had been developed which was radioed up from Houston and worked perfectly, resulting in a flawless landing. These and many other stories of the development and flight experience of the AGC lunar landing software are related here by the person who wrote most of it and supported every lunar landing mission as it happened. Where technical detail is required to understand what is happening, no punches are pulled, even to the level of bit-twiddling and hideously clever programming tricks such as using an overflow condition to skip over an EXTEND instruction, converting the following instruction from double precision to single precision, all in order to save around forty words of precious non-bank-switched memory. In addition, this is a personal story, set in the context of the turbulent 1960s and early ’70s, of the author and other young people accomplishing things no humans had ever before attempted. 
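The idea behind that procedure can be caricatured in a few lines. What follows is emphatically not AGC code (the real patch was a sequence of DSKY entries poking flag bits in erasable memory, against software woven into rope memory); it is a hypothetical Python sketch, with invented names, of the logic that made the trick work: the landing program honours the panel switch only when no abort is already flagged as in progress, so pre-setting that flag neuters a flaky switch while a keyboard-commanded abort still gets through.

```python
# Hypothetical sketch of the Apollo 14 work-around (invented names; the
# actual logic lived in AGC rope memory and erasable flag words).

class LandingGuidance:
    def __init__(self):
        self.abort_in_progress = False  # the flag bit the crew pre-set
        self.aborted = False

    def poll_abort_switch(self, switch_closed):
        # A (possibly spurious) panel-switch closure starts an abort only
        # if one is not already flagged as under way.
        if switch_closed and not self.abort_in_progress:
            self.abort_in_progress = True
            self.aborted = True

    def dsky_abort(self):
        # An abort keyed in on the DSKY bypasses the switch logic entirely.
        self.aborted = True

lm = LandingGuidance()
lm.abort_in_progress = True    # the patch: pre-set the flag before descent
lm.poll_abort_switch(True)     # contamination shorts the switch...
assert not lm.aborted          # ...and the landing continues
lm.dsky_abort()                # a deliberate abort still works
assert lm.aborted
```

As in the real procedure, the cost of the hack is that a genuine switch-commanded abort is ignored too, which is why, as related above, it had further consequences requiring work-arounds of their own.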
It was a time when everybody was making it up as they went along, learning from experience, and improvising on the fly; a time when a person who had never written a line of computer code would write, as his first program, the code that would land men on the Moon, and when the creativity and hard work of individuals made all the difference. Already, by the end of the Apollo project, the curtain was ringing down on this era. Even though a number of improvements had been developed for the LM AGC software which improved precision landing capability, reduced the workload on the astronauts, and increased robustness, none of these were incorporated in LUMINARY 210, the software for the final three Apollo missions, which was deemed “good enough”, the benefit of the changes not being worth the risk and effort to test and incorporate them. Programmers seeking this kind of adventure today will not find it at NASA or its contractors, but instead in the innovative “New Space” and smallsat industries.
It's a maxim among popular science writers that every equation you include cuts your readership by a factor of two, so among the hardy half who remain, let's see how this works. It's really very simple (and indeed, far simpler than actual population dynamics in a real environment). The left side, “dP/dt” simply means “the rate of growth of the population P with respect to time, t”. On the right hand side, “rP” accounts for the increase (or decrease, if r is less than 0) in population, proportional to the current population. The population is limited by the carrying capacity of the habitat, K, which is modelled by the factor “(1 − P/K)”. Now think about how this works: when the population is very small, P/K will be close to zero and, subtracted from one, will yield a number very close to one. This, then, multiplied by the increase due to rP will have little effect and the growth will be largely unconstrained. As the population P grows and begins to approach K, however, P/K will approach unity and the factor will fall to zero, meaning that growth has completely stopped due to the population reaching the carrying capacity of the environment—it simply doesn't produce enough vegetation to feed any more rabbits. If the rabbit population overshoots, this factor will go negative and there will be a die-off which eventually brings the population P below the carrying capacity K. (Sorry if this seems tedious; one of the great things about learning even a very little about differential equations is that all of this is apparent at a glance from the equation once you get over the speed bump of understanding the notation and algebra involved.) This is grossly over-simplified. In fact, real populations are prone to oscillations and even chaotic dynamics, but we don't need to get into any of that for what follows, so I won't. Let's complicate things in our bunny paradise by introducing a population of wolves. 
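Before the wolves arrive, the single-species behaviour just described is easy to see numerically. Here is a minimal sketch in Python (my own illustration, not from the book; the parameter values are arbitrary) which integrates the Verhulst equation with simple Euler steps:

```python
# Minimal Euler integration of the logistic (Verhulst) equation
#   dP/dt = r * P * (1 - P/K)
# All parameter values below are illustrative.

def logistic_growth(p0, r, k, dt, steps):
    """Return the population trajectory under dP/dt = r*P*(1 - P/K)."""
    p = p0
    history = [p]
    for _ in range(steps):
        p += r * p * (1 - p / k) * dt
        history.append(p)
    return history

rabbits = logistic_growth(p0=10.0, r=0.5, k=1000.0, dt=0.1, steps=400)
# While P << K, growth is nearly exponential; as P approaches K the
# (1 - P/K) factor throttles it and the population levels off near K.
```

Run it and the early steps grow almost exponentially; as P nears K = 1000 the (1 − P/K) brake takes hold and the trajectory flattens out at the carrying capacity, just as described above.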
The wolves can't eat the vegetation, since their digestive systems cannot extract nutrients from it, so their only source of food is the rabbits. Each wolf eats many rabbits every year, so a large rabbit population is required to support a modest number of wolves. Now if we go back and look at the equation for wolves, K represents the number of wolves the rabbit population can sustain, in the steady state, where the number of rabbits eaten by the wolves just balances the rabbits' rate of reproduction. This will often result in a rabbit population smaller than the carrying capacity of the environment, since their population is now constrained by wolf predation and not K. What happens as this (oversimplified) system cranks away, generation after generation, and Darwinian evolution kicks in? Evolution consists of two processes: variation, which is largely random, and selection, which is sensitively dependent upon the environment. The rabbits are unconstrained by K, the carrying capacity of their environment: if their numbers increase, the wolves will simply eat more of them and bring the population back down to a level P substantially smaller than K. The rabbit population, then, is not constrained by K at all, but rather by r: the rate at which they can produce new offspring. Population biologists call this an r-selected species: evolution will select for individuals who produce the largest number of progeny in the shortest time, and hence for a life cycle which minimises parental investment in offspring and against mating strategies, such as lifetime pair bonding, which would limit their numbers. Rabbits which produce fewer offspring will lose a larger fraction of them to predation (which affects all rabbits, essentially at random), and the genes which they carry will be selected out of the population. 
An r-selected population, sometimes referred to as r-strategists, will tend to be small, with short gestation time, high fertility (offspring per litter), rapid maturation to the point where offspring can reproduce, and broad distribution of offspring within the environment. Wolves operate under an entirely different set of constraints. Their entire food supply is the rabbits, and since it takes a lot of rabbits to keep a wolf going, there will be fewer wolves than rabbits. What this means, going back to the Verhulst equation, is that the 1 − P/K factor will largely determine their population: the carrying capacity K of the environment supports a much smaller population of wolves than their food source, rabbits, and if their rate of population growth r were to increase, it would simply mean that more wolves would starve due to insufficient prey. This results in an entirely different set of selection criteria driving their evolution: the wolves are said to be K-selected or K-strategists. A successful wolf (defined by evolution theory as more likely to pass its genes on to successive generations) is not one which can produce more offspring (who would merely starve by hitting the K limit before reproducing), but rather highly optimised predators, able to efficiently exploit the limited supply of rabbits, and to pass their genes on to a small number of offspring, produced infrequently, which require substantial investment by their parents to train them to hunt and, in many cases, acquire social skills to act as part of a group that hunts together. These K-selected species tend to be larger, live longer, have fewer offspring, and have parents who spend much more effort raising them and training them to be successful predators, either individually or as part of a pack. 
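The asymmetry between the two strategies can be made concrete with a toy coupled version of the model, again my own sketch with invented coefficients: the rabbits follow the logistic equation minus a predation term, while the wolves' effective carrying capacity is however many wolves the current rabbit population can feed.

```python
# A toy two-species sketch (all coefficients invented for illustration):
#   dR/dt = r_r * R * (1 - R/K) - a * W   # rabbits: logistic minus predation
#   dW/dt = r_w * W * (1 - W / (R / q))   # wolves: carrying capacity is R/q
# where q is the number of rabbits needed to sustain one wolf.

def simulate(R=500.0, W=5.0, K=1000.0, q=50.0,
             r_r=1.0, r_w=0.2, a=10.0, dt=0.01, steps=10_000):
    """Euler-integrate the coupled rabbit/wolf equations above."""
    for _ in range(steps):
        dR = r_r * R * (1 - R / K) - a * W
        dW = r_w * W * (1 - W / (R / q))
        R += dR * dt
        W += dW * dt
    return R, W

rabbits, wolves = simulate()
# Predation holds the rabbits below the vegetation's K, while the wolves
# sit at roughly rabbits/q, the ceiling their prey supply allows.
```

With these illustrative numbers the system settles near 800 rabbits and 16 wolves: the rabbits are held below their vegetation limit of K = 1000 by predation, so their evolution is driven by r, while the wolves are pinned at the ceiling their prey supply allows, so theirs is driven by K — exactly the split described above.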
“K or r, r or K: once you've seen it, you can't look away.” Just as our island of bunnies and wolves was over-simplified, the dichotomy of r- and K-selection is rarely precisely observed in nature (although rabbits and wolves are pretty close to the extremes, which is why I chose them). Many species fall somewhere in the middle and, more importantly, are able to shift their strategy on the fly, much faster than evolution by natural selection, based upon the availability of resources. These r/K shape-shifters react to their environment. When resources are abundant, they adopt an r-strategy, but as their numbers approach the carrying capacity of their environment, they shift to life cycles you'd expect from K-selection. What about humans? At first glance, humans would seem to be a quintessentially K-selected species. We are large, have long lifespans (about twice as long as we “should” based upon the number of heartbeats per lifetime of other mammals), usually produce only one child (and occasionally two) per gestation, with around a one-year turn-around between children, and massive investment by parents in raising infants to the point of minimal autonomy and many additional years before they become fully functional adults. Humans are “knowledge workers”, and whether they are hunter-gatherers, farmers, or denizens of cubicles at The Company, live largely by their wits, which are a combination of the innate capability of their hypertrophied brains and what they've learned in their long apprenticeship through childhood. Humans are not just predators on what they eat, but also on one another. They fight, and they fight in bands, which means that they either develop the social skills to defend themselves and meet their needs by raiding other, less competent groups, or get selected out in the fullness of evolutionary time. But humans are also highly adaptable. 
Since modern humans appeared some time between fifty and two hundred thousand years ago, they have survived, prospered, proliferated, and spread into almost every habitable region of the Earth. They have been hunter-gatherers, farmers, warriors, city-builders, conquerors, explorers, colonisers, traders, inventors, industrialists, financiers, managers, and, in the Final Days of their species, WordPress site administrators. In many species, the selection of a predominantly r or K strategy is a mix of genetics and switches that get set based upon experience in the environment. It is reasonable to expect that humans, with their large brains and ability to override inherited instinct, would be especially sensitive to signals directing them to one or the other strategy. Now, finally, we get back to politics. This was, all along, a post about politics. I hope you've been thinking about it as we spent time on the island of bunnies and wolves, the cruel realities of natural selection, and the arcana of differential equations. What does r-selection produce in a human population? Well, it might, say, be averse to competition and all means of selection by measures of performance. It would favour the production of large numbers of offspring at an early age, by early onset of mating, promiscuity, and the raising of children by single mothers with minimal investment by them and little or none by the fathers (leaving the raising of children to the State). It would welcome other r-selected people into the community, and hence favour immigration from heavily r populations. It would oppose any kind of selection based upon performance, whether by intelligence tests, academic records, physical fitness, or job performance. It would strive to create the ideal r environment of unlimited resources, where all were provided all their basic needs without having to do anything but consume. 
It would oppose and be repelled by the K component of the population, seeking to marginalise it as toxic, privileged, or exploiters of the real people. It might even welcome conflict with K warriors of adversaries to reduce their numbers in otherwise pointless foreign adventures. And K-troop? Once a society in which they initially predominated creates sufficient wealth to support a burgeoning r population, they will find themselves outnumbered and outvoted, especially once the r wave removes the firebreaks put in place when K was king to guard against majoritarian rule by an urban underclass. The K population will continue to do what they do best: preserving the institutions and infrastructure which sustain life, defending the society in the military, building and running businesses, creating the basic science and technologies to cope with emerging problems and expand the human potential, and governing an increasingly complex society made up, with every generation, of a population, and voters, who are fundamentally unlike them. Note that the r/K model completely explains the “crunchy to soggy” evolution of societies which has been remarked upon since antiquity. Human societies always start out, as our genetic heritage predisposes us to, K-selected. We work to better our condition and turn our large brains to problem-solving and, before long, the privation our ancestors endured turns into a pretty good life and then, eventually, abundance. But abundance is what selects for the r strategy. Those who would not have reproduced, or have as many children in the K days of yore, now have babies-a-poppin' as in the introduction to Idiocracy, and before long, not waiting for genetics to do its inexorable work, but purely by a shift in incentives, the rs outvote the Ks and the Ks begin to count the days until their society runs out of the wealth which can be plundered from them. But recall that equation. 
In our simple bunnies-and-wolves model, the resources of the island were static. Nothing the wolves could do would increase K and permit a larger rabbit and wolf population. This isn't the case for humans. K humans dramatically increase the carrying capacity of their environment by inventing new technologies such as agriculture and selective breeding of plants and animals, discovering and exploiting new energy sources such as firewood, coal, and petroleum, and exploring and settling new territories and environments which may require their discoveries to render habitable. The rs don't do these things. And as the rs predominate and take control, this momentum stalls and begins to recede. Then the hard times ensue. As Heinlein said many years ago, “This is known as bad luck.” And then the Gods of the Copybook Headings will, with terror and slaughter, return. And K-selection will, with them, again assert itself. Is this a complete model, a Rosetta stone for human behaviour? I think not: there are a number of things it doesn't explain, and the shifts in behaviour based upon incentives are much too fast to account for by genetics. Still, when you look at those eleven issues I listed so many words ago through the r/K perspective, you can almost immediately see how each strategy maps onto one side or the other of each one, and they are consistent with the policy preferences of “liberals” and “conservatives”. There is also some rather fuzzy evidence for genetic differences (in particular the DRD4-7R allele of the dopamine receptor and the size of the right-brain amygdala) which appear to correlate with ideology. Still, if you're on one side of the ideological divide and confronted with somebody on the other and try to argue from facts and logical inference, you may end up throwing up your hands (if not your breakfast) and saying, “They just don't get it!” Perhaps they don't. Perhaps they can't. 
Perhaps there's a difference between you and them as great as that between rabbits and wolves, which can't be worked out by predator and prey sitting down and voting on what to have for dinner. This may not be a hopeful view of the political prospect in the near future, but hope is not a strategy and to survive and prosper requires accepting reality as it is and acting accordingly.
People say sometimes that Beauty is only superficial. That may be so. But at least it is not as superficial as Thought. To me, Beauty is the wonder of wonders. It is only shallow people who do not judge by appearances.

From childhood, however, we have been exhorted not to judge people by their appearances. In Skin in the Game (August 2019), Nassim Nicholas Taleb advises choosing the surgeon who “doesn't look like a surgeon” because their success is more likely due to competence than first impressions. Despite this, physiognomy, assessing a person's characteristics from their appearance, is as natural to humans as breathing and has been an instinctual part of human behaviour for as long as our species has existed. Thinkers and writers from Aristotle through the great novelists of the 19th century believed that an individual's character was reflected in, and could be inferred from, their appearance, and crafted and described their characters accordingly. Jules Verne would often spend a paragraph describing the appearance of his characters and what that implied for their behaviour. Is physiognomy all nonsense, a pseudoscience like phrenology, which purported to predict mental characteristics by measuring bumps on the skull claimed to indicate the development of “cerebral organs” with specific functions? Or is there something to it after all? Humans are a social species and, as such, have evolved to be exquisitely sensitive to signals sent by others of their kind, conveyed through subtle means such as tone of voice, facial expression, or posture. Might we also be able to perceive and interpret messages which indicate properties such as honesty, intelligence, courage, impulsiveness, criminality, diligence, and more? Such an ability, if possible, would be advantageous to individuals in interacting with others and, by contributing to success in reproducing and raising offspring, would be selected for by evolution. 
In this short book (or long essay—the text is just 85 pages), the author examines the evidence and concludes that there are legitimate correlations between appearance and behaviour, and that human instincts are picking up genuine signals which are useful in interacting with others. This seems perfectly plausible: the development of the human body and face are controlled by the genetic inheritance of the individual and modulated through the effects of hormones, and it is well-established that both genetics and hormones are correlated with a variety of behavioural traits. Let's consider a reasonably straightforward example. A study published in 2008 found a statistically significant correlation between the width of the face (cheekbone to cheekbone distance compared to brow to upper lip) and aggressiveness (measured by the number of penalty minutes received) among a sample of 90 ice hockey players. Now, a wide face is also known to correlate with a high testosterone level in males, and testosterone correlates with aggressiveness and selfishness. So, it shouldn't be surprising to find the wide face morphology correlated with the consequences of high-testosterone behaviour. In fact, testosterone and other hormone levels play a substantial part in many of the correlations between appearance and behaviour discussed by the author. Many people believe they can identify, with reasonable reliability, homosexuals just from their appearance: the term “gaydar” has come into use for this ability. In 2017, researchers trained an artificial intelligence program with a set of photographs of individuals with known sexual orientations and then tested the program on a set of more than 35,000 images. The program correctly identified the sexual orientation of men 81% of the time and women with 74% accuracy. Of course, appearance goes well beyond factors which are inherited or determined by hormones. 
Tattoos, body piercings, and other irreversible modifications of appearance correlate with high time preference, which correlates with low intelligence and the other characteristics of an r-selected lifestyle. Choices of clothing indicate an individual's self-identification, although fashion trends change rapidly and differ from region to region, so misinterpretation is a risk. The author surveys a wide variety of characteristics including fat/thin body type, musculature, skin and hair, height, face shape, breast size in women, baldness and beards in men, eye spacing, tattoos, hair colour, facial symmetry, handedness, and finger length ratio, and presents citations to research, most published recently, supporting correlations between these aspects of appearance and behaviour. He cautions that while people may be good at sensing and interpreting these subtle signals among members of their own race, there are substantial and consistent differences between the races: no inferences can be drawn from cross-racial comparisons, nor are members of one race generally able to read the signals from members of another. One gets the sense (although less strongly) that this is another field where advances in genetics and data science are piling up a mass of evidence which will roll over the stubborn defenders of the “blank slate” like a truth tsunami. And again, this is an area where people's instincts, honed by millennia of evolution, are still relied upon despite the scorn of “experts”. (So afraid were the authors of the Wikipedia page on physiognomy [retrieved 2019-12-16] of the “computer gaydar” paper mentioned above that they declined to cite the peer-reviewed paper in the Journal of Personality and Social Psychology but instead linked to a BBC News piece which dismissed it as “dangerous” and “junk science”. Go on whistling, folks, as the wave draws near and begins to crest….) Is the case for physiognomy definitively made? 
I think not, and as I suspect the author would agree, there are many aspects of appearance and a multitude of personality traits, some of which may be significantly correlated and others not at all. Still, there is evidence for some linkage, and it appears to be growing as more work in the area (which is perilous to the careers of those who dare investigate it) accumulates. The scientific evidence, summarised here, seems to be, as so often happens, confirming the instincts honed over hundreds of generations by the inexorable process of evolution: you can form some conclusions just by observing people, and this information is useful in the competition which is life on Earth. Meanwhile, when choosing programmers for a project team, the one who shows up whose eyebrows almost meet their hairline, sporting a plastic baseball cap worn backward with the adjustment strap on the smallest peg, with a scraggly soybeard, pierced nose, and visible tattoos isn't likely to be my pick. She's probably a WordPress developer.
2020 |
What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.

Indeed, these curious physical phenomena begin to look precisely like the kinds of optimisations game and simulation designers employ to cope with the limited computer power at their disposal. The author notes, “Quantum Indeterminacy, a fundamental principle of the material world, sounds remarkably similar to optimizations made in the world of computer graphics and video games, which are rendered on individual machines (computers or mobile phones) but which have conscious players controlling and observing the action.” One of the key tricks in complex video games is “conditional rendering”: you don't generate the images or worry about the physics of objects which the player can't see from their current location. This is remarkably like quantum mechanics, where the act of observation reduces the state vector to a discrete measurement and collapses its complex extent in space and time into a known value.
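To make the lazy-evaluation analogy concrete, here is a toy sketch in Python (entirely my own invention; the book describes the technique in prose, not code) of the trick behind conditional rendering: an object's state is never computed until somebody looks at it, and the first observation fixes it thereafter.

```python
import random

class LazyObject:
    """A game object whose exact state is only computed when observed,
    mimicking the "conditional rendering" optimisation described above."""

    def __init__(self, possible_states):
        self.possible_states = possible_states  # unresolved, superposition-like
        self.resolved = None                    # becomes definite on observation

    def observe(self):
        # The first observation "collapses" the object to one definite state;
        # later observations return the same value, like a cached render.
        if self.resolved is None:
            self.resolved = random.choice(self.possible_states)
        return self.resolved

# A treasure chest the player has never looked inside: no work done yet.
chest = LazyObject(["gold", "sword", "empty"])
assert chest.resolved is None      # nothing computed before observation
first = chest.observe()            # the engine does the work only now
assert chest.observe() == first    # result is stable once rendered
```

Real engines cull geometry and defer physics rather than randomising contents, of course; the random choice here merely stands in for the “fudge” the author compares to quantum indeterminacy.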
In video games, you only need to evaluate when somebody's looking. Quantum mechanics is largely encapsulated in the tweet by Aatish Bhatia, “Don't look: waves. Look: particles.” It seems our universe works the same way. Curious, isn't it? Similarly, games and simulations exploit discreteness and locality to reduce the amount of computation they must perform. The world is approximated by a grid, and actions in one place can only affect neighbours and propagate at a limited speed. This is precisely what we see in field theories and relativity, where actions are local and no influence can propagate faster than the speed of light.

Then there is the unexplained. Many esoteric and mystic traditions, especially those of the East such as Hinduism and Buddhism, describe the world as something like a dream, in which we act and our actions affect our permanent identity in subsequent lives. Western traditions, including the Abrahamic religions, see life in this world as a temporary thing, where our acts will be judged by a God who is outside the world. These beliefs come naturally to humans, and while there is little or no evidence for them in conventional science, it is safe to say that far more people believe and have believed these things, and have structured their lives accordingly, than have adopted the strictly rationalistic viewpoint one might deduce from deterministic, reductionist science. And yet, once again, in video games we see the emergence of a model which is entirely compatible with these ancient traditions. Characters live multiple lives, and their actions in the game cause changes in a state (“karma”) which is recorded outside the game and affects what they can do. They complete quests, which affect their karma and capabilities, and upon completing a quest, they may graduate (be reincarnated) into a new life (level), in which they retain their karma from previous lives.
Just as players who exist outside the game can affect events and characters within it, various traditions describe actors outside the natural universe (hence “supernatural”) such as gods, angels, demons, and spirits of the departed, interacting with people within the universe and occasionally causing physical manifestations (miracles, apparitions, hauntings, UFOs, etc.). And perhaps the simulation hypothesis can even explain the absence of evidence: the sky in a video game may contain a multitude of stars and galaxies, but that doesn't mean each is populated by its own video game universe filled with characters playing the same game. No, it's just scenery, there to be admired but with which you can't interact. Maybe that's why we've never detected signals from an alien civilisation: the stars are just procedurally generated scenery to make our telescopic views more interesting. The author concludes with a summary of the evidence we may be living in a simulation and the objections of sceptics (such as the argument that a computer as large and complicated as the universe would be required to simulate it). He suggests experiments which might detect the granularity of the simulation and provide concrete evidence the universe is not the continuum most of science has assumed it to be. A final chapter presents speculations as to who might be running the simulation, what their motives might be for doing so, and the nature of beings within the simulation. I'm cautious of delusions of grandeur in making such guesses. I'll bet we're a science fair project, and I'll further bet that within a century we'll be creating a multitude of simulated universes for our own science fair projects.
When I began this Reading List in January 2001, it was just that: a list of every book I'd read, updated as I finished books, without any commentary other than, perhaps, availability information and sources for out-of-print works or those from publishers not available through Amazon.com. As the 2000s progressed, I began to add remarks about many of the books, originally limited to one paragraph, but eventually as the years wore on, expanding to full-blown reviews, some sprawling to four thousand words or more and using the book as the starting point for an extended discussion on topics related to its content.
This is, sadly, to employ a term I usually despise, no longer sustainable. My time has become so entirely consumed by system administration tasks on two Web sites, especially one which I made the disastrous blunder of basing upon WordPress, the most incompetent and irresponsible piece of…software I have ever encountered in more than fifty years of programming; by shuffling papers, filling out forms, and other largely government-mandated bullshit (Can I say that here? It's my site! You bet I can.); and by creating content for and participating in discussions on the premier anti-social network on the Web for intelligent people around the globe with wide-ranging interests, that I simply no longer have the time to sit down, compose, edit, and publish lengthy reviews (in three locations: here, on Fourmilog, and at Ratburger.org) of every book I read.
But that hasn't kept me from reading books, which is my major recreation and escape from the grinding banality which occupies most of my time. As a consequence, I have accumulated, as of the present time, a total of no fewer than twenty-four books I've finished which are on the waiting list to be reviewed and posted here, and that doesn't count a few more I've set aside before finishing the last chapter and end material so as not to make the situation even worse and compound the feeling of guilt.
So, starting with this superb book, which, despite my having loved everything by Nevil Shute I've read, I'd never gotten around to reading, this list will return to its roots: a reading list with, at most, brief comments. I have marked a number of books (nine, as of now) as candidates for posts in my monthly Saturday Night Science column at Ratburger.org and, as I write reviews and remarks about them for that feature, I will integrate them back into this list.
To avoid overwhelming readers, I'll clear out the backlog by posting at most a book a day until I've caught up. Happy page-turning!
Art3mis snapped her fingers and her avatar's attire changed once again. Now she wore Annie Potts's black latex outfit from her first scene in Pretty in Pink, along with her punk-rock porcupine hairdo, dangling earrings, and dinner-fork bracelet.
“Applause, applause, applause,” she said, doing a slow spin so that we could admire the attention to detail she'd put into her Iona cosplay.
“Look at this lily-white hellscape,” Aech said, shaking her head as she stared out her own window. “Is there a single person of color in this entire town?”
Parzival: Her school records included a scan of her birth certificate, which revealed another surprise. She'd been DMAB—designated male at birth. … Around the same time, she'd changed her avatar's sex classification to øgender, a brand-new option GSS had added due to popular demand. People who identified as øgender were individuals who chose to experience sex exclusively through their ONI headsets, and who also didn't limit themselves to experiencing it as a specific gender or sexual orientation.
The rest of the Original 7ven joined the fight too. Jimmy Jam and Monte Moir each wielded a modified red Roland AXIS-1 keytar that fired sonic funk blast waves out of its neck each time a chord was played on it. Jesse Johnson fired sonic thunderbolts from the pickups of his Fender Voodoo Stratocaster, while Terry Lewis did the same with his bass, and Jellybean Johnson stood behind them, firing red lightning skyward with his drumsticks, wielding them like two magic wands. Each of the band members could also fire a deadly blast of sonic energy directly from their own mouths, just by shouting the word “Yeow!” over and over again.

“Yeow?” … Yawn.
2021 |
This is an encyclopedic history and technical description of United States nuclear weapons, delivery systems, manufacturing, storage, maintenance, command and control, security, strategic and tactical doctrine, and interaction with domestic politics and international arms control agreements, covering the period from the inception of these weapons in World War II through 2020. This encompasses a huge amount of subject matter, and covering it in the depth the author undertakes is a large project, with the two-volume print edition totalling 1244 20×25 centimetre pages. The level of detail and scope is breathtaking, especially considering that not so long ago much of the information documented here was among the most carefully-guarded secrets of the U.S. military. You will learn the minutiæ of neutron initiators, which fission primaries were used in what thermonuclear weapons, how the goal of “one-point safety” was achieved, the introduction of permissive action links to protect against unauthorised use of weapons and which weapons used what kind of security device, and much, much more.
If the production quality of this work matched its content, it would be an invaluable reference for anybody interested in these weapons, from military historians, students of large-scale government research and development projects, and researchers of the Cold War and the nuclear balance of power, to authors setting fiction in that era who wish to get the details right. Sadly, when it comes to attention to detail, this work, as published in this edition, is lacking—it is both slipshod and shoddy. I was reading it for information, not with the fine-grained attention I devote when proofreading my own work or that of others, but in the process I marked 196 errors of fact, spelling, formatting, and grammar, or about one every six printed pages. Now, some of these are just sloppy things (including, of course, misuse of the humble apostrophe) which grate upon the reader but aren't likely to confuse, but others are glaring errors.
Here are some of the obvious errors. Names misspelled or misstated include Jay Forrester, John von Neumann, Air Force Secretary Hans Mark, and Ronald Reagan. In chapter 11, an entire paragraph is duplicated twice in a row. In chapter 9, it is stated that the Little Feller nuclear test in 1962 was witnessed by President John F. Kennedy; in fact, it was his brother, Attorney General Robert F. Kennedy, who observed the test. There is a long duplicated passage at the start of chapter 20, but this may be a formatting error in the Kindle edition. In chapter 29, it is stated that nitrogen tetroxide was the fuel of the Titan II missile—in fact, it was the oxidiser. In chapter 41, the Northrop B-2 stealth bomber is incorrectly attributed to Lockheed in four places. In chapter 42, the Trident submarine-launched missile is referred to as “Titan” on two occasions.
The problem with such a plethora of errors is that when you read information with which you aren't acquainted and lack the means to check, there's no way to know whether it's correct or nonsense. Before using anything from this book as a source in your own work, I'd advise keeping in mind the Russian proverb, Доверяй, но проверяй—“Trust, but verify”. In this case, I'd go light on the trust and double up on the verification.
In the citation above, I link to the Kindle edition, which is free for Kindle Unlimited subscribers. The print edition is published in two paperbacks, Volume 1 and Volume 2.
2022 |