Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped. Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.
Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio-controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's inventions of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism, and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio-controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society.
You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.
[Image: drawing by Randall Munroe / xkcd, used under a right to share but not to sell (CC BY-NC 2.5). The words in the picture are drawn; in the book they are set in sharp letters.]
Joseph Weber, an experimental physicist at the University of Maryland, was the first to attempt to detect gravitational radiation. He used large aluminium bars, now called Weber bars, usually cylinders two metres long and one metre in diameter, instrumented with piezoelectric sensors. The bars were, based upon their material and dimensions, resonant at a particular frequency, and could detect a change in the length of the cylinder of around 10⁻¹⁶ metres. Weber was a pioneer in reducing the noise of his detectors, and operated two detectors at different locations so that signals would be considered valid only if observed nearly simultaneously by both.
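Why does a bar of those dimensions ring in the kilohertz range? Here's a back-of-the-envelope sketch (my numbers, not the book's), using the fundamental longitudinal mode of a free bar, f = v/2L, with v the speed of sound in the material:

```python
# Rough resonant frequency of a Weber bar's fundamental longitudinal
# mode: f = v / (2 L).  Values are approximations, not from the book.
v_aluminium = 5100.0    # m/s, speed of sound in aluminium (approximate)
bar_length = 2.0        # m, bar length as described above

f = v_aluminium / (2.0 * bar_length)
print(f"Fundamental resonance ≈ {f:.0f} Hz")   # ≈ 1275 Hz
```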
What nobody knew was how “noisy” the sky was in gravitational radiation: how many sources there were and how strong they might be. Theorists could offer little guidance: ultimately, you just had to listen. Weber listened, and reported signals he believed consistent with gravitational waves. But others who built comparable apparatus found nothing but noise, and theorists objected that if the universe emitted as much gravitational radiation as Weber's detections implied, it would convert all of its mass into gravitational radiation in just fifty million years. Weber's claims of having detected gravitational radiation are now considered to have been discredited, although there are those who dispute this assessment. Still, he was the first to try, and he made breakthroughs which informed subsequent work.

Might there be a better way, which could detect even smaller signals than Weber's bars, and over a wider frequency range? (Since the frequency range of potential sources was unknown, casting the net as widely as possible made more potential candidate sources accessible to the experiment.) Independently, groups at MIT, the University of Glasgow in Scotland, and the Max Planck Institute in Germany began to investigate interferometers as a means of detecting gravitational waves. An interferometer had already played a part in confirming Einstein's special theory of relativity: could it also provide evidence for an elusive prediction of the general theory?

An interferometer is essentially an absurdly precise ruler where the markings on the scale are waves of light. You send beams of light down two paths, and adjust them so that the light waves cancel (interfere) when they're combined after bouncing back from mirrors at the ends of the two paths. If there's any change in the lengths of the two paths, the light won't interfere precisely, and its intensity will increase depending upon the difference. But when a gravitational wave passes, that's precisely what happens! Lengths in one direction will be squeezed while those orthogonal (at a right angle) will be stretched. In principle, an interferometer can be an exquisitely sensitive detector of gravitational waves.

The gap between principle and practice required decades of diligent toil and hundreds of millions of dollars to bridge. From the beginning, it was clear it would not be easy. The field of general relativity (gravitation) had been called “a theorist's dream, an experimenter's nightmare”, and almost everybody working in the area was a theorist: all they needed were blackboards, paper, pencils, and lots of erasers. This was “little science”. As the pioneers began to explore interferometric gravitational wave detectors, it became clear what was needed was “big science”: on the order of large particle accelerators or space missions, with budgets, schedules, staffing, and management comparable to such projects. This was a culture shock to the general relativity community as violent as the astrophysical sources they sought to detect.

Between 1971 and 1989, theorists and experimentalists explored detector technologies and built prototypes to demonstrate feasibility. In 1989, a proposal was submitted to the National Science Foundation to build two interferometers, widely separated geographically, with an initial implementation to prove the concept and a subsequent upgrade intended to permit detection of gravitational radiation from anticipated sources.
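To appreciate the precision being demanded, here's a quick calculation with representative values (mine, not figures from the book): the strain h of a gravitational wave is the fractional change in length it produces, so an interferometer arm of length L changes by about ΔL = h·L.

```python
# Arm-length change for a representative gravitational wave strain.
# The first detected event peaked around h = 1e-21 (a representative
# figure, not taken from the book under review).
h = 1.0e-21            # dimensionless strain
L = 4.0e3              # m, LIGO arm length
proton = 1.7e-15       # m, approximate proton diameter

dL = h * L             # fractional change times arm length
print(f"ΔL ≈ {dL:.1e} m ≈ {dL / proton:.1e} proton diameters")
# ΔL ≈ 4.0e-18 m ≈ 2.4e-03 proton diameters
```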
After political battles, construction of LIGO, the Laser Interferometer Gravitational-Wave Observatory, began in 1995 at two sites, in Livingston, Louisiana and Hanford, Washington, and in 2001 commissioning of the initial detectors began; this would take four years. Between 2005 and 2007, science runs were made with the initial detectors; much was learned about sources of noise and the behaviour of the instrument, but no gravitational waves were detected. Starting in 2007, based upon what had been learned so far, construction of the advanced interferometer began. This took three years. Between 2010 and 2012, the advanced components were installed, and another three years were spent commissioning them: discovering their quirks, fixing problems, and increasing sensitivity.

Finally, in 2015, observations with the advanced detectors began. The sensitivity which had been achieved was astonishing: the interferometers could detect a change in the length of their four kilometre arms which was one ten-thousandth the diameter of a proton (the nucleus of a hydrogen atom). To accomplish this, they had to overcome noise ranging from distant earthquakes, traffic on nearby highways, and tides raised in the Earth by the Sun and Moon to a multitude of other sources, via a tower of technology which made the machine, so simple in concept, forbiddingly complex.

September 14, 2015, 09:51 UTC: Chirp! A hundred years after the theory that predicted it, 44 years after physicists imagined such an instrument, 26 years after it was formally proposed, and 20 years after it was initially funded, a gravitational wave had been detected, and it was right out of the textbook: the merger of two black holes with masses around 29 and 36 times that of the Sun, at a distance of 1.3 billion light years. A total of three solar masses were converted into gravitational radiation: at the moment of the merger, the gravitational radiation emitted was 50 times greater than the light from all of the stars in the universe combined. Despite the stupendous energy released by the source, when it arrived at Earth it could only have been detected by the advanced interferometer which had just been put into service: it would have been missed by the initial instrument, and was orders of magnitude below the noise floor of Weber's bar detectors.

For only the third time since proto-humans turned their eyes to the sky, a new channel of information about the universe we inhabit was opened. Most of what we know comes from electromagnetic radiation: light, radio, microwaves, gamma rays, etc. In the 20th century, a second channel opened: particles. Cosmic rays and neutrinos allow exploring energetic processes we cannot observe in any other way. In a real sense, neutrinos let us look inside the Sun and into the heart of supernovæ and see what's happening there. And just last year the third channel opened: gravitational radiation.

The universe is almost entirely transparent to gravitational waves: that's why they're so difficult to detect. But that means they allow us to explore the universe at its most violent: collisions and mergers of neutron stars and black holes—objects where gravity dominates the forces of the placid universe we observe through telescopes. What will we see? What will we learn? Who knows? If experience is any guide, we'll see things we never imagined and learn things even the theorists didn't anticipate. The game is afoot! It will be a fine adventure.
Black Hole Blues is the story of gravitational wave detection, largely focusing upon LIGO and told through the eyes of Rainer Weiss and Kip Thorne, two of the principals in its conception and development. It is an account of the transition of a field of research from a theorist's toy to Big Science, and of the cultural, management, and political problems that involves. There are few examples in experimental science where so long an interval has elapsed, and so much funding been expended, between the start of a project and its detecting the phenomenon it was built to observe. The road was bumpy, and that is documented here.

I found the author's tone off-putting. She, a theoretical cosmologist at Barnard College, dismisses scientists with achievements which dwarf her own and ideas which differ from hers in the way one expects from Social Justice Warriors in the squishier disciplines at the Seven Sisters: “the notorious Edward Teller”; “Although Kip [Thorne] outgrew the tedious moralizing, the sexism, and the religiosity of his Mormon roots”; (about Joseph Weber) “an insane, doomed, impossible bar detector designed by the old mad guy, crude laboratory-scale slabs of metal that inspired and encouraged his anguished claims of discovery”; “[Stephen] Hawking made his oddest wager about killer aliens or robots or something, which will not likely ever be resolved, so that might turn out to be his best bet yet”; (about Richard Garwin) “He played a role in halting the Star Wars insanity as well as potentially disastrous industrial escalations, like the plans for supersonic airplanes…”; and “[John Archibald] Wheeler also was not entirely against the House Un-American Activities Committee. He was not entirely against the anticommunist fervor that purged academics from their ivory-tower ranks for crimes of silence, either.” … “I remember seeing him at the notorious Princeton lunches, where visitors are expected to present their research to the table. Wheeler was royalty, in his eighties by then, straining to hear with the help of an ear trumpet. (Did I imagine the ear trumpet?)”

There are also a number of factual errors, for example the claim that a breach in the LIGO beam tube would suck all of the air out of its enclosure and suffocate anybody inside, which a moment's calculation would have shown to be absurd.

The book was clearly written with the intention of being published before the first detection of a gravitational wave by LIGO. The entire story of the detection, its validation, and public announcement is jammed into a seven page epilogue tacked onto the end. This epochal discovery deserves being treated at much greater length.

Secrets Are Lies
Sharing Is Caring
Privacy Is Theft

To Mae's family and few remaining friends outside The Circle, this all seems increasingly bizarre: as if the fastest growing and most prestigious high technology company in the world has become a kind of grotesque cult which consumes the lives of its followers and aspires to become universal. Mae loves her sense of being connected, the interaction with a worldwide public, and thinks it is just wonderful.

The Circle internally tests and begins to roll out a system of direct participatory democracy to replace existing political institutions. Mae is there to report it. A plan to put an end to most crime is unveiled: Mae is there. The Circle is closing. Mae is contacted by her mysterious acquaintance, and presented with a moral dilemma: she has become a central actor on the stage of a world which is on the verge of changing, forever.
This is a superbly written story which I found both realistic and chilling. You don't need artificial intelligence or malevolent machines to create an eternal totalitarian nightmare. All it takes is a few years' growth and wider deployment of technologies which exist today, combined with good intentions, boundless ambition, and fuzzy thinking. And the latter three commodities are abundant among today's technology powerhouses.

Lest you think the technologies which underlie this novel are fantasy or far in the future, they were discussed in detail in David Brin's 1999 The Transparent Society and my 1994 “Unicard” and 2003 “The Digital Imprimatur”. All that has changed is that the massive computing, communication, and data storage infrastructure envisioned in those works now exists, or will within a few years.

What should you fear most? Probably the millennials who will read this and think, “Wow! This will be great.”

“Democracy is mandatory here!”
There are two overwhelming forces in the world. One is chaos; the other is order. God—the original singular speck—is forming again. He's gathering together his bits—we call it gravity. And in the process he is becoming self-aware to defeat chaos, to defeat evil if you will, to battle the devil. But something has gone terribly wrong.

Sometimes, when your computer is in a loop, the only thing you can do is reboot it: forcefully get it out of the destructive loop and back to a starting point from which it can resume making progress. But how do you reboot a global technological civilisation on the brink of war? The Avatar must find the reboot button as time is running out.

Thirty years later, a delivery man rings the doorbell. An old man with a shabby blanket answers and invites him inside.

There are eight questions to ponder at the end which expand upon the shiver-up-your-spine themes raised in the novel. Bear in mind, when pondering how prophetic this novel is of current and near-future events, that it was published twelve years ago.
No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so it had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought some and used it to make ice cream, which was a sensation noted in the local newspaper.

The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, who was pursuing him for unpaid debts. On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high but, as the design was refined, dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved.

Regular shipments of ice to Cuba and Martinique began, and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston. With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry.

Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered, and by 1870 India was importing 17,000 tons of ice in a single year.

While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all of it cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished at depots along the rail lines.
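A quick check of the ice house figures quoted above (my arithmetic, assuming short tons of 2,000 pounds):

```python
# How long would a 100-ton cargo last at a loss of 18 pounds per hour?
loss_lb_per_hr = 18.0
cargo_lb = 100 * 2000.0          # assuming short tons of 2,000 lb

hours = cargo_lb / loss_lb_per_hr
print(f"{hours / (24 * 30):.1f} months")   # ≈ 15.4: "more than 15 months"
```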
In the 1880s, the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory.

Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality. Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out.

When people speak of humans expanding from their home planet into the solar system, with technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. In 1833, people in Calcutta were beating the heat with ice shipped halfway around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic.

I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.
Raindrops keep fallin' in my face,
More and more as I pick up the pace…

Finally, here was proof that “it moves”: there would be no aberration in a geocentric universe. But by Bradley's time in the 1720s, only cranks and crackpots still believed in the geocentric model. The question was, instead, how distant are the stars? The parallax game remained afoot.

It was ultimately a question of instrumentation, but also one of luck. By the 19th century, there was abundant evidence that stars differed enormously in their intrinsic brightness. (We now know that the most luminous stars are more than a billion times more brilliant than the dimmest.) Thus, you couldn't conclude that the brightest stars were the nearest, as astronomers once guessed. Indeed, the distances of the four brightest stars as seen from Earth are, in light years, 8.6, 310, 4.4, and 37. Given that observing the position of a star for parallax is a long-term and tedious project, bear in mind that pioneers on the quest had no idea whether the stars they observed were near or far, nor whether they would be lucky enough to choose one of the nearest.

It all came together in the 1830s. Using an instrument called a heliometer, essentially a refractor telescope with its lens cut in two, with the ability to shift the halves and measure the offset, Friedrich Bessel was able to measure the parallax of the star 61 Cygni by comparison to an adjacent distant star. Shortly thereafter, Wilhelm Struve published the parallax of Vega, and then, just two months later, Thomas Henderson reported the parallax of Alpha Centauri, based upon measurements made earlier at the Cape of Good Hope. Finally, we knew the distances to the nearest stars (although those more distant remained a mystery), and just how empty the universe was.

Let's put some numbers on this, just to appreciate how great was the achievement of the pioneers of parallax. The parallax angle of the closest star system, Alpha Centauri, is 0.755 arc seconds. (The parallax angle is half the shift observed in the position of the star as the Earth orbits the Sun; using half the shift makes the trigonometry to compute the distance easier to understand.) An arc second is 1/3600 of a degree, and there are 360 degrees in a circle, so it's 1/1,296,000 of a full circle. Now let's work out the distance to Alpha Centauri. We'll work in terms of astronomical units (au), the mean distance between the Earth and Sun. We have a right triangle where we know the distance from the Earth to the Sun and the parallax angle of 0.755 arc seconds. (To get a sense for how tiny an angle this is, it's comparable to the angle subtended by a US quarter dollar coin viewed from a distance of 6.6 km.) We can compute the distance from the Earth to Alpha Centauri as:

1 au / tan(0.755 / 3600 degrees) = 273,198 au = 4.32 light years
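The same computation, as a few lines of Python, works for any star's parallax angle:

```python
import math

AU_PER_LIGHT_YEAR = 63241.0   # astronomical units in one light year

def parallax_distance_au(parallax_arcsec):
    """Distance in au from a parallax angle in arc seconds."""
    return 1.0 / math.tan(math.radians(parallax_arcsec / 3600.0))

d = parallax_distance_au(0.755)            # Alpha Centauri
print(f"{d:,.0f} au = {d / AU_PER_LIGHT_YEAR:.2f} light years")
# 273,199 au = 4.32 light years
```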
Parallax is used to define the parsec (pc), the distance at which a star would have a parallax angle of one arc second. A parsec is about 3.26 light years, so the distance to Alpha Centauri is 1.32 parsecs. Star Wars notwithstanding, the parsec, like the light year, is a unit of distance, not time.

Progress in instrumentation has accelerated in recent decades. The Earth is a poor platform from which to make precision observations such as parallax. It's much better to go to space, where there are neither the wobbles of a planet nor its often murky atmosphere. The Hipparcos mission, launched in 1989, measured the parallaxes and proper motions of more than 118,000 stars, with lower resolution data for more than 2.5 million stars. The Gaia mission, launched in 2013 and still underway, has a goal of measuring the position, parallax, and proper motion of more than a billion stars.

It's been a long road, getting from there to here. It took more than 2,000 years from the time Aristarchus proposed the heliocentric solar system until we had direct observational evidence that eppur si muove. Within a few years, we will have in hand direct measurements of the distances to a billion stars. And, some day, we'll visit them.

I originally read this book in December 2003. It was a delight to revisit.

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, or the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.
Phenomena in the universe take place over scales ranging from the unimaginably small to the breathtakingly large. The classic film, Powers of Ten, produced by Charles and Ray Eames, and the companion book explore the universe at length scales in powers of ten: from subatomic particles to the most distant visible galaxies. If we take the smallest meaningful distance to be the Planck length, around 10⁻³⁵ metres, and the diameter of the observable universe as around 10²⁷ metres, then the ratio of the largest to smallest distances which make sense to speak of is around 10⁶². Another way to express this is to answer the question, “How big is the universe in Planck lengths?” as “Mega, mega, yotta, yotta big!”
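Checking the arithmetic, with the same round numbers:

```python
import math

# Ratio of the largest to the smallest meaningful lengths,
# using the approximate values quoted above.
planck_length = 1e-35     # m
universe_diameter = 1e27  # m

print(f"10^{math.log10(universe_diameter / planck_length):.0f}")   # 10^62
```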
But length isn't the only way to express the scale of the universe. In the present book, the authors examine the time intervals at which phenomena occur or recur. Starting with one second, they take steps of powers of ten (10, 100, 1000, 10000, etc.), arriving eventually at the distant future of the universe, after all the stars have burned out and even black holes begin to disappear. Then, in the second part of the volume, they begin at the Planck time, 5×10⁻⁴⁴ seconds, the shortest unit of time about which we can speak with our present understanding of physics, and again progress by powers of ten until arriving back at an interval of one second.
Intervals of time can denote a variety of different phenomena, which are colour coded in the text. A period of time can mean an epoch in the history of the universe, measured from an event such as the Big Bang or the present; a distance defined by how far light travels in that interval; a recurring event, such as the orbital period of a planet or the frequency of light or sound; or the half-life of a randomly occurring event such as the decay of a subatomic particle or atomic nucleus.
Because the universe is still in its youth, the range of time intervals discussed here is much larger than those when considering length scales. From the Planck time of 5×10⁻⁴⁴ seconds to the lifetime of the kind of black hole produced by a supernova explosion, 10⁷⁴ seconds, the range of intervals discussed spans 118 orders of magnitude. If we include the evaporation through Hawking radiation of the massive black holes at the centres of galaxies, the range is expanded to 143 orders of magnitude. Obviously, discussions of the distant future of the universe are highly speculative, since in those vast depths of time physical processes which we have never observed due to their extreme rarity may dominate the evolution of the universe.
Among the fascinating facts you'll discover is that many straightforward physical processes take place over an enormous range of time intervals. Consider radioactive decay. It is possible, using a particle accelerator, to assemble a nucleus of hydrogen-7, an isotope of hydrogen with a single proton and six neutrons. But if you make one, don't grow too fond of it, because it will decay into tritium and four neutrons with a half-life of 23×10⁻²⁴ seconds, an interval usually associated with events involving unstable subatomic particles. At the other extreme, a nucleus of tellurium-128 decays into xenon with a half-life of 7×10³¹ seconds (2.2×10²⁴ years), more than 160 trillion times the present age of the universe.
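Converting those half-lives into common units shows the span (my arithmetic, using standard approximate constants):

```python
# Converting the half-lives quoted above into common units.
SECONDS_PER_YEAR = 3.156e7          # approximate
UNIVERSE_AGE_YEARS = 1.38e10        # approximate

h7_halflife = 23e-24                # s, hydrogen-7
te128_halflife = 7e31               # s, tellurium-128

te128_years = te128_halflife / SECONDS_PER_YEAR
print(f"Te-128: {te128_years:.1e} years")                          # ≈ 2.2e24
print(f" = {te128_years / UNIVERSE_AGE_YEARS:.1e} universe ages")  # ≈ 1.6e14
print(f"Span between the two: {te128_halflife / h7_halflife:.0e}") # ≈ 3e54
```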
While the very short and very long are the domain of physics, intermediate time scales are rich with events in geology, biology, and human history. These are explored, along with how we have come to know their chronology. You can open the book to almost any page and come across a fascinating story. Have you ever heard of the ocean quahog (Arctica islandica)? They're clams, and the oldest known has been determined to be 507 years old, born around 1499 and dredged up off the coast of Iceland in 2006. People eat them.
Or did you know that if you perform carbon-14 dating on grass growing next to a highway, the lab will report that it's tens of thousands of years old? Why? Because the grass has incorporated carbon from the CO2 produced by burning fossil fuels which are millions of years old and contain little or no carbon-14.
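To see how that works quantitatively, here's a sketch (the fossil-carbon fractions are made-up illustrations, not data from the book): fossil CO2 contains no carbon-14, so it dilutes the C-14/C-12 ratio from which the conventional radiocarbon age, t = −8033 ln(ratio), is computed.

```python
import math

LIBBY_MEAN_LIFE = 8033.0   # years, used in conventional radiocarbon ages

def apparent_age(fossil_fraction):
    """Apparent radiocarbon age of modern plant matter whose carbon is
    partly of fossil (carbon-14-free) origin.  The fraction is a
    hypothetical illustration, not a measured value."""
    remaining = 1.0 - fossil_fraction   # C-14 relative to normal
    return -LIBBY_MEAN_LIFE * math.log(remaining)

for f in (0.5, 0.9, 0.95):
    print(f"{f:.0%} fossil carbon → apparent age ≈ {apparent_age(f):,.0f} years")
# 50% → ≈ 5,568   90% → ≈ 18,497   95% → ≈ 24,064
```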
This is a fascinating read, and one which uses the framework of time intervals to acquaint you with a wide variety of sciences, each inviting further exploration. The writing is accessible to the general reader, young adult and older. The individual entries are short and stand alone—if you don't understand something or aren't interested in a topic, just skip to the next. There are abundant colour illustrations and diagrams.
Author Gerard 't Hooft won the 1999 Nobel Prize in Physics for his work on the quantum mechanics of the electroweak interaction. The book was originally published in Dutch in the Netherlands in 2011. The English translation was done by 't Hooft's daughter, Saskia Eisberg-'t Hooft. The translation is fine, but there are a few turns of phrase which will seem odd to an English mother tongue reader. For example, matter in the early universe is said to “clot” under the influence of gravity; the common English term for this is “clump”. This is a translation, not a re-write: there are a number of references to people, places, and historical events which will be familiar to Dutch readers but less so to those in the Anglosphere. In the Kindle edition, notes, cross-references, the table of contents, and the index are all properly linked, and the illustrations are reproduced well.
In this new material I saw another confirmation. Its advent was like the signature of some elemental arcanum, complicit with forces not at all interested in human affairs. Carbomorph. Born from incomplete reactions and destructive distillation. From tar and pitch and heavy oils, the black ichor that pulsed thermonous through the arteries of the very earth.

On the “Makers”:
This insistence on the lightness and whimsy of farce. The romantic fetish and nostalgia, to see your work as instantly lived memorabilia. The event was modeled on Renaissance performance. This was a crowd of actors playing historical figures. A living charade meant to dislocate and obscure their moment with adolescent novelty. The neckbeard demiurge sees himself keeling in the throes of assembly. In walks the problem of the political and he hisses like the mathematician at Syracuse: “Just don't molest my baubles!”

This book recounts the history of the 3D printed pistol, the people who made it happen, and why they did what they did. It is recent history, seen from the inside during the deployment of a potentially revolutionary technology, showing the way things actually happen: nobody really completely understands what is going on, and everybody is making things up as they go along. But if the promise of this technology allows the forces of liberty and creativity to prevail over the grey homogenisation of the state and the powers that serve it, this is a book which will be read many years from now by those who wish to understand how, where, and when it all began.

… But nobody here truly meant to give you a revolution. “Making” was just another way of selling you your own socialization. Yes, the props were period and we had kept the whole discourse of traditional production, but this was parody to better hide the mechanism. We were “making together,” and “making for good” according to a ritual under the signs of labor. And now I knew this was all apolitical on purpose. The only goal was that you become normalized. The Makers had on their hands a Last Man's revolution whose effeminate mascots could lead only state-sanctioned pep rallies for feel-good disruption. The old factory was still there, just elevated to the image of society itself. You could buy Production's acrylic coffins, but in these new machines was the germ of the old productivism. Dead labor, that vampire, would still glamour the living.
In an information economy, growth springs not from power but from knowledge. Crucial to the growth of knowledge is learning, conducted across an economy through the falsifiable testing of entrepreneurial ideas in companies that can fail. The economy is a test and measurement system, and it requires reliable learning guided by an accurate meter of monetary value.

Money, then, is the means by which information is transmitted within the economy. It allows comparing the value of completely disparate things: for example, the services of a neurosurgeon and a ton of pork bellies, even though it is implausible anybody has ever bartered one for the other. When money is stable (its supply is fixed, or grows at a constant rate which is small compared to the existing money supply), it is possible for participants in the economy to evaluate the various goods and services on offer and, more importantly, make long term plans to create new goods and services which will improve productivity. When money is manipulated by governments and their central banks, such planning becomes, in part, a speculation on the value of currency in the future. It's as if you operated a textile factory and sold your products by the metre, and every morning you had to pick up the Wall Street Journal to see how long a metre was today. Should you invest in a new weaving machine? Who knows how long the metre will be by the time it's installed and producing?

I'll illustrate the information theory of value in the following way. Compare the price of the pile of raw materials used in making a BMW (iron, copper, glass, aluminium, plastic, leather, etc.) with the finished automobile. The difference in price is the information embodied in the finished product—not just the transformation of the raw materials into the car, but the knowledge gained over the decades which contributed to that transformation and the features of the car which make it attractive to the customer. Now take that BMW and crash it into a bridge abutment on the autobahn at 200 km/h. How much is it worth now? Probably less than the raw materials (since it's harder to extract them from a jumbled-up wreck). Every atom which existed before the wreck is still there. What has been lost is the information (what electrical engineers call the “magic smoke”) which organised them into something people valued.

When the value of money is unpredictable, any investment is in part speculative, and it is inevitable that the most lucrative speculations will be those in money itself. This diverts investment from improving productivity into financial speculation on foreign exchange rates, interest rates, and financial derivatives based upon them: a completely unproductive zero-sum sector of the economy which didn't exist prior to the abandonment of fixed exchange rates in 1971.

What happened in 1971? On August 15th of that year, President Richard Nixon unilaterally suspended the convertibility of the U.S. dollar into gold, setting into motion a process which would ultimately destroy the Bretton Woods system of fixed exchange rates which had been created as a pillar of the world financial and trade system after World War II. Under Bretton Woods, the dollar was fixed to gold, with sovereign holders of dollar reserves (but not individuals) able to exchange dollars and gold in unlimited quantities at the fixed rate of US$35/troy ounce. Other currencies in the system maintained fixed exchange rates with the dollar, and were backed by reserves, which could be held in either dollars or gold.
Fixed exchange rates promoted international trade by eliminating currency risk in cross-border transactions. For example, a German manufacturer could import raw materials priced in British pounds, incorporate them into machine tools assembled by workers paid in German marks, and export the tools to the United States, being paid in dollars, all without the risk that a fluctuation by one or more of these currencies against another would wipe out the profit from the transaction.

The fixed rates imposed discipline on the central banks issuing currencies and the governments to whom they were responsible. Running large trade deficits or surpluses, or accumulating too much public debt, was deterred because doing so could force a costly official change in the exchange rate of the currency against the dollar. Currencies could, in extreme circumstances, be devalued or revalued upward, but this was painful to the issuer and rare.

With the collapse of Bretton Woods, no longer was there a link to gold, either direct or indirect through the dollar. Instead, the relative values of currencies against one another were set purely by the market: what traders were willing to pay to buy one with another. This pushed the currency risk back onto anybody engaged in international trade, and forced them to “hedge” the currency risk (by foreign exchange transactions with the big banks) or else bear the risk themselves. None of this contributed in any way to productivity, although it generated revenue for the banks engaged in the game.

At the time, the idea of freely floating currencies, with their exchange rates set by the marketplace, seemed like a free market alternative to the top-down government-imposed system of fixed exchange rates it supplanted, and it was supported by champions of free enterprise such as Milton Friedman. The author contends that, based upon almost half a century of experience with floating currencies and the consequent chaotic changes in exchange rates, bouts of inflation and deflation, monetary induced recessions, asset bubbles and crashes, and interest rates on low-risk investments which ranged from 20% to less than zero, this was one occasion Prof. Friedman got it wrong.

Like the ever-changing metre in the fable of the textile factory, incessantly varying money makes long term planning difficult to impossible and sends the wrong signals to investors and businesses. In particular, when interest rates are forced to near zero, productive investment which creates new assets at a rate greater than the interest rate on the borrowed funds is neglected in favour of bidding up the price of existing assets, creating bubbles like those in real estate and stocks in recent memory. Further, since free money will not be allocated by the market, those who receive it are the privileged or connected who are first in line; this contributes to the justified perception of inequality in the financial system.

Having judged the system of paper money with floating exchange rates a failure, Gilder does not advocate a return to either the classical gold standard of the 19th century or the Bretton Woods system of fixed exchange rates with a dollar pegged to gold. Preferring to rely upon the innovation of entrepreneurs and the selection of the free market, he urges governments to remove all impediments to the introduction of multiple, competitive currencies. In particular, the capital gains tax would be abolished for purchases and sales regardless of the currency used.
(For example, today you can obtain a credit card denominated in euros and use it freely in the U.S. to make purchases in dollars. Every time you use the card, the dollar amount is converted to euros and added to the balance on your bill. But, strictly speaking, you have sold euros and bought dollars, so you must report the transaction and any gain or loss arising from the change between the dollar value of the euros in your account and the value of the ones you spent. This is so cumbersome it's a powerful deterrent to using any currency other than dollars in the U.S. Many people ignore the requirement to report such transactions, but they're breaking the law by doing so.)

With multiple currencies and no tax or transaction reporting requirements, all will be free to compete in the market, where we can expect the best solutions to prevail. Using whichever currency you wish will be as seamless as buying something with a debit or credit card denominated in a currency different from that of the seller. Existing card payment systems have transaction costs so high they are impractical for “micropayments” on the Internet or for fully replacing cash in everyday transactions. Gilder suggests that Bitcoin or other cryptocurrencies based on blockchain technology will probably be the means by which a successful currency, backed 100% with physical gold or another hard asset, will be used in transactions.

This is a thoughtful examination of the problems of the contemporary financial system from a perspective you'll rarely encounter in the legacy financial media. The root cause of our money problems is the money: we have allowed governments to inflict upon us a monopoly of government-managed money, which, unsurprisingly, works about as well as anything else provided by a government monopoly. Our experience with this flawed system over more than four decades makes its shortcomings apparent, once you cease accepting the heavy price we pay for them as the normal and inevitable state of affairs. As with any other monopoly, all that's needed is to break the monopoly and free the market to choose which, among a variety of competing forms of money, best meet the needs of those who use them.

Here is a Bookmonger interview with the author discussing the book.
The scraps, which you reject, unfit
To clothe the tenant of a hovel,
May shine in sentiment and wit,
And help make a charming novel…

René Antoine Ferchault de Réaumur, a French polymath who published in numerous fields of science, observed in 1719 that wasps made their nests from what amounted to paper they produced directly from wood. If humans could replicate this vespidian technology, the forests of Europe and North America could provide an essentially unlimited and renewable source of raw material for paper. This idea was to lie fallow for more than a century. Some experimenters produced small amounts of paper from wood through various processes, but it was not until 1850 that paper was manufactured from wood in commercial quantities in Germany, and 1863 when the first wood-based paper mill began operations in America.

Wood is about half cellulose, while the fibres in rags run up to 90% cellulose. The other major component of wood is lignin, a cross-linked polymer which gives wood its strength but is useless for paper making. In the 1860s a process was invented in which wood, first mechanically cut into small chips, was chemically treated to break down the fibrous structure in a device called a “digester”. This produced a pulp suitable for paper making, and allowed a dramatic expansion in the volume of paper produced. But the original wood-based paper still contained lignin, which turns brown over time. While this was acceptable for newspapers, it was undesirable for books and archival documents, for which rag paper remained preferred. In 1879, a German chemist invented a process to separate lignin from cellulose in wood pulp, which allowed producing paper that did not brown with age.

The processes used to make paper from wood involved soaking the wood pulp in acid to break down the fibres. Some of this acid remained in the paper, and many books printed on such paper between 1840 and 1970 are now slowly disintegrating as the acid eats away at the paper. Only around 1970 was it found that an alkali solution works just as well when processing the pulp, and since then acid-free paper has become the norm for book publishing.

Most paper today is produced from wood, on an enormous industrial scale. A single paper mill in China, not the largest, produces 600,000 tonnes of paper per year. And yet, for all of the mechanisation, that paper is made by the same process as the first sheet of paper produced in China: by reducing material to cellulose fibres, mixing them with water, extracting a sheet (now a continuous roll) with a screen, then pressing and drying it to produce the final product.

Paper and printing is one of those technologies which is so simple, based upon readily-available materials, and potentially revolutionary that it inspires “what if” speculation. The ancient Egyptians, Greeks, and Romans each had everything they needed—raw materials, skills, and a suitable written language—so that a Connecticut Yankee-like time traveller could have explained to artisans already working with wood and metal how to make paper, cast movable type, and set up a printing press in a matter of days. How would history have differed had one of those societies unleashed the power of the printed word?
Forty years ago [in the 1880s] the contact of the individual with the Government had its largest expression in the sheriff or policeman, and in debates over political equality. In those happy days the Government offered but small interference with the economic life of the citizen.

But with the growth of cities, industrialisation, and large enterprises such as railroads and steel manufacturing, a threat to this frontier individualism emerged: the reduction of workers to a proletariat or serfdom due to the imbalance between their power as individuals and the huge companies that employed them. It was there that government action was required to protect the other component of American individualism: the belief in equality of opportunity. Hoover believes in, and supports, intervention in the economy to prevent the concentration of economic power in the hands of a few, and to guard, through taxation and other means, against the emergence of a hereditary aristocracy of wealth. Yet this poses its own risks,
But with the vast development of industry and the train of regulating functions of the national and municipal government that followed from it; with the recent vast increase in taxation due to the war;—the Government has become through its relations to economic life the most potent force for maintenance or destruction of our American individualism.

One of the challenges American society must face as it adapts is avoiding the risk of utopian ideologies imported from Europe seizing this power to try to remake the country and its people along other lines. Just ten years later, as Hoover's presidency gave way to the New Deal, this fearful prospect would become a reality.

Hoover examines the philosophical, spiritual, economic, and political aspects of this unique system of individual initiative tempered by constraints and regulation in the interest of protecting the equal opportunity of all citizens to rise as high as their talent and effort permit. Despite the problems cited by radicals bent on upending the society, he contends things are working pretty well. He cites “the one percent”: “Yet any analysis of the 105,000,000 of us would show that we harbor less than a million of either rich or impecunious loafers.”

Well, the percentage of the very rich seems about the same today, but after half a century of welfare programs which couldn't have been more effective in destroying the family and the initiative of those at the bottom of the economic ladder had that been their intent, and an education system of which, as a federal commission was to write in 1983, “If an unfriendly foreign power had attempted to impose on America …, we might well have viewed it as an act of war”, a nation with three times the population seems to have developed a much larger unemployable and dependent underclass.

Hoover also judges the American system to have performed well in achieving its goal of a classless society with upward mobility through merit. He observes, speaking of the Harding administration of which he is a member,
That our system has avoided the establishment and domination of class has a significant proof in the present Administration in Washington. Of the twelve men comprising the President, Vice-President, and Cabinet, nine have earned their own way in life without economic inheritance, and eight of them started with manual labor.

Let's see how that has held up, almost a century later. Taking the 17 people in equivalent positions at the end of the Obama administration in 2016 (President, Vice President, and heads of the 15 executive departments), we find that only 1 of the 17 inherited wealth (I'm inferring from the description of parents in their biographies), but that precisely zero had any experience with manual labour. If attending an Ivy League university can be taken as a modern badge of membership in a ruling class, 11 of the 17 (65%) meet this test; if you consider Stanford a member of an “extended Ivy League”, the figure rises to 70%.

Although published in a different century in a very different America, much of what Hoover wrote remains relevant today. Just as Hoover warned of bad ideas from Europe crossing the Atlantic and taking root in the United States, the Frankfurt School in Germany was laying the groundwork for the deconstruction of Western civilisation and individualism, and in the 1930s its leaders would come to America to infect academia. As Hoover warned, “There is never danger from the radical himself until the structure and confidence of society has been undermined by the enthronement of destructive criticism.” Destructive criticism is precisely what these “critical theorists” specialised in, and today, in many parts of the humanities and social sciences, even in the most eminent institutions, the rot is so deep they are essentially a write-off.

Undoing a century of bad ideas is not the work of a few years, but Hoover's optimistic and pragmatic view of the redeeming merit of individualism unleashed is a bracing antidote to the gloom one may feel when surveying the contemporary scene.