This equation not only correctly predicted the results measured in the laboratories but also avoided the ultraviolet catastrophe, since it predicted an effective cutoff on the highest frequency radiation an object could emit, set by its temperature. This meant that the absorption and re-emission of radiation in the closed oven could never run away to infinity because no energy could be emitted above the limit imposed by the temperature. Fine: the theory explained the measurements. But what did it mean? More than a century later, we're still trying to figure that out. Planck modeled the walls of the oven as a series of resonators, but unlike earlier theories in which each could emit energy at any frequency, he constrained them to produce discrete chunks of energy with a value determined by the frequency emitted. This had the result of imposing a limit on the frequency due to the available energy. While this assumption yielded the correct result, Planck, deeply steeped in the nineteenth century tradition of the continuum, did not initially suggest that energy was actually emitted in discrete packets, considering this aspect of his theory “a purely formal assumption.” Planck's 1900 paper generated little reaction: it was observed to fit the data, but the theory and its implications went over the heads of most physicists. In 1905, in his capacity as editor of Annalen der Physik, he read and approved the publication of Einstein's paper on the photoelectric effect, which explained another physics puzzle by assuming that light was actually emitted in discrete bundles with an energy determined by its frequency. But Planck, whose equation manifested the same property, wasn't ready to go that far. As late as 1913, he wrote of Einstein, “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.” Only in the 1920s did Planck fully accept the implications of his work as embodied in the emerging quantum theory.
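For reference, the equation under discussion, in one common (frequency) form for the spectral radiance of a black body, is:

$$
B_\nu(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{h\nu / k_B T} - 1}
$$

When $h\nu \gg k_B T$ the exponential in the denominator dominates and emission falls off steeply, which is the high-frequency suppression described above; letting $h \to 0$ recovers the classical Rayleigh–Jeans result and, with it, the ultraviolet catastrophe.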
The equation for Planck's Law contained two new fundamental physical constants: Planck's constant (h) and Boltzmann's constant (k_B). (Boltzmann's constant was named in memory of Ludwig Boltzmann, the pioneer of statistical mechanics, who committed suicide in 1906. The constant was first introduced by Planck in his theory of thermal radiation.) Planck realised that these new constants, which related the worlds of the very large and very small, together with other physical constants such as the speed of light (c), the gravitational constant (G), and the Coulomb constant (k_e), allowed the definition of a system of units for quantities such as length, mass, time, electric charge, and temperature which were truly fundamental: derived from the properties of the universe we inhabit, and therefore comprehensible to intelligent beings anywhere in the universe. Most systems of measurement are derived from parochial anthropocentric quantities such as the temperature of somebody's armpit or the supposed distance from the north pole to the equator. Planck's natural units have no such dependencies, and when one does physics using them, equations become simpler and more comprehensible. The magnitudes of the Planck units are so far removed from the human scale that they're unlikely to find any application outside theoretical physics (imagine speed limit signs expressed in a fraction of the speed of light, or road signs giving distances in Planck lengths of 1.62×10⁻³⁵ metres), but they reflect the properties of the universe and may indicate the limits of our ability to understand it (for example, it may not be physically meaningful to speak of a distance smaller than the Planck length or an interval shorter than the Planck time [5.39×10⁻⁴⁴ seconds]).
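As a quick illustration (a minimal sketch, not taken from the book), the Planck length and time quoted above follow directly from the constants, using the conventional definitions in terms of the reduced Planck constant ħ:

```python
import math

# Physical constants in SI units (CODATA values, rounded)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G    = 6.67430e-11      # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 299792458.0      # speed of light, m/s

# Conventional definitions of the Planck units
planck_length = math.sqrt(hbar * G / c**3)  # ~1.62e-35 m
planck_time   = math.sqrt(hbar * G / c**5)  # ~5.39e-44 s
planck_mass   = math.sqrt(hbar * c / G)     # ~2.18e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```

Running this reproduces the 1.62×10⁻³⁵ metre and 5.39×10⁻⁴⁴ second figures cited above; the Planck mass, about 22 micrograms, is the one Planck unit that comes anywhere near a human scale.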
Planck's life was long and productive, and he enjoyed robust health (he continued his long hikes in the mountains into his eighties), but was marred by tragedy. His first wife, Marie, died of tuberculosis in 1909. He outlived four of his five children. His son Karl was killed in 1916 in World War I. His two daughters, Grete and Emma, both died in childbirth, in 1917 and 1919. His son and close companion Erwin, who survived capture and imprisonment by the French during World War I, was arrested and executed by the Nazis in 1945 for suspicion of involvement in the Stauffenberg plot to assassinate Hitler. (There is no evidence Erwin was a part of the conspiracy, but he was anti-Nazi and knew some of those involved in the plot.) Planck was repulsed by the Nazis, especially after a private meeting with Hitler in 1933, but continued in his post as the head of the Kaiser Wilhelm Society until 1937. He considered himself a German patriot and never considered emigrating (and doubtless his being 75 years old when Hitler came to power was a consideration). He opposed and resisted the purging of Jews from German scientific institutions and the campaign against “Jewish science”, but when ordered to dismiss non-Aryan members of the Kaiser Wilhelm Society, he complied. When Heisenberg approached him for guidance, he said, “You have come to get my advice on political questions, but I am afraid I can no longer advise you. I see no hope of stopping the catastrophe that is about to engulf all our universities, indeed our whole country. … You simply cannot stop a landslide once it has started.” Planck's house near Berlin was destroyed in an Allied bombing raid in February 1944, and with it a lifetime of his papers, photographs, and correspondence. (He and his second wife Marga had evacuated to Rogätz in 1943 to escape the raids.) As a result, historians have only limited primary sources from which to work, and the present book does an excellent job of recounting the life and science of a man whose work laid part of the foundations of twentieth century science.

Let an ultraintelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, largely because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment. Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect.
By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.
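To make the arithmetic of that point concrete, here is a toy calculation; the doubling time and the size of the gap are illustrative assumptions, not estimates of brain capacity or of actual hardware trends:

```python
import math

# Illustrative assumptions only
doubling_time_years = 1.5   # assumed doubling period for computing power per dollar
shortfall_factor    = 1e6   # assumed factor by which the target exceeds today's capacity

doublings = math.log2(shortfall_factor)
years = doublings * doubling_time_years
print(f"A {shortfall_factor:.0e}x shortfall closes in about {years:.0f} years "
      f"({doublings:.0f} doublings)")
```

Raise the shortfall from a factor of a million to a factor of a billion and the answer only grows from about 20 doublings to about 30, which is why the exact threshold chosen for “human-level” computing matters so little to the timing.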
I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or to what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment? What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself? And what of the qualia of machines? Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway?
Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry? Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact as its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what? Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced? After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter? Deep questions, these. The book presents Chalmers' paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised. A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”. Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing since societies became sufficiently wealthy to indulge in them, without any notable resolution in more than two millennia, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't.
They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments: they threaten the job security of those who have been disputing questions their ancestors were already puzzling over in Athens. Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument, presenting evidence that simulating anything with the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future even if computing power continues to grow at the rate of the last century. There is even a case made that, if a singularity is feasible, it is overwhelmingly probable we are living not in a top-level physical universe but in a simulation run by post-singularity super-intelligences, who may be motivated to turn off our simulation before we reach our own singularity, which might threaten them. This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:
If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный [uncultured] and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.
Quintile | % Muslim | Countries
---|---|---
1 | 100–80 | 36 |
2 | 80–60 | 5 |
3 | 60–40 | 8 |
4 | 40–20 | 7 |
5 | 20–0 | 132 |
It was impossible. Nobody, in this time of depression, could find an order for a single ship…—let alone a flock of them. There was the staff. … He could probably get them together again at a twenty per cent rise in salary—if they were any good. But how was he to judge of that? The whole thing was impossible, sheer madness to attempt. He must be sensible, and put it from his mind. It would be damn good fun…

Three weeks later, acting through a solicitor to conceal his identity, Mr. Henry Warren, merchant banker of the City, became the owner of Barlows' Yard, purchasing it outright for the sum of £5500. Thus begins one of the most entertaining, realistic, and heartwarming tales of entrepreneurship (or perhaps “rentrepreneurship”) I have ever read. The fact that the author was himself founder and director of an aircraft manufacturing company during the depression, and well aware of the need to make payroll every week, get orders to keep the doors open even if they didn't make much business sense, and do whatever it takes so that the business can survive and meet its obligations to its customers, investors, employees, suppliers, and creditors, contributes to the authenticity of the tale. (See his autobiography, Slide Rule [July 2011], for details of his career.) Back in his office at the bank, there is the matter of the oil deal in Laevatia. After defaulting on their last loan, the Balkan country is viewed as a laughingstock and pariah in the City, but Warren has an idea. If they are to develop oil in the country, they will need to ship it, and how better to ship it than in their own ships, built in Britain on advantageous terms? Before long, he's off to the Balkans to do a deal in the Balkan manner (involving bejewelled umbrellas, cases of Worcestershire sauce, losing to the Treasury minister in the local card game at a dive in the capital, and working out a deal where the dividends on the joint stock oil company will be secured by profits from the national railway). And there's the matter of the ships, which will be contracted for by Warren's bank. Then it's back to London to pitch the deal. Warren's reputation counts for a great deal in the City, and the preference shares are placed. That done, the Hawside Ship and Engineering Company Ltd. is registered with cut-out directors, and the process of awarding the contract for the tankers to it is undertaken. As Warren explains to Miss McMahon, whom he has begun to see more frequently, once the order is in hand, it can be used to float shares in the company to fund the equipment and staff to build the ships. At least if the prospectus is sufficiently optimistic—perhaps too optimistic…. Order in hand, life begins to return to Sharples. First a few workers, then dozens, then hundreds. The welcome sound of riveting and welding begins to issue from the yard. A few boarded-up shops re-open, and then more. Then another order for a ship comes in, thanks to arm-twisting by one of the yard's directors. With talk of Britain re-arming, there is the prospect of Admiralty business. There is still only one newspaper a week in Sharples, brought in from Newcastle and sold to readers interested in the football news. On one of his more frequent visits to the town, yard, and Miss McMahon, Warren sees the headline: “Revolution in Laevatia”. “This is a very bad one,” Warren says. “I don't know what this is going to mean.” But, one suspects, he did.
As anybody who has been in the senior management of a publicly-traded company is well aware, what happens next is well-scripted: the shareholder suit by a small investor, the press pile-on, the back-turning by the financial community, the securities investigation, the indictment, and, eventually, the slammer. Warren understands this, and works diligently to ensure the Yard survives. There is a deep mine of wisdom here for anybody facing a bad patch.
“You must make this first year's accounts as bad as they ever can be,” he said. “You've got a marvellous opportunity to do so now, one that you'll never have again. You must examine every contract that you've got, with Jennings, and Grierson must tell the auditors that every contract will be carried out at a loss. He'll probably be right, of course—but he must pile it on. You've got to make reserves this year against every possible contingency, probable or improbable.” … “Pile everything into this year's loss, including a lot that really ought not to be there. If you do that, next year you'll be bound to show a profit, and the year after, if you've done it properly this year. Then as soon as you're showing profits and a decent show of orders in hand, get rid of this year's losses by writing down your capital, pay a dividend, and make another issue to replace the capital.”

Sage advice—I've been there. We had cash in the till, so we were able to do a stock buy-back at the bottom, but the principle is the same. Having been brought back to life by almost dying in a small-town hospital, Warren is rejuvenated by his time in gaol. In November 1937, he is released and returns to Sharples where, amidst evidence of prosperity everywhere, he approaches the Yard to see a plaque on the wall with his face in profile: “HENRY WARREN — 1934 — HE GAVE US WORK”. Then he is off to see Miss McMahon. The only print edition currently available new is a very expensive hardcover. Used paperbacks are readily available: check under both Kindling and the original British title, Ruined City. I have linked to the Kindle edition above.
Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue. … That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain and questions were raised in Parliament as to why the author of such an anti-Nazi work was interned: he was released in August, 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout. After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”. This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that. This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available. Thanks to Glenn Beck for recommending this book.
He shook hands and gave me a friendly grin. You could call it nothing but a grin, for his lips were exceedingly thin and fleshless, and among his upper teeth a baby tooth too lingered on, conspicuous in its incongruity. But his eyes were cheerful and amused.

Both Laura and Enrico shared the ability to see things precisely as they were, then see beyond that to what they could become. In Rome, Fermi became head of the mathematical physics department at the Sapienza University, which his mentor, Corbino, saw as Italy's best hope to become a world leader in the field. He helped Fermi recruit promising physicists, all young and ambitious. They gave each other nicknames: ecclesiastical in nature, befitting their location in Rome. Fermi was dubbed Il Papa (The Pope), not only due to his leadership and seniority, but because he had already developed a reputation for infallibility: when he made a calculation or expressed his opinion on a technical topic, he was rarely if ever wrong. Meanwhile, Mussolini was increasing his grip on the country. In 1929, he announced the appointment of the first thirty members of the Royal Italian Academy, with Fermi among the appointees. In return for a lifetime stipend which would put an end to his financial worries, he would have to join the Fascist party. He joined. He did not take the Academy seriously and thought its comic opera uniforms absurd, but appreciated the money. By the 1930s, one of the major mysteries in physics was beta decay. When a radioactive nucleus decayed, it could emit one or more kinds of radiation: alpha, beta, or gamma. Alpha particles had been identified as the nuclei of helium, beta particles as electrons, and gamma rays as photons: like light, but with a much shorter wavelength and correspondingly higher energy. When a given nucleus decayed by alpha or gamma, the emission always had the same energy: you could calculate the energy carried off by the particle emitted and compare it to the nucleus before and after, and everything added up according to Einstein's equation E=mc². But something appeared to be seriously wrong with beta (electron) decay. Given a large collection of identical nuclei, the electrons emitted flew out with energies all over the map: from very low to an upper limit. This appeared to violate one of the most fundamental principles of physics: the conservation of energy. If the nucleus after plus the electron (including its kinetic energy) didn't add up to the energy of the nucleus before, where did the energy go? Few physicists were ready to abandon conservation of energy, but, after all, theory must ultimately conform to experiment, and if a multitude of precision measurements said that energy wasn't conserved in beta decay, maybe it really wasn't. Fermi thought otherwise. In 1933, he proposed a theory of beta decay in which the emission of a beta particle (electron) from a nucleus was accompanied by emission of a particle he called a neutrino, which had been proposed earlier by Pauli. In one leap, Fermi introduced a third force, alongside gravity and electromagnetism, which could transform one particle into another, plus a new particle: without mass or charge, and hence extraordinarily difficult to detect, which nonetheless was responsible for carrying away the missing energy in beta decay. But Fermi did not just propose this mechanism in words: he presented a detailed mathematical theory of beta decay which made predictions for experiments which had yet to be performed.
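In modern notation (not Fermi's original formulation, and neglecting the tiny recoil of the daughter nucleus), the process and its energy bookkeeping can be written as:

$$
{}^{A}_{Z}X \;\longrightarrow\; {}^{A}_{Z+1}Y + e^{-} + \bar{\nu}_{e},
\qquad
Q = \left[\, m\!\left({}^{A}_{Z}X\right) - m\!\left({}^{A}_{Z+1}Y\right) \right] c^{2} = E_{e} + E_{\bar{\nu}}
$$

The decay energy Q (expressed here with atomic masses) is the same for every nucleus of a given species, but it is shared between the electron and the unseen antineutrino in varying proportions, which is exactly why the electrons alone emerge with a continuous spectrum running up to the endpoint Q while energy is conserved in every decay.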
He submitted the theory in a paper to Nature in 1934. The editors rejected it, saying “it contained abstract speculations too remote from physical reality to be of interest to the reader.” That verdict is now acknowledged as one of the most epic face-plants of peer review in theoretical physics: Fermi's theory rapidly became accepted as the correct model for beta decay. In 1956, the neutrino (actually, antineutrino) was detected with precisely the properties predicted by Fermi. This theory remained the standard explanation for beta decay until it was extended in the 1970s by the theory of the electroweak interaction, which is valid at higher energies than were available to experimenters in Fermi's lifetime. Perhaps soured on theoretical work by the initial rejection of his paper on beta decay, Fermi turned to experimental exploration of the nucleus, using the newly-discovered particle, the neutron. Unlike alpha particles emitted by the decay of heavy elements like uranium and radium, neutrons had no electrical charge and could penetrate the nucleus of an atom without being repelled. Fermi saw this as the ideal probe to examine the nucleus, and began to use neutron sources to bombard a variety of elements to observe the results. One experiment directed neutrons at a target of silver and observed the creation of isotopes of silver when the neutrons were absorbed by the silver nuclei. But something very odd was happening: the results of the experiment seemed to differ when it was run on a laboratory bench with a marble top compared to one of wood. What was going on? Many people might have dismissed the anomaly, but Fermi had to know. He hypothesised that the probability a neutron would interact with a nucleus depended upon its speed (or, equivalently, energy): a slower neutron would effectively have more time to interact than one which whizzed through more rapidly. Neutrons which were reflected by the wood table top were “moderated” and had a greater probability of interacting with the silver target. Fermi quickly tested this supposition by using paraffin wax and water as neutron moderators and measuring the dramatically increased probability of interaction (or, as we would say today, the neutron capture cross-section) when neutrons were slowed down. This is fundamental to the design of nuclear reactors today. It was for this work that Fermi won the Nobel Prize in Physics for 1938. By 1938, conditions for Italy's Jewish population had seriously deteriorated. Laura Fermi, despite her father's distinguished service as an admiral in the Italian navy, was now classified as a Jew, and therefore subject to travel restrictions, as were their two children. The Fermis went to their local Catholic parish, where they were (re-)married in a Catholic ceremony and their children baptised. With that paperwork done, the Fermi family could apply for passports and permits to travel to Stockholm to receive the Nobel prize. The Fermis locked their apartment, took a taxi, and boarded the train. Unbeknownst to the fascist authorities, they had no intention of returning. Fermi had arranged an appointment at Columbia University in New York. His Nobel Prize award was US$45,000 (US$789,000 today). If he returned to Italy with the sum, he would have been forced to convert it to lire and then only be able to take the equivalent of US$50 out of the country on subsequent trips. Professor Fermi may not have been much interested in politics, but he could do arithmetic.
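Returning for a moment to the slow-neutron result: a toy sketch of the speed dependence Fermi hypothesised is below. The “thermal” cross-section used here is an arbitrary illustrative number, not a measured value for silver, and the simple 1/v scaling only holds at low energies (real nuclei also show resonances):

```python
import math

THERMAL_ENERGY_EV = 0.0253  # neutron kinetic energy at room temperature, eV

def capture_cross_section(energy_ev, sigma_thermal_barns=60.0):
    """Toy 1/v law: cross-section ~ 1/velocity, i.e. ~ 1/sqrt(energy).

    sigma_thermal_barns is an illustrative placeholder, not a measured value.
    """
    return sigma_thermal_barns * math.sqrt(THERMAL_ENERGY_EV / energy_ev)

for energy in (1e6, 1e3, 1.0, THERMAL_ENERGY_EV):  # from fast (1 MeV) down to thermal
    print(f"{energy:>12.4g} eV -> {capture_cross_section(energy):10.2f} barns")
```

Slowing a neutron from around 1 MeV down to thermal energies increases its chance of capture by a factor of several thousand on this simple model, which is the effect the paraffin and water moderators produced on the laboratory bench.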
The family went from Stockholm to Southampton, and then on an ocean liner to New York, with nothing other than their luggage, prize money, and, most importantly, freedom. In his neutron experiments back in Rome, there had been curious results he and his colleagues never explained. When bombarding nuclei of uranium, the heaviest element then known, with neutrons moderated by paraffin wax, they had observed radioactive results which didn't make any sense. They expected to create new elements, heavier than uranium, but what they saw didn't agree with the expectations for such elements. Another mystery…in those heady days of nuclear physics, there was one wherever you looked. At just about the time Fermi's ship was arriving in New York, news arrived from Germany about what his group had observed, but not understood, four years before. Slow neutrons, which Fermi's group had pioneered, were able to split, or fission, the nucleus of uranium into two lighter elements, releasing not only a large amount of energy, but additional neutrons which might be able to propagate the process into a “chain reaction”, producing either a large amount of energy or, perhaps, an enormous explosion. As one of the foremost researchers in neutron physics, Fermi immediately realised that his new life in America was about to take a direction he'd never anticipated. By 1941, he was conducting experiments at Columbia with the goal of evaluating the feasibility of creating a self-sustaining nuclear reaction with natural uranium, using graphite as a moderator. In 1942, he was leading a project at the University of Chicago to build the first nuclear reactor. On December 2nd, 1942, Chicago Pile-1 went critical, producing all of half a watt of power. But the experiment proved that a nuclear chain reaction could be initiated and controlled, and it paved the way for both civil nuclear power and plutonium production for nuclear weapons. At the time he achieved one of the first major milestones of the Manhattan Project, Fermi's classification as an “enemy alien” had been removed only two months before. He and Laura Fermi did not become naturalised U.S. citizens until July of 1944. Such was the breakneck pace of the Manhattan Project that even before the critical test of the Chicago pile, the DuPont company was already at work planning for the industrial scale production of plutonium at a facility which would eventually be built at the Hanford site near Richland, Washington. Fermi played a part in the design and commissioning of the X-10 Graphite Reactor in Oak Ridge, Tennessee, which served as a pathfinder and began operation in November, 1943, operating at a power level which was increased over time to 4 megawatts. This reactor produced the first substantial quantities of plutonium for experimental use, revealing the plutonium-240 contamination problem which necessitated the use of implosion for the plutonium bomb. Concurrently, he contributed to the design of the B Reactor at Hanford, which went critical in September 1944 and, running at 250 megawatts, produced the plutonium for the Trinity test and the Fat Man bomb dropped on Nagasaki. During the war years, Fermi divided his time among the Chicago research group, Oak Ridge, Hanford, and the bomb design and production group at Los Alamos. As General Leslie Groves, head of the Manhattan Project, had forbidden the top atomic scientists from travelling by air, “Henry Farmer”, his wartime alias, spent much of his time riding the rails, accompanied by a bodyguard.
As plutonium production ramped up, he increasingly spent his time with the weapon designers at Los Alamos, where Oppenheimer appointed him associate director and put him in charge of “Division F” (for Fermi), which acted as a consultant to all of the other divisions of the laboratory. Fermi believed that while scientists could make major contributions to the war effort, how their work and the weapons they created were used were decisions which should be made by statesmen and military leaders. When appointed in May 1945 to the Interim Committee charged with determining how the fission bomb was to be employed, he largely confined his contributions to technical issues such as weapons effects. He joined Oppenheimer, Compton, and Lawrence in the final recommendation that “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use.” On July 16, 1945, Fermi witnessed the Trinity test explosion in New Mexico at a distance of ten miles from the shot tower. A few seconds after the blast, he began to tear little pieces of paper from a sheet and drop them toward the ground. When the shock wave arrived, he paced out the distance it had blown them and rapidly computed the yield of the bomb as around ten kilotons of TNT. Nobody familiar with Fermi's reputation for making off-the-cuff estimates of physical phenomena was surprised that his calculation, done within a minute of the explosion, was within a factor of two of the actual yield of around 20 kilotons, determined much later. After the war, Fermi wanted nothing more than to return to his research. He opposed the extension of wartime secrecy to postwar nuclear research, but, unlike some other prominent atomic scientists, did not involve himself in public debates over nuclear weapons and energy policy. When he returned to Chicago, he was asked by a funding agency simply how much money he needed. From his experience at Los Alamos he wanted both a particle accelerator and a big computer. By 1952, he had both, and began to produce results in scattering experiments which hinted at the new physics which would be uncovered throughout the 1950s and '60s. He continued to spend time at Los Alamos, and between 1951 and 1953 worked two months a year there, contributing to the hydrogen bomb project and analysis of Soviet atomic tests. Everybody who encountered Fermi remarked upon his talents as an explainer and teacher. Seven of his students, six from Chicago and one from Rome, would go on to win Nobel Prizes in physics, in both theory and experiment. He became famous for posing “Fermi problems”, often at lunch, exercising the ability to make and justify order of magnitude estimates of difficult questions. When Freeman Dyson met with Fermi to present a theory he and his graduate students had developed to explain the scattering results Fermi had published, Fermi asked him how many free parameters Dyson had used in his model. Upon being told the number was four, he said, “I remember my old friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Chastened, Dyson soon concluded his model was a blind alley. After returning from a trip to Europe in the fall of 1954, Fermi, who had enjoyed robust good health all his life, began to suffer from problems with digestion. Exploratory surgery found metastatic stomach cancer, for which no treatment was possible at the time.
He died at home on November 28, 1954, two months past his fifty-third birthday. He had made a Fermi calculation of how long to rent the hospital bed in which he died: the rental expired two days after he did. There was speculation that Fermi's life may have been shortened by his work with radiation, but there is no evidence of this. He was never exposed to unusual amounts of radiation in his work, and none of his colleagues, who did the same work at his side, experienced any medical problems. This is a masterful biography of one of the singular figures in twentieth century science. The breadth of his interests and achievements is reflected in the list of things named after Enrico Fermi. Given the hyper-specialisation of modern science, it is improbable we will ever again see his like.
Image by Wikipedia user Putinovac, licensed under the Creative Commons Attribution 3.0 Unported license.
Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping they didn't pick a night in mid-August for their vigil! The ephors could also summon the council of elders, or gerousia, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders. All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might take for the work of a Golden Age author, paid by the word, making ends meet by inventing fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S. Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, and suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage. Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as many of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text.
In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.
Three hidden keys open three secret gates,
Wherein the errant will be tested for worthy traits,
And those with the skill to survive these straits,
Will reach The End where the prize awaits.

The prize is Halliday's entire fortune and, with it, super-user control of the principal medium of human interaction, business, and even politics. Before fading out, Halliday shows three keys: copper, jade, and crystal, which must be obtained to open the three gates. Only after passing through the gates and passing the tests within them will the intrepid paladin obtain the Easter egg hidden within the OASIS and gain control of it. Halliday provided a link to Anorak's Almanac, more than a thousand pages of journal entries made during his life, many of which reflect his obsession with 1980s popular culture, science fiction and fantasy, videogames, movies, music, and comic books. The clues to finding the keys and the Egg were widely believed to be within this rambling, disjointed document. Given the stakes, and the contest's being open to anybody in the OASIS, what immediately came to be called the Hunt became a social phenomenon, all-consuming to some. Egg hunters, or “gunters”, immersed themselves in Halliday's journal and every pop culture reference within it, however obscure. All of this material was freely available on the OASIS, and gunters memorised every detail of anything which had caught Halliday's attention. As time passed, and nobody succeeded in finding even the copper key (Halliday's memorial site displayed a scoreboard of those who achieved goals in the Hunt, so far blank), many lost interest in the Hunt, but a dedicated hard core persisted, often to the exclusion of all other diversions. Some gunters banded together into “clans”, some very large, agreeing to exchange information and, if one found the Egg, to share the proceeds with all members. More sinister were the activities of Innovative Online Industries—IOI—a global Internet and communications company which controlled much of the backbone that underlay the OASIS. It had assembled a large team of paid employees, backed by the research and database facilities of IOI, with their sole mission to find the Egg and turn control of the OASIS over to IOI. These players, all with identical avatars and names consisting of their six-digit IOI employee numbers, all of which began with the digit “6”, were called “sixers” or, more often in the gunter argot, “Sux0rz”. Gunters detested IOI and the sixers, because it was no secret that if they found the Egg, IOI's intention was to close the architecture of the OASIS, begin to charge fees for access, plaster everything with advertising, destroy anonymity, snoop indiscriminately, and use their monopoly power to put their thumb on the scale of all forms of communication including political discourse. (Fortunately, that couldn't happen to us with today's enlightened, progressive Silicon Valley overlords.) But IOI's financial resources were such that whenever a rare and powerful magical artefact (many of which had been created by Halliday in the original OASIS, usually requiring the completion of a quest to obtain, but freely transferrable thereafter) came up for auction, IOI was usually able to outbid even the largest gunter clans and add it to their arsenal.
Wade Watts, a lone gunter whose avatar is named Parzival, became obsessed with the Hunt on the day of Halliday's death, and, years later, devotes almost every minute of his life not spent sleeping or in school (like many, he attends school in the OASIS, and is now in the last year of high school) to the Hunt, reading and re-reading Anorak's Almanac, reading, listening to, playing, and viewing everything mentioned therein, to the extent he can recite the dialogue of the movies from memory. He makes copious notes in his “grail diary”, named after the one kept by Indiana Jones. His friends, none of whom he has ever met in person, are all gunters who congregate on-line in virtual reality chat rooms such as that run by his best friend, Aech. Then, one day, bored to tears and daydreaming in Latin class, Parzival has a flash of insight. Putting together a message buried in the Almanac that he and many other gunters had discovered but failed to understand, with a bit of Latin and his encyclopedic knowledge of role playing games, he decodes the clue and, after a demanding test, finds himself in possession of the Copper Key. His name, alone, now appears at the top of the scoreboard, with 10,000 points. The path to the First Gate is now open. Discovery of the Copper Key is a sensation: suddenly Parzival, a humble level 10 gunter, is a worldwide celebrity (although his real identity remains unknown, as he refuses all media offers which would reveal or compromise it). Knowing that the key can be found re-energises other gunters, not to speak of IOI, and Parzival's footprints in the OASIS are scrupulously examined for clues to his achievement. (Finding a key and opening a gate does not render it unavailable to others. Those who subsequently pass the tests will receive their own copies of the key, although there is a point bonus for finding it first.) So begins an epic quest by Parzival and other gunters, contending with the evil minions of IOI, whose potential gain is so high and ethics so low that the risks may extend beyond the OASIS into the real world. For the reader, it is a nostalgic romp through every aspect of the popular culture of the 1980s: the formative era of personal computing and gaming. The level of detail is just staggering: this may be the geekiest nerdfest ever published. Heck, there's even a reference to an erstwhile Autodesk employee! The only goof I noted is a mention of the “screech of a 300-baud modem during the log-in sequence”. Three hundred baud modems did not have the characteristic squawk and screech sync-up of faster modems which employ trellis coding. While there are a multitude of references to details which will make people who were there, then, smile, readers who were not immersed in the 1980s and/or less familiar with its cultural minutiæ can still enjoy the challenges, puzzles solved, intrigue, action, and epic virtual reality battles which make up the chronicle of the Hunt. The conclusion is particularly satisfying: there may be a bigger world than even the OASIS. A movie based upon the novel, directed by Steven Spielberg, is scheduled for release in March 2018.
The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
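A quick back-of-envelope check on those comparisons; the per-barrel energy content and the tanker capacity below are assumed round numbers, not figures from the book:

```python
EJ = 1e18                        # joules in one exajoule
J_PER_BARREL = 6.1e9             # ~6.1 GJ per barrel of oil equivalent (assumed)
BARRELS_PER_SUPERTANKER = 2.7e6  # assumed capacity of a very large crude carrier

barrels = EJ / J_PER_BARREL
tankers = barrels / BARRELS_PER_SUPERTANKER
print(f"1 EJ is roughly {barrels / 1e6:.0f} million barrels, "
      f"or about {tankers:.0f} supertanker loads")
```

With those assumptions, the figures quoted above (164 million barrels, around sixty supertankers) come out as expected.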
Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂? Well, that's interesting. CO₂ is a trace gas in the atmosphere (less than a fiftieth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere as plants take it up as they grow and then release it after they die. The other two processes are much slower. An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth), and increases the root to shoot ratio of the plants, tending to move carbon from the atmosphere into the soil, where it is recycled more slowly into the biosphere. But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels has, over a time scale negligible on the geological scale, released a quantity of carbon into the atmosphere far beyond the ability of natural processes to recycle. For the last half million years, the CO₂ concentration in the atmosphere has varied between 280 parts per million in interglacials (warm periods) and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentrations are known with reasonable precision over that span due to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; they both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet). But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age.
Atmospheric CO₂ is now around 410 parts per million, which is a value not seen in at least the last half million years, and it's rising at a rate of 2 parts per million every year, and accelerating as global use of fossil fuels increases. This is a situation which, in the ecosystem, is not only unique in the human experience; it's something which has never happened since the emergence of complex multicellular life in the Cambrian explosion. What does it all mean? What are the consequences? And what, if anything, should we do about it? (Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.) We have been increasing the fraction of CO₂ in the atmosphere to levels unseen in the history of complex terrestrial life. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched to up to 1000 parts per million experience no deleterious consequences, and this is more than twice the current fraction in the Earth's atmosphere, and at the current rate of growth, won't be reached for three centuries. The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy absorbed by the Earth from the Sun and that it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity are negligible at this scale.) Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (Due to the Stefan-Boltzmann law, the energy emitted from a black body rises as the fourth power of its temperature, so it doesn't take a large increase in temperature [measured in kelvins] to radiate away the extra energy.)
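To see how that bookkeeping works, here is a minimal energy-balance sketch. The solar constant is the 1360 W/m² quoted above; the albedo is an assumed round value, and the atmosphere is ignored entirely:

```python
SOLAR_CONSTANT = 1360.0  # W/m^2 at the top of the atmosphere (from the text)
ALBEDO = 0.3             # assumed fraction of sunlight reflected straight back
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4

# Sunlight is intercepted over a disc (pi r^2) but re-radiated from the whole
# sphere (4 pi r^2), hence the factor of 4 when averaging over the surface.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
T_equilibrium = (absorbed / SIGMA) ** 0.25
print(f"Equilibrium temperature ~ {T_equilibrium:.0f} K")  # about 255 K
```

The result, roughly 255 K, is about 33 K colder than the Earth's observed mean surface temperature of around 288 K; that difference is the greenhouse warming described above. And because emission scales as the fourth power of temperature, a one kelvin rise at these temperatures increases the radiated power by only about 1.6 percent, which is the Stefan-Boltzmann point made in the parenthetical.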
So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait: it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and from the lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth and reduce the amount of solar radiation it absorbs. How does all of this interact? Well, that's where the global climate models get into the act, and everything becomes very fuzzy: a vast panel of twiddle knobs, all of which interact with one another and few of which are pinned down by unambiguous measurements of the climate system.

Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle they see on their smartphones and which people who reached the industrial age a century before them already enjoy, and technocratic rule of the masses by their unelected, self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind!

But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂, and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond?

Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly shown to cool the Earth. A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect enough sunlight to reduce, cancel, or even reverse the effects of global warming. The cost of such a programme would be affordable by a benevolent tech billionaire or wannabe Bond benefactor (“Greenfinger”), and it could be implemented within a couple of years. The effect of such a veil would be much smaller than that of a major volcanic eruption, and would be imperceptible apart from making sunsets a bit more colourful.

Marine cloud brightening. By spraying finely-dispersed sea water into the air above the ocean, salt particles would provide nucleation sites which brighten low marine clouds, increasing the Earth's albedo. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather.
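Both the veil and cloud brightening work by nudging the planet's albedo upward. For a sense of scale, here is a rough sketch of how small an albedo change would offset the radiative forcing of a doubling of CO₂, using the widely quoted simplified forcing approximation ΔF ≈ 5.35 ln(C/C₀) W/m². That formula and the assumed present-day albedo of 0.3 are standard round figures added for illustration, not values from this essay.

```python
# Rough sketch: how large an increase in planetary albedo would offset the
# radiative forcing of a doubling of CO2? Uses the commonly quoted simplified
# forcing formula dF ~ 5.35 * ln(C/C0) W/m^2 (an approximation, not a climate
# model), and the same round numbers as before.

import math

SOLAR_CONSTANT = 1360.0   # W/m^2
ALBEDO = 0.30             # assumed present-day planetary albedo

forcing_2x_co2 = 5.35 * math.log(2.0)      # ~3.7 W/m^2 for doubled CO2
mean_insolation = SOLAR_CONSTANT / 4        # ~340 W/m^2 averaged over the globe

# An albedo increase d_albedo reflects an extra mean_insolation * d_albedo W/m^2.
d_albedo = forcing_2x_co2 / mean_insolation

print(f"Forcing from doubled CO2: {forcing_2x_co2:.1f} W/m^2")
print(f"Albedo increase to offset it: {d_albedo:.3f} "
      f"(i.e. from {ALBEDO:.2f} to about {ALBEDO + d_albedo:.2f})")
# ~0.011: reflecting roughly one percent more sunlight, which is why stratospheric
# aerosols and brighter marine clouds are candidates despite their modest scale.
```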
Carbon sequestration. What about taking the carbon dioxide back out of the atmosphere? This sounds like a great idea, and appeals to philanthropists like Bill Gates who are innocent of thermodynamics, but extracting a trace gas from the air is really difficult and expensive. The best place to capture it is where it is densest, such as in the flue gas of a power plant, where it makes up around 10%. The technology to do this, “carbon capture and sequestration” (CCS), exists, but has so far been deployed on only a handful of full-scale power plants.

Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into the shells and skeletons of marine organisms and settles to the sea floor, it is removed from the biosphere for tens to hundreds of millions of years. What limits how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? Over large regions of the ocean, it is iron, which is scarce in sea water. A calculation made in the 1990s suggested that adding one tonne of iron to the ocean would spawn a bloom of organisms that would draw a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be perhaps a hundred times smaller, but iron is cheap, and it doesn't cost much to dump it from ships (a back-of-the-envelope tally appears at the end of this section).

Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while their effects on the climate are monitored, and easily and quickly reversed should they prove to have unintended detrimental consequences. But when thinking about affecting something on the scale of a planet's climate, there's a tendency to think big, and a number of grand-scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to hold the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are currently considered financially infeasible, but they might become increasingly attractive if the cost of space transportation falls dramatically in the future. It's worth observing that the cost estimates for these geoengineering alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists.

We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal, and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. As we cope with the consequences of that action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and excuses us from restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions, such as vast fields of unsightly bird shredders.
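Here is the promised back-of-the-envelope tally of the ocean fertilisation leverage, using only the figures quoted above plus two labelled assumptions (the carbon-to-CO₂ mass conversion and an illustrative iron price). It is arithmetic, not a claim about what a real programme would cost.

```python
# Back-of-the-envelope tally of the ocean fertilisation leverage described above.
# The original 1990s estimate was ~100,000 tonnes of carbon drawn down per tonne
# of iron added; later experiments suggest perhaps a hundred times less.
# The CO2-to-carbon mass ratio (44/12) and the iron price are assumptions.

CARBON_PER_TONNE_IRON_OPTIMISTIC = 100_000   # tonnes C per tonne Fe (1990s estimate)
EXPERIMENT_PENALTY = 100                     # later experiments: ~100x less effective
CO2_PER_CARBON = 44.0 / 12.0                 # tonnes CO2 per tonne of carbon
IRON_COST_PER_TONNE = 500.0                  # USD per tonne of iron, illustrative

carbon_per_tonne_iron = CARBON_PER_TONNE_IRON_OPTIMISTIC / EXPERIMENT_PENALTY
co2_per_tonne_iron = carbon_per_tonne_iron * CO2_PER_CARBON

print(f"Pessimistic leverage: {carbon_per_tonne_iron:.0f} t C "
      f"(~{co2_per_tonne_iron:.0f} t CO2) per tonne of iron")
print(f"Implied iron cost: ~${IRON_COST_PER_TONNE / co2_per_tonne_iron:.2f} per tonne of CO2")
# Even with the less favourable ratio, the iron itself costs well under a dollar
# per tonne of CO2 removed; dispersal, monitoring, and verification would dominate.
```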
But as David Deutsch says in The Beginning of Infinity, “Problems are inevitable,” yet “problems are soluble.” It is inevitable that the large-scale geoengineering which is the foundation of our developed society, our takeover of the Earth's natural carbon and nitrogen cycles, will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning those pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.