Books by Susskind, Leonard

Susskind, Leonard. The Black Hole War. New York: Little, Brown, 2008. ISBN 978-0-316-01640-7.
I hesitated to buy this book for some months after its publication because of a sense that there was something “off” about the author's previous book, The Cosmic Landscape (March 2006). I should learn to trust my instincts more; this book treats a fascinating and important topic on the wild frontier between general relativity and quantum mechanics in a disappointing, deceptive, and occasionally infuriating manner.

The author is an eminent physicist who has made major contributions to string theory, the anthropic string landscape, and the problem of black hole entropy and the fate of information which is swallowed by a black hole. The latter puzzle is the topic of the present book, which is presented as a “war” between Stephen Hawking and his followers, mostly general relativity researchers, and Susskind and his initially small band of quantum field and string theorists who believed that information must be preserved in black hole accretion and evaporation lest the foundations of physics (unitarity and the invertibility of the S-matrix) be destroyed.

Here is a simple way to understand one aspect of this apparent paradox. Entropy is a measure of the hidden information in a system. The entropy of gas at equilibrium is very high because there are a huge number of microscopic configurations (position and velocity) of the molecules of the gas which result in the same macroscopic observables: temperature, pressure, and volume. A perfect crystal at absolute zero, on the other hand, has (neglecting zero-point energy) an entropy of zero because there is precisely one arrangement of atoms which exactly reproduces it. A classical black hole, as described by general relativity, is characterised by just three parameters: mass, angular momentum, and electrical charge. (The very same basic parameters as elementary particles—hmmmm….) All of the details of the mass and energy which went into the black hole (lepton and baryon number, particle types, excitations, and higher-level structure) are lost as soon as they cross the event horizon and cause it to expand. According to Einstein's theory, two black holes with the same mass, spin, and charge are absolutely indistinguishable even if the first was made from the collapse of a massive star and the second by crushing 1975 Ford Pintos in a cosmic trash compactor. Since there is a unique configuration for a given black hole, there is no hidden information and its entropy should therefore be zero.

But consider this: suppose you heave a ball of hot gas or plasma—a star, say—into the black hole. Before it is swallowed, it has a very high entropy, but as soon as it is accreted, you have only empty space and the black hole with entropy zero. You've just lowered the entropy of the universe, and the Second Law of Thermodynamics says that cannot ever happen. Some may argue that the Second Law is “transcended” in a circumstance like this, but it is a pill which few physicists are willing to swallow, especially since in this case it occurs in a completely classical context on a large scale where statistical mechanics obtains. It was this puzzle which led Jacob Bekenstein to propose that black holes did, in fact, have an entropy which was proportional to the area of the event horizon in units of Planck length squared. Black holes not only have entropy, they have a huge amount of it, and account for the overwhelming majority of entropy in the universe. Stephen Hawking subsequently reasoned that if a black hole has entropy, it must have temperature and radiate, and eventually worked out the mechanism of Hawking radiation and the evaporation of black holes.
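Just how huge is easy to check with a back-of-the-envelope calculation (my own illustration, not from the book) of the Bekenstein–Hawking entropy S = k·A / (4 l_p²) for a black hole of one solar mass, where A is the horizon area and l_p the Planck length:

```python
import math

# Physical constants in SI units (approximate CODATA values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

# Schwarzschild radius and horizon area of a one-solar-mass black hole
r_s = 2 * G * M_sun / c**2          # about 3 km
area = 4 * math.pi * r_s**2         # about 1.1e8 m^2

# Planck length squared: l_p^2 = hbar G / c^3
l_p2 = hbar * G / c**3

# Bekenstein-Hawking entropy in units of Boltzmann's constant:
# S / k_B = A / (4 l_p^2)
S_over_kB = area / (4 * l_p2)
print(f"S/k_B ~ {S_over_kB:.2e}")   # on the order of 10^77
```

That is enormously more than the thermodynamic entropy of the star which collapsed to form it, which is the sense in which black holes dominate the entropy budget of the universe.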

But if a black hole can evaporate, what happens to the information (more precisely, the quantum state) of the material which collapsed into the black hole in the first place? Hawking argued that it was lost: the evaporation of the black hole was a purely thermal process which released none of the information lost down the black hole. But one of the foundations of quantum mechanics is that information is never lost; it may be scrambled in complex scattering processes to such an extent that you can't reconstruct the initial state, but in principle if you had complete knowledge of the state vector you could evolve the system backward and arrive at the initial configuration. If a black hole permanently destroys information, this wrecks the predictability of quantum mechanics and with it all of microscopic physics.

This book chronicles the author's quest to find out what happens to information that falls into a black hole and discover the mechanism by which information swallowed by the black hole is eventually restored to the universe when the black hole evaporates. The reader encounters string theory, the holographic principle, D-branes, anti de Sitter space, and other arcana, and is eventually led to the explanation that a black hole is really just an enormous ball of string, which encodes in its structure and excitations all of the information of the individual fundamental strings swallowed by the hole. As the black hole evaporates, little bits of this string slip outside the event horizon and zip away as fundamental particles, carrying away the information swallowed by the hole.

The story is told largely through analogies and is easy to follow if you accept the author's premises. I found the tone of the book quite difficult to take, however. The word which kept popping into my head as I made my way through was “smug”. The author opines on everything and anything, and comes across as scornful of anybody who disagrees with his opinions. He is bemused and astonished when he discovers that somebody who is a Republican, an evangelical Christian, or a holder of some other belief at variance with the dogma of the academic milieu he inhabits can, nonetheless, actually be a competent scientist. He goes on for two pages (pp. 280–281) making fun of Mormonism and then likens Stephen Hawking to a cult leader. The physics is difficult enough to explain; who cares about what Susskind thinks about everything else? Sometimes he goes right over the top, resulting in unseemly prose like the following.

Although the Black Hole War should have come to an end in early 1998, Stephen Hawking was like one of those unfortunate soldiers who wander in the jungle for years, not knowing that the hostilities have ended. By this time, he had become a tragic figure. Fifty-six years old, no longer at the height of his intellectual powers, and almost unable to communicate, Stephen didn't get the point. I am certain that it was not because of his intellectual limitations. From the interactions I had with him well after 1998, it was obvious that his mind was still extremely sharp. But his physical abilities had so badly deteriorated that he was almost completely locked within his own head. With no way to write an equation and tremendous obstacles to collaborating with others, he must have found it impossible to do the things physicists ordinarily do to understand new, unfamiliar work. So Stephen went on fighting for some time. (p. 419)
Or, Prof. Susskind, perhaps it's that the intellect of Prof. Hawking makes him sceptical of arguments based on a “theory” which is, as you state yourself on p. 384, “like a very complicated Tinkertoy set, with lots of different parts that can fit together in consistent patterns”; for which not a single fundamental equation has yet been written down; in which no model that remotely describes the world in which we live has been found; whose mathematical consistency and finiteness in other than toy models remains conjectural; whose results regarding black holes are based upon another conjecture (AdS/CFT) which, even if proven, operates in a spacetime utterly unlike the one we inhabit; which seems to predict a vast “landscape” of possible solutions (vacua) which make it not a theory of everything but rather a “theory of anything”; which is formulated in a flat Minkowski spacetime, neglecting the background independence of general relativity; and which, after three decades of intensive research by some of the most brilliant thinkers in theoretical physics, has yet to make a single experimentally testable prediction, while demonstrating its ability to wiggle out of almost any result (for example, failure of the Large Hadron Collider to find supersymmetric particles).

At the risk of attracting the scorn the author vents on pp. 186–187 toward non-specialist correspondents, let me say that the author's argument for “black hole complementarity” makes absolutely no sense whatsoever to this layman. In essence, he argues that matter infalling across the event horizon of a black hole, if observed from outside, is disrupted by the “extreme temperature” there, and is excited into its fundamental strings which spread out all over the horizon, preserving the information accreted in the stringy structure of the horizon (whence it can be released as the black hole evaporates). But for a co-moving observer infalling with the matter, nothing whatsoever happens at the horizon (apart from tidal effects whose magnitude depends upon the mass of the black hole). Susskind argues that since you have to choose your frame of reference and cannot simultaneously observe the event from both outside the horizon and falling across it, there is no conflict between these two descriptions, and hence they are complementary in the sense Bohr described quantum observables.

But, unless I'm missing something fundamental, the whole thing about the “extreme temperature” at the black hole event horizon is simply nonsense. Yes, if you lower a thermometer from a space station at some distance from a black hole down toward the event horizon, it will register a diverging temperature as it approaches the horizon. But this is because it is moving near the speed of light with respect to spacetime falling through the horizon and is seeing the cosmic background radiation blueshifted by a factor which reaches infinity at the horizon. Further, being suspended above the black hole, the thermometer is in a state of constant acceleration (it might as well have a rocket keeping it at a specified distance from the horizon as a tether), and is thus in a Rindler spacetime and will measure black body radiation even in a vacuum due to the Unruh effect. But note that due to the equivalence principle, all of this will happen precisely the same even with no black hole. The same thermometer, subjected to the identical acceleration and velocity with respect to the cosmic background radiation frame, will read precisely the same temperature in empty space, with no black hole at all (and will even observe a horizon due to its hyperbolic motion).
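To see why the equivalence-principle point has teeth, it helps to put numbers on the Unruh effect. The temperature an accelerated observer measures is T = ħa / (2π c k_B), and for everyday accelerations it is absurdly small; the “extreme temperature” near the horizon corresponds to the diverging proper acceleration needed to hover there, not to anything a free-faller encounters. A quick sketch (my own illustration, not from the book):

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature (K) seen by an observer with proper
    acceleration a (m/s^2), from T = hbar * a / (2 pi c k_B)."""
    return hbar * a / (2 * math.pi * c * k_B)

# An observer accelerating at one g sees a thermal bath of only
# about 4e-20 K; the temperature grows without bound only as the
# hovering acceleration itself diverges at the horizon.
print(f"{unruh_temperature(9.8):.1e} K")
```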

The “lowering the thermometer” is a completely different experiment from observing an object infalling to the horizon. The fact that the suspended thermometer measures a high temperature in no way implies that a free-falling object approaching the horizon will experience such a temperature or be disrupted by it. A co-moving observer with the object will observe nothing as it crosses the horizon, while a distant observer will see the object appear to freeze and wink out as it reaches the horizon and the time dilation and redshift approach infinity. Nowhere is there this legendary string blowtorch at the horizon spreading out the information in the infalling object around a horizon which, observed from either perspective, is just empty space.

The author concludes, in a final chapter titled “Humility”, “The Black Hole War is over…”. Well, maybe, but for this reader, the present book did not make the sale. The arguments made here are based upon aspects of string theory which are, at the moment, purely conjectural and models which operate in universes completely different from the one we inhabit. What happens to information that falls into a black hole? Well, Stephen Hawking has now conceded that it is preserved and released in black hole evaporation (but this assumes an anti de Sitter spacetime, which we do not inhabit), but this book just leaves me shaking my head at the arm-waving arguments and speculative theorising presented as definitive results.

April 2009

Susskind, Leonard. The Cosmic Landscape. New York: Little, Brown, 2006. ISBN 0-316-15579-9.
Leonard Susskind (and, independently, Yoichiro Nambu) co-discovered the original hadronic string theory in 1969. He has been a prominent contributor to a wide variety of topics in theoretical physics over his long career, and is a talented explainer of abstract theoretical concepts to the general reader. This book communicates both the physics and cosmology of the “string landscape” (a term he coined in 2003) revolution which has swiftly become the consensus among string theorists, as well as the intellectual excitement of those exploring this new frontier.

The book is subtitled “String Theory and the Illusion of Intelligent Design” which may be better marketing copy—controversy sells—than descriptive of the contents. There is very little explicit discussion of intelligent design in the book at all except in the first and last pages, and what is meant by “intelligent design” is not what the reader might expect: design arguments in the origin and evolution of life, but rather the apparent fine-tuning of the physical constants of our universe, the cosmological constant in particular, without which life as we know it (and, in many cases, not just life but even atoms, stars, and galaxies) could not exist. Susskind is eloquent in describing why the discovery that the cosmological constant, which virtually every theoretical physicist would have bet had to be precisely zero, is (apparently) a tiny positive number, seemingly fine-tuned to one hundred and twenty decimal places, “hit us like the proverbial ton of bricks” (p. 185)—here was a number which not only did theory suggest should be 120 orders of magnitude greater, but which, had it been slightly larger than its minuscule value, would have precluded structure formation (and hence life) in the universe. One can imagine some as-yet-undiscovered mathematical explanation why a value is precisely zero (and, indeed, physicists did: it's called supersymmetry, and searching for evidence of it is one of the reasons they're spending billions of taxpayer funds to build the Large Hadron Collider), but when you come across a dial set with the almost ridiculous precision of 120 decimal places and it's a requirement for our own existence, thoughts of a benevolent Creator tend to creep into the mind of even the most doctrinaire scientific secularist.
This is how the appearance of “intelligent design” (as the author defines it) threatens to get into the act, and the book is an exposition of the argument string theorists and cosmologists have developed to contend that such apparent design is entirely an illusion.

The very title of the book, then, invites us to contrast two theories of the origin of the universe: “intelligent design” and the “string landscape”. So, let's accept that challenge and plunge right in, shall we? First of all, permit me to observe that despite frequent claims to the contrary, including some in this book, intelligent design need not presuppose a supernatural being operating outside the laws of science and/or inaccessible to discovery through scientific investigation. The origin of life on Earth due to deliberate seeding with engineered organisms by intelligent extraterrestrials is a theory of intelligent design which has no supernatural component, evidence of which may be discovered by science in the future, and which was sufficiently plausible to have persuaded Francis Crick, co-discoverer of the structure of DNA, that it was the most likely explanation. If you observe a watch, you're entitled to infer the existence of a watchmaker, but there's no reason to believe he's a magician, just a craftsman.

If we're to compare these theories, let us begin by stating them both succinctly:

Theory 1: Intelligent Design.   An intelligent being created the universe and chose the initial conditions and physical laws so as to permit the existence of beings like ourselves.

Theory 2: String Landscape.   The laws of physics and initial conditions of the universe are chosen at random from among 10^500 possibilities, only a vanishingly small fraction of which (probably no more than one in 10^120) can support life. The universe we observe, which is infinite in extent and may contain regions where the laws of physics differ, is one of an infinite number of causally disconnected “pocket universes” which spontaneously form from quantum fluctuations in the vacuum of parent universes, a process which has been occurring for an infinite time in the past and will continue in the future, time without end. Each of these pocket universes which, together, make up the “megaverse”, has its own randomly selected laws of physics, and hence the overwhelming majority are sterile. We find ourselves in one of the tiny fraction of hospitable universes because if we weren't in such an exceptionally rare universe, we wouldn't exist to make the observation. Since there are an infinite number of universes, however, every possibility not only occurs, but occurs an infinite number of times, so not only are there an infinite number of inhabited universes, there are an infinite number identical to ours, including an infinity of identical copies of yourself wondering if this paragraph will ever end. Not only does the megaverse spawn an infinity of universes, each universe itself splits into two copies every time a quantum measurement occurs. Our own universe will eventually spawn a bubble which will destroy all life within it, probably not for a long, long time, but you never know. Evidence for all of the other universes is hidden behind a cosmic horizon and may remain forever inaccessible to observation.

Paging Friar Ockham! If unnecessarily multiplied hypotheses are stubble indicating a fuzzy theory, it's pretty clear which of these is in need of the razor! Further, while one can imagine scientific investigation discovering evidence for Theory 1, almost all of the mechanisms which underlie Theory 2 remain, barring some conceptual breakthrough equivalent to looking inside a black hole, forever hidden from science by an impenetrable horizon through which no causal influence can propagate. So severe is this problem that chapter 9 of the book is devoted to the question of how far theoretical physics can go in the total absence of experimental evidence. What's more, unlike virtually every theory in the history of science, which attempted to describe the world we observe as accurately and uniquely as possible, Theory 2 predicts every conceivable universe and says, hey, since we do, after all, inhabit a conceivable universe, it's consistent with the theory. To one accustomed to the crystalline inevitability of Newtonian gravitation, general relativity, quantum electrodynamics, or the laws of thermodynamics, this seems by comparison like a California blonde saying “whatever”—the cosmology of despair.

Scientists will, of course, immediately rush to attack Theory 1, arguing that a being such as the one it posits would necessarily be “indistinguishable from magic”, capable of explaining anything, and hence unfalsifiable and beyond the purview of science. (Although note that on pp. 192–197 Susskind argues that Popperian falsifiability should not be a rigid requirement for a theory to be deemed scientific. See Lee Smolin's Scientific Alternatives to the Anthropic Principle for the argument against the string landscape theory on the grounds of falsifiability, and the 2004 Smolin/Susskind debate for a more detailed discussion of this question.) But let us look more deeply at the attributes of what might be called the First Cause of Theory 2. It not only permeates all of our universe, potentially spawning a bubble which may destroy it and replace it with something different, it pervades the abstract landscape of all possible universes, populating them with an infinity of independent and diverse universes over an eternity of time: omnipresent in spacetime. When a universe is created, all the parameters which govern its ultimate evolution (under the probabilistic laws of quantum mechanics, to be sure) are fixed at the moment of creation: omnipotent to create any possibility, perhaps even varying the mathematical structures underlying the laws of physics. As a budded off universe evolves, whether a sterile formless void or teeming with intelligent life, no information is ever lost in its quantum evolution, not even down a black hole or across a cosmic horizon, and every quantum event splits the universe and preserves all possible outcomes. The ensemble of universes is thus omniscient of all its contents. Throw in intelligent and benevolent, and you've got the typical deity, and since you can't observe the parallel universes where the action takes place, you pretty much have to take it on faith. Where have we heard that before?

Lest I be accused of taking a cheap shot at string theory, or advocating a deistic view of the universe, consider the following creation story which, after John A. Wheeler, I shall call “Creation without the Creator”. Many extrapolations of continued exponential growth in computing power envision a technological singularity in which super-intelligent computers designing their own successors rapidly approach the ultimate physical limits on computation. Such computers would be sufficiently powerful to run highly faithful simulations of complex worlds, including intelligent beings living within them which need not be aware they were inhabiting a simulation, but thought they were living at the “top level”, who eventually passed through their own technological singularity, created their own simulated universes, populated them with intelligent beings who, in turn,…world without end. Of course, each level of simulation imposes a speed penalty (though, perhaps not much in the case of quantum computation), but it's not apparent to the inhabitants of the simulation since their own perceived time scale is in units of the “clock rate” of the simulation.

If an intelligent civilisation develops to the point where it can build these simulated universes, will it do so? Of course it will—just look at the fascination crude video game simulations have for people today. Now imagine a simulation as rich as reality and unpredictable as tomorrow, actually creating an inhabited universe—who could resist? As unlimited computing power becomes commonplace, kids will create innovative universes and evolve them for billions of simulated years for science fair projects. Call the mean number of simulated universes created by intelligent civilisations in a given universe (whether top-level or itself simulated) the branching factor. If this is greater than one, and there is a single top-level non-simulated universe, then it will be outnumbered by simulated universes which grow exponentially in numbers with the depth of the simulation. Hence, by the Copernican principle, or principle of mediocrity, we should expect to find ourselves in a simulated universe, since they vastly outnumber the single top-level one, which would be an exceptional place in the ensemble of real and simulated universes. Now here's the point: if, as we should expect from this argument, we do live in a simulated universe, then our universe is the product of intelligent design and Theory 1 is an absolutely correct description of its origin.
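The arithmetic behind this mediocrity argument is simple enough to sketch (a toy illustration of my own, with made-up numbers): with one top-level universe and a branching factor b, level k of the nesting holds b^k simulated universes, so the single real universe is swamped exponentially fast.

```python
def simulated_fraction(b, depth):
    """Fraction of all universes that are simulated, given a single
    top-level universe, branching factor b, and nesting to `depth`."""
    # Level k contains b**k universes; level 0 is the lone real one.
    total = sum(b**k for k in range(depth + 1))
    return (total - 1) / total

# Even a modest branching factor makes the top level vanishingly rare:
for depth in (1, 3, 10):
    print(depth, simulated_fraction(2, depth))
```

With b = 2 and ten levels of nesting, over 99.9% of universes are simulations, which is all the Copernican argument needs.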

Suppose this is the case: we're inside a simulation designed by a freckle-faced superkid for extra credit in her fifth grade science class. Is this something we could discover, or must it, like so many aspects of Theory 2, be forever hidden from our scientific investigation? Surprisingly, this variety of Theory 1 is quite amenable to experiment: neither revelation nor faith is required. What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.
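For concreteness, the “granularity” scales invoked above follow directly from the fundamental constants; here is a quick sketch (standard definitions of the Planck units, nothing specific to the book):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

# Planck length and time: the would-be position and time resolution
# of the simulation in this analogy
l_planck = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m
t_planck = l_planck / c                 # about 5.4e-44 s

print(f"Planck length: {l_planck:.2e} m")
print(f"Planck time:   {t_planck:.2e} s")
```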

Might we expect surprises as we subject our simulated universe to ever more precise scrutiny, perhaps even astonishing the being which programmed it with our cunning and deviousness (as the author of any software package has experienced at the hands of real-world users)? Who knows, we might run into round-off errors which “hit us like a ton of bricks”! Suppose there were some quantity, say, that was supposed to be exactly zero but, if you went and actually measured the geometry way out there near the edge and crunched the numbers, you found out it differed from zero in the 120th decimal place. Why, you might be as shocked as the naïve Perl programmer who ran the program “printf("%.18f", 0.2)” and was aghast when it printed “0.200000000000000011” until somebody explained that with the 53-bit significand of IEEE double precision floating point, you only get about 16 decimal digits (log10 2^53 ≈ 16) of precision. So, what does a round-off in the 120th digit imply? Not Theory 2, with its infinite number of infinitely reproducing infinite universes, but simply that our Theory 1 intelligent designer used 400-bit numbers (log2 10^120 ≈ 399) in the simulation and didn't count on our noticing—remember you heard it here first, and if pointing this out causes the simulation to be turned off, sorry about that, folks! 
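The Perl experiment is easy to reproduce in any language whose floats are IEEE doubles; here it is in Python (a sketch of my own, paralleling the example above):

```python
import math

# 0.2 has no exact binary representation; printing 18 decimal digits
# exposes the round-off, just as in the Perl example.
print(f"{0.2:.18f}")        # 0.200000000000000011

# A 53-bit significand gives roughly 16 reliable decimal digits:
print(math.log10(2**53))    # ~15.95

# And representing a discrepancy in the 120th decimal place would
# take roughly 400 bits:
print(math.log2(10**120))   # ~398.6
```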
Surprises from future experiments which would be suggestive (though not probative) that we're in a simulated universe would include failure to find any experimental signature of quantum gravity (general relativity could be classical in the simulation, since potential conflicts with quantum mechanics would be hidden behind event horizons in the present-day universe, and extrapolating backward to the big bang would be meaningless if the simulation were started at a later stage, say at the time of big bang nucleosynthesis), and discovery of limits on the ability to superpose wave functions for quantum computation which could result from limited precision in the simulation as opposed to the continuous complex values assumed by quantum mechanics. An interesting theoretical program would be to investigate feasible experiments which, by magnifying physical effects similar to proposed searches for quantum gravity signals, would detect round-off errors of magnitude comparable to the cosmological constant.

But seriously, this is an excellent book and anybody who's interested in the strange direction in which the string theorists are veering these days ought to read it; it's well-written, authoritative, reasonably fair to opposing viewpoints (although I'm surprised the author didn't address the background spacetime criticism of string theory raised so eloquently by Lee Smolin), and provides a roadmap of how string theory may develop in the coming years. The only nagging question you're left with after finishing the book is whether, after thirty years of theorising which comes to the conclusion that everything is predicted and nothing can be observed, it's about science any more.

March 2006