Although the Black Hole War should have come to an end in early 1998, Stephen Hawking was like one of those unfortunate soldiers who wander in the jungle for years, not knowing that the hostilities have ended. By this time, he had become a tragic figure. Fifty-six years old, no longer at the height of his intellectual powers, and almost unable to communicate, Stephen didn't get the point. I am certain that it was not because of his intellectual limitations. From the interactions I had with him well after 1998, it was obvious that his mind was still extremely sharp. But his physical abilities had so badly deteriorated that he was almost completely locked within his own head. With no way to write an equation and tremendous obstacles to collaborating with others, he must have found it impossible to do the things physicists ordinarily do to understand new, unfamiliar work. So Stephen went on fighting for some time. (p. 419)

Or, Prof. Susskind, perhaps it's that the intellect of Prof. Hawking makes him sceptical of arguments based on a “theory” which is, as you state yourself on p. 384, “like a very complicated Tinkertoy set, with lots of different parts that can fit together in consistent patterns”; for which not a single fundamental equation has yet been written down; in which no model that remotely describes the world in which we live has been found; whose mathematical consistency and finiteness in other than toy models remains conjectural; whose results regarding black holes are based upon another conjecture (AdS/CFT) which, even if proven, operates in a spacetime utterly unlike the one we inhabit; which seems to predict a vast “landscape” of possible solutions (vacua) which make it not a theory of everything but rather a “theory of anything”; which is formulated in a flat Minkowski spacetime, neglecting the background independence of general relativity; and which, after three decades of intensive research by some of the most brilliant thinkers in theoretical physics, has yet to make a single experimentally testable prediction, while demonstrating its ability to wiggle out of almost any result (for example, the failure of the Large Hadron Collider to find supersymmetric particles).

At the risk of attracting the scorn the author vents on pp. 186–187 toward non-specialist correspondents, let me say that the author's argument for “black hole complementarity” makes absolutely no sense whatsoever to this layman. In essence, he argues that matter infalling across the event horizon of a black hole, if observed from outside, is disrupted by the “extreme temperature” there and excited into its fundamental strings, which spread out all over the horizon, preserving the accreted information in the stringy structure of the horizon (whence it can be released as the black hole evaporates). But for a co-moving observer infalling with the matter, nothing whatsoever happens at the horizon (apart from tidal effects whose magnitude depends upon the mass of the black hole; see the calculation below). Susskind argues that since you have to choose your frame of reference and cannot simultaneously observe the event from both outside the horizon and falling across it, there is no conflict between these two descriptions, and hence they are complementary in the sense Bohr described quantum observables. But, unless I'm missing something fundamental, the whole thing about the “extreme temperature” at the black hole event horizon is simply nonsense.
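To put numbers on those mass-dependent tidal effects, here is a back-of-the-envelope sketch (mine, not the book's; the Newtonian tide 2GML/r^3 evaluated at the Schwarzschild radius is only an order-of-magnitude estimate):

    # Tidal acceleration across a ~2 m infalling body at the horizon
    # r_s = 2*G*M/c^2 of a Schwarzschild black hole (order of magnitude only).
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg

    def tidal_at_horizon(mass_kg, body_length_m=2.0):
        r_s = 2 * G * mass_kg / c**2
        return 2 * G * mass_kg * body_length_m / r_s**3   # m/s^2, head to toe

    for label, mass in [("10 solar masses", 10 * M_SUN),
                        ("4 million solar masses (like Sgr A*)", 4e6 * M_SUN)]:
        print(f"{label}: ~{tidal_at_horizon(mass):.1e} m/s^2")

A ten-solar-mass hole stretches an infalling astronaut with around 10^8 m/s^2 at the horizon, while at the horizon of a four-million-solar-mass hole the tide is about a thousandth of a metre per second squared: the co-moving observer notices nothing at all.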
Yes, if you lower a thermometer from a space station at some distance from a black hole down toward the event horizon, it will register a diverging temperature as it approaches the horizon. But this is because it is moving at nearly the speed of light with respect to frames falling freely through the horizon and sees the cosmic background radiation blueshifted by a factor which reaches infinity at the horizon. Further, being suspended above the black hole, the thermometer is in a state of constant acceleration (it might as well have a rocket keeping it at a specified distance from the horizon as a tether), and is thus in a Rindler spacetime and will measure black body radiation even in a vacuum due to the Unruh effect. But note that, due to the equivalence principle, all of this will happen in precisely the same way even with no black hole. The same thermometer, subjected to the identical acceleration and velocity with respect to the cosmic background radiation frame, will read precisely the same temperature in empty space, with no black hole at all (and will even observe a horizon due to its hyperbolic motion).

“Lowering the thermometer” is a completely different experiment from observing an object infalling to the horizon. The fact that the suspended thermometer measures a high temperature in no way implies that a free-falling object approaching the horizon will experience such a temperature or be disrupted by it. A co-moving observer will observe nothing as the object crosses the horizon, while a distant observer will see the object appear to freeze and wink out as it reaches the horizon and the time dilation and redshift approach infinity. Nowhere is there this legendary string blowtorch at the horizon, spreading out the information in the infalling object around a horizon which, observed from either perspective, is just empty space.

The author concludes, in a final chapter titled “Humility”, “The Black Hole War is over…”. Well, maybe, but for this reader, the present book did not make the sale. The arguments made here are based upon aspects of string theory which are, at the moment, purely conjectural, and upon models which operate in universes completely different from the one we inhabit. What happens to information that falls into a black hole? Well, Stephen Hawking has now conceded that it is preserved and released in black hole evaporation (but this assumes an anti-de Sitter spacetime, which we do not inhabit), but this book just leaves me shaking my head at the arm-waving arguments and speculative theorising presented as definitive results.
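The formulas behind the thermometer argument are standard and worth a short calculation. The Unruh temperature T = ħa/2πck_B depends only on the thermometer's proper acceleration, and the locally measured temperature of a thermometer held static near a Schwarzschild horizon is the (absurdly tiny) Hawking temperature divided by the redshift factor sqrt(1 − r_s/r), which vanishes at the horizon; near the horizon the two expressions coincide. A sketch (textbook formulas; the code and the numerical choices are mine):

    from math import pi, sqrt

    G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
    M_SUN = 1.989e30

    def unruh_temperature(a):
        """Temperature seen by a uniformly accelerated (Rindler) observer."""
        return hbar * a / (2 * pi * c * k_B)

    def hawking_temperature(M):
        """Hawking temperature of a Schwarzschild hole, measured far away."""
        return hbar * c**3 / (8 * pi * G * M * k_B)

    def static_acceleration(M, r):
        """Proper acceleration needed to hover at Schwarzschild radius r."""
        r_s = 2 * G * M / c**2
        return G * M / (r**2 * sqrt(1 - r_s / r))

    def static_thermometer(M, r):
        """Locally measured (Tolman blueshifted) temperature when hovering."""
        return hawking_temperature(M) / sqrt(1 - 2 * G * M / (c**2 * r))

    M = 10 * M_SUN
    r_s = 2 * G * M / c**2
    print(f"Hawking temperature at infinity: {hawking_temperature(M):.2e} K")
    for eps in (1e-2, 1e-8, 1e-14):   # fractional height above the horizon
        r = r_s * (1 + eps)
        print(f"hovering at r_s*(1+{eps:g}): "
              f"local T {static_thermometer(M, r):.2e} K, "
              f"Unruh T from acceleration alone "
              f"{unruh_temperature(static_acceleration(M, r)):.2e} K")

The divergence lives entirely in the act of hovering: as the thermometer approaches the horizon, its reading converges to the Unruh temperature of its own proper acceleration, exactly as for an accelerated observer in flat spacetime, which is the equivalence-principle point made above.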
Consider, then, two candidate explanations for why we find ourselves in a universe hospitable to our own existence.

Theory 1: Intelligent Design. An intelligent being created the universe and chose the initial conditions and physical laws so as to permit the existence of beings like ourselves.

Theory 2: String Landscape. The laws of physics and initial conditions of the universe are chosen at random from among 10^500 possibilities, only a vanishingly small fraction of which (probably no more than one in 10^120) can support life. The universe we observe, which is infinite in extent and may contain regions where the laws of physics differ, is one of an infinite number of causally disconnected “pocket universes” which spontaneously form from quantum fluctuations in the vacuum of parent universes, a process which has been occurring for an infinite time in the past and will continue in the future, time without end. Each of these pocket universes which, together, make up the “megaverse”, has its own randomly selected laws of physics, and hence the overwhelming majority are sterile. We find ourselves in one of the tiny fraction of hospitable universes because if we weren't in such an exceptionally rare universe, we wouldn't exist to make the observation. Since there are an infinite number of universes, however, every possibility not only occurs, but occurs an infinite number of times, so not only are there an infinite number of inhabited universes, there are an infinite number identical to ours, including an infinity of identical copies of yourself wondering if this paragraph will ever end. Not only does the megaverse spawn an infinity of universes, each universe itself splits into two copies every time a quantum measurement occurs. Our own universe will eventually spawn a bubble which will destroy all life within it, probably not for a long, long time, but you never know. Evidence for all of the other universes is hidden behind a cosmic horizon and may remain forever inaccessible to observation.

Paging Friar Ockham! If unnecessarily multiplied hypotheses are stubble indicating a fuzzy theory, it's pretty clear which of these is in need of the razor! Further, while one can imagine scientific investigation discovering evidence for Theory 1, almost all of the mechanisms which underlie Theory 2 remain, barring some conceptual breakthrough equivalent to looking inside a black hole, forever hidden from science by an impenetrable horizon through which no causal influence can propagate. So severe is this problem that chapter 9 of the book is devoted to the question of how far theoretical physics can go in the total absence of experimental evidence. What's more, unlike virtually every theory in the history of science, which attempted to describe the world we observe as accurately and uniquely as possible, Theory 2 predicts every conceivable universe and says, hey, since we do, after all, inhabit a conceivable universe, it's consistent with the theory. To one accustomed to the crystalline inevitability of Newtonian gravitation, general relativity, quantum electrodynamics, or the laws of thermodynamics, this seems by comparison like a California blonde saying “whatever”—the cosmology of despair.

Scientists will, of course, immediately rush to attack Theory 1, arguing that a being such as the one it posits would necessarily be “indistinguishable from magic”, capable of explaining anything, and hence unfalsifiable and beyond the purview of science. (Although note that on pp. 192–197 Susskind argues that Popperian falsifiability should not be a rigid requirement for a theory to be deemed scientific. See Lee Smolin's Scientific Alternatives to the Anthropic Principle for the argument against the string landscape theory on the grounds of falsifiability, and the 2004 Smolin/Susskind debate for a more detailed discussion of this question.) But let us look more deeply at the attributes of what might be called the First Cause of Theory 2. It not only permeates all of our universe, potentially spawning a bubble which may destroy it and replace it with something different; it pervades the abstract landscape of all possible universes, populating them with an infinity of independent and diverse universes over an eternity of time: omnipresent in spacetime. When a universe is created, all the parameters which govern its ultimate evolution (under the probabilistic laws of quantum mechanics, to be sure) are fixed at the moment of creation: omnipotent to create any possibility, perhaps even varying the mathematical structures underlying the laws of physics. As a budded-off universe evolves, whether a sterile formless void or teeming with intelligent life, no information is ever lost in its quantum evolution, not even down a black hole or across a cosmic horizon, and every quantum event splits the universe and preserves all possible outcomes. The ensemble of universes is thus omniscient of all its contents. Throw in intelligent and benevolent, and you've got the typical deity, and since you can't observe the parallel universes where the action takes place, you pretty much have to take it on faith. Where have we heard that before?

Lest I be accused of taking a cheap shot at string theory, or advocating a deistic view of the universe, consider the following creation story which, after John A. Wheeler, I shall call “Creation without the Creator”.
Many extrapolations of continued exponential growth in computing power envision a technological singularity in which super-intelligent computers designing their own successors rapidly approach the ultimate physical limits on computation. Such computers would be sufficiently powerful to run highly faithful simulations of complex worlds, including intelligent beings living within them who need not be aware they were inhabiting a simulation but thought they were living at the “top level”, and who eventually passed through their own technological singularity, created their own simulated universes, populated them with intelligent beings who, in turn,…world without end. Of course, each level of simulation imposes a speed penalty (though perhaps not much in the case of quantum computation), but it's not apparent to the inhabitants of the simulation since their own perceived time scale is in units of the “clock rate” of the simulation.

If an intelligent civilisation develops to the point where it can build these simulated universes, will it do so? Of course it will—just look at the fascination crude video game simulations have for people today. Now imagine a simulation as rich as reality and as unpredictable as tomorrow, actually creating an inhabited universe—who could resist? As unlimited computing power becomes commonplace, kids will create innovative universes and evolve them for billions of simulated years for science fair projects.

Call the mean number of simulated universes created by intelligent civilisations in a given universe (whether top-level or itself simulated) the branching factor. If this is greater than one, and there is a single top-level non-simulated universe, then it will be outnumbered by simulated universes, which grow exponentially in number with the depth of the simulation. Hence, by the Copernican principle, or principle of mediocrity, we should expect to find ourselves in a simulated universe, since they vastly outnumber the single top-level one, which would be an exceptional place in the ensemble of real and simulated universes. (A toy calculation after the checklist below makes the arithmetic explicit.)

Now here's the point: if, as we should expect from this argument, we do live in a simulated universe, then our universe is the product of intelligent design and Theory 1 is an absolutely correct description of its origin.

Suppose this is the case: we're inside a simulation designed by a freckle-faced superkid for extra credit in her fifth grade science class. Is this something we could discover, or must it, like so many aspects of Theory 2, be forever hidden from our scientific investigation? Surprisingly, this variety of Theory 1 is quite amenable to experiment: neither revelation nor faith is required.

What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius.
There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.
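Since the branching-factor argument is nothing but geometric growth, it fits in a few lines. Here is the toy calculation promised earlier (the branching factor and depth are, of course, invented purely for illustration):

    def simulated_fraction(b, depth):
        """Fraction of all universes which are simulated, given a single
        top-level universe and a mean branching factor b down to a depth."""
        simulated = sum(b**k for k in range(1, depth + 1))  # b + b^2 + ... + b^depth
        return simulated / (1 + simulated)

    for b in (1.1, 2, 10):
        print(f"branching factor {b}: {simulated_fraction(b, 10):.4%} simulated")

For any branching factor greater than one, the single top-level universe rapidly becomes a measure-zero exception, which is all the principle of mediocrity needs.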
Might we expect surprises as we subject our simulated universe to ever more precise scrutiny, perhaps even astonishing the being which programmed it with our cunning and deviousness (as the author of any software package has experienced at the hands of real-world users)? Who knows, we might run into round-off errors which “hit us like a ton of bricks”! Suppose there were some quantity, say, the cosmological constant, which was supposed to be exactly zero but, when you went and actually measured the geometry way out there near the edge and crunched the numbers, you found it differed from zero in the 120th decimal place. Why, you might be as shocked as the naïve Perl programmer who ran the statement “printf("%.18f", 0.2)” and was aghast when it printed “0.200000000000000011” until somebody explained that with the 53 bits of significand in IEEE double precision floating point, you only get about 16 decimal digits (log10(2^53) ≈ 16) of precision. So, what does a round-off in the 120th digit imply? Not Theory 2, with its infinite number of infinitely reproducing infinite universes, but simply that our Theory 1 intelligent designer used 400-bit numbers (log2(10^120) ≈ 399) in the simulation and didn't count on our noticing—remember you heard it here first, and if pointing this out causes the simulation to be turned off, sorry about that, folks!

Surprises from future experiments which would be suggestive (though not probative) that we're in a simulated universe would include failure to find any experimental signature of quantum gravity (general relativity could be classical in the simulation, since potential conflicts with quantum mechanics would be hidden behind event horizons in the present-day universe, and extrapolating backward to the big bang would be meaningless if the simulation were started at a later stage, say at the time of big bang nucleosynthesis), and discovery of limits on the ability to superpose wave functions for quantum computation, which could result from limited precision in the simulation as opposed to the continuous complex values assumed by quantum mechanics. An interesting theoretical program would be to investigate feasible experiments which, by magnifying physical effects similar to proposed searches for quantum gravity signals, would detect round-off errors of magnitude comparable to the cosmological constant.
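The Perl example carries over directly to Python, whose floats are the same IEEE 754 doubles, and the same two lines of arithmetic underpin the 400-bit remark:

    from math import log10, log2

    # 0.2 has no exact binary representation; a double's 53-bit
    # significand yields only about 16 decimal digits.
    print(f"{0.2:.18f}")                          # 0.200000000000000011
    print(f"{log10(2**53):.2f} decimal digits")   # 15.95

    # Bits needed before round-off first appears in the 120th decimal place:
    print(f"{log2(10**120):.1f} bits")            # 398.6, i.e. about 400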
But seriously, this is an excellent book, and anybody who's interested in the strange direction in which the string theorists are veering these days ought to read it; it's well written, authoritative, reasonably fair to opposing viewpoints (although I'm surprised the author didn't address the background-dependence criticism of string theory raised so eloquently by Lee Smolin), and provides a roadmap of how string theory may develop in the coming years. The only nagging question you're left with after finishing the book is whether, after thirty years of theorising which comes to the conclusion that everything is predicted and nothing can be observed, it's about science any more.