- Churchill, Winston S.
The World Crisis.
London: Penguin, [1923–1931, 2005] 2007.
ISBN 978-0-14-144205-1.
-
Churchill's history of the Great War (what we now
call World War I) was published in five volumes
between 1923 and 1931.
The present volume is an
abridgement of the first four volumes, and appeared
simultaneously with the fifth volume of the complete work.
This abridged edition was prepared by Churchill himself; it
is not a cut-and-paste job by an editor. Volume Four
and this abridgement end with the collapse of Germany
and the armistice—the aftermath of the war and the
peace negotiations covered in Volume Five of the full history
are not included here.
When this work began to appear in
1923, the smart set in London quipped, “Winston's
written a book about himself and called it The
World Crisis”. There's a lot of truth in that:
this is something between a history and a memoir of
a politician in wartime. The description of the disastrous
attempts to break the stalemate of trench warfare in 1915
barely occupies a chapter, while the Dardanelles Campaign,
of which Churchill was seen as the most vehement advocate,
and for which he was blamed after its tragic failure,
makes up almost a quarter of the 850-page book.
If you're looking for a dispassionate history of World War I, this is
not the book to read: it was written too close to the events of the
war, before the dire consequences of the peace came to pass, and by
a figure motivated as much to defend his own actions as to provide a
historical narrative. That said, it does provide an insight into
how Churchill's experiences in the war forged the character which
would cause Britain to turn to him when war came again.
It also goes a long way to explaining precisely why Churchill's
warnings were ignored in the 1930s. This book is, in large part, a
recital of disaster after disaster in which Churchill played a part,
coupled with an explanation of why, in each successive case, it wasn't
his fault. Whether or not you accept his excuses and justifications
for his actions, it's pretty easy to understand how politicians and
the public in the interwar period could look upon Churchill as
somebody who, when given authority, produced calamity. It was not just
that others were blind to the threat, but rather that Churchill's
record made him a seriously flawed messenger on an occasion when his
message was absolutely correct.
At this epoch, Churchill was already an excellent writer who
could deliver soaring prose on occasion, but he had not
yet become the past master of the English language on
display in The Second World War
(which won the Nobel Prize for Literature when it really
meant something). There are numerous tables, charts, and maps
which illustrate the circumstances of the war.
Americans who hold to the common view that “The Yanks
came to France and won the war for the Allies” may be
offended by Churchill's speaking of them only in passing. He
considers their effect on the actual campaigns of 1918 to be
mostly psychological: reinforcing French and British morale
and confronting Germany with an adversary with unlimited
resources.
Perhaps the greatest lesson to be drawn from this work
is that of the initial part, which covers the darkening
situation between 1911 and the outbreak of war in 1914.
What is stunning, as sketched by a person involved in the
events of that period, is just how trivial the proximate
causes of the war were compared to the apocalyptic bloodbath
which ensued. It is as if the crowned heads, diplomats, and
politicians had no idea of the stakes involved, and indeed
they did not—all expected the war to be short and
decisive, none anticipating the consequences of the superiority
conferred on the defence by the machine gun, entrenchments,
and barbed wire. After the outbreak of war and its freezing
into a trench war stalemate in the winter of 1914, for
three years the Allies believed that, through “offensives”
which squandered millions of lives for transitory and insignificant
gains of territory, they were conducting a war of attrition against
Germany. In fact, due to the supremacy of the defender, Allied
losses always exceeded those of the Germans, often by a factor
of two to one (and even more for officers). Further, German
losses were never greater than the number of new conscripts in
each year of the war up to 1918, so in fact this “war of
attrition” weakened the Allies every year it
continued. You'd expect intelligence services to figure
out such a fundamental point, but it appears the
“by the book” military mentality dismissed such
evidence and continued to hurl a generation of their
countrymen into the storm of steel.
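To see how lopsided this arithmetic was, consider a toy
calculation (a minimal sketch; the figures below are hypothetical
round numbers chosen only to match the ratios described above,
not historical casualty statistics):

```python
# Toy model of a "war of attrition" fought at a 2:1 loss ratio
# against a defender who replaces his losses with new conscripts.
# All figures are illustrative round numbers, not historical data.
german_annual_losses = 400_000
german_annual_conscripts = 500_000      # replacements exceed losses
allied_annual_losses = 2 * german_annual_losses

allied_strength, german_strength = 5_000_000, 4_000_000
for year in range(1915, 1918):
    allied_strength -= allied_annual_losses
    german_strength += german_annual_conscripts - german_annual_losses
    print(year, allied_strength, german_strength)
# Each year of "attrition" leaves the attacker markedly weaker
# and the defender no weaker at all.
```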
This is a period piece: read it not as a history of the war
but rather to experience the events of the time as
Churchill saw them, and to appreciate how they made him
the wartime leader he was to be when, once again, the lights
went out all over Europe.
A U.S. edition is available.
- Carroll, Sean.
From Eternity to Here.
New York: Dutton, 2010.
ISBN 978-0-525-95133-9.
-
The nature of time has perplexed philosophers
and scientists from the ancient Greeks (and
probably before) to the present day. Despite two and a half
millennia of reflexion upon the problem and spectacular
success in understanding many other aspects of the universe
we inhabit, not only has little progress been made on
the question of time, but to a large extent we are still
puzzling over the same problems which vexed thinkers in the
time of Socrates: Why does there seem to be an inexorable
arrow of time which can be perceived in physical processes
(you can scramble an egg, but just try to unscramble one)?
Why do we remember the past, but not the future? Does time
flow by us, living in an eternal present, or do we move
through time? Do we have free will, or is that an illusion and
is the future actually predestined? Can
we travel to the past or to the future? If we are typical
observers in an eternal or very long-persisting universe, why
do we find ourselves so near its beginning (the big bang)?
Indeed, what we have learnt about time makes these puzzles
even more enigmatic. For it appears, based both on theory
and all experimental evidence to date, that the microscopic
laws of physics are completely reversible in time: any physical
process can (and does) go in both the forward and reverse
time directions equally well. (Actually, it's a little more
complicated than that: just reversing the direction of time
does not yield identical results, but simultaneously reversing
the direction of time [T], interchanging left and right [parity: P],
and swapping particles for antiparticles [charge: C] yields
identical results under the so-called “CPT” symmetry
which, as far as is known, is absolute. The tiny violation of
time reversal symmetry by itself in weak interactions seems,
to most physicists, inadequate to explain the perceived
unidirectional arrow of time, although
some disagree.)
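In compact notation (the textbook statement of the CPT theorem,
not something spelled out in the book): writing
$\Theta = CPT$ for the combined operation, any local,
Lorentz-invariant quantum field theory satisfies

$$ \Theta \, H \, \Theta^{-1} = H , $$

even though $T$ alone fails to be an exact symmetry in the weak
interactions.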
In this book, the author argues that the way in which we
perceive time here and now (whatever “now” means)
is a direct consequence of both the initial conditions which
obtained at the big bang (the beginning of time) and the
future state into which the universe is evolving (eternity).
Whether or not you agree with the author's conclusions, this
book is a tour de force
popular exposition of thermodynamics and statistical mechanics,
which provides the best intuitive grasp of these concepts of
any non-technical book I have yet encountered. The science
and ideas which influenced thermodynamics and its
practical and philosophical consequences
are presented in a historical context, showing how in many
cases phenomenological models were successful in grasping the
essentials of a physical process well before the actual underlying
mechanisms were understood (which is heartening to those trying
to model the very early universe absent a
theory of quantum gravity).
Carroll argues that the
Second
Law of Thermodynamics entirely
defines the arrow of time. Closed systems
(and for the purpose of the argument here we can consider the
observable universe as such a system, although it is not precisely
closed: particles enter and leave our horizon as the universe
expands and that expansion accelerates) always evolve from a state
of lower probability to one of higher probability: the “entropy”
of a system is (sloppily stated) a measure of the probability of finding
the system in a given macroscopically observable state, and over
time the entropy always stays the same or increases; except for
minor fluctuations, the entropy increases until the system reaches
equilibrium, after which it simply fluctuates around the equilibrium
state with essentially no change in its coarse-grained observable
state. What we perceive as the arrow of time is simply systems
evolving from less probable to more probable states, and since
they (in isolation) never go the other way, we naturally observe
the arrow of time to be universal.
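The “sloppily stated” measure above has a precise form,
Boltzmann's relation (standard statistical mechanics, not
peculiar to this book):

$$ S = k_B \ln W , $$

where $W$ is the number of microscopic configurations compatible
with the macroscopically observed state and $k_B$ is Boltzmann's
constant. More probable macrostates are simply those with more
microstates, and hence higher entropy.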
Look at it this way—there are vastly fewer configurations of the
atoms which make up an egg as produced by a chicken (shell
outside, yolk in the middle, and white in between) than there are
for the same egg scrambled in the pan with the fragments of
shell discarded in the rubbish bin. There are an almost inconceivable
number of ways in which the atoms of the yolk and white can mix
to make the scrambled egg, but far fewer ways they can end up
neatly separated inside the shell. Consequently, if we see a movie
of somebody unscrambling an egg, the white and yolk popping up from
the pan to be surrounded by fragments which fuse into an unbroken
shell, we know some trickster is running the film backward: it
illustrates a process where the entropy dramatically decreases, and
that never happens in the real world. (Or, more precisely, its
probability of happening anywhere in the universe in
the time since the big bang is “beyond vanishingly small”.)
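A crude way to make that counting concrete (a toy two-compartment
model with a made-up particle count; a real egg has on the order
of 10^25 molecules):

```python
# Toy model: 2*N sites, N "yolk" particles.  The intact egg has
# essentially one arrangement (all the yolk on one side); the
# scrambled egg may place the yolk particles in any N of the 2*N
# sites.  Entropy in units of Boltzmann's constant: S = ln(W).
import math

N = 100
W_separated = 1
W_scrambled = math.comb(2 * N, N)   # ~9e58 arrangements for N = 100

delta_S = math.log(W_scrambled) - math.log(W_separated)
print(f"W_scrambled = {W_scrambled:.3e}")
print(f"Delta S = {delta_S:.1f} k_B")
# Unscrambling requires a spontaneous fluctuation into one
# configuration out of ~1e59 -- and that's with only 200 particles.
```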
Now, once you understand these matters, as you will after reading the
pellucid elucidation here, it all seems pretty straightforward:
our universe is evolving, like all systems, from lower entropy
to higher entropy, and consequently it's only natural that we
perceive that evolution as the passage of time. We remember
the past because the process of storing those memories increases
the entropy of the universe; we cannot remember the future
because we cannot predict the precise state of the coarse-grained
future from that of the present, simply because there are far
more possible states in the future than at the present. Seems
reasonable, right?
Well,
up to a point, Lord Copper.
The real mystery, to which Roger
Penrose and others have been calling attention for some
years, is not that entropy is increasing in our universe, but
rather why it is presently so low compared to what
it might be expected to be in a universe in a randomly chosen
configuration, and further, why it was so absurdly low in the
aftermath of the big bang. Given the initial conditions after
the big bang, it is perfectly reasonable to expect the
universe to have evolved to something like its present state.
But this says nothing at all about why the big bang
should have produced such an incomprehensibly improbable set of
initial conditions.
If you think about entropy in the usual thermodynamic sense
of gas in a box, the evolution of the universe seems distinctly
odd. After the big bang, the region which represents today's observable
universe appears to have been a thermalised system of particles and
radiation very near equilibrium, and yet today we see nothing
of the sort. Instead, we see complex structure at scales from
molecules to superclusters of galaxies, with vast voids in between,
and stars profligately radiating energy into space with a temperature
less than three degrees above absolute zero. That sure looks
like entropy going down: it's as if you left a pot of tepid water
on the countertop overnight and, the next morning, found
a village of igloos surrounding a hot spring. I mean, it
could happen, but how probable is that?
It's gravity that makes the difference. Unlike all of the other
forces of nature, gravity
always attracts.
This means that when
gravity is significant (which it isn't in a steam engine or
pan of water), a gas at thermal equilibrium is actually in a state
of very low entropy. Any small compression or rarefaction in a
region will cause particles to be gravitationally attracted to volumes with
greater density, which will in turn reinforce the inhomogeneity,
which will amplify the gravitational attraction. The gas at thermal
equilibrium will, then, unless it is perfectly homogeneous (which
quantum and thermal fluctuations render impossible) collapse into
compact structures separated by voids, with the entropy increasing
all the time. Voilà galaxies, stars, and planets.
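The runaway described above has a standard back-of-the-envelope
form (the usual Jeans-instability estimate, not a calculation from
the book): for a small fractional overdensity $\delta$ in a
static, pressureless medium of mean density $\rho_0$, the
linearized equations of Newtonian gravity give

$$ \ddot{\delta} = 4 \pi G \rho_0 \, \delta
   \quad \Longrightarrow \quad
   \delta(t) \propto e^{t/\tau} , \qquad
   \tau = \frac{1}{\sqrt{4 \pi G \rho_0}} , $$

so any inhomogeneity, however small, grows exponentially on
roughly the free-fall time scale.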
As sources of energy are exhausted, gravity wins in the end, and
as structures compact ever more, entropy increasing apace, eventually
the universe is filled only with black holes (with vastly more
entropy than the matter and energy that fell into them) and cold
dark objects. But wait, there's more! The expansion of the universe
is accelerating, so any structures which are not gravitationally
bound will eventually disappear over the horizon and the remnants
(which may ultimately decay into a gas of unbound particles,
although the physics of this remains speculative) will occupy
a nearly empty expanding universe (absurd as this may sound, this
de Sitter space
is an exact solution to Einstein's equations of General
Relativity). This, the author argues, is the highest entropy
state of matter and energy in the presence of gravitation, and it
appears from current observational evidence that that's indeed
where we're headed.
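The claim that black holes hold “vastly more entropy” than what
fell into them can be quantified with the Bekenstein–Hawking
formula (a standard result, cited here only for scale):

$$ S_{\mathrm{BH}} = \frac{k_B \, c^3 A}{4 G \hbar} , $$

where $A$ is the area of the event horizon. A black hole of one
solar mass has an entropy of roughly $10^{77} \, k_B$, compared
with about $10^{58} \, k_B$ for the Sun itself.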
So, it's plausible the entire evolution of the universe from
the big bang into the distant future increases entropy all the
way, and hence there's no mystery why we perceive an arrow of
time pointing from the hot dense past to cold dark eternity.
But doggone it, we still don't have a clue why the
big bang produced such low entropy! The author surveys a number
of proposed explanations: some invoke fine-tuning with
no apparent physical explanation; some summon an enormous
(or infinite) “multiverse” of all possibilities and
argue that among such an ensemble we find ourselves in one of
the vanishingly small fraction of universes like our own because
observers like ourselves couldn't exist in all the others (the
anthropic argument); and some hold that the big bang was not
actually the beginning, and that some dynamical process which
preceded it (which might then be considered a “big bounce”)
forced the initial conditions into a low entropy state. There
are many excellent arguments against these proposals, which are
clearly presented. The author's own favourite, which he concedes
is as speculative as all the others, is that de Sitter space
is unstable against a quantum fluctuation which nucleates
a disconnected bubble universe in which entropy is initially low.
The process of nucleation increases entropy in the multiverse,
and hence there is no upper bound at all on entropy,
with the multiverse eternal in past and future, and entropy
increasing forever without bound in the future and decreasing
without bound in the past.
(If you're a regular visitor here, you know what's coming, don't you?)
Paging Friar
Ockham! We start out having discovered yet another piece of
evidence for what appears to be a fantastically improbable fine-tuning
of the initial conditions of our universe. The deeper we investigate
this, the more mysterious it appears, as we discover no reason in the
dynamical laws of physics for the initial conditions to have been
so unlikely among the ensemble of possible initial conditions.
We are then faced with the “trichotomy” I discussed
regarding the
origin of life on Earth: chance (it just happened
to be that way, or it was every possible way, and we, tautologically,
live in one of the universes in which we can exist), necessity (some
dynamical law which we haven't yet figured out caused the initial
conditions to be the way we observe them to have been), or
(and here's where all the scientists turn their backs upon me,
snuff the candles, and walk away) design. Yes, design. Suppose
(and yes, I know, I've used this analogy before and will certainly
do so again) you were a character in a video game who somehow became
sentient and began to investigate the universe you inhabited. As
you did, you'd discover there were distinct regularities which governed
the behaviour of objects and their interactions. As you probed
deeper, you might be able to access the machine code of the
underlying simulation (or at least get a glimpse into its operation
by running precision experiments). You would discover that
compared to a random collection of bits of the same length, it
was in a fantastically improbable configuration, and you could
find no plausible way that a random initial configuration could
evolve into what you observe today, especially since you'd found
evidence that your universe was not eternally old but rather came
into being at some time in the past (when, say, the game cartridge
was inserted).
What would you conclude? Well, if you exclude the design hypothesis,
you're stuck with supposing that there may be an infinity of
universes like yours in all random configurations, and you
observe the one you do because you couldn't exist in all but a very
few improbable configurations of that ensemble. Or you might argue that
some process you haven't yet figured out caused the underlying substrate
of your universe to assemble itself, complete with the copyright
statement and the Microsoft security holes, from a generic configuration
beyond your ability to observe in the past. And being clever, you'd
come up with persuasive arguments as to how these most implausible
circumstances might have happened, even at the expense of invoking
an infinity of other universes, unobservable in principle, and an
eternity of time, past and present, in which events could play out.
Or, you might conclude from the quantity of initial information you
observed (which is identical to low initial entropy) and the
improbability of that configuration having been arrived at by
random processes on any imaginable time scale, that it was
put in from the outside by an intelligent designer:
you might call Him or Her the
Programmer,
and some might even
come to worship this being, outside the observable universe,
which is nonetheless responsible for its creation and the wildly
improbable initial conditions which permit its inhabitants to exist
and puzzle out their origins.
Suppose you were running a simulation of a universe,
and to win the science fair you knew you'd have to show the
evolution of complexity all the way from the get-go to the point
where creatures within the simulation started to do precision
experiments, discover
curious
fine-tunings and discrepancies,
and begin to wonder…? Would you start your simulation at
a near-equilibrium condition? Only if you were a complete
idiot—nothing would ever happen—and whatever you might
say about
post-singularity
super-kids, they aren't idiots (well, let's not talk about the music
they listen to, if you can call that music). No, you'd start the
simulation with extremely low entropy, with just enough inhomogeneity
that gravity would get into the act and drive the emergence of
hierarchical structure. (Actually, if you set up quantum mechanics the
way we observe it, you wouldn't have to put in the inhomogeneity; it would
emerge from quantum fluctuations all by itself.) And of course you'd
fine tune the parameters of the standard model of particle physics so
your universe wouldn't immediately turn entirely into neutrons,
diprotons, or some other dead end. Then you'd sit back, turn up the
volume on the MultIversePod, and watch it run. Sure 'nuff, after a
while there'd be critters trying to figure it all out, scratching
their balding heads, and wondering how it came to be that way. You
would be most amused as they excluded your existence as a hypothesis,
publishing theories ever more baroque to exclude the possibility of
design. You might be tempted to….
Fortunately, this chronicle does not publish comments. If you're
sending them from the future, please use the
antitelephone.
(The author
discusses this “simulation argument”
in endnote 191. He leaves it to the reader to judge its plausibility,
as do I. I remain on the record as saying, “more likely
than not”.)
Whatever you may think about the Big Issues raised here,
if you've never experienced the beauty of thermodynamics
and statistical mechanics at a visceral level, this is the book
to read. I'll bet many engineers who have been completely
comfortable with computations in “thermogoddamics”
for decades finally discover they “get it” after
reading this equation-free treatment aimed at a popular audience.
- D'Souza, Dinesh.
Life After Death: The Evidence.
Washington: Regnery Publishing, 2009.
ISBN 978-1-59698-099-0.
-
Ever since the Enlightenment, and to an increasing extent today,
there has been a curious disconnect between the intellectual élite
and the population at large. The overwhelming majority of human
beings who have ever lived believed in their survival, in one form
or another, after death, while materialists, reductionists, and
atheists argue that this is nothing but wishful thinking and that
there is no physical mechanism by which consciousness could survive
the dissolution of the neural substrate in which it is instantiated,
and they point to the lack of any evidence for survival after death. And
yet a large majority of people alive today beg to differ. As atheist
H. G. Wells put it in a very different context, they sense that
“Worlds may freeze and suns may perish, but there stirs
something within us now that can never die again.” Who is
right?
In this slim (256-page) volume, the author examines the scientific,
philosophical, historical, and moral evidence for and implications of
survival after death. He explicitly excludes religious revelation
(except in the final chapter, where some evidence he cites as
historical may be deemed by others to be argument from scriptural
authority). Having largely excluded religion from the argument, he
explores the near-universality of belief in life after death across
religious traditions and notes the common threads uniting
them.
But traditions and beliefs do not in any way address the actual
question: does our individual consciousness, in some manner,
survive the death of our bodies? While materialists discard such
a notion as absurd, the author argues that there is nothing in our
present-day understanding of physics, evolutionary biology, or
neuroscience which excludes this possibility. In fact, the
complete failure so far to understand the physical basis of consciousness
can be taken as evidence that it may be a phenomenon independent of
its physical instantiation: structured information which could
conceivably transcend the hardware on which it currently operates.
Computer users think nothing these days of backing up their old
computer, loading the backups onto a new machine (which may use
a different processor and operating system), and with a little
upward compatibility magic, having everything work pretty much as
before. Do your applications and documents from the old computer
die when you turn it off for the last time? Are they reincarnated
when you load them into the replacement machine? Will they live
forever as long as you continue to transfer them to successive
machines, or on backup tapes? This may seem a silly analogy, but
consider that materialists consider your consciousness and self
to be nothing other than a pattern of information evolving in a
certain way according to the rules of neural computation. Do the
thought experiment: suppose nanotechnological robots replaced your
meat neurons one by one with mechanical analogues with the same
external electrochemical interface. Eventually your brain would
be entirely different physically, but would your consciousness change
at all? Why? If it's just a bunch of components, then replacing
protein components with silicon (or whatever) components which work
in the same way should make no difference at all, shouldn't it?
A large part of what living organisms do is sense their
external environment and interact with it. Unicellular
organisms swim along the gradient of increasing nutrient concentration.
Other than autonomic internal functions of which we are aware
only when they misbehave, we largely experience the world
through our sensory organs, and through the internal sense of self which
is our consciousness. Is it not possible that the latter is much
like the former—something external to the meatware
of our body which is picked up by a sensory organ, in this case
the neural networks of the brain?
If this be the case, in the same sense that the external world
does not cease to exist when our eyes, ears, olfactory, and
tactile sensations fail at the time of death or due to injury,
is it not plausible that dissolution of the brain, which receives
and interacts with our external consciousness, need not mean the
end of that incorporeal being?
Now, this is pretty out-there stuff, which might cause the author
to run from the room in horror should he hear me expound it.
Fine: this humble book reviewer spent a substantial amount of
time contributing to a project seeking evidence for the existence of
global, distributed
consciousness,
and has concluded that such has been
demonstrated to exist
by the standards accepted by most of the “hard” sciences.
But let's get back to the book itself.
One thing you won't find here is evidence based upon hauntings,
spiritualism, or other supposed contact with the dead (although
I must admit, Chicago election returns are
awfully persuasive as to the ability of the dead to intervene
in affairs of the living). The author does explore near death
experiences, noting their universality across very different
cultures and religious traditions, and evidence for reincarnation,
which he concludes is unpersuasive (but see the research of
Ian Stevenson
and decide for yourself). The exploration of a physical basis for the
existence of other worlds (for example, Heaven and Hell) cites the
“multiverse” paradigm, and invites sceptics of that
“theory of anything” to denounce it as “just as
plausible as life after death”—works for me.
Excuse me for taking off on a tangent here, but it is, in a
formal sense. If you believe in an infinite chaotically inflating
universe with random initial conditions, or in
Many Worlds in One (October 2006),
then Heaven and Hell explicitly exist, not only once in the
multiverse, but an infinity of times. For every moment in your
life at which you might have ceased to exist, there is a universe
somewhere out there, either elsewhere in the multiverse or in some
distant region far from our cosmic horizon in this universe, where there's
an observable universe identical to our own up to that instant which diverges
thence into one which grants you eternal reward or torment for your
actions. In an infinite universe with random initial conditions,
every possibility occurs an infinite number of times. Think about
it, or better yet, don't.
The chapter on morality is particularly challenging and enlightening.
Every human society has had a code of morality (different in the
details, but very much the same at the core), and most of these
societies have based their moral code upon a belief in cosmic
justice in an afterlife. It's self-evident that bad guys sometimes
win at the expense of good guys in this life, but belief that
the score will be settled in the long run has provided a powerful
incentive for mortals to conform to the norms which their societies
prescribe as good. (I've deliberately written the last sentence in
the post-modern idiom; I consider many moral norms absolutely good or bad
based on gigayears of evolutionary history, but I needn't introduce
that into evidence to prove my case, so I won't.) From an
evolutionary standpoint, morality is a survival trait of the family or
band: the hunter who shares the kill with his family and tribe will
have more descendants than the gluttonous loner. A tribe which
produces males who sacrifice themselves to defend their women and
children will produce more offspring than the tribe whose males
value only their own individual survival.
Morality, then, is, at the group level, a selective trait, and
consequently it's no surprise that it's universal among human
societies. But if, as serious atheists such as Bertrand Russell
(as opposed to the lower-grade atheists we get today) worried,
morality has been linked to religion and belief in an afterlife
in every single human society to date, then how is morality (a
survival characteristic) to be maintained in the absence of
these beliefs? And if evolution has selected us to believe in
the afterlife for the behavioural advantages that belief confers in
the here and now, then how successful will the atheists be in
extinguishing a belief which has conferred a behavioural selective
advantage upon thousands of generations of our ancestors? And
how will societies which jettison such belief fare in competition
with those which keep it alive?
I could write much more about this book, but then you'd have to read a
review even longer than the book, so I'll spare you. If you're
interested in this topic (as you'll probably eventually be as you get
closer to the checkered flag), this is an excellent introduction, and
the end notes provide a wealth of suggestions for additional reading.
I doubt this book will shake the convictions of either the confirmed
believers or the stalwart sceptics, but it will
provide much for both to think about, and perhaps motivate some folks
whose approach is “I'll deal with that when the time
comes” (which has been pretty much my own) to consider the
consequences of what may come next.
- Benioff, David.
City of Thieves.
New York: Viking, 2008.
ISBN 978-0-670-01870-3.
-
This is a coming of age novel, buddy story, and quest saga set in the
most implausible of circumstances: the 872-day
Siege of Leningrad
and the surrounding territory. I don't know whether the author's
grandfather actually lived these events and recounted them to him
or whether it's just a literary device, but I'm certain the images you
experience here will stay with you for many years after you put this
book down, and that you'll probably return to it after reading it
the first time.
Kolya is one of the most intriguing characters I've encountered in
modern fiction, with Vika a close second. You wouldn't expect a
narrative set in the German invasion of the Soviet Union to be funny,
but there are quite a number of laughs here, which will acquaint you
with the Russian genius for black humour when everything looks the
bleakest. You will learn to be very wary around well-fed
people in the middle of a siege!
Much of the description of life in Leningrad during the siege
is, of course, grim, although arguably less so than the factual
account in Harrison Salisbury's
The 900 Days (however, note
that the story is set early in the siege; conditions deteriorated
as it progressed). It isn't often you read a historical novel in
which
Olbers' paradox
figures!