Medicine

Barry, John M. The Great Influenza. New York: Penguin, [2004] 2005. ISBN 978-0-14-303649-4.
In 1800, the practice of medicine had changed little since antiquity. The rapid progress of other sciences in the 18th century had had little impact on medicine, which one historian called “the withered arm of science”. This began to change as the 19th century progressed. Researchers, mostly in Europe and especially in Germany, began to lay the foundations for a scientific approach to medicine and public health, understanding the causes of disease and searching for means of prevention and cure. The invention of new instruments for medical examination, together with anæsthesia and antiseptic procedures, began to transform the practice of medicine and surgery.

All of these advances were slow to arrive in the United States. As late as 1900 only one medical school in the U.S. required applicants to have a college degree, and only 20% of schools required a high school diploma. More than a hundred U.S. medical schools accepted any applicant who could pay, and many graduated doctors who had never seen a patient or done any laboratory work in science. In the 1870s, only 10% of the professors at Harvard's medical school had a Ph.D.

In 1873, Johns Hopkins died, leaving a bequest of US$7 million, divided equally, to found a university and a hospital. The trustees embarked on an ambitious plan to build a medical school to be the peer of those in Germany, and began aggressively recruiting European professors and Americans who had studied in Europe to build a world-class institution. By the outbreak of World War I in Europe, American medical research and education, still concentrated in just a few centres of excellence, had reached the standard set by Germany. It was about to face its greatest challenge.

With the entry of the United States into World War I in April 1917, millions of young men conscripted for service were packed into overcrowded camps for training and preparation for transport to Europe. These camps, thrown together on short notice, often had only rudimentary sanitation and shelter, with many troops living in tent cities. Large numbers of doctors and especially nurses were recruited into the Army, and by the start of 1918 many were already serving in France. Doctors remaining in private practice in the U.S. were often older men, trained before the revolution in medical education and ignorant of modern knowledge of diseases and the means of treating them.

In all American wars before World War I, more men died from disease than from combat. In the Civil War, two men died from disease for every death on the battlefield. Army Surgeon General William Gorgas vowed that this would not be the case in the current conflict. He was acutely aware that the overcrowded camps, frequent transfers of soldiers among far-flung bases, crowded and unsanitary troop transport ships, and unspeakable conditions in the trenches were a tinderbox just waiting for the spark of an infectious disease to ignite it. But the demand for new troops for the front in France caused his cautions to be overruled, and still more men were packed into the camps.

Early in 1918, a doctor in rural Haskell County, Kansas, began to treat patients with a disease he diagnosed as influenza. But this was nothing like the seasonal influenza with which he was familiar. In typical outbreaks of influenza, the people at greatest risk are the very young, whose immune systems have never encountered the virus, and the very old, who lack the physical resilience to withstand its assault. Most deaths occur among these two groups, producing a “bathtub curve” of mortality by age. This outbreak was different: the young and elderly were largely spared, while those in the prime of life were struck down, many dying quickly of symptoms which resembled pneumonia. Slowly the outbreak receded, and by mid-March things were returning to normal. (The location and mechanism of the disease's origin remain controversial to this day, and we may never know for sure. After weighing the competing theories, the author considers the Kansas origin most likely, but other candidates have their proponents.)

That would have been the end of it, had not soldiers from Camp Funston, the second largest Army camp in the U.S., with 56,000 troops, visited their families in Haskell County while on leave. They returned to camp carrying the disease. The spark had landed in the tinderbox. The disease spread outward as troop trains travelled between camps. Often a train would leave carrying healthy troops (infected but not yet symptomatic) and arrive with up to half the company sick and highly infectious to those at the destination. Before long the disease arrived via troop ships at camps and at the front in France.

This was just the first wave. The spring influenza was unusual in the age group it hit most severely, but not notably more deadly than a typical annual outbreak. Then, in the fall, the disease returned in a far more virulent form. It is theorised that under the chaotic conditions of wartime a mutant strain of the virus had emerged, spreading rapidly among the troops and then passing into the civilian population. The outbreak quickly encircled the globe, and few regions escaped. It was particularly devastating to aboriginal populations in remote regions like the Arctic and Pacific islands, who had no acquired immunity to influenza.

The pathogen of the second wave could kill directly within a day, destroying the lining of the lungs and effectively suffocating the patient. The disease was so virulent and aggressive that some medical researchers doubted it was influenza at all and suspected some new kind of plague. Even those who recovered often had their immunity and defences against respiratory infection so impaired that, feeling well enough to return to work, they would quickly come down with a secondary bacterial pneumonia which could kill them.

All the resources of the new scientific medicine were thrown into the battle with the disease, with little or no impact upon its progression. The cause of influenza was not known at the time: some thought it a bacterial disease, while others suspected a virus. Adding to the confusion, influenza patients often had a secondary infection of bacterial pneumonia, and the organism responsible for that disease (Pfeiffer's bacillus, now called Haemophilus influenzae) was misidentified as the pathogen of influenza. Heroic efforts were made, but the state of medical science in 1918 was simply not up to the challenge posed by influenza.

A century later, influenza continues to defeat every attempt to prevent or cure it, and another global pandemic remains a distinct possibility. Supportive treatment in the developed world and the availability of antibiotics against secondary bacterial pneumonia would reduce the death toll, but a mass outbreak of a virus on the scale of 1918 would quickly swamp all available medical facilities and bring society to the brink, as it did then. Even ordinary seasonal influenza kills between a quarter million and a half million people a year. The emergence of a killer strain like that of 1918 could multiply this toll by a factor of ten or twenty.

Influenza is such a formidable opponent because of its structure. It is an RNA virus which, unusually, carries its genetic material not as a single strand but as seven or eight separate segments of RNA. Some researchers argue that in a host infected with two or more variants of the virus these segments can mix to form new combinations, allowing influenza to mutate much faster than viruses whose genome is a single strand (this claim is controversial). The virus particle is studded with two surface proteins, hemagglutinin (HA) and neuraminidase (NA). HA allows the virus to break into a target cell, while NA allows viruses replicated within the cell to escape and infect others.
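
To get a feel for why the segmented genome matters, consider the combinatorics of reassortment. The following toy sketch in Python (my illustration, not from the book) counts the distinct segment combinations available when several strains co-infect a single cell, assuming the eight segments of influenza A:

    # Toy illustration (not from the book): when a cell is co-infected by
    # several influenza strains, each segment in a progeny virion can come
    # from any parent, so reassortment alone can produce a flood of novel
    # hybrids without a single point mutation.

    SEGMENTS = 8  # influenza A and B carry 8 RNA segments; influenza C has 7

    def reassortants(n_strains: int, segments: int = SEGMENTS) -> int:
        """Distinct segment combinations when n_strains co-infect a cell."""
        return n_strains ** segments

    for n in (1, 2, 3):
        print(f"{n} co-infecting strain(s): {reassortants(n)} combinations")
    # 1 -> 1, 2 -> 256, 3 -> 6561

With just two parent strains, 254 of the 256 possible progeny are hybrids which existed in neither parent.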

What makes creating a vaccine for influenza so difficult is that these HA and NA proteins are what the body's immune system uses to identify the virus as an invader and kill it. But HA and NA come in a number of variants, and a specific strain of influenza may contain one from column H and one from column N, creating a large number of possibilities. For example, H1N2 is endemic in birds, pigs, and humans. H5N1 caused the bird flu outbreak in 2004, and H1N1 was responsible for the 1918 pandemic. It gets worse. As a child, when you are first exposed to influenza, your immune system will produce antibodies which identify and target the variant to which you were first exposed. If you were infected with and recovered from, say, H3N2, you'll be pretty well protected against it. But if, subsequently, you encounter H1N1, your immune system will recognise it sufficiently to crank out antibodies, but they will be coded to attack H3N2, not the H1N1 you're battling, against which they're useless. Influenza is thus a chameleon, constantly changing its colours to hide from the immune system.
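
This first-exposure effect (immunologists call it “original antigenic sin”) is simple enough to caricature in a few lines of Python. This is my own toy model, not anything from the book; real immunity is a matter of degree, not the binary match below:

    # Toy model (mine, not the book's): the antibody response is keyed to
    # the subtype of first childhood exposure, so a later infection by a
    # different subtype meets antibodies aimed at the wrong target.

    def immune_response(first_exposure: str, current_infection: str) -> str:
        antibodies = first_exposure  # immune memory fixed by first exposure
        if antibodies == current_infection:
            return f"antibodies target {antibodies}: protected"
        return (f"antibodies target {antibodies}, "
                f"not {current_infection}: vulnerable")

    print(immune_response("H3N2", "H3N2"))  # protected
    print(immune_response("H3N2", "H1N1"))  # vulnerable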

Strains of influenza tend to come in waves, with one HxNy variant dominating for some number of years before giving way to another, so developers of vaccines must play a guessing game about which variant you're likely to encounter in a given year. This succession also explains why the 1918 pandemic hit healthy adults so hard. In the decades preceding the outbreak, the dominant variant had shifted: H1N1 gave way to a different variant for a couple of decades, and then, after 1900, H1N1 returned to the fore. Consequently, when the deadly strain of H1N1 appeared in the fall of 1918, the immune systems of both young and elderly people, primed by first exposure to H1N1, protected them, while those in between had immune systems which, confronted with H1N1, produced antibodies for the other variant, leaving them vulnerable.

With no medical defence against or cure for influenza even today, the only effective response to an outbreak of a killer strain is public health measures such as isolation and quarantine. Influenza is airborne and highly infectious: the gauze face masks you see in pictures from 1918 were almost completely ineffective. The government response to the outbreak in 1918 could hardly have been worse. Having created military camps which were nothing less than culture media, packed with those in the most vulnerable age range, the authorities kept the troop trains and ships running even as reports arrived that something new and extremely lethal had broken out, because orders from the top demanded that more and more men be fed into the meat grinder of the Western Front. This seeded camp after camp with the infection. Then, when the disease jumped into the civilian population and began to devastate cities adjacent to military facilities, such as Boston and Philadelphia, the press censors of Wilson's proto-fascist war machine decided that honest reporting of the extent and severity of the disease, or of measures aimed at slowing its spread, would harm “morale” and war production, so newspapers were ordered either to ignore it or to print useless happy talk, which only accelerated the epidemic. The result was that in the hardest-hit cities residents, confronted with a reality before their eyes which gave the lie to the propaganda they were hearing from the authorities, retreated into fear and withdrawal, letting neighbours starve rather than risk infection by bringing them food.

As was known even in antiquity, the only defence against an infectious disease with no medical intervention is quarantine. In Western Samoa, the disease arrived in November 1918 aboard a steamer from New Zealand. By the time it had run its course, 22% of the population of the islands was dead. A short distance across the ocean in American Samoa, authorities imposed a rigid quarantine, and not a single person died of influenza.

We will never know the worldwide extent of the 1918 pandemic. Many of the hardest-hit areas, such as China and India, did not have the infrastructure to collect epidemiological data, and what little they had collapsed under the impact of the crisis. Estimates are that on the order of 500 million people worldwide were infected and that between 50 and 100 million died: three to five percent of the world's population.

Researchers do not know why the 1918 second wave pathogen was so lethal. The genome has been sequenced and nothing jumps out from it as an obvious cause. Understanding its virulence may require recreating the monster and experimenting with it in animal models. Obviously, this is not something which should be undertaken without serious deliberation beforehand and extreme precautions, but it may be the only way to gain the knowledge needed to treat those infected should a similar wild strain emerge in the future. (It is possible this work may have been done but not published because it could provide a roadmap for malefactors bent on creating a synthetic plague. If this be the case, we'll probably never know about it.)

Although medicine has made enormous strides in the last century, influenza, which defeated the world's best minds in 1918, remains a risk, and in a world with global air travel moving millions between dense population centres, an outbreak today would be even harder to contain. Let us hope that in that dire circumstance authorities will have the wisdom and courage to take the kind of dramatic action which can make the difference between a regional tragedy and a global cataclysm.

October 2014 Permalink

Cordain, Loren. The Paleo Diet. Hoboken, NJ: John Wiley & Sons, 2002. ISBN 978-0-470-91302-4.
As the author of a diet book, I don't read many self-described “diet books”. First of all, I'm satisfied with the approach to weight management described in my own book; second, I don't need to lose weight; and third, I find most “diet books” built around gimmicks with little justification in biology, prone to prescribe regimes that few people are likely to stick with long enough to achieve their goal. What motivated me to read this book was a talk by Michael Rose at the First Personalized Life Extension Conference in which he mentioned the concept and this book not in conjunction with weight reduction but rather with the extension of healthy lifespan in humans. Rose's argument, which is grounded in evolutionary biology and paleoanthropology, is somewhat subtle and well summarised in this article.

At the core of Rose's argument, and of the present book, is the observation that while the human genome is barely different from that of human hunter-gatherers a million years ago, our present-day population has had at most 200 to 500 generations (agriculture arose only about ten thousand years ago) to adapt to the very different diet which emerged with the introduction of agriculture and animal husbandry. From an evolutionary standpoint this is a short time, and, here is the key thing (argued by Rose, but not in this book): even where modern humans have evolved adaptations to the agricultural diet (as in some cases they clearly have, lactose tolerance persisting into adulthood being one obvious example), evolution cannot, by its very mechanism, select out diseases caused by the new diet which only manifest themselves after the age of last reproduction. So, if eating the agricultural diet (not to mention the horrors we've invented in the last century) causes late-onset diseases such as cancer, cardiovascular problems, and type 2 diabetes, evolution will have done nothing to select out the genes responsible for them, since these diseases strike most people after the age at which they've already passed on their genes to their children. Consequently, while it may be fine for young people to eat grain, dairy products, and other agricultural-era innovations, folks over the age of forty may be asking for trouble by consuming foods evolution hasn't had the chance to mould their genomes to tolerate. People whose ancestors shifted to the agricultural lifestyle much more recently, including many of African and aboriginal descent, have little or no adaptation to the agricultural diet, and may experience problems even earlier in life.

In this book, the author doesn't make these fine distinctions but rather argues that everybody can benefit from a diet resembling that which the vast majority of our ancestors—hunter-gatherers predating the advent of sedentary agriculture—ate, and to which evolution has molded our genome over that long expanse of time. This is not a “diet book” in the sense of a rigid plan for losing weight. Instead, it is a manual for adopting a lifestyle, based entirely upon non-exotic foods readily available at the supermarket, which approximates the mix of nutrients consumed by our distant ancestors. There are the usual meal plans and recipes, but the bulk of the book is a thorough survey, with extensive citations to the scientific literature, of what hunter-gatherers actually ate, the links scientists have found between the composition of the modern diet and the emergence of “diseases of civilisation” among populations that have transitioned to it in historical times, and the evidence for specific deleterious effects of major components of the modern diet such as grains and dairy products.

Not to over-simplify, but you can go a long way toward the ancestral diet simply by going to the store with an “anti-shopping list” of things not to buy, principally:

  • Grain, or anything derived from grains (bread, pasta, rice, corn)
  • Dairy products (milk, cheese, butter)
  • Fatty meats (bacon, marbled beef)
  • Starchy tuber crops (potatoes, sweet potatoes)
  • Salt or processed foods with added salt
  • Refined sugar or processed foods with added sugar
  • Oils with a high omega-6 to omega-3 ratio (safflower, peanut)

And basically, that's it! Apart from the list above you can buy whatever you want, eat it whenever you like in whatever quantity you wish, and the author asserts that if you're overweight you'll soon see your weight dropping toward your optimal weight, a variety of digestive and other problems will begin to clear up, you'll have more energy and a more consistent energy level throughout the day, and that you'll sleep better. Oh, and your chances of contracting cancer, diabetes, or cardiovascular disease will be dramatically reduced.

In practice, this means eating a lot of lean meat, seafood, fresh fruit, fresh vegetables, and nuts. As the author points out, even with a mound of cooked boneless chicken breasts, broccoli, and apples on the table before you, you're far less likely to pig out on them than on, say, a pile of doughnuts, because natural foods don't give you the immediate blood sugar hit that highly glycemic processed foods do. And even if you do overindulge, the caloric density of natural foods is so much lower that your jaw will get tired chewing, or your gut will bust, before you can go way over your calorie requirements.

Now, even if the science is sound (there are hundreds of citations of peer-reviewed publications in the bibliography, but then nutritionists are forever publishing contradictory “studies” on any topic you can imagine, and in any case epidemiology cannot establish causation) and the benefits of adopting this diet are as immediate, dramatic, and important for long-term health as claimed, a lot of people are going to have trouble with what is recommended here. Food is much more to humans and other species (as anybody who's had a “picky eater” cat can testify) than molecular fuel and construction material for our bodies. Our meals nourish the soul as well as the body, and among humans shared meals are a fundamental part of our social interaction which evolution has doubtless had time to write into our genes. If you go back over that list of things not to eat, you'll probably discover that just about any “comfort food” you cherish runs afoul of one or more of the forbidden ingredients. This means that contemplating the adoption of this diet as a permanent lifestyle change can look pretty grim, unless or until you find suitable replacements that thread among the constraints. The recipes presented here are interesting, but still come across to me (not having tried them) as pretty Spartan. And recall that even Spartans lived a pretty sybaritic lifestyle compared to your average hunter-gatherer band. But, hey, peach fuzz is entirely cool!

The view of the mechanics of weight loss and gain and the interaction between exercise and weight reduction presented here is essentially 100% compatible with my own in The Hacker's Diet.

This was intriguing enough that I decided to give it a try starting a couple of weeks ago. (I have been adhering, more or less, to the food selection guidelines, but not the detailed meal plans.) The results so far are intriguing but, at this early date, inconclusive. The most dramatic effect was an almost immediate (within the first three days) crash in my always-pesky high blood pressure. This may be due entirely to putting away the salt shaker (an implement of which I have been inordinately fond since childhood), but whatever the cause, it's taken about 20 points off the systolic and 10 off the diastolic, throughout the day. Second, I've seen a consistent downward bias in my weight. Now, as I said, I didn't try this diet to lose weight (although I could drop a few kilos and still be within the target band for my height and build, and wouldn't mind doing so). In any case, these are short-term results and may include transient adaptation effects. I haven't been hungry for a moment nor have I experienced any specific cravings (except the second-order kind for popcorn with a movie). It remains to be seen what will happen when I next attend a Swiss party and have to explain that I don't eat cheese.

This is a very interesting nutritional thesis, backed by a wealth of impressive research of which I was previously unaware. It flies in the face of much of the conventional wisdom on diet and nutrition, and yet viewed from the standpoint of evolution, it makes a lot of sense. You will find the case persuasively put here and perhaps be tempted to give it a try.

December 2010 Permalink

De Vany, Arthur. The New Evolution Diet. New York: Rodale Books, 2011. ISBN 978-1-60529-183-3.
The author is an economist best known for his research into the economics of Hollywood films and his demonstration that the Pareto distribution applies to the profitability of Hollywood productions, empirically falsifying many entertainment-business nostrums about the correlation of production cost and the “star power” of the cast with actual performance at the box office. When his son, and later his wife, developed diabetes and the medical consensus treatment seemed to send both into a downward spiral, his economist's sense for the behaviour of complex nonlinear systems with feedback and delays led him to suspect that the regimen prescribed for diabetics was based on a simplistic view of the system, one aimed at treating the symptoms rather than the cause. This led him to an in-depth investigation of human metabolism and nutrition, grounded in the evolutionary heritage of our species (and fully documented here—indeed, almost half of the book is end notes and source references, which should not be neglected: there is much of interest there).

His conclusion was that our genes, which have scarcely changed in the last 40,000 years, were adapted to the hunter-gatherer lifestyle our hominid ancestors lived for millions of years before the advent of agriculture. Our present-day diet and way of life could not be more at variance with our genetic programming, so it shouldn't be a surprise that we see a variety of syndromes, including obesity, cardiovascular disease, type 2 diabetes, and late-onset diseases such as many forms of cancer, which are extremely rare among populations whose diet and lifestyle remain closer to those of ancestral humans. Strong evidence for this hypothesis comes from nomadic aboriginal populations which, when settled into villages and transitioned to the agricultural diet, promptly manifested diseases, categorised as “metabolic syndrome”, previously unknown among them.

This is very much the same conclusion as that of The Paleo Diet (December 2010), and I recommend you read both of these books as they complement one another. The present volume goes deeper into the biochemistry underlying its dietary recommendations, and explores what the hunter-gatherer lifestyle has to say about the exercise to which we are adapted. Our ancestors' lives were highly chaotic: they ate when they made a kill or found food to gather and fasted until the next bounty. They engaged in intense physical exertion during a hunt or battle, and then passively rested until the next time. Modern times have made us slaves to the clock: we do the same things at the same times on a regular schedule. Even those who incorporate strenuous exercise into their routine tend to do the same things at the same time on the same days. The author argues that this is not remotely what our heritage has evolved us for.

Once Pareto gets into your head, it's hard to get him out. Most approaches to diet, nutrition, and exercise (including my own) view the human body as a system near equilibrium. The author argues that one shouldn't look at the mean but rather the kurtosis of the distribution, as it's the extremes that matter—don't tediously “do cardio” like all of the treadmill trudgers at the gym, but rather push your car up a hill every now and then, or randomly raise your heart rate into the red zone.
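
As a crude numerical illustration of the point (my own sketch with made-up numbers, not De Vany's): two exercise patterns can have the same average effort while differing wildly in their tails, which is exactly what excess kurtosis measures.

    # Two regimes with similar mean effort: steady daily cardio versus rare
    # all-out exertions. Excess kurtosis is ~0 for the Gaussian-like routine
    # and large for the heavy-tailed one. Numbers are purely illustrative.
    import random

    def excess_kurtosis(xs):
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n
        m4 = sum((x - mean) ** 4 for x in xs) / n
        return m4 / m2 ** 2 - 3.0  # 0 for a Gaussian; large for fat tails

    random.seed(1)
    steady = [50 + random.gauss(0, 5) for _ in range(1000)]
    spiky = [500 if random.random() < 0.1 else 0 for _ in range(1000)]

    for name, xs in (("steady", steady), ("spiky", spiky)):
        print(f"{name}: mean = {sum(xs) / len(xs):.1f}, "
              f"excess kurtosis = {excess_kurtosis(xs):.1f}")

Both regimes expend roughly the same mean effort, but only the spiky one has the fat tails the author argues our physiology expects.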

This all makes perfect sense to me. I happened to finish this book almost precisely six months after adopting my own version of the paleo diet, not from a desire to lose weight (I'm entirely happy with my weight, which hasn't varied much in the last twenty years, thanks to the feedback mechanism of The Hacker's Diet) but due to the argument that it averts late-onset diseases and extends healthy lifespan. Well, it's too early to form any conclusions on either of these, and in any case you can draw any curve you like through a sample size of one, but after half a year on paleo I can report that my weight is stable, my blood pressure is right in the middle of the green zone (as opposed to low-yellow before), I have more energy, sleep better, and have seen essentially all of the aches and pains and other symptoms of low-level inflammation disappear. Will you have cravings for things you've forgone when you transition to paleo? Absolutely—in my experience it takes about three months for them to go away. When I stopped salting my food, everything tasted like reprocessed blaah for the first couple of weeks, but now I appreciate the flavours below the salt.

For the time being, I'm going to continue this paleo thing, not primarily due to the biochemical and epidemiological arguments here, but because I've been doing it for six months and I feel better than I have for years. I am a creature of habit, and I find it very difficult to introduce kurtosis into my lifestyle: when exogenous events do so, I deem it an “entropic storm”. When it's 15:00, I go for my one hour walk. When it's 18:00, I eat, etc. Maybe I should find some way to introduce randomness into my life….

An excellent Kindle edition is available, with the table of contents, notes, and index all properly linked to the text.

June 2011 Permalink

Dworkin, Ronald W. Artificial Happiness. New York: Carroll & Graf, 2006. ISBN 0-7867-1714-9.
Western societies, with the United States in the lead, appear to be embarked on a grand-scale social engineering experiment with little consideration of the potentially disastrous consequences both for individuals and for society at large. Over the last two decades “minor depression”, often no more than what, in less clinical nomenclature, one would term unhappiness, has come to be seen as a medical condition treatable with pharmaceuticals, and prescription of these medications, mostly by general practitioners rather than psychiatrists or psychologists, has skyrocketed, with drugs such as Prozac, Paxil, and Zoloft regularly appearing on lists of the most frequently prescribed. Tens of millions of people in the United States take these pills, which are prescribed to children and adolescents as well as adults.

Now, there's no question that these medications have been a Godsend for individuals suffering from severe clinical depression, which is now understood in many cases to be an organic disease caused by imbalances in the metabolism of neurotransmitters in the brain. But this vast public health experiment in medicating unhappiness is another thing altogether. Unhappiness, like pain, is a signal that something's wrong, and a motivator to change things for the better. But if unhappiness is seen as a disease which is treated by swallowing pills, this signal is removed, and people are numbed or stupefied out of taking action to eliminate the cause of their unhappiness: changing jobs or careers, reducing stress, escaping from abusive personal relationships, or embarking on some activity which they find personally rewarding. Self esteem used to be thought of as something you earned from accomplishing difficult things; once it becomes a state of mind you get from a bottle of pills, then what will become of all the accomplishments the happily medicated no longer feel motivated to achieve?

These are serious questions, and deserve serious investigation and a book-length treatment of the contemporary scene and trends. This is not, however, that book. The author is an M.D. anæsthesiologist with a Ph.D. in political philosophy from Johns Hopkins University, and a senior fellow at the Hudson Institute—impressive credentials. Notwithstanding them, the present work reads like something written by somebody who learned Marxism from a comic book. Individuals, entire professions, and groups as heterogeneous as clergy of organised religions are portrayed like cardboard cutouts—with stick figures drawn on them—in crayon. Each group the author identifies is seen as acting monolithically toward a specific goal, which is always nefarious in some way, advancing an agenda based solely on its own interest. The possibility that a family doctor might prescribe antidepressants for an unhappy patient in the belief that he or she is solving a problem for the patient is scarcely considered. No, the doctor is part of a grand conspiracy of “primary care physicians” advancing an agenda to usurp the “turf” (a term he uses incessantly) of first psychiatrists, and finally organised religion.

After reading this entire book, I still can't decide whether the author is really as stupid as he seems, or simply writes so poorly that he comes across that way. Each chapter starts out lurching toward a goal, loses its way and rambles off in various directions until the requisite number of pages have been filled, and then states a conclusion which is not justified by its content. There are few clichés in the English language which are not used here—again and again. Here is an example, one of hundreds of paragraphs to which the only rational reaction is “Huh?”.

So long as spirituality was an idea, such as believing in God, it fell under religious control. However, if doctors redefined spirituality to mean a sensual phenomenon—a feeling—then doctors would control it, since feelings had long since passed into the medical profession's hands, the best example being unhappiness. Turning spirituality into a feeling would also help doctors square the phenomenon with their own ideology. If spirituality were redefined to mean a feeling rather than an idea, then doctors could group spirituality with all the other feelings, including unhappiness, thereby preserving their ideology's integrity. Spirituality, like unhappiness, would become a problem of neurotransmitters and a subclause of their ideology. (Page 226.)
A reader opening this book is confronted with 293 pages of this. This paragraph appears in chapter nine, “The Last Battle”, which describes the Manichean struggle between doctors and organised religion in the 1990s for the custody of the souls of Americans, ending in a total rout of religion. Oh, you missed that? Me too.

Mass medication with psychotropic drugs is a topic which cries out for a statistical examination of its public health dimensions, but Dworkin relates only anecdotes of individuals he has known personally, all of whose minds he seems to be able to read, diagnosing their true motivations which even they don't perceive, and discerning their true destiny in life, which he believes they are failing to follow due to medication for unhappiness.

And if things weren't muddled enough, he drags in “alternative medicine” (the modern, polite term for what used to be called “quackery”) and “obsessive exercise” as other sources of Artificial Happiness (which he capitalises everywhere), which is rather odd since he doesn't believe either works except through the placebo effect. Isn't it just a little bit possible that some of those people working out at the gym are doing so because it makes them feel better and likely to live longer? Dworkin then tries to envision the future for the Happy American, decoupled from the traditional trajectory through life by the ability to experience chemically induced happiness at any stage. Here he seems simultaneously to admire and to ridicule the culture of the 1950s, his knowledge of which appears to be drawn from re-runs of “Leave it to Beaver”. In the conclusion, he modestly proposes a solution which requires completely restructuring medical education for general practitioners and redefining the mission of all organised religions. At least he doesn't seem to have a problem with self-esteem!

October 2006 Permalink

Johnson, Steven. The Ghost Map. New York: Riverhead Books, 2006. ISBN 1-59448-925-4.
From the dawn of human civilisation until sometime in the nineteenth century, cities were net population sinks—the increased mortality from infectious diseases, compounded by unsanitary conditions, impure water, and food transported from the hinterland and stored without refrigeration, so shortened the lives of city-dwellers (except for the ruling class and the wealthy, a small fraction of the population) that a city's population was maintained only by constant net migration from the countryside. In a densely-packed city, not only does an infected individual come into contact with many more potential victims than in a rural environment, but highly virulent strains of infectious agents, which in farm country or a small village would “burn out” by rapidly killing their hosts, can prosper, since each infected host still has the opportunity to infect many others before succumbing. Cities can be thought of as Petri dishes for evolving killer microbes.
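
A back-of-the-envelope sketch makes the trade-off concrete (my own illustration, with made-up numbers): a pathogen's reproduction number is roughly contacts per day, times days the host remains infectious, times the transmission probability per contact. Higher virulence shortens the infectious period, which dooms a strain in a sparse population but is easily tolerated in a dense city:

    # Illustrative only: R0 ~ contacts/day x days infectious x p(transmit).
    # A strain that kills its host within days "burns out" in a village but
    # still finds enough daily contacts to keep spreading in a crowded city.

    def r0(contacts_per_day, days_infectious, p_transmit=0.05):
        return contacts_per_day * days_infectious * p_transmit

    contact_rates = {"village": 4, "city": 40}  # contacts/day (made up)
    strains = {"mild (infectious 10 days)": 10,
               "virulent (host dead in 2 days)": 2}

    for place, contacts in contact_rates.items():
        for strain, days in strains.items():
            value = r0(contacts, days)
            verdict = "spreads" if value > 1 else "burns out"
            print(f"{place}, {strain}: R0 = {value:.1f} ({verdict})")

In this toy model the virulent strain dies out in the village (R0 = 0.4) but spreads in the city (R0 = 4), which is the evolutionary pressure the paragraph above describes.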

No civic culture medium was as hospitable to pathogens as London in the middle of the 19th century. Its population, 2.4 million in 1851, had exploded from just one million at the start of the century, and all of these people were accommodated in a sprawling metropolis almost devoid of what we would consider a public health infrastructure. Sewers, where they existed, were often open and simply dumped into the Thames, whence other Londoners, downstream, drew their drinking water. Other residences dumped human waste into cesspools, emptied occasionally (or maybe not) by “night-soil men”. Imperial London was a smelly, and a deadly, place. Observing it first-hand is what motivated Friedrich Engels to document and deplore The Condition of the Working Class in England (January 2003).

Among the diseases which cut down inhabitants of cities, one of the most feared was cholera. In 1849, an outbreak killed 14,137 in London, and nobody knew when or where it might strike next. The prevailing theory of disease at this epoch was that infection was caused by and spread through “miasma”: contaminated air. Given how London stank and how deadly it was to its inhabitants, this would have seemed perfectly plausible to people living before the germ theory of disease was propounded. Edwin Chadwick, head of the General Board of Health in London at the epoch, went so far as to assert (p. 114) “all smell is disease”. Chadwick was, in many ways, one of the first advocates and implementers of what we have come to call “big government”—that the state should take an active role in addressing social problems and providing infrastructure for public health. Relying upon the accepted “miasma” theory and empowered by an act of Parliament, he spent the 1840s trying to eliminate the stink of the cesspools by connecting them to sewers which drained their offal into the Thames. Chadwick was, by doing so, to provide one of the first demonstrations of that universal concomitant of big government, unintended consequences: “The first defining act of a modern, centralized public-health authority was to poison an entire urban population.” (p. 120).

When, in 1854, a singularly virulent outbreak of cholera struck the Soho district of London, physician and pioneer of anæsthesia John Snow found himself at the fulcrum of a revolution in science and public health toward which he had been working for years. Based upon his studies of the 1849 cholera outbreak, Snow had become convinced that the pathogen spread through contamination of water supplies by the excrement of infected individuals. He had published a monograph laying out this theory in 1849, but it swayed few readers from the prevailing miasma theory. He was continuing to document the case when cholera exploded in his own neighbourhood. Not only was Snow's mind prepared to consider a waterborne infection vector; he was also one of the pioneers of the emerging science of epidemiology, a founding member of the London Epidemiological Society in 1850. Snow's real-time analysis of the epidemic convinced him that the vector of infection was contaminated water from the Broad Street pump, and his persuasive presentation of the evidence to the Board of Guardians of St. James's Parish caused them to remove the handle from that pump, after which the contagion abated. (As the author explains, the outbreak was already declining by then, and in all probability the water from the Broad Street pump was no longer contaminated. However, in light of subsequent events and later discoveries, had the handle not been removed there would likely have been a second wave of the epidemic, with casualties comparable to the first.)

Afterward Snow, with the assistance of the initially sceptical clergyman Henry Whitehead, whose intimate knowledge of the neighbourhood and its residents allowed compiling the data which not only confirmed Snow's hypothesis but identified what modern epidemiologists would call the “index case” and the “vector of contagion”, revised his monograph to cover the 1854 outbreak, illustrating it with a map of the casualties which has become a classic of on-the-ground epidemiology and of the graphical presentation of data. Most brilliant was Snow's use (and apparent independent invention) of a Voronoi diagram to show, street by street, the boundary of the area closer to the Broad Street pump than to any other in the neighbourhood, measured not in Euclidean distance but in walking time. (Oddly, the complete map with this crucial detail does not appear in the book: only a blow-up of the central section without the boundary. The full map is here; depending on your browser, you may have to click on the map image to display it at full resolution. The dotted and dashed line is the Voronoi cell enclosing the Broad Street pump.)
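
Snow's boundary is easy to state in modern algorithmic terms: assign every street corner to the pump reachable in the least walking time, a Voronoi partition under the street network's shortest-path metric rather than straight-line distance. Here is a minimal Python sketch (my own illustration with a made-up street grid, not Snow's data) using multi-source Dijkstra:

    # Voronoi partition by walking time: label each street corner with the
    # pump reachable in the fewest minutes, via multi-source Dijkstra.
    import heapq
    from collections import defaultdict

    def walking_time_voronoi(edges, pumps):
        """edges: (u, v, minutes) street segments; pumps: node -> name."""
        graph = defaultdict(list)
        for u, v, w in edges:
            graph[u].append((v, w))
            graph[v].append((u, w))
        best = {}  # node -> (minutes, nearest pump)
        heap = [(0.0, node, name) for node, name in pumps.items()]
        heapq.heapify(heap)
        while heap:
            t, node, pump = heapq.heappop(heap)
            if node in best:
                continue  # already claimed by a closer pump
            best[node] = (t, pump)
            for nxt, w in graph[node]:
                if nxt not in best:
                    heapq.heappush(heap, (t + w, nxt, pump))
        return best

    # Tiny made-up neighbourhood: corners A..F, pumps at A and F.
    edges = [("A", "B", 2), ("B", "C", 2), ("C", "D", 3),
             ("D", "E", 2), ("E", "F", 2), ("B", "E", 5)]
    cells = walking_time_voronoi(edges, {"A": "Broad St.", "F": "other pump"})
    for corner, (minutes, pump) in sorted(cells.items()):
        print(f"{corner}: {minutes:4.1f} min -> {pump}")

The Voronoi boundary falls along the street segments joining corners with different labels; Snow drew exactly such a line, by hand, through 1850s Soho.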

In the following years, London embarked upon a massive program to build underground sewers to transport the waste of its millions of residents downstream to the tidal zone of the Thames and, later, directly to the sea. There would be one more cholera outbreak in London, in 1866, in an area not yet connected to the new sewers and water treatment systems. Since then, London has not suffered a single cholera epidemic. Other cities in the developed world learned the lesson and built the infrastructure to provide their residents clean water. In the developing world, cholera continues to take its toll: in the 1990s an outbreak in South America infected more than a million people and killed almost 10,000. Fortunately, administration of rehydration therapy (with electrolytes) has drastically reduced the likelihood of death from a cholera infection. Still, you have to wonder why, in a world where billions of people lack access to clean water and third-world mega-cities are drawing millions to live in conditions not unlike London in the 1850s, some believe that laptop computers are the top priority for children growing up there.

A paperback edition is now available.

December 2007 Permalink