Dutton, Edward and Michael A. Woodley of Menie.
At Our Wits' End.
Exeter, UK: Imprint Academic, 2018.
ISBN 978-1-84540-985-2.
During the Great Depression, the Empire State Building was built, from the beginning of foundation excavation to official opening, in 410 days (less than 14 months). After the destruction of the World Trade Center in New York on September 11, 2001, its replacement, the new One World Trade Center, was completed on November 3, 2014, 4801 days (160 months) later.
In the 1960s, from U.S. president Kennedy's proposal of a manned lunar mission to the landing of Apollo 11 on the Moon, 2978 days (almost 100 months) elapsed. In January 2004, U.S. president Bush announced the “Vision for Space Exploration”, aimed at a human return to the lunar surface by 2020. After a comical series of studies, revisions, cancellations, de-scopings, redesigns, schedule slips, and cost overruns, its successor now plans to launch a lunar flyby mission (not even a lunar orbit like Apollo 8) in June 2022, 224 months later. A lunar landing is planned for no sooner than 2028, almost 300 months after the “vision”, and almost nobody believes that date (design of the landing craft has not yet begun, and there is no funding for it in the budget).
Wherever you look you see junk science, universities corrupted with bogus “studies” departments, politicians peddling discredited nostrums that a moment's critical thinking reveals to be folly, an economy built upon an ever-increasing tower of debt that nobody really believes will ever be paid off, and a dearth of major, genuine innovations (as opposed to the incremental refinement of existing technologies which has driven the computing, communications, and information technology industries) in every field: science, technology, public policy, and the arts. It often seems like the world is getting dumber. What if it really is?
That is the thesis explored by this insightful book, which is
packed with enough “hate facts” to detonate the
head of any bien pensant
academic or politician. I define a “hate fact” as something which is indisputably true and well-documented by evidence in the literature, which has not been contradicted, but the citation of which is considered “hateful”, can unleash outrage mobs upon anyone so foolish as to utter it in public, and is a career-limiting move for those employed in Social Justice Warrior-converged organisations.
(An example of a hate fact, unrelated to the topic of this
book, is the FBI violent crime statistics broken down by
the race of the criminal and victim. Nobody disputes the
accuracy of this information or the methodology by which it is
collected, but woe betide anyone so foolish as to cite the
data or draw the obvious conclusions from it.)
In April 2004 I made my own foray into the question of declining intelligence in “Global IQ: 1950–2050”, in which I combined estimates of the mean IQ of countries with
census data and forecasts of population growth to estimate global
mean IQ for a century starting at 1950. Assuming the mean IQ
of countries remains constant (which is optimistic, since part of
the population growth in high IQ countries with low fertility
rates is due to migration from countries with lower IQ), I found
that global mean IQ, which was 91.64 for a population of 2.55
billion in 1950, declined to 89.20 for the 6.07 billion alive
in 2000, and was expected to fall to 86.32 for the 9.06 billion
population forecast for 2050. This is mostly due to the
explosive population growth forecast for Sub-Saharan Africa,
where many of the populations with low IQ reside.
This is a particularly dismaying prospect, because there is no
evidence for sustained consensual self-government in nations with
a mean IQ less than 90.
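The arithmetic behind those estimates is just a population-weighted mean. Here is a minimal sketch in Python; the three-row table is hypothetical, standing in for the full per-country IQ estimates and census figures used in the original article.

```python
# Population-weighted global mean IQ: a minimal sketch.
# The rows below are hypothetical placeholders, not the actual
# per-country data used in "Global IQ: 1950-2050".
countries = [
    # (mean IQ, population in millions)
    (98.0,    60.0),
    (105.0, 1300.0),
    (70.0,   900.0),
]

total_pop = sum(pop for _, pop in countries)
global_mean_iq = sum(iq * pop for iq, pop in countries) / total_pop
print(f"Global mean IQ: {global_mean_iq:.2f} "
      f"for {total_pop:,.0f} million people")
```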
But while I was examining global trends assuming national IQ
remains constant, in the present book the authors explore
the provocative question of whether the population of today's
developed nations is becoming dumber due to the inexorable
action of natural selection on whatever genes determine
intelligence. The argument is relatively simple, but based
upon a number of pillars, each of which is a “hate fact”,
although non-controversial among those who study these
matters in detail.
1. There is a factor, “general intelligence” or g, which measures the ability to solve a wide variety of mental problems, and this factor, measured by IQ tests, is largely stable across an individual's life.

2. Intelligence, as measured by IQ tests, is, like height, in part heritable. The heritability of IQ is estimated at around 80%, which means that about 80% of the variation in IQ across a population is attributable to genetic differences, with the remaining 20% due to other factors.

3. IQ correlates positively with factors contributing to success in society. The correlation with performance in education is 0.7, with highest educational level completed 0.5, and with salary 0.3.

4. In Europe, between 1400 and around 1850, the wealthier half of the population had more children who survived to adulthood than the poorer half.

5. Because IQ correlates with social success, that portion of the population which was more intelligent produced more offspring.

6. Just as in the selective breeding of animals by choosing those with a desired trait for mating, this resulted in a population whose average IQ increased (slowly) from generation to generation over this half-millennium.

7. The gradually rising IQ of the population resulted in a growing standard of living as knowledge and inventions accumulated due to the efforts of those with greater intelligence. In particular, even a relatively small increase in the mean IQ of a population makes an enormous difference in the tiny fraction of people with “genius level” IQ who are responsible for many of the significant breakthroughs in all forms of human intellectual endeavour. If we consider an IQ of 145 as genius level, then in a population of a million with a mean IQ of 100, one person in 741 will have an IQ of 145 or above, so there will be around 1350 people with such an IQ. But if the population's mean IQ is 95, just five points lower, only one person in 2331 will have a genius level IQ, and there will be just 429 potential geniuses in the population of a million. In a population of a million with a mean IQ of 90, there will be just 123 potential geniuses.
(Some technical details are in order. A high IQ [generally 125 or above] appears to be a necessary condition for genius-level achievement, but it is insufficient by itself. Those who produce feats of genius usually combine high intelligence with persistence, ambition, and often a single-minded focus on a task, and require an environment which allows them to acquire the knowledge and intellectual tools needed to apply their talent. But since a high IQ is a requirement, the mean IQ determines what fraction of the population are potential geniuses; other factors, such as the society's educational institutions, resources such as libraries, and wealth which allows some people to concentrate on intellectual endeavours instead of manual labour, contribute to how many actual works of genius will be produced. The mean IQ of most Western industrial nations is around 100, and the standard deviation of IQ is normalised to be 15. Using this information you can perform calculations such as those in the previous paragraph with Fourmilab's z Score Calculator, as explained in my Introduction to Probability and Statistics.)
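The genius-fraction figures quoted in item 7 fall directly out of the normal distribution's upper tail. A minimal Python sketch, using only the standard library (the complementary error function gives the tail area):

```python
from math import erfc, sqrt

def fraction_at_or_above(threshold, mean, sd=15.0):
    """Fraction of a normal population with IQ >= threshold:
    the upper-tail probability at z = (threshold - mean) / sd."""
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2.0))

for mean in (100, 95, 90):
    p = fraction_at_or_above(145, mean)
    print(f"mean IQ {mean}: 1 in {1 / p:,.0f}, "
          f"about {p * 1_000_000:,.0f} per million")
```

Running it recovers the one-in-741, one-in-2331, and 123-per-million figures cited above.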
Of the pillars of the argument listed above, items 1 through 3 are non-controversial except by those who deny the existence of general intelligence entirely or the ability of IQ tests to
measure it. The authors present the large body of highly
persuasive evidence in favour of those items in a form
accessible to the non-specialist. If you reject that evidence,
then you needn't consider the rest of the argument.
Item 4, the assertion that wealthier families had more children
survive to adulthood, is substantiated by a variety of research,
much of it done in England, where recorded wills and church
records of baptisms and deaths provide centuries of demographic
data. One study, for example, examining wills filed between
1585 and 1638 in Suffolk and Essex found that the richer half of
estates (determined by the bequests in the wills) had almost
twice as many children named in wills compared to the poorer
half. An investigation of records in Norfolk covering the years
1500 to 1630 found an average of four children for middle class
families as opposed to two for the lower class. Another,
covering Saxony in Germany between 1547 and 1671, found the
middle class had an average of 3.4 children who survived to
become married, while the working class had just 1.6. This differential fertility, in conjunction with item 5 (the known correlation between intelligence and social success), makes it plausible that a process of selection for intelligence was going on, and probably had been for centuries. (Records are sparse before the 17th century, so detailed research on earlier periods is difficult.)
Another form of selection got underway as the Middle Ages gave way to the early modern period around the year 1500 in Europe. While in
medieval times criminals were rarely executed due to
opposition by the Church, by the early modern era almost all
felonies received the death penalty. This had the effect of
“culling the herd” of its most violent members who,
being predominantly young, male, and of low intelligence, would
often be removed from the breeding population before fathering
any children. To the extent that the propensity to violent
crime is heritable (which seems plausible, as almost all human
characteristics are heritable to one degree or another), this
would have “domesticated” the European human
population and contributed to the well-documented dramatic
drop in the murder rate in this period. It would have also
selected out those of low intelligence, who are prone to
violent crime. Further, in England, there was a provision called “Benefit of Clergy”, under which those who could demonstrate literacy could escape the hangman. This was another selection for intelligence.
If intelligence was gradually increasing in Europe from the Middle Ages through the time of the Industrial Revolution,
can we find evidence of this in history? Obviously, we don't
have IQ tests from that period, but there are other suggestive
indications. Intelligent people have lower time preference: they are willing to defer immediate gratification for a reward in the future. The rate of interest on borrowed money is a measure of a society's overall time preference. Data covering the period from 1150 through 1950 show that interest rates declined over the entire span, from over 10% in the year 1200 to around 5% in the 1800s. This is consistent with an increase in intelligence.
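To see why an interest rate measures time preference, consider what a lender forgoes: the prevailing rate sets how much a payment a year hence is worth today. A quick illustration using the round figures quoted above:

```python
# Present value of 100 units due one year hence, discounted at
# the rates cited above for the year 1200 (~10%) and the 1800s (~5%).
for era, rate in (("c. 1200", 0.10), ("1800s", 0.05)):
    pv = 100 / (1 + rate)
    print(f"{era}: at {rate:.0%}, 100 next year is worth {pv:.2f} today")
```

The lower rate assigns the future payment a higher present value: the mark of a society more willing to defer gratification, that is, one with lower time preference.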
Literacy correlates with intelligence, and records from marriage
registers and court documents show continually growing literacy
from 1580 through 1920. In the latter part of this period, the
introduction of government schools contributed to much of the
increase, but in early years it may reflect growing intelligence.
A population with growing intelligence should produce more
geniuses who make contributions which are recorded in history.
In a 2005 study, American physicist Jonathan Huebner compiled
a list of 8,583 significant events in the history of science and
technology from the Stone Age through 2004. He found that,
after adjusting for the total population of the time, the rate
of innovation per capita had quadrupled between 1450 and 1870.
Independently, Charles Murray's 2003 book
Human Accomplishment found
that the rate of innovation, and of the appearance of the figures responsible for it, increased from the Middle Ages through the 1870s.
The authors contend that a growing population with increasing
mean intelligence eventually reached a critical mass which
led to the Industrial Revolution, due to a sufficiently
large number of genius intellects alive at the same time and
an intelligent workforce who could perform the jobs needed
to build and operate the new machines. This created
unprecedented prosperity and dramatically increased the standard
of living throughout the society.
And then an interesting thing happened. It's called the
“demographic
transition”, and it's been observed in country after
country as it develops from a rural, agrarian economy to an
urban, industrial society. Pre-industrial societies are
characterised by a high birth rate, a high rate of infant and
childhood mortality, and a stable or very slowly growing
population. Families have many children in the hope of having
a few survive to adulthood to care for them in old age and
pass on their parents' genes. It is in this phase that the intense
selection pressure obtains: the better-off and presumably more
intelligent parents will have more children survive to adulthood.
Once industrialisation begins, it is usually accompanied by
public health measures, better sanitation, improved access to
medical care, and the introduction of innovations such as
vaccination, antiseptics, and surgery with anæsthesia.
This results in a dramatic fall in the mortality rate for the young, and hence in larger families and an immediate bulge in the population. As social welfare benefits are extended to the poor through employers, charity, or government services, this occurs more broadly across social classes, reducing the disparity in family sizes between the rich and the poor.
Eventually, parents begin to see the advantage of smaller
families now that they can be confident their offspring have
a high probability of surviving to adulthood. This is
particularly the case for the better-off, as they realise their progeny will gain an advantage from splitting the inheritance fewer ways and from the better education a family can afford for fewer children. This results in
a decline in the birth rate, which eventually reaches the
replacement rate (or below), where it comes into line with
the death rate.
But what does this do to the selection for intelligence from
which humans have been benefitting for centuries? It
ends it, and eventually puts it into reverse. In country
after country, the better educated and well-off (both correlates
of intelligence) have fewer children than the less intelligent.
This is easy to understand: in the prime child-bearing years
they tend to be occupied with their education and starting a
career. They marry later, have children (if at all) at an older
age, and due to the female biological clock, have fewer kids even
if they desire more. They also use contraception to plan their
families and tend to defer having children until the “right
time”, which sometimes never comes.
Meanwhile, the less intelligent have more children. In the modern welfare state they are often clients on the public dole; they have less impulse control and higher time preference, and when they use contraception they often do so improperly, resulting in unplanned pregnancies. They start earlier, don't bother with getting married (as the stigma of single motherhood has largely been eliminated), and rely upon the state to feed, house, educate, and eventually imprison their progeny. This sad reality was hilariously mocked in the introduction to the 2006 film Idiocracy.
While this makes for a funny movie, if the population is really
getting dumber, it will have profound implications for the future.
There will not just be a falling general level of intelligence but far
fewer of the genius-level intellects who drive innovation in
science, the arts, and the economy. Further, societies in which this decline set in well before it did in others which industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe: I'm talking about China, Korea, and [to a lesser extent] Japan.)
If you've followed the intelligence issue, about now you probably have steam coming out of your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, on standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points, almost a full standard deviation, between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) shows how absurd such a supposition is.
What's going on, then? The authors join James Flynn in concluding
that what we're seeing is an increase in the population's
proficiency in taking IQ tests, not an actual increase in
general intelligence (g). Over time, children are exposed
to more and more standardised tests and tasks which require the
skills tested by IQ tests and, if practice doesn't make perfect, it
makes better, and with more exposure to media of all kinds, skills
of memorisation, manipulation of symbols, and spatial perception will
increase. These are correlates of g which IQ tests measure, but the gains may be in those specific skills rather than in g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening.
Norway, Sweden, Denmark, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database of men in these countries, starting in 1950 and updated yearly. What is seen is an increase in IQ, as expected from the Flynn effect, from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 has been 0.38 points per decade, while in Denmark it has been 2.7 points per decade. Similar declines have been seen in Britain, France, the
Netherlands, and Australia. (Note that this decline may be due to
causes other than decreasing intelligence of the original population.
Immigration from lower-IQ countries will also contribute to decreases
in the mean score of the cohorts tested. But the consequences for
countries with falling IQ may be the same regardless of the cause.)
There are other correlates of general intelligence which have
little of the cultural bias of which some accuse IQ tests. They
are largely based upon the assumption that g is
something akin to the CPU clock speed of a computer: the ability
of the brain to perform basic tasks. These include simple
reaction time (how quickly one can push a button, for example, when a light comes on), the ability to discriminate among
similar colours, the use of uncommon words, and the ability to
repeat a sequence of digits in reverse order. All of these
measures (albeit often from very sparse data sets) are
consistent with increasing general intelligence in Europe up
to some time in the 19th century and a decline ever since.
If this is true, what does it mean for our civilisation? The
authors contend that there is an inevitable cycle in the rise
and fall of civilisations which has been seen many times in
history. A society starts out with a low standard of living,
high birth and death rates, and strong selection for intelligence.
This increases the mean general intelligence of the population and,
much faster, the fraction of genius level intellects. These
contribute to a growth in the standard of living in the society,
better conditions for the poor, and eventually a degree of
prosperity which reduces the infant and childhood death rate.
Eventually, the birth rate falls, starting with the more intelligent
and better off portion of the population. The birth rate falls to
or below replacement, with a higher fraction of births now from
less intelligent parents. Mean IQ and the fraction of geniuses
falls, the society falls into stagnation and decline, and usually ends
up being conquered or supplanted by a younger civilisation still on
the rising part of the intelligence curve. They argue that this
pattern can be seen in the histories of Rome, Islamic civilisation,
and classical China.
And for the West—are we doomed to idiocracy? Well, there may be
some possible escapes or technological fixes. We may discover the
collection of genes responsible for the hereditary transmission of
intelligence and develop interventions to select for them in the
population. (Think this trips the “ick factor”? What
parent would look askance at a pill which gave their child an
IQ boost of 15 points? What government wouldn't make these pills
available to all their citizens purely on the basis of
international competitiveness?) We may send some tiny fraction of
our population to Mars, space habitats, or other challenging
environments where they will be re-subjected to intense selection
for intelligence and breed a successor society (doubtless very different
from our own) which will start again at the beginning of the
eternal cycle. We may have a religious revival (they happen when
you least expect them), which puts an end to the cult of
pessimism, decline, and death and restores belief in large
families and, with it, the selection for intelligence. (Some may look to Joseph Smith as a prototype of this, but so far the impact of his religion has been marginal outside the areas where believers congregate.) Perhaps some of our increasingly sparse population of
geniuses will figure out
artificial general intelligence
and our mind children will slip the surly bonds of biology and
its tedious eternal return to stupidity.
We might embrace the decline but vow to preserve
everything we've learned as a bequest to our successors: stored in
multiple locations in ways the next Enlightenment centuries hence
can build upon, just as scholars in the Renaissance rediscovered the
works of the ancient Greeks and Romans.
Or, maybe we won't. In which case, “Winter has come and it's
only going to get colder. Wrap up warm.”
Here is a James Delingpole interview of the authors and discussion of the book.
People say sometimes that Beauty is only superficial. That may be so. But at least it is not as superficial as Thought. To me, Beauty is the wonder of wonders. It is only shallow people who do not judge by appearances.
(Oscar Wilde, The Picture of Dorian Gray)
From childhood, however, we have been exhorted not to judge people by their appearances. In Skin in the Game (August 2019), Nassim Nicholas Taleb advises choosing the surgeon who “doesn't look like a surgeon”, because their success is more likely due to competence than to first impressions.
Despite this, physiognomy, the assessment of a person's characteristics from their appearance, is as natural to humans as breathing and as old as our species. Thinkers and writers from Aristotle through the great novelists of the 19th century believed that an individual's character was reflected in, and could be inferred from, their appearance, and crafted and described their characters accordingly. Jules Verne would often spend a paragraph describing the appearance of his characters and what it implied for their behaviour.
Is physiognomy all nonsense, a pseudoscience like phrenology, which purported to predict mental characteristics by measuring bumps on the skull, claimed to indicate the development of “cerebral organs” with specific functions? Or,
is there something to it, after all? Humans are a social
species and, as such, have evolved to be exquisitely sensitive
to signals sent by others of their kind, conveyed through subtle
means such as a tone of voice, facial expression, or posture.
Might we also be able to perceive and interpret messages which
indicate properties such as honesty, intelligence, courage,
impulsiveness, criminality, diligence, and more? Such an ability, if it exists, would be advantageous in interacting with others and, by contributing to success in reproducing and raising offspring, would be selected for by evolution.
In this short book (or long essay: the text is just 85 pages), the author examines the evidence and concludes that
there are legitimate correlations between appearance and
behaviour, and that human instincts are picking up genuine
signals which are useful in interacting with others. This
seems perfectly plausible: the development of the human body
and face are controlled by the genetic inheritance of the
individual and modulated through the effects of hormones, and
it is well-established that both genetics and hormones are
correlated with a variety of behavioural traits.
Let's consider a reasonably straightforward example. A
study published in 2008 found a statistically significant correlation between facial width (the ratio of the cheekbone-to-cheekbone distance to the distance from brow to upper lip) and aggressiveness (measured by the number of penalty minutes received) among a sample of 90 ice hockey
players. Now, a wide face is also known to correlate
with a high testosterone level in males, and testosterone
correlates with aggressiveness and selfishness. So, it
shouldn't be surprising to find the wide face morphology
correlated with the consequences of high-testosterone
behaviour.
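The statistic behind such a study is an ordinary Pearson correlation between the facial width-to-height ratio and penalty minutes. A minimal sketch with fabricated illustrative numbers (not the study's data):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical measurements: facial width-to-height ratio
# (cheekbone-to-cheekbone / brow-to-upper-lip) and penalty
# minutes per season for six imaginary players.
fwhr = [1.75, 1.83, 1.90, 1.98, 2.05, 2.12]
pim  = [30, 42, 55, 61, 80, 95]
print(f"r = {pearson_r(fwhr, pim):.2f}")
```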
In fact, testosterone and other hormone levels play a
substantial part in many of the correlations between appearance
and behaviour discussed by the author. Many people believe
they can identify, with reasonable reliability, homosexuals
just from their appearance: the term “gaydar”
has come into use for this ability. In 2017, researchers
trained an artificial intelligence program with a set of
photographs of individuals with known sexual orientations
and then tested the program on a set of more than 35,000
images. The program correctly identified the sexual orientation of men 81% of the time and of women 74% of the time.
Of course, appearance goes well beyond factors which are inherited or determined by hormones. Tattoos, body piercings, and other irreversible modifications of appearance correlate with high time preference, which correlates with low intelligence and the other characteristics of an r-selected lifestyle. Choices of clothing indicate an individual's
self-identification, although fashion trends change rapidly
and differ from region to region, so misinterpretation is a
risk.
The author surveys a wide variety of characteristics including
fat/thin body type, musculature, skin and hair, height,
face shape, breast size in women, baldness and beards in men,
eye spacing, tattoos, hair colour, facial symmetry,
handedness, and finger length ratio, and presents
citations to research, most published recently, supporting
correlations between these aspects of appearance and
behaviour. He cautions that while people may be good at sensing and interpreting these subtle signals among members of their own race, there are substantial and consistent differences between the races, so inferences drawn from one race's signals do not carry over to another, and members of one race are generally unable to read the signals of members of another.
One gets the sense (although less strongly here) that this is another field where advances in genetics and data science are piling up a mass of evidence which will roll over the stubborn defenders of the “blank slate” like a truth tsunami. And again, this is an area where people's instincts, honed by millennia of evolution, are still relied upon despite the scorn of “experts”. (So afraid were the authors of the Wikipedia page on physiognomy [retrieved 2019-12-16] of the “computer gaydar” paper mentioned above that they declined to cite the peer-reviewed paper in the Journal of Personality and Social Psychology, but instead linked to a BBC News piece which dismissed it as “dangerous” and “junk science”. Go on whistling, folks, as the wave draws near and begins to crest….)
Is the case for physiognomy definitively made? I think not, and
as I suspect the author would agree, there are many aspects of
appearance and a multitude of personality traits, some of which
may be significantly correlated and others not at all. Still,
there is evidence for some linkage, and it appears to be growing
as more work in the area (which is perilous to the careers of
those who dare investigate it) accumulates. The scientific
evidence, summarised here, seems to be, as so often happens,
confirming the instincts honed over hundreds of generations by
the inexorable process of evolution: you can form some
conclusions just by observing people, and this information is
useful in the competition which is life on Earth. Meanwhile, when choosing programmers for a project team, the candidate whose eyebrows almost meet their hairline, who shows up sporting a plastic baseball cap worn backward with the adjustment strap on the smallest peg, a scraggly soybeard, a pierced nose, and visible tattoos, isn't likely to be my pick. She's probably a WordPress developer.