Tuesday, April 9, 2019

Reading List: The Powers of the Earth

Corcoran, Travis J. I. The Powers of the Earth. New Hampshire: Morlock Publishing, 2017. ISBN 978-1-9733-1114-0.
Corcoran, Travis J. I. Causes of Separation. New Hampshire: Morlock Publishing, 2018. ISBN 978-1-9804-3744-4.
(Note: This novel is the first of an envisioned four-volume series titled Aristillus. It and the second book, Causes of Separation, published in May 2018, together tell a single story which reaches a decisive moment just as the first book ends. Unusually, this will be a review of both novels, taken as a whole. If you like this kind of story at all, there's no way you'll not immediately plunge into the second book after setting down the first.)

Around the year 2050, collectivists were firmly in power everywhere on Earth. Nations were subordinated to the United Nations, whose force of Peace Keepers (PKs) had absorbed all but elite special forces, and were known for being simultaneously brutal, corrupt, and incompetent. (Due to the equality laws, military units had to contain a quota of “Alternatively Abled Soldiers” whom other troops had to wheel into combat.) The United States still existed as a country but, after decades of rule by two factions of the Democrat party, Populist and Internationalist, was mired in stagnation, bureaucracy, crumbling infrastructure, and on the verge of bankruptcy. The U.S. President, Themba Johnson, a former talk show host who combined cluelessness, a volatile temper, and vulpine cunning when it came to manipulating public opinion, is confronted with all of these problems and is looking for a masterstroke to get beyond the next election.

Around 2050, the collectivists entered the inevitable end game their policies lead to everywhere they are tried. With the Bureau of Sustainable Research (BuSuR) suppressing new technologies in every field, and the Construction Jobs Preservation Act and Bureau of Industrial Planning banning anything which might increase productivity, a final grasp to loot the remaining seed corn resulted in the CEO Trials, aimed at the few remaining successful companies, with expropriation of their assets and imprisonment of their leaders. CEO Mike Martin manages to escape from prison and link up with renegade physicist Ponnala (“Ponzie”) Srinivas, inventor of an anti-gravity drive he doesn't want the slavers to control. Mike buys a rustbucket oceangoing cargo ship, equips it with the drive, an airtight compartment, and life support, and flees Earth with a cargo of tunnel boring machines and water to exile on the Moon, in the crater Aristillus in Mare Imbrium on the lunar near side where, fortuitously, the impact of a metal-rich asteroid millions of years ago enriched the sub-surface with metals rare in the Moon's crust.

Let me say a few words about the anti-gravity drive, which is very unusual and original, and whose properties play a significant role in the story. The drive works by coupling to the gravitational field of a massive body and then pushing against it, expending energy as it rises and gains gravitational potential energy. Momentum is conserved, as an equal and opposite force is exerted on the massive body against which it is pushing. The force vector is always along the line connecting the centre of mass of the massive body and the drive unit, directed away from the centre of mass. The force is proportional to the strength of the gravitational field in which the drive is operating, and hence stronger when pushing against a body like Earth as opposed to a less massive one like the Moon. The drive's force diminishes with distance from the massive body as its gravitational field falls off with the inverse square law, and hence the drive generates essentially no force when in empty space far from a gravitating body. When used to brake a descent toward a massive body, the drive converts gravitational potential energy into electricity like the regenerative braking system of an electric vehicle: energy which can be stored for use when later leaving the body.

Because the drive can only push outward radially, when used to, say, launch from the Earth to the Moon, it is much like Jules Verne's giant cannon—the launch must occur at the latitude and longitude on Earth where the Moon will be directly overhead at the time the ship arrives at the Moon. In practice, the converted ships also carried auxiliary chemical rockets and reaction control thrusters for trajectory corrections and precision maneuvering which could not be accomplished with the anti-gravity drive.

By 2064, the lunar settlement, called Aristillus by its inhabitants, was thriving, with more than a hundred thousand residents, and growing at almost twenty percent a year. (Well, nobody knew for sure, because from the start the outlook shared by the settlers was aligned with Mike Martin's anarcho-capitalist worldview. There was no government, no taxes, no ID cards, no business licenses, no regulations, no zoning [except covenants imposed by property owners on those who sub-leased property from them], no central bank, no paper money [an entrepreneur had found a vein of gold left by the ancient impactor and gone into business providing hard currency], no elections, no politicians, no forms to fill out, no police, and no army.) Some of these “features” of life on grey, regimented Earth were provided by private firms, while many of the others were found to be unnecessary altogether.

The community prospered as it grew. Like many frontier settlements, labour was in chronic short supply, and even augmented by robot rovers and machines (free of the yoke of BuSuR), there was work for anybody who wanted it and job offers awaiting new arrivals. A fleet of privately operated ships maintained a clandestine trade with Earth, bringing goods which couldn't yet be produced on the Moon, atmosphere, water from the oceans (in converted tanker ships), and new immigrants who had sold their Earthly goods and quit the slave planet. Waves of immigrants from blood-soaked Nigeria and chaotic China established their own communities and neighbourhoods in the ever-growing network of tunnels beneath Aristillus.

The Moon has not just become a refuge for humans. When BuSuR put its boot on the neck of technology, it ordered the shutdown of a project to genetically “uplift” dogs to human intelligence and beyond, creating “Dogs” (the capital letter denoting the uplift), and decreed that all existing Dogs be euthanised. Many were, but John (we never learn his last name), a former U.S. Special Forces operator, manages to rescue a colony of Dogs from one of the labs before the killers arrive and escapes with them to Aristillus, where they have set up the Den and pursue their own priorities, including role-playing games, software development, and trading on the betting markets. Also rescued by John was Gamma, the first Artificial General Intelligence to be created, whose intelligence is above the human level but not (yet, anyway) runaway, singularity-level transcendent. Gamma has established itself in its own facility in Sinus Lunicus on the other side of Mare Imbrium, and has little contact with the human or Dog settlers.

Inevitably, liberty produces prosperity, and prosperity eventually causes slavers to regard the free with envious eyes, and slowly and surely draw their plans against them.

This is the story of the first interplanetary conflict, and a rousing tale of liberty versus tyranny, frontier innovation against collectivised incompetence, and principles (there is even the intervention of a Vatican diplomat) confronting brutal expedience. There are delicious side-stories about the creation of fake news, scheming politicians, would-be politicians in a libertarian paradise, open source technology, treachery, redemption, and heroism. How do three distinct species (human, Dog, and AI) work together without a top-down structure or subordinating one to another? Can the lunar colony protect itself without becoming what its settlers left Earth to escape?

Woven into the story is a look at how a libertarian society works (and sometimes doesn't work) in practice. Aristillus is in no sense a utopia: it has plenty of rough edges and things to criticise. But people there are free, and they prefer it to the prison planet they escaped.

This is a wonderful, sprawling, action-packed story with interesting characters, complicated conflicts, and realistic treatment of what a small colony faces when confronted by a hostile planet of nine billion slaves. Think of this as Heinlein's The Moon is a Harsh Mistress done better. There are generous tips of the hat to Heinlein and other science fiction in the book, but this is a very different story with an entirely different outcome, and truer to the principles of individualism and liberty. I devoured these books and give them my highest recommendation. The Powers of the Earth won the 2018 Prometheus Award for best libertarian science fiction novel.

Posted at 15:07 Permalink

Sunday, April 7, 2019

Reading List: Connected: The Emergence of Global Consciousness

Nelson, Roger D. Connected: The Emergence of Global Consciousness. Princeton: ICRL Press, 2019. ISBN 978-1-936033-35-5.
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing and destined to eventually reach an Omega Point in which, just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, eventually individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.

Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact in ways other than physical (ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.

Such beliefs have been common over much of human history and in a wide variety of very diverse cultures around the globe, but in recent centuries these beliefs have been displaced by the view of mechanistic, reductionist science, which argues that the brain is just a kind of (phenomenally complicated) biological computer and that consciousness can be thought of as an emergent phenomenon which arises when the brain computer's software becomes sufficiently complex to be able to examine its own operation. From this perspective, consciousness is confined within the brain, cannot affect the outside world or the consciousness of others except by physical interactions initiated by motor neurons, and perceives the world only through sensory neurons. There is no “consciousness field”, and individual consciousness dies when the brain does.

But while this view is more in tune with the scientific outlook which spawned the technological revolution that has transformed the world and continues to accelerate, it has, so far, made essentially zero progress in understanding consciousness. Although we have built electronic computers which can perform mathematical calculations trillions of times faster than the human brain, and are on track to equal the storage capacity of that brain some time in the next decade or so, we still don't have the slightest idea how to program a computer to be conscious: to be self-aware and act out of a sense of free will (if free will, however defined, actually exists). So, if we adopt a properly scientific and sceptical view, we must conclude that the jury is still out on the question of consciousness. If we don't understand enough about it to program it into a computer, then we can't be entirely confident that it is something we could program into a computer, or that it is just some kind of software running on our brain-computer.

It looks like humans are, dare I say, programmed to believe in consciousness as a force not confined to the brain. Many cultures have developed shamanism, religions, philosophies, and practices which presume the existence of the following kinds of what Dean Radin calls Real Magic, and which I quote from my review of his book with that title.

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.

Starting in the 19th century, a small number of scientists undertook to investigate whether these phenomena could possibly be real, whether they could be demonstrated under controlled conditions, and what mechanism might explain these kinds of links between consciousness and will and the physical world. In 1882 the Society for Psychical Research was founded in London and continues to operate today, publishing three journals. Psychic research, now more commonly called parapsychology, continues to investigate the interaction of consciousness with the outside world through (unspecified) means other than the known senses, usually in laboratory settings where great care is taken to ensure no conventional transfer of information occurs and with elaborate safeguards against fraud, either by experimenters or test subjects. For a recent review of the state of parapsychology research, I recommend Dean Radin's excellent 2006 book, Entangled Minds.

Parapsychologists such as Radin argue that while phenomena such as telepathy, precognition, and psychokinesis are very weak effects, elusive, and impossible to produce reliably on demand, the statistical evidence for their existence from large numbers of laboratory experiments is overwhelming, with a vanishingly small probability that the observed results are due to chance. Indeed, the measured confidence levels and effect sizes of some categories of parapsychological experiments exceed those of medical clinical trials such as those which resulted in the recommendation of routine aspirin administration to reduce the risk of heart disease in older males.

For more than a quarter of a century, an important centre of parapsychology research was the Princeton Engineering Anomalies Research (PEAR) laboratory, established in 1979 by Princeton University's Dean of Engineering, Robert G. Jahn. (The lab closed in 2007 with Prof. Jahn's retirement, and has now been incorporated into the International Consciousness Research Laboratories, which is the publisher of the present book.) An important part of PEAR's research was with electronic random event generators (REGs) connected to computers in experiments where a subject (or “operator”, in PEAR terminology) would try to influence the generator to produce an excess of one or zero bits. In a large series of experiments [PDF] run over a period of twelve years with multiple operators, it was reported that an influence in the direction of the operator's intention was seen, with a highly significant chance probability of one in a trillion. The effect size was minuscule, with around one bit in ten thousand flipping in the direction of the operator's stated goal.

If one operator can produce a tiny effect on the random data, what if many people were acting together, not necessarily with active intention, but with their consciousnesses focused on a single thing, for example at a sporting event, musical concert, or religious ceremony? The miniaturisation of electronics and computers eventually made it possible to build a portable REG and computer which could be taken into the field. This led to the FieldREG experiments in which this portable unit was taken to a variety of places and events to monitor its behaviour. The results were suggestive of an effect, but the data set was far too small to be conclusive.

Mindsong random event generator

In 1998, Roger D. Nelson, the author of this book, realised that the rapid development and worldwide deployment of the Internet made it possible to expand the FieldREG concept to a global scale. Random event generators based upon quantum effects (usually shot noise from tunnelling across a back-biased Zener diode, or thermal noise in a resistor) had been scaled down to small, inexpensive devices which could be attached to personal computers via an RS-232 serial port. With more and more people gaining access to the Internet (originally mostly via dial-up to commercial Internet Service Providers, then increasingly via persistent broadband connections such as ADSL service over telephone wires or a cable television connection), it might be possible to deploy a network of random event generators at locations all around the world, each of which would constantly collect timestamped data which would be transmitted to a central server, collected there, and made available to researchers for analysis by whatever means they chose to apply.

As Roger Nelson discussed the project with his son Greg (who would go on to be the principal software developer for the project), Greg suggested that what was proposed was essentially an electroencephalogram (EEG) for the hypothetical emerging global mind, an “ElectroGaiaGram” or EGG. Thus was born the “EGG Project” or, as it is now formally called, the Global Consciousness Project. Just as the many probes of an EEG provide a (crude) view into the operation of a single brain, perhaps the wide-flung, always-on network of REGs would pick up evidence of coherence when a large number of the world's minds were focused on a single event or idea. Once the EGG project was named, terminology followed naturally: the individual hosts running the random event generators would be “eggs” and the central data archiving server the “basket”.

In April 1998, Roger Nelson released the original proposal for the project and shortly thereafter Greg Nelson began development of the egg and basket software. I became involved in the project in mid-summer 1998 and contributed code to the egg and basket software, principally to allow it to be portable to other variants of Unix systems (it was originally developed on Linux) and machines with different byte order than the Intel processors on which it ran, and also to reduce the resource requirements on the egg host, making it easier to run on a non-dedicated machine. I also contributed programs for the basket server to assemble daily data summaries from the raw data collected by the basket and to produce a real-time network status report. Evolved versions of these programs remain in use today, more than two decades later. On August 2nd, 1998, I began to run the second egg in the network, originally on a Sun workstation running Solaris; this was the first non-Linux, non-Intel, big-endian egg host in the network. A few days later, I brought up the fourth egg, running on a Sun server in the Hall of the Servers one floor below the second egg; this used a different kind of REG, but was otherwise identical. Both of these eggs have been in continuous operation from 1998 to the present (albeit with brief outages due to power failures, machine crashes, and other assorted disasters over the years), and have migrated from machine to machine over time. The second egg is now connected to a Raspberry Pi running Linux, while the fourth is now hosted on a Dell Intel-based server also running Linux, which was the first egg host to run on a 64-bit machine in native mode.

Here is precisely how the network measures deviation from the expectation for genuinely random data. The egg hosts all run a Network Time Protocol (NTP) client to provide accurate synchronisation with Internet time server hosts which are ultimately synchronised to atomic clocks or GPS. At the start of every second a total of 200 bits are read from the random event generator. Since all the existing generators provide eight bits of random data transmitted as bytes on a 9600 baud serial port, this involves waiting until the start of the second, reading 25 bytes from the serial port (first flushing any potentially buffered data), then breaking the eight bits out of each byte of data. A precision timing loop guarantees that the sampling starts at the beginning of the second-long interval to the accuracy of the computer's clock.

This process produces 200 random bits. These bits, one or zero, are summed to produce a “sample” which counts the number of one bits for that second. This sample is stored in a buffer on the egg host, along with a timestamp (in Unix time() format), which indicates when it was taken.
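The per-second sampling just described can be sketched in a few lines of Python (the actual egg software is written in C; the names here are illustrative only):

```python
import os
import time

def bytes_to_sample(raw):
    """Unpack 25 bytes (200 bits) of REG output and count the one bits."""
    assert len(raw) == 25, "one second of data is 25 bytes = 200 bits"
    sample = 0
    for byte in raw:
        for bit in range(8):
            sample += (byte >> bit) & 1
    return sample

# Simulate one second of generator output instead of reading a serial port.
raw = os.urandom(25)
record = (int(time.time()), bytes_to_sample(raw))  # (Unix timestamp, sample)
```

For genuinely random bits the sample clusters around 100, the mean of a binomial distribution with n = 200 and p = 0.5.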

Buffers of completed samples are archived in files on the egg host's file system. Periodically, the basket host will contact the egg host over the Internet and request any samples collected after the last packet it received from the egg host. The egg will then transmit any newer buffers it has filled to the basket. All communications are performed over the stateless UDP Internet protocol, and the design of the basket request and egg reply protocol is robust against loss of packets or packets being received out of order.

(This data transfer protocol may seem odd, but recall that the network was designed more than twenty years ago when many people, especially those outside large universities and companies, had dial-up Internet access. The architecture would allow a dial-up egg to collect data continuously and then, when it happened to be connected to the Internet, respond to a poll from the basket and transmit its accumulated data during the time it was connected. It also makes the network immune to random outages in Internet connectivity. Over two decades of operation, we have had exactly zero problems with Internet outages causing loss of data.)

When a buffer from an egg host is received by the basket, it is stored in a database directory for that egg. The buffer contains a time stamp identifying the second at which each sample within it was collected. All times are stored in Universal Time (UTC), so no correction for time zones or summer and winter time is required.

This is the entire collection process of the network. The basket host, which was originally located at Princeton University and now is on a server at global-mind.org, only stores buffers in the database. Buffers, once stored, are never modified by any other program. Bad data, usually long strings of zeroes or ones produced when a hardware random event generator fails electrically, are identified by a “sanity check” program and then manually added to a “rotten egg” database which causes these sequences to be ignored by analysis programs. The random event generators are very simple and rarely fail, so this is a very unusual circumstance.

The raw database format is difficult for analysis programs to process, so every day an automated program (which I wrote) is run which reads the basket database, extracts every sample collected for the previous 24 hour period (or any desired 24 hour window in the history of the project), and creates a day summary file with a record for every second in the day with a column for the samples from each egg which reported that day. Missing data (eggs which did not report for that second) is indicated by a blank in that column. The data are encoded in CSV format which is easy to load into a spreadsheet or read with a program. Because some eggs may not report immediately due to Internet outages or other problems, the summary data report is re-generated two days later to capture late-arriving data. You can request custom data reports for your own analysis from the Custom Data Request page. If you are interested in doing your own exploratory analysis of the Global Consciousness Project data set, you may find my EGGSHELL C++ libraries useful.
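Here is a minimal sketch of reading such a day summary file. The exact header and column layout of the real files is an assumption on my part; see the Custom Data Request page for the authoritative format.

```python
import csv
import io

# A fabricated two-second fragment: a timestamp column, then one column
# per egg; a blank field means the egg did not report that second.
summary = "time,egg1,egg4\n915148800,103,\n915148801,97,101\n"

samples = {}  # egg name -> list of (timestamp, sample) pairs
for row in csv.DictReader(io.StringIO(summary)):
    t = int(row["time"])
    for egg, value in row.items():
        if egg != "time" and value != "":
            samples.setdefault(egg, []).append((t, int(value)))
```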

The analysis performed by the Project proceeds from these summary files as follows.

First, we observe that each sample (xi) from egg i consists of 200 bits with an expected equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).

Then, for each sample, we can compute its Z-score as Zi = (xi − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.

It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

    Z = (Z₁ + Z₂ + ⋯ + Zₖ) / √k

over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance.
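In Python, the per-sample Z-scores and their Stouffer combination for one second might be sketched as follows (a simple illustration, not the project's analysis code):

```python
import math

MU = 100.0               # expected mean of a 200-bit sample
SIGMA = math.sqrt(50.0)  # standard deviation, about 7.071

def stouffer_z(samples):
    """Combine the samples from the k eggs reporting in one second."""
    zs = [(x - MU) / SIGMA for x in samples]
    return sum(zs) / math.sqrt(len(zs))

# Three eggs reporting 103, 97, and 101 one bits out of 200:
z = stouffer_z([103, 97, 101])
```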

Squaring this composite Z-score over all k eggs gives a chi-squared distributed value, V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and the number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for the chi-squared summing of Z-scores across multiple seconds.
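Using SciPy's chi-squared distribution, the probability for a multi-second window could be computed like this (a sketch, not the project's analysis code):

```python
from scipy.stats import chi2

def window_probability(z_scores):
    """Chance probability of the deviation over a window of seconds,
    given the composite (Stouffer) Z-score for each second."""
    v = sum(z * z for z in z_scores)     # chi-squared sum, one df per second
    return chi2.sf(v, df=len(z_scores))  # P(X >= v) under pure randomness
```

For a single second, chi2.sf(z², 1) is simply the two-sided normal-distribution probability of observing a deviation as large as z.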

The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores, less their expectation of one, are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.

Cumulative deviation: State of the Union 2012

What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)?
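The two curves of such a plot can be sketched as follows: the running sum of squared Z-scores less their expectation of one (so that random data wanders around zero), and the P = 0.05 envelope from the chi-squared distribution. This is an illustration, not the project's plotting code.

```python
from scipy.stats import chi2

def cumulative_deviation(z_scores):
    """Running sum of Z^2 - 1; hovers near zero for random data."""
    total, curve = 0.0, []
    for z in z_scores:
        total += z * z - 1.0  # the expectation of each squared Z-score is 1
        curve.append(total)
    return curve

def envelope_p05(n_seconds):
    """P = 0.05 envelope: after n seconds the sum of Z^2 is chi-squared
    with n degrees of freedom, so the boundary is its 95th percentile
    less the expectation n."""
    return [chi2.ppf(0.95, df=n) - n for n in range(1, n_seconds + 1)]
```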

Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.

Cumulative deviation: Gadaffi killed, 2011-10-20

Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111.

What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I've shown you here, but in the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes.)

The significance of the entire ensemble of tests can be computed from the network results from the 500 formal predictions in the Hypothesis Registry and the network results for the periods where a non-random effect was predicted. To compute this effect, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.

Cumulative deviation: GCP 1998 through 2015

Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the results observed were due to chance with a probability of less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. But could it be expected in truly random data? To check this, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.

Cumulative deviation: GCP results versus pseudorandom simulations

Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.

Cumulative deviation of chi-square: terrorist attacks 2001-09-11

The final probability was 0.028, which is equivalent to an odds ratio of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.

Cumulative deviation of Z-score: terrorist attacks 2001-09-11

Had this form of analysis and time window been chosen a priori, the result would have had a chance probability of 0.000075, less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.

Cumulative deviation of chi-square: seven day window around 2001-09-11

Another analysis looks at a 20 hour period centred on the attacks and smooths the Z-scores by averaging them within a one hour sliding window, then squares the average and converts to odds against chance.
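That smoothing procedure can be sketched as follows, assuming evenly spaced Z scores and a window length expressed in intervals. The normalisation of the window mean by the square root of the window length (so that the squared result is a chi-square deviate) is my assumption; the original analysis may differ in detail:

```python
from math import erfc, sqrt

def smoothed_odds(z_scores, window):
    """Slide a window over per-interval Z scores, form the combined
    (Stouffer) Z of each window, square it, and convert the resulting
    chi-square tail probability to odds against chance.  A simplified
    sketch of the analysis described above, not the original code."""
    odds = []
    for i in range(len(z_scores) - window + 1):
        w = z_scores[i:i + window]
        z = sum(w) / sqrt(window)       # combined Z for the window
        p = erfc(abs(z) / sqrt(2.0))    # P(chi-square, 1 df, exceeds z**2)
        odds.append(1.0 / p)
    return odds

# Pure noise hovers around even odds:
print(smoothed_odds([0.0] * 4, 2))  # [1.0, 1.0, 1.0]
```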

Odds: twenty hour window around 2001-09-11, one hour smoothing

Dean Radin performed an independent analysis, binning the Z-score data into five minute intervals over the period from September 6 to 13 and then calculating the odds against each bin's result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each zero on the X axis marking midnight of the corresponding day.

Binned odds: 2001-09-06 to 2001-09-13

The following is the result when the actual GCP data from September 2001 is replaced with pseudorandom data for the same period.

Binned odds: pseudorandom data 2001-09-06 to 2001-09-13

So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.

If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere.

The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.

A Kindle edition is available.

All graphs in this article are as posted on the Global Consciousness Project Web site.

Posted at 21:10 Permalink

Wednesday, February 6, 2019

Your Sky and Solar System Live Updates

I have posted an overhaul of the Web pages supporting Your Sky and Solar System Live. The Your Sky Object Catalogues for asteroids by name, asteroids by number, and periodic comets now include links both to show the current position of the object in the sky in Your Sky and, for objects in non-hyperbolic orbits, to plot the orbit in Solar System Live, automatically selecting a plot of the inner or full solar system depending upon the semi-major axis of the object's orbit.

The Object Catalogue files have been upgraded in style and typography from the 1990s to the eve of the Roaring Twenties, and a common CSS file defines the style for all files. The automatically-generated catalogues for asteroids and comets are all now XHTML 1.0 Strict (some of the other catalogues remain Transitional); all have passed validation. A new logo was developed which is compatible with a white background and used in all of the pages. All static GIF files in the Your Sky document tree have been converted to PNG. Information in the Object Catalogue planets page for Pluto has been updated to reflect data from the New Horizons fly-by.

All of the request pages for Your Sky maps which contain the latitude and longitude of the observer's site now use a free geolocation server to guess the requester's location from their IP address. (This is dodgy, but even when it falls on its face, it's usually better than the alternative of simply filling in Fourmilab's co-ordinates until the user enters something else.) The main Your Sky pages are now all XHTML 1.0 Strict. (Some of the help file pages remain Transitional.)

Posted at 23:16 Permalink

Saturday, February 2, 2019

Reading List: At Our Wits' End

Dutton, Edward and Michael A. Woodley of Menie. At Our Wits' End. Exeter, UK: Imprint Academic, 2018. ISBN 978-1-84540-985-2.
During the Great Depression, the Empire State Building was built, from the beginning of foundation excavation to official opening, in 410 days (less than 14 months). After the destruction of the World Trade Center in New York on September 11, 2001, the design and construction of its replacement, the new One World Trade Center, were completed on November 3, 2014, 4801 days (160 months) later.

In the 1960s, from U.S. president Kennedy's proposal of a manned lunar mission to the landing of Apollo 11 on the Moon, 2978 days (almost 100 months) elapsed. In January, 2004, U.S. president Bush announced the “Vision for Space Exploration”, aimed at a human return to the lunar surface by 2020. After a comical series of studies, revisions, cancellations, de-scopings, redesigns, schedule slips, and cost overruns, its successor now plans to launch a lunar flyby mission (not even a lunar orbit like Apollo 8) in June 2022, 224 months later. A lunar landing is planned for no sooner than 2028, almost 300 months after the “vision”, and almost nobody believes that date (the landing craft design has not yet begun, and there is no funding for it in the budget).

Wherever you look: junk science, universities corrupted with bogus “studies” departments, politicians peddling discredited nostrums a moment's critical thinking reveals to be folly, an economy built upon an ever-increasing tower of debt that nobody really believes will ever be paid off, and a dearth of major, genuine innovations (as opposed to incremental refinement of existing technologies, as has driven the computing, communications, and information technology industries) in every field: science, technology, public policy, and the arts. It often seems like the world is getting dumber. What if it really is?

That is the thesis explored by this insightful book, which is packed with enough “hate facts” to detonate the head of any bien pensant academic or politician. I define a “hate fact” as something which is indisputably true and well-documented by evidence in the literature, which has not been contradicted, but whose citation is considered “hateful”, can unleash outrage mobs upon anyone so foolish as to utter it in public, and is a career-limiting move for those employed in Social Justice Warrior-converged organisations. (An example of a hate fact, unrelated to the topic of this book, is the FBI violent crime statistics broken down by the race of the criminal and victim. Nobody disputes the accuracy of this information or the methodology by which it is collected, but woe betide anyone so foolish as to cite the data or draw the obvious conclusions from it.)

In April 2004 I made my own foray into the question of declining intelligence in “Global IQ: 1950–2050” in which I combined estimates of the mean IQ of countries with census data and forecasts of population growth to estimate global mean IQ for a century starting at 1950. Assuming the mean IQ of countries remains constant (which is optimistic, since part of the population growth in high IQ countries with low fertility rates is due to migration from countries with lower IQ), I found that global mean IQ, which was 91.64 for a population of 2.55 billion in 1950, declined to 89.20 for the 6.07 billion alive in 2000, and was expected to fall to 86.32 for the 9.06 billion population forecast for 2050. This is mostly due to the explosive population growth forecast for Sub-Saharan Africa, where many of the populations with low IQ reside.
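The global figure in that analysis is simply a population-weighted average of national mean IQs. A minimal sketch of the arithmetic, using purely illustrative numbers rather than the actual dataset:

```python
def weighted_mean_iq(countries):
    """countries: list of (population, mean_iq) pairs.
    Returns the population-weighted mean IQ."""
    total = sum(pop for pop, _ in countries)
    return sum(pop * iq for pop, iq in countries) / total

# Illustrative (hypothetical) figures only, not the 2004 study's data:
sample = [(1.4e9, 105), (1.3e9, 82), (0.33e9, 98)]
print(round(weighted_mean_iq(sample), 2))  # 94.37
```

Since the weights are populations, differential population growth in lower-IQ regions pulls the global mean down even when every national mean stays constant, which is the mechanism behind the projected decline.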

U.N. World Population Prospects: 2017 Revision

This is a particularly dismaying prospect, because there is no evidence for sustained consensual self-government in nations with a mean IQ less than 90.

But while I was examining global trends assuming national IQ remains constant, in the present book the authors explore the provocative question of whether the population of today's developed nations is becoming dumber due to the inexorable action of natural selection on whatever genes determine intelligence. The argument is relatively simple, but based upon a number of pillars, each of which is a “hate fact”, although non-controversial among those who study these matters in detail.

  1. There is a factor, “general intelligence” or g, which measures the ability to solve a wide variety of mental problems, and this factor, measured by IQ tests, is largely stable across an individual's life.
  2. Intelligence, as measured by IQ tests, is, like height, in part heritable. The heritability of IQ is estimated at around 80%, meaning that about 80% of the variation in IQ within a population is associated with genetic differences, and 20% with other factors.
  3. IQ correlates positively with factors contributing to success in society. The correlation with performance in education is 0.7, with highest educational level completed 0.5, and with salary 0.3.
  4. In Europe, between 1400 and around 1850, the wealthier half of the population had more children who survived to adulthood than the poorer half.
  5. Because IQ correlates with social success, that portion of the population which was more intelligent produced more offspring.
  6. Just as in selective breeding of animals by selecting those with a desired trait for mating, this resulted in a population whose average IQ increased (slowly) from generation to generation over this half-millennium.

The gradually rising IQ of the population resulted in a growing standard of living as knowledge and inventions accumulated due to the efforts of those with greater intelligence over time. In particular, even a relatively small increase in the mean IQ of a population makes an enormous difference in the tiny fraction of people with “genius level” IQ who are responsible for many of the significant breakthroughs in all forms of human intellectual endeavour. If we consider an IQ of 145 as genius level, in a population of a million with a mean IQ of 100, one in 741 people will have an IQ of 145 or above, so there will be around 1350 people with such an IQ. But if the population's mean IQ is 95, just five points lower, only one in 2331 people will have a genius level IQ, and there will be just 429 potential geniuses in the population of a million. In a population of a million with a mean IQ of 90, there will be just 123 potential geniuses.
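These tail fractions follow directly from the normal distribution with a standard deviation of 15. A quick Python check (a sketch using the complementary error function, not Fourmilab's z Score Calculator):

```python
from math import erfc, sqrt

def fraction_above(iq, mean, sd=15):
    """Fraction of a normal(mean, sd) population at or above a given IQ."""
    return 0.5 * erfc((iq - mean) / (sd * sqrt(2.0)))

# Reproduce the genius-level (IQ >= 145) figures for a million people:
for mean in (100, 95, 90):
    f = fraction_above(145, mean)
    print(f"mean {mean}: one in {1 / f:.0f}, about {f * 1_000_000:.0f} per million")
```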

(Some technical details are in order. A high IQ [generally 125 or above] appears to be a necessary condition for genius-level achievement, but it is insufficient by itself. Those who produce feats of genius usually combine high intelligence with persistence, ambition, often a single-minded focus on a task, and usually require an environment which allows them to acquire the knowledge and intellectual tools required to apply their talent. But since a high IQ is a requirement, the mean IQ determines what fraction of the population are potential geniuses; other factors such as the society's educational institutions, resources such as libraries, and wealth which allows some people to concentrate on intellectual endeavours instead of manual labour, contribute to how many actual works of genius will be produced. The mean IQ of most Western industrial nations is around 100, and the standard deviation of IQ is normalised to be 15. Using this information you can perform calculations such as those in the previous paragraph using Fourmilab's z Score Calculator, as explained in my Introduction to Probability and Statistics.)

Of the pillars of the argument listed above, items 1 through 3 are noncontroversial except by those who deny the existence of general intelligence entirely or the ability of IQ tests to measure it. The authors present the large body of highly persuasive evidence in favour of those items in a form accessible to the non-specialist. If you reject that evidence, then you needn't consider the rest of the argument.

Item 4, the assertion that wealthier families had more children survive to adulthood, is substantiated by a variety of research, much of it done in England, where recorded wills and church records of baptisms and deaths provide centuries of demographic data. One study, for example, examining wills filed between 1585 and 1638 in Suffolk and Essex found that the richer half of estates (determined by the bequests in the wills) had almost twice as many children named in wills compared to the poorer half. An investigation of records in Norfolk covering the years 1500 to 1630 found an average of four children for middle class families as opposed to two for the lower class. Another, covering Saxony in Germany between 1547 and 1671, found the middle class had an average of 3.4 children who survived to become married, while the working class had just 1.6. This differential fertility seems, in conjunction with item 5, the known correlation between intelligence and social success, to make plausible that a process of selection for intelligence was going on, and probably had been for centuries. (Records are sparse before the 17th century, so detailed research for that period is difficult.)

Another form of selection got underway as the middle ages gave way to the early modern period around the year 1500 in Europe. While in medieval times criminals were rarely executed due to opposition by the Church, by the early modern era almost all felonies received the death penalty. This had the effect of “culling the herd” of its most violent members who, being predominantly young, male, and of low intelligence, would often be removed from the breeding population before fathering any children. To the extent that the propensity to violent crime is heritable (which seems plausible, as almost all human characteristics are heritable to one degree or another), this would have “domesticated” the European human population and contributed to the well-documented dramatic drop in the murder rate in this period. It would have also selected out those of low intelligence, who are prone to violent crime. Further, in England, there was a provision called “Benefit of Clergy” where those who could demonstrate literacy could escape the hangman. This was another selection for intelligence.

If intelligence was gradually increasing in Europe from the middle ages through the time of the Industrial Revolution, can we find evidence of this in history? Obviously, we don't have IQ tests from that period, but there are other suggestive indications. Intelligent people have lower time preference: they are willing to defer immediate gratification for a reward in the future. The rate of interest on borrowed money is a measure of a society's overall time preference. Data covering the period from 1150 through 1950 found that interest rates had declined over the entire time, from over 10% in the year 1200 to around 5% in the 1800s. This is consistent with an increase in intelligence.

Literacy correlates with intelligence, and records from marriage registers and court documents show continually growing literacy from 1580 through 1920. In the latter part of this period, the introduction of government schools contributed to much of the increase, but in early years it may reflect growing intelligence.

A population with growing intelligence should produce more geniuses who make contributions which are recorded in history. In a 2005 study, American physicist Jonathan Huebner compiled a list of 8,583 significant events in the history of science and technology from the Stone Age through 2004. He found that, after adjusting for the total population of the time, the rate of innovation per capita had quadrupled between 1450 and 1870. Independently, Charles Murray's 2003 book Human Accomplishment found that the rate of innovation and the appearance of the figures who created them increased from the Middle Ages through the 1870s.

The authors contend that a growing population with increasing mean intelligence eventually reached a critical mass which led to the industrial revolution, due to a sufficiently large number of genius intellects alive at the same time and an intelligent workforce who could perform the jobs needed to build and operate the new machines. This created unprecedented prosperity and dramatically increased the standard of living throughout the society.

And then an interesting thing happened. It's called the “demographic transition”, and it's been observed in country after country as it develops from a rural, agrarian economy to an urban, industrial society. Pre-industrial societies are characterised by a high birth rate, a high rate of infant and childhood mortality, and a stable or very slowly growing population. Families have many children in the hope of having a few survive to adulthood to care for them in old age and pass on their parents' genes. It is in this phase that the intense selection pressure obtains: the better-off and presumably more intelligent parents will have more children survive to adulthood.

Once industrialisation begins, it is usually accompanied by public health measures, better sanitation, improved access to medical care, and the introduction of innovations such as vaccination, antiseptics, and surgery with anæsthesia. This results in a dramatic fall in the mortality rate for the young, larger families, and an immediate bulge in the population. As social welfare benefits are extended to reach the poor through benefits from employers, charity, or government services, this occurs more broadly across social classes, reducing the disparity in family sizes among the rich and poor.

Eventually, parents begin to see the advantage of smaller families now that they can be confident their offspring have a high probability of surviving to adulthood. This is particularly the case for the better-off, as they realise their progeny will gain an advantage by splitting their inheritance fewer ways and in receiving the better education a family can afford for fewer children. This results in a decline in the birth rate, which eventually reaches the replacement rate (or below), where it comes into line with the death rate.

But what does this do to the selection for intelligence from which humans have been benefitting for centuries? It ends it, and eventually puts it into reverse. In country after country, the better educated and well-off (both correlates of intelligence) have fewer children than the less intelligent. This is easy to understand: in the prime child-bearing years they tend to be occupied with their education and starting a career. They marry later, have children (if at all) at an older age, and due to the female biological clock, have fewer kids even if they desire more. They also use contraception to plan their families and tend to defer having children until the “right time”, which sometimes never comes.

Meanwhile, the less intelligent, who in the modern welfare state are often clients on the public dole, who have less impulse control, high time preference, and when they use contraception often do so improperly resulting in unplanned pregnancies, have more children. They start earlier, don't bother with getting married (as the stigma of single motherhood has largely been eliminated), and rely upon the state to feed, house, educate, and eventually imprison their progeny. This sad reality was hilariously mocked in the introduction to the 2006 film Idiocracy.

While this makes for a funny movie, if the population is really getting dumber, it will have profound implications for the future. There will not just be a falling general level of intelligence but far fewer of the genius-level intellects who drive innovation in science, the arts, and the economy. Further, societies which reach the point where this decline sets in well before others that have industrialised more recently will find themselves at a competitive disadvantage across the board. (U.S. and Europe, I'm talking about China, Korea, and [to a lesser extent] Japan.)

If you've followed the intelligence issue, about now you probably have steam coming out your ears waiting to ask, “But what about the Flynn effect?” IQ tests are usually “normed” to preserve the same mean and standard deviation (100 and 15 in the U.S. and Britain) over the years. James Flynn discovered that, in fact, measured by standardised tests which were not re-normed, measured IQ had rapidly increased in the 20th century in many countries around the world. The increases were sometimes breathtaking: on the standardised Raven's Progressive Matrices test (a nonverbal test considered to have little cultural bias), the scores of British schoolchildren increased by 14 IQ points—almost a full standard deviation—between 1942 and 2008. In the U.S., IQ scores seemed to be rising by around three points per decade, which would imply that people a hundred years ago were two standard deviations more stupid than those today, at the threshold of retardation. The slightest grasp of history (which, sadly, many people today lack) will show how absurd such a supposition is.

What's going on, then? The authors join James Flynn in concluding that what we're seeing is an increase in the population's proficiency in taking IQ tests, not an actual increase in general intelligence (g). Over time, children are exposed to more and more standardised tests and tasks which require the skills tested by IQ tests and, if practice doesn't make perfect, it makes better, and with more exposure to media of all kinds, skills of memorisation, manipulation of symbols, and spatial perception will increase. These are correlates of g which IQ tests measure, but what we're seeing may be specific skills which do not correlate with g itself. If this be the case, then eventually we should see the overall decline in general intelligence overtake the Flynn effect and result in a downturn in IQ scores. And this is precisely what appears to be happening.

Norway, Sweden, and Finland have almost universal male military service and give conscripts a standardised IQ test when they report for training. This provides a large database, starting in 1950, of men in these countries, updated yearly. What is seen is an increase in IQ as expected from the Flynn effect from the start of the records in 1950 through 1997, when the scores topped out and began to decline. In Norway, the decline since 1997 was 0.38 points per decade, while in Denmark it was 2.7 points per decade. Similar declines have been seen in Britain, France, the Netherlands, and Australia. (Note that this decline may be due to causes other than decreasing intelligence of the original population. Immigration from lower-IQ countries will also contribute to decreases in the mean score of the cohorts tested. But the consequences for countries with falling IQ may be the same regardless of the cause.)

There are other correlates of general intelligence which have little of the cultural bias of which some accuse IQ tests. They are largely based upon the assumption that g is something akin to the CPU clock speed of a computer: the ability of the brain to perform basic tasks. These include simple reaction time (how quickly can you push a button, for example, when a light comes on), the ability to discriminate among similar colours, the use of uncommon words, and the ability to repeat a sequence of digits in reverse order. All of these measures (albeit often from very sparse data sets) are consistent with increasing general intelligence in Europe up to some time in the 19th century and a decline ever since.

If this is true, what does it mean for our civilisation? The authors contend that there is an inevitable cycle in the rise and fall of civilisations which has been seen many times in history. A society starts out with a low standard of living, high birth and death rates, and strong selection for intelligence. This increases the mean general intelligence of the population and, much faster, the fraction of genius level intellects. These contribute to a growth in the standard of living in the society, better conditions for the poor, and eventually a degree of prosperity which reduces the infant and childhood death rate. Eventually, the birth rate falls, starting with the more intelligent and better off portion of the population. The birth rate falls to or below replacement, with a higher fraction of births now from less intelligent parents. Mean IQ and the fraction of geniuses falls, the society falls into stagnation and decline, and usually ends up being conquered or supplanted by a younger civilisation still on the rising part of the intelligence curve. They argue that this pattern can be seen in the histories of Rome, Islamic civilisation, and classical China.

And for the West—are we doomed to idiocracy? Well, there may be some possible escapes or technological fixes. We may discover the collection of genes responsible for the hereditary transmission of intelligence and develop interventions to select for them in the population. (Think this crosses the “ick factor”? What parent would look askance at a pill which gave their child an IQ boost of 15 points? What government wouldn't make these pills available to all their citizens purely on the basis of international competitiveness?) We may send some tiny fraction of our population to Mars, space habitats, or other challenging environments where they will be re-subjected to intense selection for intelligence and breed a successor society (doubtless very different from our own) which will start again at the beginning of the eternal cycle. We may have a religious revival (they happen when you least expect them), which puts an end to the cult of pessimism, decline, and death and restores belief in large families and, with it, the selection for intelligence. (Some may look at Joseph Smith as a prototype of this, but so far the impact of his religion has been on the margins outside areas where believers congregate.) Perhaps some of our increasingly sparse population of geniuses will figure out artificial general intelligence and our mind children will slip the surly bonds of biology and its tedious eternal return to stupidity. We might embrace the decline but vow to preserve everything we've learned as a bequest to our successors: stored in multiple locations in ways the next Enlightenment centuries hence can build upon, just as scholars in the Renaissance rediscovered the works of the ancient Greeks and Romans.

Or, maybe we won't. In which case, “Winter has come and it's only going to get colder. Wrap up warm.”

Here is a James Delingpole interview of the authors and discussion of the book.

Posted at 16:08 Permalink

Thursday, January 31, 2019

HotBits: Server 3.9 released, JSON support

HotBits server version 3.9 is now in production at Fourmilab. This server is 100% upward compatible with existing HotBits users and API client programs, but has been extensively restructured to improve reliability and fault tolerance. It is able to communicate with multiple HotBits generators and recover from any timeouts or connection problems in obtaining data from them, and requests data from the two identical Fourmilab HotBits generators in a round-robin sequence.

In addition to the existing hexadecimal, binary, C data structure, and XML data formats, JSON is now supported. JSON is a JavaScript-derived data structure representation which is increasingly used by Web applications. HotBits delivered in JSON format provide all of the information to clients that the XML representation delivers.
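A client might consume the JSON format something like this. Note that the field names in this sample response are illustrative assumptions for the sketch, not the documented HotBits schema; consult the HotBits documentation for the actual field layout:

```python
import json

# A hypothetical HotBits JSON response (field names are assumptions
# for illustration only; see the HotBits documentation for the schema).
response = '''{
    "version": "3.9",
    "status": 200,
    "requestInformation": {"bytesRequested": 8},
    "data": [137, 5, 212, 44, 9, 181, 77, 250]
}'''

reply = json.loads(response)
random_bytes = bytes(reply["data"])
print(len(random_bytes))  # 8
```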

Version 3.9 of the HotBits server supports random data generation using the RDSEED instruction implemented in recent Intel microprocessors. This generator can be configured when the HotBits server is built, and allows testing a HotBits generator on a machine which has a suitable Intel processor without the need for the radioactive generator. This option is never used in production HotBits servers, but makes it much easier to test the generator software in development environments.

If you want to set up your own HotBits server (which is now more easily done if you have an Intel processor which supports RDSEED), you can download the HotBits version 3.9 software.

Posted at 00:57 Permalink

Monday, December 31, 2018

Books of the Year: 2018

Here are my picks for the best books of 2018, fiction and nonfiction. These aren't the best books published this year, but rather the best I've read in the last twelve months. The winner in both categories is barely distinguished from the pack, and the runners up are all worthy of reading. Runners up appear in alphabetical order by their author's surname. Each title is linked to my review of the book.


Winner: Runners up:


Winner: Runners up:

Posted at 13:08 Permalink

Wednesday, December 26, 2018

Reading List: Iron Sunrise

Stross, Charles. Iron Sunrise. New York: Ace, 2005. ISBN 978-0-441-01296-1.
In Accelerando (July 2011), a novel assembled from nine previously-published short stories, the author chronicles the arrival of a technological singularity on Earth: the almost-instantaneously emerging super-intellect called the Eschaton which departed the planet toward the stars. Simultaneously, nine-tenths of Earth's population vanished overnight, and those left behind, after a period of chaos, found that with the end of scarcity brought about by “cornucopia machines” produced in the first phase of the singularity, they could dispense with anachronisms such as economic systems and government. After humans achieved faster than light travel, they began to discover that the Eschaton had relocated 90% of Earth's population to habitable worlds around various stars and left them to develop in their own independent directions, guided only by this message from the Eschaton, inscribed on a monument on each world.

  1. I am the Eschaton. I am not your god.
  2. I am descended from you, and I exist in your future.
  3. Thou shalt not violate causality within my historic light cone. Or else.

The wormholes used by the Eschaton to relocate Earth's population in the great Diaspora, a technology which humans had yet to understand, not only permitted instantaneous travel across interstellar distances but also in time: the more distant the planet from Earth, the longer the settlers deposited there have had to develop their own cultures and civilisations before being contacted by faster than light ships. With cornucopia machines to meet their material needs and allow them to bootstrap their technology, those that descended into barbarism or incessant warfare did so mostly due to bad ideas rather than their environment.

Rachel Mansour, secret agent for the Earth-based United Nations, operating under the cover of an entertainment officer (or, if you like, cultural attaché), whom we met in the previous novel in the series, Singularity Sky (February 2011), and her companion Martin Springfield, who has a back-channel to the Eschaton, serve as arms control inspectors—their primary mission to insure that nobody on Earth, or on the worlds which have purchased technology from Earth, does anything which invites the wrath of the Eschaton—remember that “Or else.”

A terrible fate befalls the planet Moscow, a diaspora “McWorld” accomplished in technological development and trade: its star, a G-type main sequence star like the Sun, explodes in a blast releasing a hundredth the energy of a supernova, destroying all life on planet Moscow within an instant of the wavefront reaching it, and the entire planet within an hour.

The problem is, type G stars just don't explode on their own. Somebody did this, quite likely using technologies which risk Big E's “or else” on whoever was responsible (or it concluded was responsible). What's more, Moscow maintained a slower-than-light deterrent fleet with relativistic planet-buster weapons to avenge any attack on their home planet. This fleet, essentially undetectable en route, has launched against New Dresden, a planet with which Moscow had a nonviolent trade dispute. The deterrent fleet can be recalled only by coded messages from two Moscow system ambassadors who survived the attack at their postings in other systems, but can also be sent an irrevocable coercion code, which cancels the recall and causes any further messages to be ignored, by three ambassadors. And somebody seems to be killing off the remaining Moscow ambassadors: if the number falls below two, the attack will arrive at New Dresden in thirty-five years and wipe out the planet and as many of its eight hundred million inhabitants as have not been evacuated.

Victoria Strowger, who detests her name and goes by “Wednesday”, has had an invisible friend since childhood, “Herman”, who speaks to her through her implants. As she's grown up, she has come to understand that, in some way, Herman is connected to Big E and, in return for advice and assistance she values highly, occasionally asks her for favours. Wednesday and her family were evacuated from one of Moscow's space stations just before the deadly wavefront from the exploded star arrived, with Wednesday running a harrowing last “errand” for Herman before leaving. Later, in her new home in an asteroid in the Septagon system, she becomes the target of an attack seemingly linked to that mystery mission, and escapes only to find her family wiped out by the attackers. With Herman's help, she flees on an interstellar liner.

While Singularity Sky was a delightful romp describing a society which had deliberately relinquished technology in order to maintain a stratified class system with the subjugated masses frozen around the Victorian era, suddenly confronted with the merry pranksters of the Festival, who inject singularity-epoch technology into its stagnant culture, Iron Sunrise is a much more conventional mystery/adventure tale about gaining control of the ambassadorial keys, figuring out who are the good and bad guys, and trying to avert a delayed but inexorably approaching genocide.

This just didn't work for me. I never got engaged in the story, didn't find the characters particularly interesting, and didn't come across any interesting ways in which the singularity came into play (and this is supposed to be the author's “Singularity Series”). There are some intriguing concepts, for example the “causal channel”, in which quantum-entangled particles permit instantaneous communication across spacelike separations as long as the previously-prepared entangled particles have first been delivered to the communicating parties by slower than light travel. This is used in the plot to break faster than light communication where it would be inconvenient for the story line (much as all those circumstances in Star Trek where the transporter doesn't work for one reason or another when you're tempted to say “Why don't they just beam up?”). The apparent villains, the ReMastered (think Space Nazis who believe in a Tipler-like Omega Point cult, out-Eschaton-ing the Eschaton with icky brain-sucking technology), were just over the top.

Accelerando and Singularity Sky were thought-provoking and great fun. This one doesn't come up to that standard.

Posted at 18:00 Permalink

Tuesday, December 25, 2018

Gnome-o-gram: Whence Gold?

By the time I was in high school in the 1960s, the origin of the chemical elements seemed pretty clear. Hydrogen was created in the Big Bang, and very shortly afterward about one quarter of it fused to make helium with a little bit of lithium. (This process is now called Big Bang nucleosynthesis, and models of it agree very well with astronomical observations of primordial gases in the universe.)

All of the heavier elements, including the carbon, oxygen, and nitrogen which, along with hydrogen, make up our bodies and all other living things on Earth, were made in stars which fused hydrogen into these heavier elements. Eventually, the massive stars fused lighter elements into iron, which cannot be fused further, and collapsed, resulting in a supernova explosion which spewed these heavy elements into space, where they were incorporated into later generations of stars such as the Sun and eventually found their way into planets and you and me. We are stardust. But we are made of these lighter elements—we are not golden.

But, as more detailed investigations into the life and death of stars proceeded, something didn't add up. Yes, you can make all of the elements up to iron in massive stars, and the abundances found in the universe agree pretty well with the models of the life and death of these stars, but the heavier elements such as gold, lead, and uranium just didn't compute: they have a large fraction of neutrons in their nuclei (if they didn't, they'd be radioactive [or more radioactive than they already are] and would have decayed long before we came on the scene to observe them), and the process of a supernova explosion doesn't seem to have any way to create nuclei with so many neutrons. "Then, a miracle happens" worked in the early days of astrophysics, but once people began to really crunch the numbers, it didn't cut it any more.

Where could all of those neutrons have come from, and what could have provided the energy to create these heavy and relatively rare nuclei? Well, if you're looking for lots of neutrons all in the same place at the same time, there's no better place than a neutron star, which is a tiny object (radius around 10 km) with a mass greater than that of the Sun, entirely made of them. And if it's energy you're needing, well, how about smashing two of them together at a velocity comparable to the speed of light? (Or, more precisely, the endpoint of the in-spiral of two neutron stars in a close orbit as their orbital energy decays due to emission of gravitational radiation.) Something like this, say.
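To get a feel for the figures just quoted, here is a quick back-of-the-envelope sketch (my own illustration, not from the original discussion; the 1.4 solar mass and 10 km values are typical round numbers assumed for a neutron star) showing that even the Newtonian escape velocity from such an object is already a substantial fraction of the speed of light:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 1.4 * M_sun      # assumed typical neutron star mass
R = 10e3             # assumed radius, 10 km in metres

# Newtonian escape velocity from the surface
v_escape = math.sqrt(2 * G * M / R)
print(f"escape velocity: {v_escape / c:.2f} c")       # prints about 0.64 c

# Schwarzschild radius of the same mass, for comparison
r_s = 2 * G * M / c**2
print(f"Schwarzschild radius: {r_s / 1e3:.1f} km")    # about 4.1 km
```

The Newtonian formula actually understates what general relativity predicts for so compact an object (its radius is only about 2.4 times its Schwarzschild radius), but it makes the point: matter falling together in such a merger is inevitably moving at relativistic speed.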

This was all theory until 12:41 UTC on 2017-08-17, when gravitational wave detectors triggered on an event which turned out to be, after detailed analysis, the strongest gravitational wave ever detected. Because it was simultaneously observed by detectors in the U.S. in Washington state and Louisiana and in Italy, it was possible to localise the region in the sky from which it originated. At almost the same time, NASA and European Space Agency satellites in orbit detected a weak gamma ray burst. Before the day was out, ground-based astronomers found an anomalous source in the relatively nearby (130 million light years away) galaxy NGC 4993, which was subsequently confirmed by instruments on the ground and in space across a wide swath of the electromagnetic spectrum. This was an historic milestone in multi-messenger astronomy: for the first time an event had been observed both by gravitational and electromagnetic radiation: two entirely different channels by which we perceive the universe.

These observations made it possible to determine the details of the material ejected from the collision. Most of the mass of the two neutron stars went to form a black hole, but a fraction was ejected in a neutron- and energy-rich soup from which stable heavy elements could form. The observations closely agreed with the calculations of theorists who had argued that the elements heavier than iron we observe in the universe are mostly formed in collisions of neutron stars.

Think about it. Do you have a bit of gold on your finger, or around your neck, or hanging from your ears? Where did it come from? Well, probably it was dug up from beneath the Earth, but before that? To make it, first two massive stars had to form in the early universe, live their profligate lives, then explode in cataclysmic supernova explosions. Then the remnants of these explosions, neutron stars, had to find themselves in a death spiral as the inexorable dissipation of gravitational radiation locked them into a deadly embrace. Finally, they collided, releasing enough energy to light up the universe and jiggle our gravitational wave detectors 130 million years after the event. And then they spewed whole planetary masses of gold, silver, platinum, lead, uranium, and heaven knows how many other elements the news of which has yet to come to Harvard into the interstellar void.

In another document, I have discussed how relativity explains why gold has that mellow glow. Now we have observed where gold ultimately comes from. And once again, you can't explain it without (in this case, general) relativity.

In a way, we've got ourselves back to the garden.

Posted at 15:54 Permalink

Monday, December 24, 2018

Reading List: Days of Rage

Burrough, Bryan. Days of Rage. New York: Penguin Press, 2015. ISBN 978-0-14-310797-2.
In the year 1972, there were more than 1900 domestic bombings in the United States. Think about that—that's more than five bombings a day. In an era when the occasional terrorist act by a “lone wolf” nutcase gets round-the-clock coverage on cable news channels, it's hard to imagine that not so long ago most of these bombings and other mayhem, committed by “revolutionary” groups such as Weatherman, the Black Liberation Army, FALN, and The Family, often made only local newspapers, on page B37, below the fold.

The civil rights struggle and opposition to the Vietnam war had turned out large crowds and radicalised the campuses, but in the opinion of many activists, yielded few concrete results. Indeed, in the 1968 presidential election, pro-war Democrat Humphrey had been defeated by pro-war Republican Nixon, with anti-war Democrats McCarthy marginalised and Robert Kennedy assassinated.

In this bleak environment, a group of leaders of one of the most radical campus organisations, the Students for a Democratic Society (SDS), gathered in Chicago to draft what became a sixteen thousand word manifesto bristling with Marxist jargon that linked the student movement in the U.S. to Third World guerrilla insurgencies around the globe. They advocated a Che Guevara-like guerrilla movement in America led, naturally, by themselves. They named the manifesto after the Bob Dylan lyric, “You don't need a weatherman to know which way the wind blows.” Other SDS members who thought the idea of armed rebellion in the U.S. absurd and insane quipped, “You don't need a rectal thermometer to know who the assholes are.”

The Weatherman faction managed to blow up (figuratively) the SDS convention in June 1969, splitting the organisation but effectively taking control of it. They called a massive protest in Chicago for October. Dubbed the “National Action”, it would soon become known as the “Days of Rage”.

Almost immediately the Weatherman plans began to go awry. Their efforts to rally the working class (whom the Ivy League Weatherman élite mocked as “greasers”) got no traction, with some of their outrageous “actions” accomplishing little other than landing the perpetrators in the slammer. Come October, the Days of Rage ended in farce. Thousands had been expected, ready to take the fight to the cops and “oppressors”, but come the day, no more than two hundred showed up, most of them SDS stalwarts who already knew one another. They charged the police and were quickly routed, with six shot (none seriously), many beaten, and more than 120 arrested. Bail bonds alone added up to US$ 2.3 million. It was a humiliating defeat. The leadership decided it was time to change course.

So what did this intellectual vanguard of the masses decide to do? Well, obviously, destroy the SDS (their source of funding and pipeline of recruitment), go underground, and start blowing stuff up. This posed a problem, because these middle-class college kids had no idea where to obtain explosives (they didn't know that at the time you could buy as much dynamite as you could afford over the counter in many rural areas with, at most, showing a driver's license), what to do with it, or how to build an underground identity. This led to, not Keystone Kops, but Klueless Kriminal misadventures, culminating in March 1970 when they managed to blow up an entire New York townhouse: a bomb they were preparing for an attack on a dance at Fort Dix, New Jersey detonated prematurely, leaving three of the Weather collective dead in the rubble. In the aftermath, many Weather hangers-on melted away.

This did not deter the hard core, who resolved to learn more about their craft. They issued a communiqué declaring their solidarity with the oppressed black masses (not one of whom, oppressed or otherwise, was a member of Weatherman), and vowed to attack symbols of “Amerikan injustice”. Privately, they decided to avoid killing people, confining their attacks to property. And one of their members hit the books to become a journeyman bombmaker.

The bungling Bolsheviks of Weatherman may have had Marxist theory down pat, but they were lacking in authenticity, and acutely aware of it. It was hard for those whose addresses before going underground were élite universities to present themselves as oppressed. The best they could do was to identify themselves with the cause of those they considered victims of “the system” but who, to date, seemed little inclined to do anything about it themselves. Those who cheered on Weatherman, then, considered it significant when, in the spring of 1971, a new group calling itself the “Black Liberation Army” (BLA) burst onto the scene with two assassination-style murders of New York City policemen on routine duty. Messages delivered after each attack to Harlem radio station WLIB claimed responsibility. One declared,

Every policeman, lackey or running dog of the ruling class must make his or her choice now. Either side with the people: poor and oppressed, or die for the oppressor. Trying to stop what is going down is like trying to stop history, for as long as there are those who will dare to live for freedom there are men and women who dare to unhorse the emperor.

All power to the people.

Politicians, press, and police weren't sure what to make of this. The politicians, worried about the opinion of their black constituents, shied away from anything which sounded like accusing black militants of targeting police. The press, although they'd never write such a thing or speak it in polite company, didn't think it plausible that street blacks could organise a sustained revolutionary campaign: certainly that required college-educated intellectuals. The police, while threatened by these random attacks, weren't sure there was actually any organised group behind the BLA attacks: they were inclined to believe it was a matter of random cop killers attributing their attacks to the BLA after the fact. Further, the BLA had no visible spokesperson and issued no manifestos other than the brief statements after some attacks. This contributed to the mystery, which largely persists to this day because so many participants were killed and the survivors have never spoken out.

In fact, the BLA was almost entirely composed of former members of the New York chapter of the Black Panthers, which had collapsed in the split between factions following Huey Newton and those (including New York) loyal to Eldridge Cleaver, who had fled to exile in Algeria and advocated violent confrontation with the power structure in the U.S. The BLA would perpetrate more than seventy violent attacks between 1970 and 1976 and is said to be responsible for the deaths of thirteen police officers. In 1982, they hijacked a domestic airline flight and pocketed a ransom of US$ 1 million.

Weatherman (later renamed the “Weather Underground” because the original name was deemed sexist) and the BLA represented the two poles of the violent radicals: the first, intellectual, college-educated, and mostly white, concentrated mostly on symbolic bombings against property, usually with warnings in advance to avoid human casualties. As pressure from the FBI increased upon them, they became increasingly inactive; a member of the New York police squad assigned to them quipped, “Weatherman, Weatherman, what do you do? Blow up a toilet every year or two.” They managed the escape of Timothy Leary from a minimum-security prison in California. Leary basically just walked away, with a group of Weatherman members paid by Leary supporters picking him up and arranging for him and his wife Rosemary to obtain passports under assumed names and flee the U.S. for exile in Algeria with former Black Panther leader Eldridge Cleaver.

The Black Liberation Army, being composed largely of ex-prisoners with records of violent crime, was not known for either the intelligence or impulse control of its members. On several occasions, what should have been merely tense encounters with the law turned into deadly firefights because a BLA militant opened fire for no apparent reason. Had they not been so deadly to those they attacked and innocent bystanders, the exploits of the BLA would have made a fine slapstick farce.

As the dour decade of the 1970s progressed, other violent underground groups would appear, tending to follow the model of either Weatherman or the BLA. One of the most visible, if not successful, was the “Symbionese Liberation Army” (SLA), founded by escaped convict and grandiose self-styled revolutionary Daniel DeFreeze. Calling himself “General Field Marshal Cinque”, which he pronounced “sin-kay”, and ending his fevered communications with “DEATH TO THE FASCIST INSECT THAT PREYS UPON THE LIFE OF THE PEOPLE”, this band of murderous bozos struck their first blow for black liberation by assassinating Marcus Foster, the first black superintendent of the Oakland, California school system, for his “crimes against the people” of suggesting that police be called in to deal with violence in the city's schools and that identification cards be issued to students.

Sought by the police for the murder, they struck again by kidnapping heiress, college student, and D-list celebrity Patty Hearst, whose abduction became front page news nationwide. If that wasn't sufficiently bizarre, the abductee eventually issued a statement saying she had chosen to “stay and fight”, adopting the name “Tania”, after the nom de guerre of a Cuban revolutionary and companion of Che Guevara. She was later photographed by a surveillance camera carrying a rifle during a San Francisco bank robbery perpetrated by the SLA. Hearst then went underground and evaded capture until September 1975 after which, when being booked into jail, she gave her occupation as “Urban Guerrilla”. Hearst later claimed she had agreed to join the SLA and participate in its crimes only to protect her own life. She was convicted and sentenced to 35 years in prison, later reduced to 7 years. The sentence was commuted to 22 months by U.S. President Jimmy Carter; she was released in 1979, and received one of Bill Clinton's last-day-in-office pardons in January 2001.

Six members of the SLA, including DeFreeze, died in a house fire during a shootout with the Los Angeles Police Department in May, 1974.

Violence committed in the name of independence for Puerto Rico was nothing new. In 1950, two radicals tried to assassinate President Harry Truman, and in 1954, four revolutionaries shot up the U.S. House of Representatives from the visitors' gallery, wounding five congressmen on the floor, none fatally. The Puerto Rican terrorists had the same problem as their Weatherman, BLA, or SLA bomber brethren: they lacked the support of the people. Most of the residents of Puerto Rico were perfectly happy being U.S. citizens, especially as this allowed them to migrate to the mainland to escape the endemic corruption and the poverty it engendered in the island. As the 1960s progressed, the Puerto Rico radicals increasingly identified with Castro's Cuba (which supported them ideologically, if not financially), and promised to make a revolutionary Puerto Rico a beacon of prosperity and liberty like Cuba had become.

Starting in 1974, a new Puerto Rican terrorist group, the Fuerzas Armadas de Liberación Nacional (FALN) launched a series of attacks in the U.S., most in the New York and Chicago areas. One bombing, that of the Fraunces Tavern in New York in January 1975, killed four people and injured more than fifty. Between 1974 and 1983, a total of more than 130 bomb attacks were attributed to the FALN, most against corporate targets. In 1975 alone, twenty-five bombs went off, around one every two weeks.

Other groups, such as the “New World Liberation Front” (NWLF) in northern California and “The Family” in the East continued the chaos. The NWLF, formed originally from remains of the SLA, detonated twice as many bombs as the Weather Underground. The Family carried out a series of robberies, including the deadly Brink's holdup of October 1981, and jailbreaks of imprisoned radicals.

In the first half of the 1980s, the radical violence sputtered out. Most of the principals were in prison, dead, or living underground and keeping a low profile. A growing prosperity had replaced the malaise and stagflation of the 1970s and there were abundant jobs for those seeking them. The Vietnam War and draft were receding into history, leaving the campuses with little to protest, and the remaining radicals had mostly turned from violent confrontation to burrowing their way into the culture, media, administrative state, and academia as part of Gramsci's “long march through the institutions”.

All of these groups were plagued with the “step two problem”. The agenda of Weatherman was essentially:

  1. Blow stuff up, kill cops, and rob banks.
  2. ?
  3. Proletarian revolution.

Other groups may have had different step threes: “Black liberation” for the BLA, “¡Puerto Rico libre!” for FALN, but none of them seemed to make much progress puzzling out step two. The best attempt came from the SLA's deep thinker Bill Harris, who advocated killing policemen at random, arguing that “If they killed enough, … the police would crack down on the oppressed minorities of the Bay Area, who would then rise up and begin the revolution.”—sure thing.

In sum, all of this violence and the suffering that resulted from it accomplished precisely none of the goals of those who perpetrated it (which is a good thing: they mostly advocated for one flavour or another of communist enslavement of the United States). All it managed to do was contribute to the constriction of personal liberty in the name of “security”, with metal detectors, bomb-sniffing dogs, X-ray machines, rent-a-cops, surveillance cameras, and the first round of airport security theatre springing up like mushrooms everywhere. The amount of societal disruption which can be caused by what amounted to around one hundred homicidal nutcases is something to behold. There were huge economic losses not just due to bombings, but from evacuations due to bomb threats, many doubtless perpetrated by copycats motivated by nothing more political than the desire for a day off from work. Violations of civil liberties by the FBI and other law enforcement agencies, which carried out unauthorised wiretaps, burglaries, and other invasions of privacy and property rights, not only discredited them but resulted in many of the perpetrators of the mayhem walking away scot-free. Weatherman founders Bill Ayers and Bernardine Dohrn would, in 1995, launch the political career of Barack Obama at a meeting in their home in Chicago, where Ayers is now a Distinguished Professor at the University of Illinois at Chicago. Ayers, who bombed the U.S. Capitol in 1971 and the Pentagon in 1972, remarked in the 1980s that he was “Guilty as hell, free as a bird—America is a great country.”

This book is an excellent account of a largely-forgotten era in recent history. In a time when slaver radicals (a few of them the same people who set the bombs in their youth) declaim from the cultural heights of legacy media, academia, and their new strongholds in the technology firms which increasingly mediate our communications and access to information, advocate “active resistance”, “taking to the streets”, or “occupying” this or that, it's a useful reminder of where such action leads, and that it's wise to work out step two before embarking on step one.

Posted at 17:27 Permalink

Wednesday, December 19, 2018

Reading List: Minimanual of the Urban Guerrilla

Marighella, Carlos. Minimanual of the Urban Guerrilla. Seattle: CreateSpace, [1970] 2018. ISBN 978-1-4664-0680-3.
Carlos Marighella joined the Brazilian Communist Party in 1934, abandoning his studies in civil engineering to become a full time agitator for communism. He was arrested for subversion in 1936 and, after release from prison the following year, went underground. He was recaptured in 1939 and imprisoned until 1945 as part of an amnesty of political prisoners. He successfully ran for the federal assembly in 1946 but was removed from office when the Communist party was again banned in 1948. Resuming his clandestine life, he served in several positions in the party leadership and in 1953–1954 visited China to study the Maoist theory of revolution. In 1964, after a military coup in Brazil, he was again arrested, being shot in the process. After being once again released from prison, he broke with the Communist Party and began to advocate armed revolution against the military regime, travelling to Cuba to participate in a conference of Latin American insurgent movements. In 1968, he formed his own group, the Ação Libertadora Nacional (ALN) which, in September 1969, kidnapped U.S. Ambassador Charles Burke Elbrick, who was eventually released in exchange for fifteen political prisoners. In November 1969, Marighella was killed in a police ambush, prompted by a series of robberies and kidnappings by the ALN.

In June 1969, Marighella published this short book (or pamphlet: it is just 40 pages with plenty of white space at the ends of chapters) as a guide for revolutionaries attacking Brazil's authoritarian regime in the big cities. There is little or no discussion of the reasons for the rebellion; the work is addressed to those already committed to the struggle who seek practical advice for wreaking mayhem in the streets. Marighella has entirely bought into the Mao/Guevara theory of revolution: that the ultimate struggle must take place in the countryside, with rural peasants rising en masse against the regime. The problem with this approach was that the peasants seemed to be more interested in eking out their subsistence from the land than taking up arms in support of ideas championed by a few intellectuals in the universities and big cities. So, Marighella's guide is addressed to those in the cities with the goal of starting the armed struggle where there were people indoctrinated in the communist ideology on which it was based. This seems to suffer from the “step two problem”. In essence, his plan is:

  1. Blow stuff up, rob banks, and kill cops in the big cities.
  2. ?
  3. Communist revolution in the countryside.

The book is a manual of tactics: formation of independent cells operating on their own initiative and unable to compromise others if captured, researching terrain and targets and planning operations, mobility and hideouts, raising funds through bank robberies, obtaining weapons by raiding armouries and police stations, breaking out prisoners, kidnapping and exchange for money and prisoners, sabotaging government and industrial facilities, executing enemies and traitors, terrorist bombings, and conducting psychological warfare.

One problem with this strategy is that if you ignore the ideology which supposedly justifies and motivates this mayhem, it is, viewed from the outside, essentially indistinguishable from the actions of non-politically-motivated outlaws. As the author notes,

The urban guerrilla is a man who fights the military dictatorship with arms, using unconventional methods. A political revolutionary, he is a fighter for his country's liberation, a friend of the people and of freedom. The area in which the urban guerrilla acts is in the large Brazilian cities. There are also bandits, commonly known as outlaws, who work in the big cities. Many times assaults by outlaws are taken as actions by urban guerrillas.

The urban guerrilla, however, differs radically from the outlaw. The outlaw benefits personally from the actions, and attacks indiscriminately without distinguishing between the exploited and the exploiters, which is why there are so many ordinary men and women among his victims. The urban guerrilla follows a political goal and only attacks the government, the big capitalists, and the foreign imperialists, particularly North Americans.

These fine distinctions tend to be lost upon innocent victims, especially since the proceeds of the bank robberies of which the “urban guerrillas” are so fond are not used to aid the poor but rather to finance still more attacks by the ever-so-noble guerrillas pursuing their “political goal”.

This would likely have been an obscure and largely forgotten work of a little-known Brazilian renegade had it not been picked up, translated to English, and published in June and July 1970 by the Berkeley Tribe, a California underground newspaper. It became the terrorist bible of groups including Weatherman, the Black Liberation Army, and the Symbionese Liberation Army in the United States, the Red Army Faction in Germany, the Irish Republican Army, the Sandinistas in Nicaragua, and the Palestine Liberation Organisation. These groups embarked on crime and terror campaigns right out of Marighella's playbook with no more thought about step two. They are largely forgotten now because their futile acts had no permanent consequences and their existence was an embarrassment to the élites who largely share their pernicious ideology but have chosen to advance it through subversion, not insurrection.

A Kindle edition is available from a different publisher. You can read the book on-line for free at the Marxists Internet Archive.

Posted at 22:01 Permalink