Gilder, George.
Life after Google.
Washington: Regnery Publishing, 2018.
ISBN 978-1-62157-576-4.
In his 1990 book Life after Television,
George Gilder predicted that the personal computer, then mostly boxes
that sat on desktops and worked in isolation from one another,
would become more personal, mobile, and be used more to communicate
than to compute. In the 1994 revised edition of the book, he
wrote, “The most common personal computer of the next decade will be
a digital cellular phone with an IP address … connecting
to thousands of databases of all kinds.” In contemporary
speeches he expanded on the idea, saying, “it will be as
portable as your watch and as personal as your wallet; it will
recognize speech and navigate streets; it will collect your
mail, your news, and your paycheck.” In 2000, he
published Telecosm, where
he forecast that the building out of a fibre optic communication
infrastructure and the development of successive generations of
spread spectrum digital mobile communication technologies would
effectively cause the cost of communication bandwidth (the quantity
of data which can be transmitted in a given time) to asymptotically
approach zero, just as the ability to pack more and more transistors
on microprocessor and memory chips was doing for computing.
Clearly, when George Gilder forecasts the future of computing,
communication, and the industries and social phenomena that
spring from them, it's wise to pay attention. He's not
infallible: in 1990 he predicted that “in the world of
networked computers, no one would have to see an advertisement
he didn't want to see”. Oh, well. The
difference between that happy vision and the
advertisement-cluttered world we inhabit today, rife with
bots, malware, scams, and serial large-scale security breaches
which compromise the personal data of millions of people and
expose them to identity theft and other forms of fraud, is the
subject of this book: how we got here, and how technology is
opening a path to move on to a better place.
The Internet was born with decentralisation as a central
concept. Its U.S. government-funded precursor,
ARPANET,
was intended to research and demonstrate the technology
of packet
switching, in which dedicated communication lines
from point to point (as in the telephone network) were
replaced by packets, which can represent all
kinds of data—text, voice, video, mail, cat
pictures—switched from source to destination over shared high-speed
data links. If the network had multiple paths from source
to destination, failure of one data link would simply cause
the network to reroute traffic onto a working path, and
communication protocols would cause any packets lost in the
failure to be automatically re-sent, preventing loss of
data. The network might degrade and deliver data more slowly
if links or switching hubs went down, but everything
would still get through.
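To make the rerouting idea concrete, here is a toy sketch in Python (a simplified illustration over a hypothetical four-node mesh, not an actual Internet routing protocol): when a link is marked as failed, the search simply finds another path through the remaining links.

```python
# Toy illustration (not a real routing protocol): if one link fails,
# a packet-switched network simply finds another path through the mesh.
from collections import deque

def find_path(links, src, dst, failed=frozenset()):
    """Breadth-first search for any working path from src to dst,
    ignoring links listed in `failed`."""
    neighbours = {}
    for a, b in links:
        if frozenset((a, b)) in failed:
            continue                      # treat the failed link as down
        neighbours.setdefault(a, []).append(b)
        neighbours.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbours.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                           # network partitioned: no route

# A small hypothetical mesh with two independent routes from A to D.
mesh = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
print(find_path(mesh, "A", "D"))                                  # A-B-D
print(find_path(mesh, "A", "D", failed={frozenset(("B", "D"))}))  # reroutes via C
```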
This was very attractive to military planners in the Cold
War, who worried about a nuclear attack decapitating their
command and control network by striking one or a few
locations through which their communications funnelled. A
distributed network, of which ARPANET was the prototype,
would be immune to this kind of top-down attack because
there was no top: it was made up of peers, spread all over
the landscape, all able to switch data among themselves
through a mesh of interconnecting links.
As the ARPANET grew into the Internet and expanded from a small
community of military, government, university, and large
company users into a mass audience in the 1990s, this
fundamental architecture was preserved, but in practice the
network bifurcated into a two-tier structure. The top tier
consisted of the original ARPANET-like users, plus
“Internet Service Providers” (ISPs), who had
top-tier (“backbone”) connectivity, and then
resold Internet access to their customers, who mostly
initially connected via dial-up modems. Over time, these customers
obtained higher bandwidth via cable television connections,
satellite dishes, digital subscriber lines (DSL) over
the wired telephone network, and, more recently, mobile
devices such as cellular telephones and tablets.
The architecture of the Internet remained the same, but this
evolution resulted in a weakening of its peer-to-peer
structure. The approaching exhaustion
of 32-bit Internet addresses
(IPv4) and
the slow deployment of its successor
(IPv6)
meant most small-scale Internet users did not have a
permanent address where others could contact them. In an
attempt to shield users from the flawed security model and
implementation of the software they ran, their Internet
connections were increasingly placed behind firewalls and subjected to
Network Address Translation (NAT), which made it impossible
to establish peer-to-peer connections without a third party
intermediary (which, of course, subverts the design goal of
decentralisation). While on the ARPANET and the original
Internet every site was a peer of every other (subject only
to the speed of their network connections and computer power
available to handle network traffic), the network population
now became increasingly divided into producers or publishers (who
made information available), and consumers (who used the network
to access the publishers' sites but did not publish themselves).
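A minimal sketch of how thoroughly peer status has been lost: the following Python fragment (illustrative only, standard library throughout) checks whether the address a machine uses to reach the Internet falls in the RFC 1918 private ranges, which almost always means it sits behind NAT and cannot be contacted directly by a would-be peer.

```python
# Minimal sketch: a host whose local address falls in the RFC 1918
# private ranges is almost certainly behind NAT and cannot accept
# unsolicited inbound connections, so it can consume but not easily publish.
import ipaddress
import socket

def local_address():
    """Discover the address this machine would use to reach the Internet.
    (Connecting a UDP socket toward a public address sends no data.)"""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 53))      # TEST-NET-1 documentation address
        return s.getsockname()[0]

addr = ipaddress.ip_address(local_address())
if addr.is_private:
    print(f"{addr} is private (RFC 1918): a peer cannot reach you directly.")
else:
    print(f"{addr} is publicly routable: you can act as a full peer.")
```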
While in the mid-1990s it was easy (or as easy as anything
was in that era) to set up your own Web server and publish
anything you wished, now most small-scale users were forced to employ
hosting services operated by the publishers to make their
content available. Services such as AOL, Myspace, Blogger,
Facebook, and YouTube were widely used by
individuals and companies to host their content, while
those wishing their own apparently independent Web presence
moved to hosting providers who supplied, for a fee, the
servers, storage, and Internet access used by the site.
All of this led to a centralisation of data on the Web,
which was accelerated by the emergence of the high-speed
fibre optic links and massive computing power upon which Gilder had
based his 1990 and 2000 forecasts. Both of these
came with great economies of scale: it cost a company
like Google or Amazon much less per unit of computing
power or network bandwidth to build a large, industrial-scale
data centre located where electrical power and cooling
were inexpensive and linked to the Internet backbone
by multiple fibre optic channels, than it cost an individual
Internet user or small company with their own server
on premises and a modest speed link to an ISP. Thus it
became practical for these Goliaths of the Internet to
suck up everybody's data and resell their computing power
and access at attractive prices.
As an example of the magnitude of the economies of scale we're
talking about, when I migrated the hosting of my
Fourmilab.ch site from my own
on-site servers and Internet connection to an Amazon Web
Services data centre, my monthly bill for hosting the site
dropped by a factor of fifty—not fifty percent,
one fiftieth the cost, and you can bet Amazon's
making money on the deal.
This tremendous centralisation is the antithesis of the concept
of ARPANET. Instead of a worldwide grid of redundant data links
and data distributed everywhere, we have a modest number of huge
data centres linked by fibre optic cables carrying traffic for
millions of individuals and enterprises. A couple of submarines
full of
Trident
D5s would probably suffice to reset the world, computer
network-wise, to 1970.
As this concentration was occurring, the same companies who were
building the data centres were offering more and more services
to users of the Internet: search engines; hosting of blogs,
images, audio, and video; E-mail services; social networks of
all kinds; storage and collaborative working tools;
high-resolution maps and imagery of the world; archives of data
and research material; and a host of others. How was all of
this to be paid for? Those giant data centres, after all,
represent a capital investment of tens of billions of
dollars, and their electricity bills are comparable to those of
an aluminium smelter. Due to the architecture of the Internet
or, more precisely, to missing pieces of the puzzle, a fateful
choice was made in the early days of the build-out of these
services which now pervade our lives, and we're all
paying the price for it. So far, it has allowed the few
companies in this data oligopoly to join the ranks of the
largest, most profitable, and most highly valued enterprises in
human history, but they may be built on a flawed business model
and foundation vulnerable to disruption by software and
hardware technologies presently emerging.
The basic business model of what we might call the “consumer
Internet” (as opposed to businesses who pay to host their
Web presence, on-line stores, etc.) has, with few exceptions,
evolved to be what the author calls the “Google model”
(although it predates Google): give the product away and make money
by afflicting its users with advertisements (which are increasingly
targeted to them through information collected about the user's
behaviour on the network by intrusive tracking mechanisms).
The fundamental flaws of this are apparent to anybody who uses
the Internet: the constant clutter of advertisements, with
pop-ups, pop-overs, auto-play video and audio, flashing banners,
incessant requests to allow tracking “cookies” or
irritating notifications, and the consequent arms race between
ad blockers and means to circumvent them, with browser developers
(at least those not employed, directly or indirectly, by the
companies paid by the advertisers) caught in the middle.
There are even absurd Web sites which charge a subscription fee
for “membership” and then bombard these paying
customers with advertisements that insult their intelligence.
But there is a fundamental problem with
“free”—it destroys the most important channel
of communication between the vendor of a product or service and
the customer: the price the customer is willing to pay.
Deprived of this information, the vendor is in the same position
as a factory manager in a centrally planned economy who has no
idea how many of each item to make because his orders are handed
down by a planning bureau equally clueless about what is
needed in the absence of a price signal. In the end, you have
freight cars of typewriter ribbons lined up on sidings while
customers wait in line for hours in the hope of buying a
new pair of shoes. Further, when the user is not the customer
(the one who pays), and especially when a “free”
service verges on monopoly status like Google search, Gmail,
Facebook, and Twitter, there is little incentive for providers
to improve the user experience or be responsive to user requests
and needs. Users are subjected to the endless torment of buggy
“beta” releases, capricious change for the sake of
change, and compromises in the user experience on behalf of the
real customers—the advertisers. Once again, this mirrors
the experience of centrally-planned economies where the market
feedback from price is absent: to appreciate this, you need only
compare consumer products from the 1970s and 1980s manufactured
in the Soviet Union with those from Japan.
The fundamental flaw in Karl Marx's economics was his
belief that the industrial revolution of his time would produce
such abundance of goods that the problem would shift from
“production amid scarcity” to “redistribution
of abundance”. In the author's view, the neo-Marxists of
Silicon Valley see the exponentially growing technologies of
computing and communication providing such abundance that they
can give away its fruits in return for collecting and monetising
information collected about their users (note, not
“customers”: customers are those who pay for the
information so collected). Once you grasp this, it's
easier to understand the politics of the barons of Silicon
Valley.
The centralisation of data and information flow in these
vast data silos creates another threat to which a
distributed system is immune: censorship or manipulation of
information flow, whether by a coercive government or
ideologically-motivated management of the companies who
provide these “free” services. We may never
know who first said “The Internet treats censorship as
damage and routes around it” (the quote has been
attributed to numerous people, including two personal
friends, so I'm not going there), but it's profound: the
original decentralised structure of the ARPANET/Internet
is as robust against censorship as it is in the face of
nuclear war. If a node on the network starts to
censor information or refuses to forward it on communication
links it controls, the network routing protocols simply assume
that node is down and send data around it through other nodes
and paths which do not censor it. On a network with a
multitude of nodes and paths among them, owned by a large
and diverse population of operators, it is extraordinarily
difficult to shut down the flow of information from a given
source or viewpoint; there will almost always be an
alternative route that gets it there. (Cryptographic
protocols and secure, verified identities can similarly
prevent the alteration of information in transit or the forging
of information attributed to a different originator;
I'll discuss that later.) As with physical damage,
top-down censorship does not work because there's no top.
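The parenthetical point about cryptography can be sketched in a few lines. This example assumes the third-party Python cryptography package (the book does not prescribe any particular tool; this is just one common choice): content signed with an originator's private key can be checked by anyone holding the published public key, and any alteration or forgery is detected.

```python
# Sketch of the idea using the third-party "cryptography" package:
# a publisher signs content with a private key; anyone holding the
# published public key can detect alteration or forgery in transit.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()     # held by the originator
verify_key = signing_key.public_key()          # published for everyone

message = b"Statement as originally written."
signature = signing_key.sign(message)

try:
    verify_key.verify(signature, message)                 # intact: passes
    verify_key.verify(signature, b"Altered in transit.")  # raises
except InvalidSignature:
    print("Tampering or forgery detected.")
```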
But with the current centralised Internet, the owners and
operators of these data silos have enormous power to put their
thumbs on the scale, tilting opinion in their favour and
blocking speech they oppose. Google can push down the page
rank of information sources of which they disapprove, so
few users will find them. YouTube can “demonetise”
videos because they dislike their content,
cutting off their creators' revenue stream overnight
with no means of appeal, or they can outright ban creators
from the platform and remove their existing content. Twitter
routinely “shadow-bans” those with whom they disagree,
causing their tweets to disappear into the void, and outright
banishes those more vocal. Internet payment processors and
crowdfunding sites enforce explicit ideological litmus tests
on their users, and revoke long-standing commercial relationships
over legal speech. One might restate the original observation
about the Internet as “The centralised Internet treats
censorship as an opportunity and says,
‘Isn't it great!’ ” Today there's
a top, and those on top control the speech of everyone whose
traffic flows through their data silos.
This pernicious centralisation and “free” funding
by advertisement (which is fundamentally plundering users'
most precious possessions: their time and attention) were
in large part the consequence of the Internet's lacking three
fundamental architectural layers: security, trust, and transactions.
Let's explore them.
Security. Essential to any useful communication
system, security simply means that communications between
parties on the network cannot be intercepted by third parties,
modified en route, or otherwise manipulated (for example, by
changing the order in which messages are received). The
communication protocols of the Internet, based on the
OSI model,
had no explicit security layer. Security was expected to be
implemented outside the model, across the protocol layers.
On today's Internet, security has been bolted on,
largely through the
Transport
Layer Security (TLS) protocols (which, due to history, have
a number of other commonly used names, and are most often
encountered in the “https:” URLs by
which users access Web sites). But because it was bolted on
rather than designed in from the bottom up, and because it
“just grew”, TLS has been the locus of
numerous security flaws which put software that employs it
at risk. Further, TLS is a tool which must be used by application
designers with extreme care in order to deliver security to their
users. Even if TLS were completely flawless, it is very easy to
misuse it in an application and compromise users' security.
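Here is a minimal Python sketch of that point, using the standard library and a placeholder host name: the first connection verifies the server's certificate chain and host name, as the defaults intend, while the last three lines show the kind of “fix” seen in real applications that quietly discards every guarantee TLS provides.

```python
# Sketch of the difference between using TLS correctly and "using" it
# in a way that silently discards its guarantees (standard library only).
import socket
import ssl

host = "example.com"   # placeholder host for illustration

# Correct: the default context verifies the certificate chain and
# checks that the certificate actually matches the host name.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated", tls.version())

# A common misuse: turning verification off makes the connection
# trivially interceptable despite nominally "using TLS".
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE       # do NOT do this in production
```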
Trust. As indispensable as security is knowing to whom you're
talking. For example, when you connect to your bank's Web site,
how do you know you're actually talking to their server and not
some criminal whose computer has spoofed your computer's domain
name system server to intercept your communications and who, the
moment you enter your password, will be off and running to empty
your bank accounts and make your life a living Hell? Once again,
trust has been bolted on to the existing Internet through a
rickety system of “certificates” issued mostly by
large companies for outrageous fees. And, as with anything
centralised, it's vulnerable: in 2016, one of the
top-line certificate vendors was compromised, requiring myriad
Web sites (including this one) to re-issue their security
certificates.
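To see what that rickety system actually consists of, this short Python sketch (standard library; Fourmilab.ch is used simply because it was mentioned above) fetches a site's certificate and reports who issued it and when it expires. Trust in the whole exchange rests on that issuer.

```python
# Minimal sketch: what the browser's padlock actually relies on is a
# certificate issued by some authority; here we fetch and inspect it.
import socket
import ssl

host = "www.fourmilab.ch"
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

issuer = dict(item for rdn in cert["issuer"] for item in rdn)
print("Issued by :", issuer.get("organizationName", "unknown"))
print("Expires   :", cert["notAfter"])
# If that issuer is compromised, every certificate it signed must be
# replaced, as in the 2016 incident mentioned above.
```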
Transactions. Business is all about
transactions; if you aren't doing transactions, you aren't
in business or, as Gilder puts it, “In business, the
ability to conduct transactions is not optional. It is the
way all economic learning and growth occur. If your product is
‘free,’ it is not a product, and you are not in
business, even if you can extort money from so-called
advertisers to fund it.” The present-day Internet
has no transaction layer, even bolted on. Instead, we have
more silos and bags hanging off the side of the Internet
called PayPal, credit card processing companies, and the
like, which try to put a Band-Aid over the suppurating
wound which is the absence of a way to send money over the
Internet in a secure, trusted, quick, efficient, and
low-overhead manner. The need for this was perceived long
before ARPANET. In
Project
Xanadu, founded by
Ted Nelson
in 1960, rule 9 of the
“original 17 rules” was, “Every document can
contain a royalty mechanism at any desired degree of granularity
to ensure payment on any portion accessed, including virtual
copies (‘transclusions’) of all or part of the
document.” While defined in terms of documents and
quoting, this implied the existence of a
micropayment
system which would allow compensating authors and publishers
for copies and quotations of their work with a granularity as
small as one character, and could easily be extended to cover
payments for products and services. A micropayment system must
be able to handle very small payments without crushing
overhead, extremely quickly, and transparently (without the
Japanese tea ceremony that buying something on-line involves
today). As originally envisioned by Ted Nelson, as you read
documents, their authors and publishers would be automatically
paid for their content, including payments to the originators of
material from others embedded within them. As long as the total
price for the document was less than what I termed the user's
“threshold of paying”, this would be completely
transparent (a user would set the threshold in the browser: if
zero, they'd have to approve all payments). There would be
no need for advertisements to support publication on a public
hypertext network (although publishers would, of course, be free
to adopt that model if they wished). If implemented in a
decentralised way, like the ARPANET, there would be no
central strangle point where censorship could be applied by
cutting off the ability to receive payments.
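The browser-side logic of the threshold of paying is simple enough to sketch. The names and currency units below are purely illustrative, not part of any existing protocol:

```python
# Sketch of the "threshold of paying" described above: payments at or
# below the user's configured threshold go through silently; anything
# above it (or everything, if the threshold is zero) needs approval.
from dataclasses import dataclass

@dataclass
class MicropaymentPolicy:
    threshold: float          # user-set, in whatever currency unit applies

    def authorise(self, price: float, ask_user) -> bool:
        if self.threshold > 0 and price <= self.threshold:
            return True                        # transparent micropayment
        return ask_user(price)                 # explicit approval required

policy = MicropaymentPolicy(threshold=0.01)
approve = lambda price: input(f"Pay {price}? [y/N] ").lower() == "y"
if policy.authorise(0.004, approve):
    print("Author and publisher credited automatically.")
```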
So, is it possible to remake the Internet, building in
security, trust, and transactions as the foundation, and replace
what the author calls the “Google system of the world”
with one in which the data silos are seen as obsolete,
control of users' personal data and work returns to their
hands, privacy is respected and the panopticon snooping of
today is seen as a dark time we've put behind us, and the
pervasive and growing censorship by plutocrat
ideologues and slaver governments becomes impotent and
obsolete? George Gilder responds “yes”, and
in this book identifies technologies already existing and
being deployed which can bring about this transformation.
At the heart of many of these technologies is the concept
of a blockchain,
an open, distributed ledger which records transactions or any
other form of information in a permanent, public, and verifiable
manner. Originally conceived as the transaction ledger for
the Bitcoin
cryptocurrency,
it provided the first means of solving the
double-spending
problem (how do you keep people from spending a unit of
electronic currency twice?) without the need for a central server
or trusted authority, and hence without a potential choke-point
or vulnerability to attack or failure. Since the launch of
Bitcoin in 2009, blockchain technology has become a major area
of research, with banks and other large financial institutions,
companies such as IBM, and major university research groups
exploring applications with the goals of drastically reducing
transaction costs, improving security, and hardening systems
against single-point failure risks.
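The core data structure is simple enough to sketch in a few lines. This toy Python ledger is an illustration only: it omits proof-of-work, peer-to-peer consensus, and transaction signatures, which are what actually prevent double-spending without a central authority, but it shows the tamper-evidence that hash chaining provides.

```python
# Minimal sketch of a blockchain ledger: each block commits to its
# predecessor's hash, so altering any earlier entry invalidates
# everything that follows. (Toy code: no proof-of-work, no network.)
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "transactions": transactions})

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False                      # history was rewritten
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                          # True
ledger[0]["transactions"][0]["amount"] = 500   # try to rewrite history
print(verify(ledger))                          # False: tampering detected
```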
Applied to the Internet, blockchain technology can provide
security and trust (through the permanent publication of public
keys which identify actors on the network), and a transaction
layer able to efficiently and quickly execute micropayments
without the overhead, clutter, friction, and security risks of
existing payment systems. By necessity, present-day blockchain
implementations are add-ons to the existing Internet, but as the
technology matures and is verified and tested, it can move into
the foundations of a successor system, based on the same
lower-level protocols (and hence compatible with the installed
base), but eventually supplanting the patched-together
architecture of the
Domain
Name System,
certificate
authorities, and payment processors, all of which
represent vulnerabilities of the present-day Internet and
points at which censorship and control can be imposed. Technologies
to watch in these areas are:
As the bandwidth available to users on the edge of the network
increases through the deployment of fibre to the home and
enterprise and via
5G mobile
technology, the data transfer economy of scale of the great
data silos will begin to erode. Early in the Roaring Twenties,
the aggregate computing power and communication bandwidth on
the edge of the network will equal and eventually dwarf that
of the legacy data smelters of Google, Facebook, Twitter, and
the rest. There will no longer be any need for users to entrust
their data to these overbearing anachronisms and consent to multi-dozen
page “terms of service” or endure advertising
just to see their own content or share it with others. You
will be in possession of your own data, on your own server or
on space for which you freely contract with others, with backup and
other services contracted with any other provider on the
network. If your server has extra capacity, you can turn it
into money by joining the market for computing and storage
capacity, just as you take advantage of these resources when
required. All of this will be built on the new secure foundation,
so you will retain complete control over who can see your
data, no longer trusting weasel-worded promises made by
amorphous entities with whom you have no real contract to
guard your privacy and intellectual property rights. If
you wish, you can be paid for your content, with remittances
made automatically as people access it. More and more, you'll
make tiny payments for content which is no longer obstructed by
advertising and chopped up to accommodate more clutter. And
when outrage mobs of pink hairs and soybeards (each with their
own pronoun) come howling to ban you from the Internet, they'll
find nobody to shriek at and the kill switch rusting away in a
derelict data centre: your data will be in your own hands with
access through myriad routes.
Technologies moving in this direction include:
This book provides a breezy look at the present state of the
Internet, how we got here (versus where we thought we were going
in the 1990s), and how we might transcend the present-day mess
into something better if not blocked by the heavy hand of
government regulation (the risk of freezing the present-day
architecture in place by unleashing agencies like the U.S. Federal
Communications Commission, which stifled innovation in broadcasting
for six decades, to do the same to the Internet is discussed
in detail). Although it's way too early to see which of the
many contending technologies will win out (and recall that
the technically superior contender doesn't always prevail),
a survey of work in progress provides a sense for what they
have in common and what the eventual result might look like.
There are many things to quibble about here. Gilder goes on
at some length about how he believes artificial intelligence is
all nonsense, that computers can never truly think or be conscious,
and that creativity (new
information
in the Shannon sense) can
only come from the human mind, with a lot of confused arguments
from
Gödel
incompleteness, the Turing
halting problem,
and even the
uncertainty
principle of quantum mechanics. He really seems to believe in
vitalism, that there
is an élan vital which
somehow infuses the biological substrate which no machine can
embody. This strikes me as superstitious nonsense: a human
brain is a structure composed of quarks and electrons arranged in
a certain way which processes information, interacts with its
environment, and is able to observe its own operation as well as
external phenomena (which is all consciousness is about). Now, it
may be that somehow quantum mechanics is involved in all of this,
and that our existing computers, which are entirely deterministic
and classical in their operation, cannot replicate this functionality,
but if that's so it simply means we'll have to wait until
quantum computing,
which is already working in a rudimentary form in the laboratory, and is
just a different way of arranging the quarks and electrons in a
system, develops further.
He argues that while Bitcoin can be an efficient and secure
means of processing transactions, it is unsuitable as a
replacement for volatile fiat money because, unlike gold, the
quantity of Bitcoin has an absolute limit (21 million coins), after
which the supply will never grow. I don't get it. It seems to me
that this is a feature, not a bug. The supply of gold increases
slowly as new gold is mined, and by pure coincidence the rate
of increase in its supply has happened to approximate that of
global economic growth. But still, the existing inventory of
gold dwarfs new supply, so there isn't much difference between
a very slowly increasing supply and a static one. If you're
on a pure gold standard and economic growth is faster than
the increase in the supply of gold, there will be gradual deflation
because a given quantity of gold will buy more in the future.
But so what? In a deflationary environment, interest rates
will be low and it will be easy to fund new investment, since
investors will receive money back which will be more valuable.
With Bitcoin, once the entire supply is mined, supply will be
static (actually, very slowly shrinking, as private keys are
eventually lost, which is precisely like gold being consumed
by industrial uses from which it is not reclaimed), but Bitcoin
can be divided without limit (with minor and upward-compatible
changes to the existing protocol). So, it really doesn't matter
if, in the greater solar system economy of the year
8537, a single Bitcoin is sufficient to
buy Jupiter:
transactions will simply be done in
yocto-satoshis
or whatever. In fact, Bitcoin is better in this regard than
gold, which cannot be subdivided below the unit of one atom.
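A little arithmetic makes the divisibility point concrete. These are the current protocol constants; finer subdivision, such as the yocto-satoshis imagined above, would simply add decimal places.

```python
# Back-of-the-envelope arithmetic on divisibility: one bitcoin is
# currently divisible into 10**8 satoshis, so the ~21 million coin
# cap already yields about 2.1 quadrillion indivisible units.
MAX_BITCOIN = 21_000_000
SATOSHI_PER_BTC = 10**8

print(f"Smallest unit today: {1 / SATOSHI_PER_BTC:.8f} BTC")
print(f"Total indivisible units at the cap: {MAX_BITCOIN * SATOSHI_PER_BTC:,}")
```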
Gilder further argues, as he did in
The Scandal of Money (November 2016),
that the proper dimensional unit for money is time, since that
is the measure of what is required to create true wealth
(as opposed to funny money created by governments or fantasy
money “earned” in zero-sum speculation such as
currency trading), and that existing cryptocurrencies do not
meet this definition. I'll take his word on the latter point;
it's his definition, after all, but his time theory of money
is way too close to the Marxist
labour
theory of value to persuade me. That theory is trivially
falsified by its prediction that more value is created in
labour-intensive production of the same goods than by
producing them in a more efficient manner. In fact, value,
measured as profit, dramatically increases as the labour
input to production is reduced. Over forty centuries of human
history, the one thing in common among almost everything
used for money (at least until our post-reality era) is
scarcity: the supply is limited and it is difficult
to increase it. The genius of Bitcoin and its underlying
blockchain technology is that it solved the problem of how
to make a digital good, which can be copied at zero cost,
scarce, without requiring a central authority. That seems to
meet the essential requirement to serve as money, regardless
of how you define that term.
Gilder's books have a good record for sketching the future of
technology and identifying the trends which are contributing
to it. He has been less successful picking winners and losers;
I wouldn't make investment decisions based on his evaluation
of products and companies, but rather wait until the market
sorts out those which will endure.
Here is a
talk
by the author at the Blockstack Berlin 2018
conference which summarises the essentials of his thesis in
just eleven minutes and ends with an exhortation to designers
and builders of the new Internet to “tear down these walls”
around the data centres which imprison our personal information.
This
Uncommon
Knowledge interview provides, in 48
minutes, a calmer and more in-depth exploration of why the Google
world system must fail and what may replace it.
October 2018