Fourmilog: None Dare Call It Reason

Earth and Moon Viewer: Solar System Explorer

Tuesday, April 17, 2018 20:06

With the release of version 3.0, now in production, Earth and Moon Viewer, originally launched on the Web in 1994 as Earth Viewer, becomes “Earth and Moon Viewer and Solar System Explorer”. In addition to viewing the Earth and its Moon using a variety of image databases, you can now also explore high-resolution imagery of Mercury, Venus, Mars and its moons Phobos and Deimos, the asteroids Ceres and Vesta, and Pluto and its moon Charon. For some bodies multiple image databases are available, including spacecraft imagery and topography based upon elevation measurements. You can choose any of the available worlds and image databases from the custom request form.

All of the viewing options available for the Earth and Moon can be used when viewing the other bodies with the exception of viewing from an Earth satellite. Imagery is based upon the latest spacecraft data published by the United States Geological Survey Astrogeology Science Center.

For example, here is an image of the western part of Valles Marineris, with Noctis Labyrinthus at the centre of the image and the three Tharsis volcanoes toward the left. The image is rendered from an altitude of 1000 km using the Viking orbiter global mosaic at a resolution of 232 metres per pixel.

[Image: mars_viking.png]


Reading List: Antifragile

Thursday, April 12, 2018 13:50

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems (physical, economic, financial, and social) which perform well in the face of uncertainty and infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which fall behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant population, or industry, is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners; those whims will always be catered to by one or more among the current, ever-changing population of restaurants.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile: while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain dislocations wherever two crystals meet, with each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
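For readers curious what simulated annealing looks like in practice, here is a minimal sketch in Python. It is an illustrative toy, not the implementation discussed in the document linked above: the cooling schedule, the segment-reversal move, and all parameters are arbitrary choices made for brevity.

    # Minimal simulated annealing sketch for the travelling salesman problem.
    # Illustrative only: the cities, cooling schedule, and parameters are
    # arbitrary choices, not those of any particular implementation.
    import math
    import random

    def tour_length(tour, cities):
        """Total length of a closed tour visiting the cities in the given order."""
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def anneal(cities, start_temp=10.0, cooling=0.999, steps=50_000):
        tour = list(range(len(cities)))
        random.shuffle(tour)
        best = tour[:]
        temp = start_temp
        for _ in range(steps):
            # Propose a random change: reverse a segment of the tour.
            i, j = sorted(random.sample(range(len(tour)), 2))
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            delta = tour_length(candidate, cities) - tour_length(tour, cities)
            # Always accept improvements; accept some worsenings while "hot".
            if delta < 0 or random.random() < math.exp(-delta / temp):
                tour = candidate
                if tour_length(tour, cities) < tour_length(best, cities):
                    best = tour[:]
            temp *= cooling          # gradually withdraw the randomness
        return best

    if __name__ == "__main__":
        random.seed(1)
        cities = [(random.random(), random.random()) for _ in range(30)]
        tour = anneal(cities)
        print("tour length:", round(tour_length(tour, cities), 3))

The acceptance rule is where the randomness does its work: while the “temperature” is high, occasional worsening moves are accepted, which keeps the search from freezing into a poor local minimum, precisely the role heat plays when annealing metal.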

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly optimised to accomplish their goals with the minimum of resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, they will, at best, merely survive them; unlike systems in nature, they rarely benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (pocketing huge salaries and bonuses while bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets, and when the foundation of the system started to crumble, it suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”; otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to cash in on the news: exercise the option, paying US$1500 for the 100 shares, and immediately sell them at US$19/share, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)
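The arithmetic of that hypothetical trade can be checked in a few lines; the ticker, prices, and premium are, of course, just the made-up numbers from the example above.

    # Reproduce the arithmetic of the hypothetical SLUJ call option trade above.
    shares        = 100
    strike        = 15.00      # US$ per share
    premium       = 7.00       # total cost of the option (7 cents/share x 100)
    price_at_exit = 19.00      # US$ per share after the news

    gross_profit  = shares * (price_at_exit - strike)   # 400.00
    net_profit    = gross_profit - premium              # 393.00
    simple_return = net_profit / premium                # about 56.14, i.e. 5614%
    annualised    = simple_return * 4                   # roughly 3 month holding period

    print(f"gross US${gross_profit:.2f}, net US${net_profit:.2f}, "
          f"return {simple_return:.0%}, annualised {annualised:.0%}")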

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15, I pocket an immediate premium of seven cents per share, increasing my income from owning the stock by a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I buy a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a so-called “long straddle”. Now, as long as the price remains within the strike prices of the put and call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, that option will be in the money and will pay off more the further the underlying price veers from the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described who was clobbered in 2008 could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting them usually to expire worthless; but under a shock like the one which halved the value of his portfolio, they would pay off enough to compensate for the loss. (If worried only about a plunge he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)
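For the curious, here is a minimal sketch of the payoff of such a long straddle at expiration. The strikes, premium, and index levels are invented for illustration; real option pricing, margin, and early exercise are ignored.

    # Payoff at expiration of a long straddle: one out-of-the-money put plus one
    # out-of-the-money call on the same underlying.  Numbers are illustrative only.
    def straddle_payoff(price, put_strike, call_strike, premium_paid):
        """Net profit or loss of the straddle if the underlying ends at `price`."""
        put_value  = max(put_strike - price, 0.0)   # pays off on a crash
        call_value = max(price - call_strike, 0.0)  # pays off on a melt-up
        return put_value + call_value - premium_paid

    # Hypothetical index: straddle bought at index level 100 with strikes 80/120
    # and a total premium of 2.  Quiet market: small loss.  Halving: large gain.
    for level in (100, 110, 90, 50, 160):
        print(level, round(straddle_payoff(level, 80, 120, 2.0), 2))

In a quiet market the straddle loses only the small premium; in a 2008-style halving of the index the put leg pays off enough to offset much of the loss in the underlying portfolio, which is the antifragile behaviour being described.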

I have only described a small fraction of the many topics covered in this masterpiece, and described none of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). The distinction between fragility and antifragility is one of those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?”, and you'll have the tools to answer the question. Further, the book provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in, figure out how what you've learned applies to your own life, and act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”


Earth and Moon Viewer: New Topographic Maps

Tuesday, April 3, 2018 21:32

Since 1996, Earth and Moon Viewer has offered a topographic map of the Earth as one of the image databases which may be displayed. This map was derived from the NOAA/NCEI ETOPO2 topography database. Although the original data set contained samples with a spatial resolution of two arc minutes (two nautical miles per pixel, or a total image size of 10800×5400 pixels), main memory and disc size constraints of the era required reducing the resolution of the image within Earth and Moon Viewer to 1440×720 pixels. This was sufficient for renderings at the hemisphere or continental scale, but if you zoomed in closer, the results were disappointing. For example, here is a view of Spain, Portugal, France, and North Africa from 207 kilometres above the centre of the Iberian peninsula.

[Image: e_etopo0.png]

More than twenty years later, in the age of “extravagant computing”, and on the threshold of the Roaring Twenties, we can do much better than this. I have re-processed the raw ETOPO2 data set to preserve its full resolution, with pixels which can represent 65,536 unique colours instead of the 256 used before. Here is the same image rendered from the new ETOPO2 data.

[Image: e_etopo2.png]

The colours in this rendering are somewhat garish, and yet they do not necessarily show fine detail well. Images with this database tend to look their best either at very large scale or zoomed in to near the resolution limits of the database.

In 2009, the ETOPO1 data set was released, replacing ETOPO2 for most applications. The data have twice the spatial resolution: 1 arc minute, corresponding to one nautical mile per pixel or a total image size of 21600×10800 pixels. The permanent ice sheets of Antarctica, Greenland, and some Arctic islands are included in the elevation data. Earth and Moon Viewer now provides access to a rendering of this data set, which may be selected as “NOAA/NCEI ETOPO1 Global Relief” on any page which allows choosing an Earth imagery source. The full resolution of the database is available for close-ups. Here is the same view as that above rendered with the ETOPO1 data set.

[Image: e_etopo1.png]

The original low-resolution ETOPO2 data set remains available for compatibility with saved URLs which reference it, but is not directly requested by Earth and Moon Viewer's query pages.
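The image dimensions and per-pixel ground distances quoted above follow directly from the angular resolution of the data sets, since one arc minute of latitude corresponds to one nautical mile; a few lines suffice to check them.

    # Check the image dimensions and per-pixel ground distance quoted for the
    # ETOPO2 (2 arc minute) and ETOPO1 (1 arc minute) global elevation databases.
    NM_PER_ARCMIN = 1.0            # one nautical mile per arc minute of latitude

    def global_grid(arcmin_per_pixel):
        width  = int(360 * 60 / arcmin_per_pixel)   # pixels around the equator
        height = int(180 * 60 / arcmin_per_pixel)   # pixels pole to pole
        nm_per_pixel = arcmin_per_pixel * NM_PER_ARCMIN
        return width, height, nm_per_pixel

    for name, res in (("ETOPO2", 2), ("ETOPO1", 1)):
        w, h, nm = global_grid(res)
        print(f"{name}: {w}x{h} pixels, {nm:g} nautical mile(s) per pixel")
    # ETOPO2: 10800x5400 pixels, 2 nautical mile(s) per pixel
    # ETOPO1: 21600x10800 pixels, 1 nautical mile(s) per pixel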


Earth and Moon Viewer Updated

Thursday, March 29, 2018 20:04

The first major update to Earth and Moon Viewer since 2012 is now posted. Changes in this release are as follows.

  • When viewing the Moon, the default image database is the 100 metre per pixel LRO LROC-WAC Global Mosaic produced by the Lunar Reconnaissance Orbiter Camera Team at Arizona State University from imagery returned by NASA's Lunar Reconnaissance Orbiter spacecraft. This data set provides more than 5700 times the resolution (measured by pixels in the image) of the Clementine imagery previously used (which remains available as an option). Since the complete image database, consisting of 8 bit grey scale values, is 5.6 gigabytes in size, three smaller sub-sampled databases are automatically selected when lower resolution images are required, reserving the 100 metre per pixel data for very close zooms (as low as 1 km), where its full detail is required and only a small portion of the entire database need be brought into memory. You may observe a small pause when displaying images at this resolution. For comparison, below are views of the crater Copernicus from an altitude of 10 km. At left is an image generated from the Lunar Reconnaissance Orbiter data, while at right is the same view generated from Clementine imagery.

    [Images: copernic_LRO.jpg (Lunar Reconnaissance Orbiter, left); copernic_Clem.jpg (Clementine, right)]

  • Enabled zooming in as close as 1 km for all image databases which support such high resolution:
    • NASA Blue Marble Monthlies (Earth)
    • NASA Blue Marble (Earth)
    • NASA Visible Earth
    • Lunar Reconnaissance Orbiter 100 m (Moon)
    Note that the generic NASA Blue Marble imagery provides 500 metres per pixel resolution on extreme zoom-ins, while the Blue Marble Monthlies have a maximum resolution of 1 km/pixel. The monthly images are available at 500 m/pixel, but disc space and server memory constraints do not presently permit supporting the 84 gigabytes such images would occupy.
  • Updated all documents to current Web standards for character set specification in XHTML 1.0 files.
  • Updated all documents to use Fourmilab's standard CSS style sheet, justify text, and employ Unicode typography for quotes, dashes, ellipses, and other special characters.
  • Upgraded all of the Named Lunar Formations catalogue pages, which were gnarly mid-1990s HTML 3.2, to XHTML 1.0 Strict, with a consistent and much better looking style sheet. The list of Lunar Landing Sites has been updated to add post-Apollo impact and soft landing missions. All links in the catalogues now select the Lunar Reconnaissance Orbiter imagery rather than Clementine.
  • The View above Cities page now selects the NASA Blue Marble Monthlies image database.
  • The Earth and Moon Map Explorer now uses the NASA Blue Marble Monthlies for the Earth and the Lunar Reconnaissance Orbiter imagery for the Moon.
  • Converted legacy .gif images to PNG everywhere (except for a few animated GIFs, for which there is no alternative).
  • To support the very large grey scale Lunar Reconnaissance Orbiter image, a new version of the internal Earth Viewer Image Format, EVIF4, has been added. While previous versions of the format supported colour-mapped images with separate day and night imagery (either in the same file: EVIF1 and 2, or in separate files: EVIF3) with 16 bits per pixel, in EVIF4 pixels are 8 bit grey scale values and the night image is synthesised on the fly by shading the pixel values, either smoothly or sharply depending on whether the body being viewed has an atmosphere (a conceptual sketch of this kind of shading appears after this list). While this format is presently used only for the LRO images, it may prove useful for other grey scale data such as radar maps of Venus and Titan. Users may apply gamma correction to images generated from EVIF4 databases to adjust contrast as they wish.
  • All documents are now XHTML 1.0 Strict or Transitional, and all have been validated for compliance by the W3C Markup Validation Service.
  • A number of stale and broken links have been fixed. All citations of books on Amazon now point to the most recent edition.
  • The HTML generated by requests to Earth and Moon Viewer is now XHTML 1.0 Strict and validated for standards compliance. Embedded CSS improves the formatting of result documents.
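As a rough illustration of the night-side shading mentioned in the EVIF4 item above, here is a conceptual sketch. The twilight width, function name, and linear ramp are invented for illustration; this is not Fourmilab's EVIF4 code.

    # Conceptual sketch: synthesise a night side from day-side grey scale pixels
    # by darkening them according to the Sun's elevation at each point.  The
    # twilight width and names here are illustrative, not Fourmilab's EVIF4 code.
    def night_shade(pixel, sun_elevation_deg, has_atmosphere):
        """Return the shaded 8 bit grey value for a pixel.

        sun_elevation_deg > 0 means the Sun is above the horizon at that point.
        With an atmosphere the terminator fades over a band of twilight; an
        airless body gets an abrupt day/night boundary.
        """
        if has_atmosphere:
            twilight = 6.0                      # assumed twilight band in degrees
            t = (sun_elevation_deg + twilight) / (2 * twilight)
            factor = max(0.0, min(1.0, t))      # linear ramp across the terminator
        else:
            factor = 1.0 if sun_elevation_deg > 0 else 0.0   # sharp terminator
        return int(round(pixel * factor))

    print(night_shade(200,  10, True))   # full daylight: unchanged
    print(night_shade(200,  -2, True))   # in twilight: partially darkened
    print(night_shade(200,  -2, False))  # airless body: already fully dark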


JavaScrypt Updated

Wednesday, March 14, 2018 23:01

I have just posted a new version of JavaScrypt, the first major update in thirteen years.

JavaScrypt is a collection of Web pages which implement a complete symmetric encryption facility that runs entirely within your browser, using JavaScript for all computation. When you encrypt or decrypt with JavaScrypt, nothing is sent over the Internet; you can run JavaScrypt from a local copy on a machine not connected to the Internet. JavaScrypt encrypts with the Advanced Encryption Standard (AES) using 256 bit keys: this is the standard accepted by the U.S. government for encryption of Top Secret data. (While JavaScrypt is completely compatible with AES, it has not been certified by the U.S. National Security Agency as an approved cryptographic module and should not be used in applications where this is a requirement.) Companion modules provide a text-based steganography facility and generation of pass phrases and encryption keys.
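For readers unfamiliar with symmetric encryption, the sketch below shows the general idea of a 256 bit key cipher, using the third-party Python cryptography package and AES-256 in GCM mode. It illustrates the concept only: it is not JavaScrypt's code and is not compatible with JavaScrypt's encrypted file format.

    # Minimal illustration of 256 bit key symmetric encryption (AES-256-GCM)
    # using the third-party "cryptography" package.  This shows the general
    # idea only; it is NOT compatible with JavaScrypt's encrypted file format.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key   = AESGCM.generate_key(bit_length=256)   # random 256 bit key
    aes   = AESGCM(key)
    nonce = os.urandom(12)                        # never reuse a nonce with a key

    ciphertext = aes.encrypt(nonce, b"Attack at dawn.", None)
    plaintext  = aes.decrypt(nonce, ciphertext, None)
    assert plaintext == b"Attack at dawn."
    print(ciphertext.hex())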

This update is 100% compatible with earlier releases of JavaScrypt: encrypted files can be exchanged by the old and new versions with no difficulties. The updates bring JavaScrypt in line with contemporary Web standards.

  • All HTML files are now XHTML 1.0 Strict and verified for compliance.
  • There is a uniform CSS style sheet for all pages and the style is more pleasing to the eye.
  • Unicode typography is used for characters such as quotes, ellipses, and dashes.
  • All JavaScript files now specify “use strict” and are compliant with that mode.
  • <label> containers are used on check boxes and radio buttons so you can click the labels as well as the boxes.
  • Added the option to generate signatures for pass phrases using the SHA-224 and SHA-256 hash algorithms in addition to MD5.
  • Citations to books on Amazon have been updated to reference the latest editions and links changed to the current recommended format.

For complete details of the changes in this version, see the development log.

If you've been using the previous version of JavaScrypt and start to use the update, you may encounter some JavaScript errors due to incompatibility between JavaScript files stored in your browser's cache and the new HTML documents. Flushing your browser's cache and reloading the page should remedy these problems. (This shouldn't be necessary if browsers were competently implemented, but after more than twenty years seeing this done wrong, I despair of its ever being fixed.)
