Tuesday, April 17, 2018

Earth and Moon Viewer: Solar System Explorer

With the release of version 3.0, now in production, Earth and Moon Viewer, originally launched on the Web in 1994 as Earth Viewer, becomes “Earth and Moon Viewer and Solar System Explorer”. In addition to viewing the Earth and its Moon using a variety of image databases, you can now also explore high-resolution imagery of Mercury, Venus, Mars and its moons Phobos and Deimos, the asteroids Ceres and Vesta, and Pluto and its moon Charon. For some bodies multiple image databases are available, including spacecraft imagery and topography based upon elevation measurements. You can choose any of the available worlds and image databases from the custom request form.

All of the viewing options available for the Earth and Moon can be used when viewing the other bodies with the exception of viewing from an Earth satellite. Imagery is based upon the latest spacecraft data published by the United States Geological Survey Astrogeology Science Center.

For example, here is an image of the western part of Valles Marineris, with Noctis Labyrinthus at the centre of the image and the three Tharsis volcanoes toward the left. The image is rendered from an altitude of 1000 km using the Viking orbiter global mosaic, which has a resolution of 232 metres per pixel.

mars_viking.png

Posted at 20:06 Permalink

Thursday, April 12, 2018

Reading List: Antifragile

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems (physical, economic, financial, and social) which perform well in the face of uncertainty and of infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which fall behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant population or industry is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners, whims which will be catered to by one or more among the current, but ever-changing, population of restaurants.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile, since while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain dislocations wherever two crystals meet, each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
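
To make the analogy concrete, here is a minimal sketch, in Python, of simulated annealing applied to the travelling salesman problem. It is not the program described in that document, just an illustration: random cities, 2-opt tour reversals, and an acceptance rule which occasionally keeps a worse tour; that injected randomness is what keeps the search from freezing into a poor local optimum.

    # Minimal simulated annealing for the travelling salesman problem
    # (illustrative sketch: random cities, geometric cooling schedule).
    import math, random

    def tour_length(cities, tour):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def anneal(cities, t=1.0, cooling=0.9995, steps=50000):
        tour = list(range(len(cities)))
        random.shuffle(tour)
        cost = tour_length(cities, tour)
        for _ in range(steps):
            i, j = sorted(random.sample(range(len(cities)), 2))
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
            delta = tour_length(cities, candidate) - cost
            # Always accept improvements; accept a worse tour with
            # probability exp(-delta / t), the "thermal noise" which
            # keeps the search from freezing prematurely.
            if delta < 0 or random.random() < math.exp(-delta / t):
                tour, cost = candidate, cost + delta
            t *= cooling  # gradually cool
        return tour, cost

    cities = [(random.random(), random.random()) for _ in range(30)]
    tour, cost = anneal(cities)
    print(f"Tour length: {cost:.3f}")

Set cooling to 1.0 and the tour churns forever; start t near zero and it freezes into mediocrity at once. The process needs randomness, in the right dose, to work at all.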

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly-optimised to accomplish their goals with the minimum resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, they will, at best, merely survive them; unlike systems in nature, they rarely benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (pocketing huge salaries and bonuses while bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets, and when the foundation of the system started to crumble, suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”; otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to sell on the news: I exercise the option, paying US$1500 for the 100 shares, and then immediately sell them at US$19, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)
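
The arithmetic of that trade is easy to check. Here it is in a few lines of Python (the ticker and all numbers are, of course, hypothetical, taken straight from the example above):

    # Profit on the hypothetical SLUJ call option trade described above.
    shares  = 100
    strike  = 15.00     # US$ per share
    premium = 7.00      # US$ paid for the option (7 cents/share x 100 shares)
    market  = 19.00     # US$ per share after the good news

    profit_on_shares = shares * (market - strike)    # US$400
    profit_on_trade  = profit_on_shares - premium    # US$393
    pct_return = 100 * profit_on_trade / premium     # about 5614%
    annualised = pct_return * 4                      # three-month option
    print(profit_on_trade, round(pct_return), round(annualised))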

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15, I pocket an immediate premium of seven cents per share, increasing my income from owning the stock by a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I bought a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a so-called “long straddle”. Now, as long as the price remains within the strike prices of the put and call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, that option will be in the money, paying off more the further the underlying price moves outside the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.
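
The payoff of a long straddle at expiration is simple enough to state in code. Here is a sketch in Python, with arbitrary strikes and premiums (as noted above, pricing before expiration is more complicated):

    # Payoff at expiration of a long straddle: an out of the money put and
    # call on the same security. Inside the band you lose the premiums;
    # outside it, the payoff grows with the size of the move.
    def straddle_payoff(price, put_strike, call_strike, premiums):
        payoff = -premiums                    # cost of both options
        if price < put_strike:
            payoff += put_strike - price      # put is in the money
        elif price > call_strike:
            payoff += price - call_strike     # call is in the money
        return payoff

    for price in (40, 80, 100, 120, 160):
        print(price, straddle_payoff(price, put_strike=80,
                                     call_strike=120, premiums=2.0))

Small, bounded losses most of the time, and large gains precisely when something unexpected happens: that asymmetry is the signature of antifragility.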

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described who was clobbered in 2008 could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting to usually have them expire worthless, but which, in a crash like the one that halved the value of his portfolio, would have paid off enough to compensate for the shock. (If worried only about a plunge he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)

I have only described a small fraction of the many topics covered in this masterpiece, and said nothing of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). Fragility and antifragility form one of those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?” and have the tools to answer the question. Further, it provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in and figure out how what you've learned applies to your own life and act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”

Posted at 13:50 Permalink

Tuesday, April 3, 2018

Earth and Moon Viewer: New Topographic Maps

Since 1996, Earth and Moon Viewer has offered a topographic map of the Earth as one of the image databases which may be displayed. This map was derived from the NOAA/NCEI ETOPO2 topography database. Although the original data set contained samples with a spatial resolution of two arc minutes (two nautical miles per pixel, or a total image size of 10800×5400 pixels), main memory and disc size constraints of the era required reducing the resolution of the image within Earth and Moon Viewer to 1440×720 pixels. This was sufficient for renderings at the hemisphere or continental scale, but if you zoomed in closer, the results were disappointing. For example, here is a view of Spain, Portugal, France, and North Africa as seen from 207 kilometres above the centre of the Iberian peninsula.

e_etopo0.png

More than twenty years later, in the age of “extravagant computing”, and on the threshold of the Roaring Twenties, we can do much better than this. I have re-processed the raw ETOPO2 data set to preserve its full resolution, with pixels which can represent 65,536 unique colours instead of the 256 used before. Here is the same image rendered from the new ETOPO2 data.

e_etopo2.png

The colours in this rendering are somewhat garish, and yet they do not necessarily show fine detail well. Images rendered with this database tend to look their best either at very large scale or zoomed in to near the resolution limits of the database.

In 2009, the ETOPO1 data set was released, replacing ETOPO2 for most applications. The data have twice the spatial resolution: 1 arc minute, corresponding to one nautical mile per pixel or a total image size of 21600×10800 pixels. The permanent ice sheets of Antarctica, Greenland, and some Arctic islands are included in the elevation data. Earth and Moon Viewer now provides access to a rendering of this data set, which may be selected as “NOAA/NCEI ETOPO1 Global Relief” on any page which allows choosing an Earth imagery source. The full resolution of the database is available for close-ups. Here is the same view as that above rendered with the ETOPO1 data set.

e_etopo1.png

The original low-resolution ETOPO2 data set remains available for compatibility with saved URLs which reference it, but is not directly requested by Earth and Moon Viewer's query pages.

Posted at 21:32 Permalink

Thursday, March 29, 2018

Earth and Moon Viewer Updated

The first major update to Earth and Moon Viewer since 2012 is now posted. Changes in this release are as follows.
  • When viewing the Moon, the default image database is the 100 metre per pixel LRO LROC-WAC Global Mosaic produced by the Lunar Reconnaissance Orbiter Camera Team at Arizona State University from imagery returned by NASA's Lunar Reconnaissance Orbiter spacecraft. This data set provides more than 5700 times the resolution (measured by pixels in the image) of the Clementine imagery previously used (which remains available as an option). Since the complete image database, consisting of 8 bit grey scale values, is 5.6 gigabytes in size, three smaller sub-sampled databases are automatically selected when lower resolution images are required, reserving the 100 metre per pixel data for very close zooms (as low as 1 km), where its full detail is required and only a small portion of the entire database need be brought into memory. You may observe a small pause when displaying images at this resolution. For comparison, below are views of the crater Copernicus from an altitude of 10 km. At left is an image generated from the Lunar Reconnaissance Orbiter data, while at right is the same view generated from Clementine imagery.

    copernic_LRO.jpg   copernic_Clem.jpg

  • Enabled zooming in as close as 1 km for all image databases which support such high resolution:
    • NASA Blue Marble Monthlies (Earth)
    • NASA Blue Marble (Earth)
    • NASA Visible Earth
    • Lunar Reconnaissance Orbiter 100 m (Moon)
    Note that the generic NASA Blue Marble imagery provides 500 metres per pixel resolution on extreme zoom-ins, while the Blue Marble Monthlies have a maximum resolution of 1 km/pixel. The monthly images are available at 500 m/pixel, but disc space and server memory constraints do not presently permit supporting the 84 gigabytes such images would occupy.
  • Updated all documents to current Web standards for character set specification in XHTML 1.0 files.
  • Updated all documents to use Fourmilab's standard CSS style sheet, justify text, and employ Unicode typography for quotes, dashes, ellipses, and other special characters.
  • Upgraded all of the Named Lunar Formations catalogue pages, which were gnarly mid-1990s HTML 3.2, to XHTML 1.0 Strict, with a consistent and much better looking style sheet. The list of Lunar Landing Sites has been updated to add post-Apollo impact and soft landing missions. All links in the catalogues now select the Lunar Reconnaissance Orbiter imagery rather than Clementine.
  • The View above Cities page now selects the NASA Blue Marble Monthlies image database.
  • The Earth and Moon Map Explorer now uses the NASA Blue Marble Monthlies for the Earth and the Lunar Reconnaissance Orbiter imagery for the Moon.
  • Converted legacy .gif images to PNG everywhere (except for a few animated GIFs, for which there is no alternative).
  • To support the very large grey scale Lunar Reconnaissance Orbiter image, a new version of the internal Earth Viewer Image Format, EVIF4, has been added. While previous versions of the format supported colour-mapped images with separate day and night imagery (either in the same file: EVIF1 and 2, or in separate files: EVIF3) with 16 bits per pixel, in EVIF4 pixels are 8 bit grey scale values and the night image is synthesised on the fly by shading the pixel values, either smoothly or sharply depending on whether the body being viewed has an atmosphere. Although this format is presently used only for the LRO images, it may prove useful for other grey scale data such as radar maps of Venus and Titan. Users may apply gamma correction to images generated from EVIF4 databases to adjust contrast as they wish (a sketch of this kind of shading appears after this list).
  • All documents are now XHTML 1.0 Strict or Transitional, and all have been validated for compliance by the W3C Markup Validation Service.
  • A number of stale and broken links have been fixed. All citations of books on Amazon now point to the most recent edition.
  • The HTML generated by requests to Earth and Moon Viewer is now XHTML 1.0 Strict and validated for standards compliance. Embedded CSS improves the formatting of result documents.
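
As promised above, here is a rough sketch of the kind of on-the-fly night-side shading and gamma correction an EVIF4-style grey scale database permits. This is my own illustration in Python, not Earth and Moon Viewer's actual code, and the 18 degree twilight band is an arbitrary assumption:

    # Synthesise a night-side pixel from an 8 bit grey scale day value
    # (illustrative sketch, not Earth and Moon Viewer's actual code).
    def night_pixel(day_value, sun_altitude, atmosphere=True, gamma=1.0):
        """day_value: 0-255 grey level; sun_altitude: degrees above horizon."""
        if atmosphere:
            # Smooth terminator: fade across an assumed 18 degree twilight band.
            shade = min(max((sun_altitude + 9.0) / 18.0, 0.0), 1.0)
        else:
            # Airless body: sharp terminator.
            shade = 1.0 if sun_altitude > 0.0 else 0.0
        v = (day_value / 255.0) * shade
        return round(255 * v ** (1.0 / gamma))  # user-selected gamma correction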

Posted at 20:04 Permalink

Wednesday, March 14, 2018

JavaScrypt Updated

I have just posted a new version of JavaScrypt, the first major update in thirteen years.

JavaScrypt is a collection of Web pages which implement a complete symmetrical encryption facility that runs entirely within your browser, using JavaScript for all computation. When you encrypt or decrypt with JavaScrypt, nothing is sent over the Internet; you can run JavaScrypt from a local copy on a machine not connected to the Internet. JavaScrypt encrypts with the Advanced Encryption Standard (AES) using 256 bit keys: this is the standard accepted by the U.S. government for encryption of Top Secret data. (While JavaScrypt is completely compatible with AES, it has not been certified by the U.S. National Security Agency as an approved cryptographic module and should not be used in applications where this is a requirement.) Companion modules provide a text-based steganography facility and generation of pass phrases and encryption keys.
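
For readers wondering what symmetric encryption with AES and 256 bit keys looks like in practice, here is a minimal round trip in Python using the third-party cryptography package: generic AES-256 in CBC mode with PKCS#7 padding. It is purely an illustration of the cipher; it is not JavaScrypt's container format, and its output is not interchangeable with JavaScrypt files.

    # Generic AES-256 round trip (illustration only; not JavaScrypt's format).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # 256 bit key
    iv  = os.urandom(16)   # AES block size is 128 bits

    def pad(data):         # PKCS#7 padding to a 16 byte boundary
        n = 16 - len(data) % 16
        return data + bytes([n]) * n

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(pad(b"Attack at dawn")) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    plaintext = padded[:-padded[-1]]   # strip the padding
    assert plaintext == b"Attack at dawn"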

This update is 100% compatible with earlier releases of JavaScrypt: encrypted files can be exchanged by the old and new versions with no difficulties. The updates bring JavaScrypt in line with contemporary Web standards.

  • All HTML files are now XHTML 1.0 Strict and verified for compliance.
  • There is a uniform CSS style sheet for all pages and the style is more pleasing to the eye.
  • Unicode typography is used for characters such as quotes, ellipses, and dashes.
  • All JavaScript files now specify “use strict” and are compliant with that mode.
  • <label> containers are used on check boxes and radio buttons so you can click the labels as well as the boxes.
  • Added the option to generate signatures for pass phrases using the SHA-224 and SHA-256 hash algorithms in addition to MD5 (see the sketch after this list).
  • Citations to books on Amazon have been updated to reference the latest editions and links changed to the current recommended format.
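
Computing such a pass phrase signature is a one-liner with any standard hash library. For example, in Python (a generic illustration of the hash algorithms involved, not JavaScrypt's exact output format):

    # Hash "signatures" of a pass phrase using the algorithms named above.
    import hashlib

    phrase = "correct horse battery staple".encode("utf-8")
    for algo in ("md5", "sha224", "sha256"):
        print(algo, hashlib.new(algo, phrase).hexdigest())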

For complete details of the changes in this version, see the development log.

If you've been using the previous version of JavaScrypt and start to use the update, you may encounter some JavaScript errors due to incompatibility between JavaScript files stored in your browser's cache and the new HTML documents. Flushing your browser's cache and reloading the page should remedy these problems. (This shouldn't be necessary if browsers were competently implemented, but after more than twenty years seeing this done wrong, I despair of its ever being fixed.)

Posted at 23:01 Permalink

Thursday, February 15, 2018

Reading List: The Ministry of Ungentlemanly Warfare

Lewis, Damien. The Ministry of Ungentlemanly Warfare. New York: Quercus, 2015. ISBN 978-1-68144-392-8.
After becoming prime minister in May 1940, one of Winston Churchill's first acts was to establish the Special Operations Executive (SOE), which was intended to conduct raids, sabotage, reconnaissance, and support resistance movements in Axis-occupied countries. The SOE was not part of the military: it was a branch of the Ministry of Economic Warfare and its very existence was a state secret, camouflaged under the name “Inter-Service Research Bureau”. Its charter was, as Churchill described it, to “set Europe ablaze”.

The SOE consisted, from its chief, Brigadier Colin McVean Gubbins, who went by the designation “M”, to its recruits, of people who did not fit well with the regimentation, hierarchy, and constraints of life in the conventional military branches. They could, in many cases, be easily mistaken for blackguards, desperadoes, and pirates, and that's precisely what they were in the eyes of the enemy—unconstrained by the rules of warfare, striking by stealth, and sowing chaos, mayhem, and terror among occupation troops who thought they were far from the front.

Leading some of the SOE's early exploits was Gustavus “Gus” March-Phillipps, founder of the British Army's Small Scale Raiding Force, and given the SOE designation “Agent W.01”, meaning the first agent assigned to the west Africa territory with the leading zero identifying him as “trained and licensed to use all means to liquidate the enemy”—a license to kill. The SOE's liaison with the British Navy, tasked with obtaining support for its operations and providing cover stories for them, was a fellow named Ian Fleming.

One of the SOE's first and most daring exploits was Operation Postmaster, with the goal of seizing German and Italian ships anchored in the port of Santa Isabel on the Spanish island colony of Fernando Po off the coast of west Africa. Given the green light by Churchill over the strenuous objections of the Foreign Office and Admiralty, who were concerned about the repercussions if British involvement in what amounted to an act of piracy in a neutral country were to be disclosed, the operation was mounted under the strictest secrecy and deniability, with a cover story prepared by Ian Fleming. Despite harrowing misadventures along the way, the plan was a brilliant success, capturing three ships and their crews and delivering them to the British-controlled port of Lagos without any casualties. Vindicated by the success, Churchill gave the SOE the green light to raid Nazi occupation forces on the Channel Islands and the coast of France.

Serving on his first mission in Operation Postmaster was Anders Lassen, an aristocratic Dane who enlisted as a private in the British Commandos after his country was occupied by the Nazis. With his silver-blond hair, blue eyes, and accent easily mistaken for German, Lassen was apprehended by the Home Guard on several occasions while on training missions in Britain and held as a suspected German spy until his commanders intervened. Lassen was given a field commission, direct from private to second lieutenant, immediately after Operation Postmaster, and went on to become one of the most successful leaders of special operations raids in the war. As long as Nazis occupied his Danish homeland, he was possessed with a desire to kill as many Nazis as possible, wherever and however he could, and when in combat was animated by a berserker drive and ability to improvise that caused those who served with him to call him the “Danish Viking”.

This book provides a look into the operations of the SOE and its successor organisations, the Special Air Service and Special Boat Service, seen through the career of Anders Lassen. So numerous were special operations, conducted in many theatres around the world, that this kind of focus is necessary. Also, attrition in these high-risk raids, often far behind enemy lines, was so high there are few individuals one can follow throughout the war. As the war approached its conclusion, Lassen was the only surviving participant in Operation Postmaster, the SOE's first raid.

Lassen went on to lead raids against Nazi occupation troops in the Channel Islands, leading Churchill to remark, “There comes from the sea from time to time a hand of steel which plucks the German sentries from their posts with growing efficiency.” While these “butcher-and-bolt” raids could not liberate territory, they yielded prisoners, code books, and radio contact information valuable to military intelligence and, more importantly, forced the Germans to strengthen their garrisons in these previously thought secure posts, tying down forces which could otherwise be sent to active combat fronts. Churchill believed that the enemy should be attacked wherever possible, and SOE was a precision weapon which could be deployed where conventional military forces could not be used.

As the SOE was absorbed into the military Special Air Service, Lassen would go on to fight in North Africa, Crete, the Aegean islands, then occupied by Italian and German troops, and mainland Greece. His raid on a German airbase on occupied Crete took out fighters and bombers which could have opposed the Allied landings in Sicily. Later, his small group of raiders, unsupported by any other force, liberated the Greek city of Salonika, bluffing the German commander into believing Lassen's forty raiders and two fishing boats were actually a British corps of thirty thousand men, with armour, artillery, and naval support.

After years of raiding in peripheral theatres, Lassen hungered to get into the “big war”, and ended up in Italy, where his irregular form of warfare and disdain for military discipline created friction with his superiors. But he got results, and his unit was tasked with reconnaissance and pathfinding for an Allied crossing of Lake Comacchio (actually, more of a swamp) in Operation Roast in the final days of the war. It was there he was to meet his end, in a fierce engagement against Nazi troops defending the north shore. For this, he posthumously received the Victoria Cross, becoming the only non-Commonwealth citizen so honoured in World War II.

It is a cliché to say that a work of history “reads like a thriller”, but in this case it is completely accurate. The description of the raid on the Kastelli airbase on Crete would, if made into a movie, probably cause many viewers to suspect it to be fictionalised, but that's what really happened, based upon after action reports by multiple participants and aerial reconnaissance after the fact.

World War II was a global conflict, and while histories often focus on grand battles such as D-day, Stalingrad, Iwo Jima, and the fall of Berlin, there was heroism in obscure places such as the Greek islands which also contributed to the victory, and combatants operating in the shadows behind enemy lines who did their part and often paid the price for the risks they willingly undertook. This is a stirring story of this shadow war, told through the short life of one of its heroes.

Posted at 00:06 Permalink

Saturday, February 10, 2018

Gnome-o-gram: Experts

Ever since the 19th century, the largest industry in Zambia has been copper mining, which today accounts for 85% of the country's exports. The economy of the nation and the prosperity of its people rise and fall with the price of copper on the world market, so nothing is so important to industry and government planners as the expectation for the price of this commodity in the future. Since the 1970s, the World Bank has issued regular forecasts for the price of copper and other important commodities, and the government of Zambia and other resource-based economies often base their economic policy upon these pronouncements by high-powered experts with masses of data at their fingertips. Let's see how they've done.

World Bank forecasts of copper price vs. actual price, 1970-1995

The above chart, from a paper [PDF] by Angus Deaton in the Summer 1999 issue of the Journal of Economic Perspectives, shows, for the years 1970 through 1995, the actual price of copper (solid heavy line) and successive forecasts (light dashed lines) by the august seers of the World Bank. Each forecast departs from the actual price line on the date at which it was issued.

Over a period of a quarter of a century, every forecast by the World Bank was totally wrong. Further, unlike predictions made by throwing darts while blindfolded, where you'd expect half to be too high and half too low, every single prediction from the 1970s until 1987 erred wildly on the high side, while every one after that date was absurdly pessimistic. You'd have made a much better forecast for the period simply by plotting a random walk confined between 50 and 100.
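
For what it's worth, generating such a “forecast” is trivial. Here is a purely illustrative sketch in Python of a random walk confined to that band (the step size is arbitrary):

    # A random-walk copper price "forecast" confined between 50 and 100.
    import random

    price = 75.0
    for year in range(1970, 1996):
        price = min(100.0, max(50.0, price + random.uniform(-5.0, 5.0)))
        print(year, round(price, 1))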

And yet people based decisions upon these forecasts, and those in the industry or who depended upon it for their livelihood suffered as a result. Did any of the “experts” who cranked out these predictions suffer or lose their cushy jobs? I doubt it.

In the investment world, firms and forecasters are required to warn potential customers that “past performance is no guarantee of future results”. But in a case like this, past performance is a pretty strong clue that the idiots who turned it in are no more likely to produce usable numbers in the future than a blind monkey firing a shotgun at the chart.

Now, bear in mind what Michael Crichton named the “Murray Gell-Mann Amnesia Effect”:

You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

The next time you hear a politician, economist, or other wonk confidently forecast things five or ten years in the future, remember the World Bank and copper prices. Odds are the numbers they're quoting are just as bogus, and they'll pay no price when they're found to be fantasy. Who pays the price? You do.

Posted at 16:05 Permalink

Sunday, February 4, 2018

Reading List: Life 3.0

Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.
The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth's surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. When it comes to the origin of life, the evidence is subtle and it's hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.

This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.

About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.

What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.

To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.

Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared, but some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they've figured out how to pass what they've learned to other humans via speech, writing, and more recently, YouTube comments.

While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you're born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.

The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform, as well as or better than human beings, all of the intellectual tasks which they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather an engineering cycle which will run at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as humans are compared to ants in one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.

Life 1.0, 2.0, and 3.0

The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don't worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn't malevolence, but competence”. It's unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don't align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.

But isn't this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that's interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).

My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.

The Intelligence Landscape

Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands which people confidently said (often not long ago), “a computer will never…”, are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red since when the water surges into it, it's game over: machines will now be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.

Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?

First of all, let's assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That's complicated. Here are the alternatives discussed by the author. I've deliberately given just the titles without summaries to stimulate your imagination about their consequences.

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

Choose wisely: whichever you choose may be the one your descendants (if any exist) may be stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that's assuming we'll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.

Let's assume progress toward AGI continues over the next few decades. I believe that what I've been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many jobs in service industries, allowing them to be automated. The cost to replace a human worker will be comparable to a year's wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.

This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, it will engulf not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but white collar clerical and professional jobs whose holders went into them thinking they were immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers who replace them don't (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?

The following chart is presented to illustrate stagnation of income of lower-income households since around 1970.

Income per U.S. Household: 1920–2015

I'm not sure this chart supports the argument that technology has been the principal cause of the stagnation of income among the bottom 90% of households since around 1970. No major technological innovation affecting employment occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s, when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don't have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.

Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.

Observation and our understanding of the chemistry underlying the origin of life are compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it's entirely possible it may have been an event so improbable we'll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents.”).

But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn't matter; like the bacteria and ants, we will have done our part.

The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.

In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.

Posted at 13:54 Permalink

Thursday, February 1, 2018

Reading List: Starship Grifters

Kroese, Robert. Starship Grifters. Seattle: 47North, 2014. ISBN 978-1-4778-1848-0.
This is the funniest science fiction novel I have read in quite a while. Set in the year 3013, not long after galactic civilisation barely escaped an artificial intelligence apocalypse and banned fully self-aware robots, the story is related by Sasha, one of a small number of Self-Arresting near Sentient Heuristic Androids built to be useful without running the risk of their taking over. SASHA robots are equipped with an impossible-to-defeat watchdog module which causes a hard reboot whenever they are on the verge of having an original thought. The limitation of the design proved a serious handicap, and all of their manufacturers went bankrupt. Our narrator, Sasha, was bought at an auction by the protagonist, Rex Nihilo, for thirty-five credits in a lot of “ASSORTED MACHINE PARTS”. Sasha is Rex's assistant and sidekick.

Rex is an adventurer. Sasha says he “never had much of an interest in anything but self-preservation and the accumulation of wealth, the latter taking clear precedence over the former.” Sasha's built-in limitations (in addition to the new idea watchdog, she is unable to tell a lie, but if humans should draw incorrect conclusions from incomplete information she provides them, well…) pose problems in Rex's assorted lines of work, most of which seem to involve scams, gambling, and contraband of various kinds. In fact, Rex seems to fit in very well with the universe he inhabits, which appears to be firmly grounded in Walker's Law: “Absent evidence to the contrary, assume everything is a scam”. Evidence appears almost totally absent, and the oppressive tyranny called the Galactic Malarchy, those who supply it, the rebels who oppose it, entrepreneurs like Rex working in the cracks, organised religions and cults, and just about everybody else, appear to be on the make or on the take, looking to grift everybody else for their own account. Cosmologists attribute this to the “Strong Misanthropic Principle, which asserts that the universe exists in order to screw with us.” Rex does his part, although he usually seems to veer between broke and dangerously in debt.

Perhaps that's due to his somewhat threadbare talent stack. As Sasha describes him, Rex doesn't have a head for numbers. Nor does he have much of a head for letters, and “Newtonian physics isn't really his strong suit either”. He is, however, occasionally lucky, or so it seems at first. In an absurdly high-stakes card game with weapons merchant Gavin Larviton, reputed to be one of the wealthiest men in the galaxy, Rex manages to win, almost honestly, not only Larviton's personal starship, but an entire planet, Schnufnaasik Six. After barely escaping a raid by Malarchian marines led by the dread and squeaky-voiced Lord Heinous Vlaak, Rex and Sasha set off in the ship Rex has won, the Flagrante Delicto, to survey the planetary prize.

It doesn't take Rex long to discover, not surprisingly, that he's been had, and that his financial situation is now far more dire than he'd previously been able to imagine. If any of the bounty hunters now on his trail should collar him, he could spend a near-eternity on the prison planet of Gulagatraz (the names are a delight in themselves). So, it's off to the rebel base on the forest moon (which is actually a swamp; the swamp moon is all desert) to try to con the Frente Repugnante (all the other names were taken by rival splinter factions, so they ended up with “Revolting Front”, which was translated into Spanish to appeal to Latino planets) into paying for a secret weapon which exists only in Rex's imagination.

Thus we embark upon a romp which has a laugh-out-loud line about every other page. This is comic science fiction in the vein of Keith Laumer's Retief stories. As with Laumer, Kroese achieves the perfect balance of laugh lines, plot development, interesting ideas, and recurring gags (there's a planet-destroying weapon called the “plasmatic entropy cannon” which the oft-inebriated Rex refers to variously as the “positronic endoscopy cannon”, “pulmonary embolism cannon”, “ponderosa alopecia cannon”, “propitious elderberry cannon”, and many other ways). There is a huge and satisfying reveal at the end—I kind of expected one was coming, but I'd have never guessed the details.

If reading this leaves you with an appetite for more Rex Nihilo, there is a prequel novella, The Chicolini Incident, and a sequel, Aye, Robot.

The Kindle edition is free for Kindle Unlimited subscribers.

Posted at 22:32 Permalink

Thursday, January 25, 2018

Reading List: Artemis

Weir, Andy. Artemis. New York: Crown, 2017. ISBN 978-0-553-44812-2.
Seldom has a first-time novelist burst onto the scene so spectacularly as Andy Weir with The Martian (November 2014). He originally wrote it for his own amusement, circulating it chapter by chapter to a small but enthusiastic group of fans who provided feedback and suggestions as the story developed, then posted the completed novel as a free download on his Web site. Some people who had heard of it by word of mouth but lacked the technical savvy to download documents and transfer them to e-readers inquired whether he could make a Kindle version available. Since you can't give away Kindle books, he published it at the minimum possible price. Before long, the book was rising into the Amazon bestseller list in science fiction, and he was contacted by a major publisher about doing a print edition. Such publishers accept manuscripts only through agents, and he didn't have one (nor do agents usually work with first-time authors, which creates a chicken-and-egg problem for the legacy publishing industry), so the publisher put him in touch with a major agent and recommended the manuscript. This led to a 2014 hardcover edition and then a Hollywood movie in 2015 which was nominated for seven Oscars and won two Golden Globes: Best Motion Picture and Best Actor, both in the Musical or Comedy category.

The question fans immediately asked themselves was, “Is this a one shot, or can he repeat?” Well, I think we have the answer: with Artemis, Andy Weir has delivered another story of grand master calibre and shown himself on track to join the ranks of the legends of the genre.

In the latter part of the 21st century, commerce is expanding into space, and the Moon is home to Artemis, a small settlement of around 2000 permanent residents, situated in the southern part of the Sea of Tranquility, around 40 km from the Apollo 11 landing site. A substantial part of the economy of Artemis is based upon wealthy tourists who take the train from Artemis to the Apollo 11 Visitor Center (where they can look at, but not touch or interfere with, the historical relics) and enjoy the luxuries and recreations which cater to them back in the pleasure domes.

Artemis is the creation of the Kenya Space Corporation (KSC), which officially designates it “Kenya Offshore Platform Artemis” and operates under international maritime law. As space commerce burgeoned in the 21st century, Kenya's visionary finance minister, Fidelis Ngugi, saw that Kenya's equatorial latitude (it's little appreciated that once reliable, fully-reusable launch vehicles are developed, there's no need to launch over water) and hands-off regulatory regime provided a golden opportunity for space entrepreneurs to escape the nanny-state regulation and crushing tax burden of “developed” countries. With tax breaks and an African approach to regulation, entrepreneurs and money flowed in from around the world, making Kenya into a space superpower and enriching its economy and opportunities for its people. Twenty years later, Ngugi was Administrator of Artemis; she was, in effect, ruler of the Moon.

While Artemis offered a five-star experience to the tourists who kept its economy humming, those who supported the settlement and its industries lived in something more like a frontier boom town of the 19th century. Like many such settlements, Artemis attracted opportunity-seekers, and those looking to put their pasts behind them, from many countries and cultures. Those already established tend to attract more of their own kind, and clannish communities developed around occupations: most people in Life Support were Vietnamese, while metal-working was predominantly Hungarian. For whatever reason, welding was dominated by Saudis, including Ammar Bashara, who emigrated to Artemis with his six-year-old daughter Jasmine. Twenty years later, Ammar runs a prosperous welding business and Jasmine (“Jazz”) is, shall we say, more irregularly employed.

Artemis is an “energy intense” Moon settlement of the kind described in Steven D. Howe's Honor Bound Honor Born (May 2014). The community is powered by twin 27-megawatt nuclear reactors located behind a berm one kilometre from the main settlement. The reactors not only provide constant electricity and heat through the two-week nights and days of the Moon, they power a smelter which processes the lunar regolith into raw materials. The Moon's crust is about 40% oxygen, 20% silicon, 12% iron, and 8% aluminium. With abundant power, these elements can be separated and used to manufacture aluminium and iron for structures and glass from silicon and oxygen, with plenty of left-over oxygen to breathe. There is no need for elaborate recycling of oxygen: there's always more coming out of the smelter. Many denizens of Artemis subsist largely on “gunk”, an algae-based food grown locally in vats which is nutritious but unpalatable and monotonous. There are a variety of flavours, all of which are worse than the straight stuff.
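
As a back-of-the-envelope illustration (my arithmetic applied to the composition quoted above, not a calculation from the book), here is what a tonne of regolith would yield if the smelter achieved perfect separation:

    # Hypothetical yield per tonne of lunar regolith, using the
    # crustal composition fractions quoted above; perfect separation assumed.
    regolith_kg = 1000
    composition = {"oxygen": 0.40, "silicon": 0.20, "iron": 0.12, "aluminium": 0.08}
    for element, fraction in composition.items():
        print(f"{element:>9}: {regolith_kg * fraction:.0f} kg")
    # The remaining ~20% is other elements (calcium, magnesium, titanium, ...).

Every tonne processed liberates on the order of 400 kg of oxygen, which is why recycling breathing air would be pointless in such a settlement.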

Jazz works as a porter. She picks up things somewhere in the settlement and delivers them to their destinations using her personally-owned electric-powered cart. Despite the indigenous production of raw materials, many manufactured goods and substances are imported from Earth or factories in Earth orbit, and every time a cargo ship arrives, business is brisk for Jasmine and her fellow porters. Jazz is enterprising and creative, and has a lucrative business on the side: smuggling. Knowing the right people in the spaceport and how much to cut them in, she has a select clientele to which she provides luxury goods from Earth which aren't on the approved customs manifests.

For this, she is paid in “slugs”. No, not slimy molluscs, but “soft-landed grams”: credits which can be exchanged to pay KSC to deliver payload from Earth to Artemis. Slugs act as a currency, and can be privately exchanged among individuals' handheld computers much as Bitcoin is today. Jazz makes around 12,000 slugs a month as a porter, and more, although variable, from her more entrepreneurial sideline.

One of her ultra-wealthy clients approaches her with a highly illegal, almost certainly unethical, and very likely perilous proposal. Surviving for as long as she has in her risky business has given Jazz a sense for where the edge is and the good sense not to step over it.

“I'm sorry but this isn't my thing. You'll have to find someone else.”

“I'll offer you a million slugs.”

“Deal.”

Thus begins an adventure in which Jazz has to summon all of her formidable intellect, cunning, and resources, form expedient alliances with unlikely parties, solve a technological mystery, balance honour with being an outlaw, and discover the economic foundation of Artemis, which is nothing like it appears from the surface. All of this is set in a richly textured and believable world which we learn about as the story unfolds: Weir is a master of “show, don't tell”. And it isn't just a page-turning thriller (although that it most certainly is); it's also funny, and in the right places and amount.

This is where I'd usually mention technical goofs and quibbles. I'll not do that because I didn't find any. The only thing I'm not sure about is Artemis' using a pure oxygen atmosphere at 20% of Earth sea-level pressure. This works for short- and moderate-duration space missions, and was used in the U.S. Mercury, Gemini, and Apollo missions. For exposure to pure oxygen longer than two weeks, a phenomenon called absorption atelectasis can develop, which is the collapse of the alveoli in the lungs due to complete absorption of the oxygen gas (see this NASA report [PDF]). The presence of a biologically inert gas such as nitrogen, helium, argon, or neon will keep the alveoli inflated and prevent this phenomenon. The U.S. Skylab missions used an atmosphere of 72% oxygen and 28% nitrogen to avoid this risk, and the Soviet Salyut and Mir space stations used a mix of nitrogen and oxygen with between 21% and 40% oxygen. The Space Shuttle and International Space Station use sea-level atmospheric pressure with 21% oxygen and the balance nitrogen. The effects of reduced pressure on the boiling point of water and the fire hazard of pure oxygen even at reduced pressure are accurately described, but I'm not sure the physiological effects of a pure oxygen atmosphere for long-term habitation have been worked through.
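
As a quick sanity check (my own arithmetic, not the author's), the oxygen partial pressure in such an atmosphere is almost exactly what our lungs receive at sea level on Earth, which is why breathing in Artemis works at all; the concern is solely the absence of an inert gas to keep the alveoli inflated:

    # Compare oxygen partial pressures: Earth sea level vs. a pure-O2
    # atmosphere at 20% of sea-level total pressure (as in the novel).
    SEA_LEVEL_KPA = 101.325
    earth_ppO2 = 0.21 * SEA_LEVEL_KPA    # 21% O2 in nitrogen-rich air
    artemis_ppO2 = 0.20 * SEA_LEVEL_KPA  # pure O2: partial = total pressure
    print(f"Earth ppO2:   {earth_ppO2:.1f} kPa")    # about 21.3 kPa
    print(f"Artemis ppO2: {artemis_ppO2:.1f} kPa")  # about 20.3 kPa

The two differ by only about five percent, so oxygen delivery to the blood is essentially unchanged; it is the missing 80 kPa or so of nitrogen that matters over the long term.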

Nitpicking aside, this is a techno-thriller which is also an engaging human story, set in a perfectly plausible and believable future where not only the technology but the economics and social dynamics work. We may just be welcoming another grand master to the pantheon.

Posted at 22:55 Permalink