2016  

January 2016

Waldman, Jonathan. Rust. New York: Simon & Schuster, 2015. ISBN 978-1-4516-9159-7.
In May of 1980 two activists, protesting the imprisonment of a Black Panther convicted of murder, climbed the Statue of Liberty in New York harbour, planning to unfurl a banner high on the statue. After spending a cold and windy night aloft, they descended and surrendered to the New York Police Department's Emergency Service Unit. Fearful that the climbers might have damaged the fragile copper cladding of the monument, a comprehensive inspection was undertaken. What was found was shocking.

The structure of the Statue of Liberty was designed by Alexandre-Gustave Eiffel, and consists of an iron frame weighing 135 tons, which supports the 80 ton copper skin. As marine architects know well, a structure using two dissimilar metals such as iron and copper runs a severe risk of galvanic corrosion, especially in an environment such as the sea air of a harbour. If the iron and copper were to come into contact, an electric current would flow across the junction, and the iron would be consumed in the process. Eiffel's design prevented the iron and copper from touching one another by separating them with spacers made of asbestos impregnated with shellac.
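As a rough illustration of why it is the iron, not the copper, that corrodes in such a couple, one can compare the standard electrode potentials of the two metals. This is a minimal sketch using generic handbook values, not figures from the book:

```python
# Standard electrode potentials in volts (vs. the standard hydrogen
# electrode) -- textbook half-cell values, not from the book.
E_COPPER = +0.34   # Cu2+ + 2e- -> Cu
E_IRON   = -0.44   # Fe2+ + 2e- -> Fe

# In a galvanic couple the metal with the lower potential (iron)
# becomes the anode and is consumed; the difference in potentials
# is what drives the corrosion current through the electrolyte.
driving_voltage = E_COPPER - E_IRON
print(f"driving potential: {driving_voltage:.2f} V")
```

The larger this difference, the more aggressive the galvanic attack, which is why iron-and-copper structures in salt air demand insulation like Eiffel's shellacked asbestos spacers.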

What Eiffel didn't anticipate is that over the years superintendents of the statue would decide to “protect” its interior by applying various kinds of paint. By 1980 eight coats of paint had accumulated, almost as thick as the copper skin. The paint trapped water between the skin and the iron frame, and this set galvanic corrosion into action. One third of the rivets in the frame were damaged or missing, and some of the frame's iron ribs had lost two thirds of their material. The asbestos insulators had absorbed water and were long gone. The statue was at risk of structural failure.

A private fund-raising campaign raised US$ 277 million to restore the statue; the restoration ended up replacing most of its internal structure. On July 4th, 1986, the restored statue was inaugurated, marking its 100th anniversary.

Earth, uniquely among known worlds, has an atmosphere with free oxygen, produced by photosynthetic plants. While much appreciated by creatures like ourselves which breathe it, oxygen is a highly reactive gas and combines with many other elements, either violently in fire, or more slowly in reactions such as rusting metals. Further, 71% of the Earth's surface is covered by oceans, whose salty water promotes other forms of corrosion all too familiar to owners of boats. This book describes humanity's “longest war”: the battle against the corruption of our works by the inexorable chemical process of corrosion.

Consider an everyday object much more humble than the Statue of Liberty: the aluminium beverage can. The modern can is one of the most highly optimised products of engineering ever created. Around 180 billion cans are produced and consumed every year around the world: four six-packs for every living human being. Reducing the mass of each can by just one gram will result in an annual saving of 180,000 metric tons of aluminium, worth almost 300 million dollars at present prices, so a long list of clever tricks has been employed to reduce the mass of cans. But it doesn't matter how light or inexpensive the can is if it explodes, leaks, or changes the flavour of its contents. Coca-Cola, with a pH of 2.75 and a witches’ brew of ingredients, under a pressure of 6 atmospheres, is as corrosive to bare aluminium as battery acid. If the inside of the can were not coated with a proprietary epoxy lining (whose composition depends upon the product being canned, and is carefully guarded by can manufacturers), the Coke would corrode through the thin walls of the can in just three days. The process of scoring the pop-top removes the coating around the score, and risks corrosion and leakage if a can is stored on its side; don't do that.
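The arithmetic behind that saving is easy to check. The aluminium price used here is an assumption, chosen to roughly match the text's “almost 300 million dollars”:

```python
# Rough check of the aluminium-saving arithmetic quoted in the text.
cans_per_year   = 180e9    # cans produced annually (from the text)
saving_per_can  = 1e-3     # kg saved per can (one gram)
price_per_tonne = 1650     # US$/tonne of aluminium -- assumed price

mass_saved_tonnes = cans_per_year * saving_per_can / 1000
value_saved = mass_saved_tonnes * price_per_tonne
print(f"{mass_saved_tonnes:,.0f} t saved, worth ${value_saved / 1e6:,.0f} million")
# → 180,000 t saved, worth $297 million
```

At that scale even a fraction of a gram per can is worth serious engineering effort, which is why can walls are drawn as thin as corrosion resistance allows.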

The author takes us on an eclectic tour of the history of corrosion and those who battle it, from the invention of stainless steel, to inspecting the trans-Alaska oil pipeline by sending a “pig” (essentially a robot submarine equipped with electronic sensors) down its entire length, to the evangelists for galvanizing (zinc coating) steel. We meet Dan Dunmire, the Pentagon's rust czar, who estimates that corrosion costs the military on the order of US$ 20 billion a year and describes how even the most humble of mitigation strategies can have huge payoffs. A new kind of gasket intended to prevent corrosion where radio antennas protrude through the fuselage of aircraft returned 175 times its investment in a single year. Overall return on investment in the projects funded by his office is estimated as fifty to one. We're introduced to the world of the corrosion engineer, a specialty which, while not glamorous, pays well and offers superb job security, since rust will always be with us.

Not everybody we encounter battles rust. Photographer Alyssha Eve Csük has turned corrosion into fine art. Working at the abandoned Bethlehem Steel Works in Pennsylvania, perhaps the rustiest part of the rust belt, she clandestinely scrambles around the treacherous industrial landscape in search of the beauty in corrosion.

This book mixes the science of corrosion with the stories of those who fight it, in the past and today. It is an enlightening and entertaining look into the most mundane of phenomena, but one which affects all the technological works of mankind.


Levenson, Thomas. The Hunt for Vulcan. New York: Random House, 2015. ISBN 978-0-8129-9898-6.
The history of science has been marked by discoveries in which, by observing where nobody had looked before, with new and more sensitive instruments, or at different aspects of reality, new and often surprising phenomena have been detected. But some of the most profound of our discoveries about the universe we inhabit have come from things we didn't observe, but expected to.

By the nineteenth century, one of the most solid pillars of science was Newton's law of universal gravitation. With a single equation a schoolchild could understand, it explained why objects fall, why the Moon orbits the Earth and the Earth and other planets the Sun, the tides, and the motion of double stars. But still, one wonders: is the law of gravitation exactly as Newton described, and does it work everywhere? For example, Newton's gravity gets weaker as the inverse square of the distance between two objects (for example, if you double the distance, the gravitational force is four times weaker [2² = 4]) but has unlimited range: every object in the universe attracts every other object, however weakly, regardless of distance. But might gravity not, say, weaken faster at great distances? If this were the case, the orbits of the outer planets would differ from the predictions of Newton's theory. Comparing astronomical observations to calculated positions of the planets was a way to discover such phenomena.
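In symbols, the law being tested by the outer-planet observations is the familiar inverse-square form; doubling the separation cuts the force to one quarter:

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2},
\qquad
\frac{F(2r)}{F(r)} \;=\; \frac{r^2}{(2r)^2} \;=\; \frac{1}{4}.
```

Any deviation from the exponent 2 at large distances would show up as a systematic drift between the calculated and observed positions of the outermost planets.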

In 1781 astronomer William Herschel discovered Uranus, the first planet not known since antiquity. (Uranus is dim but visible to the unaided eye and doubtless had been seen innumerable times, including by astronomers who included it in star catalogues, but Herschel was the first to note its non-stellar appearance through his telescope, originally believing it a comet.) Herschel wasn't looking for a new planet; he was observing stars for another project when he happened upon Uranus. Further observations of the object confirmed that it was moving in a slow, almost circular orbit, around twice the distance of Saturn from the Sun.

Given knowledge of the positions, velocities, and masses of the planets and Newton's law of gravitation, it should be possible to predict the past and future motion of solar system bodies for an arbitrary period of time. Working backward, comparing the predicted influence of bodies on one another with astronomical observations, the masses of the individual planets can be estimated to produce a complete model of the solar system. This great work was undertaken by Pierre-Simon Laplace, who published his Mécanique céleste in five volumes between 1799 and 1825. As the middle of the 19th century approached, ongoing precision observations of the planets indicated that all was not proceeding as Laplace had foreseen. Uranus, in particular, continued to diverge from where it was expected to be after taking into account the gravitational influence upon its motion by Saturn and Jupiter. Could Newton have been wrong, and the influence of gravity different over the vast distance of Uranus from the Sun?

In the 1840s two mathematical astronomers, Urbain Le Verrier in France and John Couch Adams in Britain, working independently, investigated the possibility that Newton was right, but that an undiscovered body in the outer solar system was responsible for perturbing the orbit of Uranus. After almost unimaginably tedious calculations (done using tables of logarithms and pencil and paper arithmetic), both Le Verrier and Adams found a solution and predicted where to observe the new planet. Adams failed to persuade astronomers to look for the new world, but Le Verrier prevailed upon an astronomer at the Berlin Observatory to try, and Neptune was duly discovered within one degree (twice the apparent size of the full Moon) of his prediction.

This was Newton triumphant. Not only was the theory vindicated, it had been used, for the first time in history, to predict the existence of a previously unknown planet and tell the astronomers right where to point their telescopes to observe it. The mystery of the outer solar system had been solved. But problems remained much closer to the Sun.

The planet Mercury orbits the Sun every 88 days in an eccentric orbit which never exceeds half the Earth's distance from the Sun. It is a small world, with just 6% of the Earth's mass. As an inner planet, Mercury never appears more than 28° from the Sun, and can best be observed in the morning or evening sky when it is near its maximum elongation from the Sun. (With a telescope, it is possible to observe Mercury in broad daylight.) Flush with his success with Neptune, and rewarded with the post of director of the Paris Observatory, in 1859 Le Verrier turned his attention toward Mercury.

Again, through arduous calculations (by this time Le Verrier had a building full of minions to assist him, but so grueling was the work and so demanding a boss was Le Verrier that during his tenure at the Observatory 17 astronomers and 46 assistants quit) the influence of all of the known planets upon the motion of Mercury was worked out. If Mercury orbited a spherical Sun without other planets tugging on it, the point of its closest approach to the Sun (perihelion) in its eccentric orbit would remain fixed in space. But with the other planets exerting their gravitational influence, Mercury's perihelion should advance around the Sun at a rate of 526.7 arcseconds per century. But astronomers who had been following the orbit of Mercury for decades measured the actual advance of the perihelion as 565 arcseconds per century. This left a discrepancy of 38.3 arcseconds, for which there was no explanation. (The modern value, based upon more precise observations over a longer period of time, for the perihelion precession of Mercury is 43 arcseconds per century.) Although small (recall that there are 1,296,000 arcseconds in a full circle), this anomalous precession was much larger than the margin of error in observations and clearly indicated something was amiss. Could Newton be wrong?
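The numbers quoted above fit together as follows (all figures are those given in the text):

```python
# Bookkeeping of Mercury's perihelion advance, in arcseconds per century.
newtonian = 526.7   # advance predicted from the pull of the known planets
observed  = 565.0   # advance measured by 19th-century astronomers

anomaly = observed - newtonian
print(f"unexplained advance: {anomaly:.1f} arcsec/century")   # 38.3

# For scale, a full circle contains 360 * 60 * 60 arcseconds:
arcsec_per_circle = 360 * 60 * 60
print(arcsec_per_circle)   # 1296000
```

Tiny as the anomaly is against a full circle, it was far outside the observational error bars, which is what made it so troubling.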

Le Verrier thought not. Just as he had done for the anomalies of the orbit of Uranus, Le Verrier undertook to calculate the properties of an undiscovered object which could perturb the orbit of Mercury and explain the perihelion advance. He found that a planet closer to the Sun (or a belt of asteroids with equivalent mass) would do the trick. Such an object, so close to the Sun, could easily have escaped detection, as it could only be readily observed during a total solar eclipse or when passing in front of the Sun's disc (a transit). Le Verrier alerted astronomers to watch for transits of this intra-Mercurian planet.

On March 26, 1859, Edmond Modeste Lescarbault, a provincial physician in a small town and passionate amateur astronomer, turned his (solar-filtered) telescope toward the Sun. He saw a small dark dot crossing the disc of the Sun, taking one hour and seventeen minutes to transit, just as expected by Le Verrier. He communicated his results to the great man, and after a visit and detailed interrogation, the astronomer certified the doctor's observation as genuine and computed the orbit for the new planet. The popular press jumped upon the story. By February 1860, planet Vulcan was all the rage.

Other observations began to arrive, from both credible and unknown observers. Professional astronomers mounted worldwide campaigns to observe the Sun around the period of predicted transits of Vulcan. All of the planned campaigns came up empty. Searches for Vulcan became a major focus of solar eclipse expeditions. Unless the eclipse happened to occur when Vulcan was in conjunction with the Sun, it should be readily observable when the Sun was obscured by the Moon. Eclipse expeditions prepared detailed star charts for the vicinity of the Sun to exclude known stars for the search during the fleeting moments of totality. In 1878, an international party of eclipse chasers including Thomas Edison descended on Rawlins, Wyoming to hunt Vulcan in an eclipse whose path of totality crossed that frontier town. One group spotted Vulcan; others didn't. Controversy and acrimony ensued.

After 1878, most professional astronomers lost interest in Vulcan. The anomalous advance of Mercury's perihelion was mostly set aside as “one of those things we don't understand”, much as astronomers regard dark matter today. In 1915, Einstein published his theory of gravitation: general relativity. It predicted that when objects moved rapidly or gravitational fields were strong, their motion would deviate from the predictions of Newton's theory. Einstein recalled the moment when he performed the calculation of the motion of Mercury in his just-completed theory. It predicted precisely the perihelion advance observed by the astronomers. He said that his heart shuddered in his chest and that he was “beside himself with joy.”

Newton was wrong! For the extreme conditions of Mercury's orbit, so close to the Sun, Einstein's theory of gravitation is required to obtain results which agree with observation. There was no need for planet Vulcan, and now it is mostly forgotten. But the episode is instructive as to how confidence in long-accepted theories and wishful thinking can lead us astray when what might be needed is an overhaul of our most fundamental theories. A century hence, which of our beliefs will be viewed as we regard planet Vulcan today?


Ward, Jonathan H. Countdown to a Moon Launch. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17791-5.
In the companion volume, Rocket Ranch (December 2015), the author describes the gargantuan and extraordinarily complex infrastructure which was built at the Kennedy Space Center (KSC) in Florida to assemble, check out, and launch the Apollo missions to the Moon and the Skylab space station. The present book explores how that hardware was actually used, following the “processing flow” of the Apollo 11 launch vehicle and spacecraft from the arrival of components at KSC to the moment of launch.

As intricate as the hardware was, it wouldn't have worked, nor would it have been possible to launch flawless mission after flawless mission on time had it not been for the management tools employed to coordinate every detail of processing. Central to this was PERT (Program Evaluation and Review Technique), a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine and missile systems. PERT breaks down the progress of a project into milestones connected by activities into a graph of dependencies. Each activity has an estimated time to completion. A milestone might be, say, the installation of the guidance system into a launch vehicle. That milestone would depend upon the assembly of the components of the guidance system (gyroscopes, sensors, electronics, structure, etc.), each of which would depend upon their own components. Downstream, integrated test of the launch vehicle would depend upon the installation of the guidance system. Many activities proceed in parallel and only come together when a milestone has them as its mutual dependencies. For example, the processing and installation of rocket engines is completely independent of work on the guidance system until they join at a milestone where an engine steering test is performed.

As a project progresses, the time estimates for the various activities will be confronted with reality: some will be completed ahead of schedule while others will slip due to unforeseen problems or over-optimistic initial forecasts. This, in turn, ripples downstream in the dependency graph, changing the time available for activities if the final completion milestone is to be met. For any given graph at a particular time, there will be a critical path of activities where a schedule slip of any one will delay the completion milestone. Each lower level milestone in the graph has its own critical path leading to it. As milestones are completed ahead or behind schedule, the overall critical path will shift. Knowing the critical path allows program managers to concentrate resources on items along the critical path to avoid, wherever possible, overall schedule slips (with the attendant extra costs).
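The critical-path idea can be sketched in a few lines of code. This is a hypothetical miniature of the guidance-system and engine example above, not anything from the book; the activity names and durations are invented:

```python
# Toy PERT network: each activity maps to (duration in weeks,
# list of activities that must finish before it can start).
activities = {
    "assemble_gyroscopes":     (3, []),
    "assemble_electronics":    (4, []),
    "install_guidance_system": (2, ["assemble_gyroscopes", "assemble_electronics"]),
    "process_rocket_engines":  (8, []),
    "engine_steering_test":    (1, ["install_guidance_system", "process_rocket_engines"]),
}

def critical_path(activities):
    """Return the earliest completion time of the final milestone and
    the chain of activities that determines it (the critical path)."""
    finish = {}
    def earliest_finish(name):
        if name not in finish:
            duration, deps = activities[name]
            finish[name] = duration + max((earliest_finish(d) for d in deps), default=0)
        return finish[name]
    end = max(activities, key=earliest_finish)
    # Walk back through the latest-finishing dependency at each step.
    path = [end]
    while activities[path[-1]][1]:
        path.append(max(activities[path[-1]][1], key=lambda d: finish[d]))
    return finish[end], list(reversed(path))

total, path = critical_path(activities)
print(total, path)   # the engine chain (8 + 1 = 9 weeks) is critical here
```

A schedule slip anywhere on the returned path delays the completion milestone one-for-one, which is exactly why managers concentrated resources along it.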

Now all this sounds complicated, and in a project with the scope of Apollo, it is almost bewildering to contemplate. The Launch Control Center was built with four firing rooms. Three were outfitted with all of the consoles to check out and launch a mission, but the fourth, a cavernous room, ended up being used to display and maintain the PERT charts for activities in progress. Three levels of charts were maintained. Level A was used by senior management and contained hundreds of major milestones and activities. Each of these was expanded out into a level B chart which, taken together, tracked in excess of 7000 milestones. These, in turn, were broken down into detail on level C charts, which tracked more than 40,000 activities. The level B and C charts were displayed on more than 400 square metres of wall space in the back room of firing room four. As detailed milestones were completed on the level C charts, changes would propagate through that chart, and those which affected its completion would be carried upward to the level B and A charts.

Now, here's the most breathtaking thing about this: they did it all by hand! For most of the Apollo program, computer implementations of PERT were not available (or those that existed could not handle this level of detail). (Today, the PERT network for processing of an Apollo mission could be handled on a laptop computer.) There were dozens of analysts and clerks charged with updating the networks, with the processing flow displayed on an enormous board with magnetic strips which could be shifted around by people climbing up and down rolling staircases. Photographers would take pictures of the board which were printed and distributed to managers monitoring project status.

If PERT was essential to coordinating all of the parallel activities in preparing a spacecraft for launch, configuration control was critical to ensure that when the countdown reached T0, everything would work as expected. Just as there was a network of dependencies in the PERT chart, the individual components were tested, subassemblies were tested, assemblies of them were tested, all leading up to an integrated test of the assembled launcher and spacecraft. The successful completion of a test established a tested configuration for the item. Anything which changed that configuration in any way, for example unplugging a cable and plugging it back in, required re-testing to confirm that the original configuration had been restored. (One of the pins in the connector might not have made contact, for instance.) This was all documented by paperwork signed off by three witnesses. The mountain of paper was intimidating; there was even a slide rule calculator for estimating the cost of various kinds of paperwork.

With all of this management superstructure it may seem a miracle that anything got done at all. But, as the end of the decade approached, the level of activity at KSC was relentless (and took a toll upon the workforce, although many recall it as the most intense and rewarding part of their careers). Several missions were processed in parallel: Apollo 11 rolled out to the launch pad while Apollo 10 was still en route to the Moon, and Apollo 12 was being assembled and tested.

To illustrate how all of these systems and procedures came together, the author takes us through the processing of Apollo 11 in detail, starting around six months before launch when the Saturn V stages, and command, service, and lunar modules arrived independently from the contractors who built them or the NASA facilities where they had been individually tested. The original concept for KSC was that it would be an “operational spaceport” which would assemble pre-tested components into flight vehicles, run integrated system tests, and then launch them in an assembly-line fashion. In reality, the Apollo and Saturn programs never matured to this level, and were essentially development and test projects throughout. Components not only arrived at KSC with “some assembly required”; they often were subject to a blizzard of engineering change orders which required partially disassembling equipment to make modifications, then exhaustive re-tests to verify the previously tested configuration had been restored.

Apollo 11 encountered relatively few problems in processing, so experiences from other missions where problems arose are interleaved to illustrate how KSC coped with contingencies. While Apollo 16 was on the launch pad, a series of mistakes during the testing process damaged a propellant tank in the command module. The only way to repair this was to roll the entire stack back to the Vehicle Assembly Building, remove the command and service modules, return them to the spacecraft servicing building, then de-mate them, pull the heat shield from the command module, change out the tank, then put everything back together, re-stack, and roll back to the launch pad. Imagine how many forms had to be filled out. The launch was delayed just one month.

The process of servicing the vehicle on the launch pad is described in detail. Many of the operations, such as filling tanks with toxic hypergolic fuel and oxidiser, which ignite on contact with one another, required evacuating the pad of all non-essential personnel and special precautions for those engaged in these hazardous tasks. As launch approached, the hurdles became higher: a Launch Readiness Review and the Countdown Demonstration Test, a full dress rehearsal of the countdown up to the moment before engine start, including fuelling all of the stages of the launch vehicle (and then de-fuelling them after conclusion of the test).

There is a wealth of detail here, including many obscure items I've never encountered before. Consider “Forward Observers”. When the Saturn V launched, most personnel and spectators were kept a safe distance of more than 5 km from the launch pad in case of calamity. But three teams of two volunteers each were stationed at sites just 2 km from the pad. They were charged with observing the first seconds of flight and, if they saw a catastrophic failure (engine explosion or cut-off, hard-over of an engine gimbal, or the rocket veering into the umbilical tower), they would signal the astronauts to fire the launch escape system and abort the mission. If this happened, the observers would then have to dive into crude shelters often frequented by rattlesnakes to ride out the fiery aftermath.

Did you know about the electrical glitch which almost brought the Skylab 2 mission to flaming catastrophe moments after launch? How lapses in handling of equipment and paperwork almost spelled doom for the crew of Apollo 13? The time an oxygen leak while fuelling a Saturn V booster caused cars parked near the launch pad to burst into flames? It's all here, and much more. This is an essential book for those interested in the engineering details of the Apollo project and the management miracles which made its achievements possible.


Regis, Ed. Monsters. New York: Basic Books, 2015. ISBN 978-0-465-06594-3.
In 1863, as the American Civil War raged, Count Ferdinand von Zeppelin, an ambitious young cavalry officer from the German kingdom of Württemberg, arrived in America to observe the conflict and learn its lessons for modern warfare. He arranged an audience with President Lincoln, who authorised him to travel among the Union armies. Zeppelin spent a month with General Joseph Hooker's Army of the Potomac. Accustomed to German military organisation, he was unimpressed with what he saw and left to see the sights of the new continent. While visiting Minnesota, he ascended in a tethered balloon and saw the landscape laid out below him like a military topographical map. He immediately grasped the advantage of such an eye in the sky for military purposes. He was impressed.

Upon his return to Germany, Zeppelin pursued a military career, distinguishing himself in the 1870 war with France, although being considered “a hothead”. It was this characteristic which brought his military career to an abrupt end in 1890. Chafing under what he perceived as stifling leadership by the Prussian officer corps, he wrote directly to the Kaiser to complain. This was a bad career move; the Kaiser “promoted” him into retirement. Adrift, looking for a new career, Zeppelin seized upon controlled aerial flight, particularly for its military applications. And he thought big.

By 1890, France was at the forefront of aviation. By 1885 the first dirigible, La France, had demonstrated aerial navigation over complex closed courses and carried passengers. Built for the French army, it was just a technology demonstrator, but to Zeppelin it demonstrated a capability with such potential that Germany must not be left behind. He threw his energy into the effort, formed a company, raised the money, and embarked upon the construction of Luftschiff Zeppelin 1 (LZ 1).

Count Zeppelin was not a man to make small plans. Eschewing sub-scale demonstrators or technology-proving prototypes, he went directly to a full scale airship intended to be militarily useful. It was fully 128 metres long, almost two and a half times the size of La France, longer than a football field. Its rigid aluminium frame contained 17 gas bags filled with hydrogen, and it was powered by two gasoline engines. LZ 1 flew just three times. An observer from the German War Ministry reported it to be “suitable for neither military nor for non-military purposes.” Zeppelin's company closed its doors and the airship was sold for scrap.

By 1905, Zeppelin was ready to try again. On its first flight, the LZ 2 lost power and control and had to make a forced landing. Tethered to the ground at the landing site, it was caught by the wind and destroyed. It was sold for scrap. Later the LZ 3 flew successfully, and Zeppelin embarked upon construction of the LZ 4, which would be larger still. While attempting a twenty-four hour endurance flight, it suffered motor failure, landed, and while tied down was caught by wind. Its gas bags rubbed against one another and static electricity ignited the hydrogen, which reduced the airship to smoking wreckage.

Many people would have given up at this point, but not the redoubtable Count. The LZ 5, delivered to the military, was lost when carried away by the wind after an emergency landing and dashed against a hill. LZ 6 burned in its hangar after an engine caught fire. LZ 7, the first civilian passenger airship, crashed into a forest on its first flight and was damaged beyond repair. LZ 8, its replacement, was destroyed by a gust of wind while being walked out of its hangar.

With the outbreak of war in 1914, the airship went to war. Germany operated 117 airships, using them for reconnaissance and even bombing targets in England. Of the 117, fully 81 were destroyed, about half due to enemy action and half by the woes which had wrecked so many airships prior to the conflict.

Based upon this stunning record of success, after the end of the Great War, Britain decided to embark in earnest on its own airship program, building even larger airships than Germany. Results were no better, culminating in the R100 and R101, built to provide air and cargo service on routes throughout the Empire. On its maiden flight to India in 1930, R101 crashed and burned in a storm while crossing France, killing 48 of the 54 on board. After the catastrophe, the R100 was retired and sold for scrap.

This did not deter the Americans, who, in addition to their technical prowess and “can do” spirit, had access to helium, produced as a by-product of their natural gas fields. Unlike hydrogen, helium is nonflammable, so the risk of fire, which had destroyed so many airships using hydrogen, was entirely eliminated. Helium does not provide as much lift as hydrogen, but this can be compensated for by increasing the size of the ship. Helium is also around fifty times more expensive than hydrogen, and this expense makes managing an airship in flight more difficult: while the commander of a hydrogen airship can freely “valve” gas to reduce lift when required, doing so in a helium ship is forbiddingly expensive and reserved for only the most dire of emergencies.
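The lift comparison can be checked directly from gas densities. A minimal sketch using round textbook values at 0 °C and one atmosphere (these numbers are not from the book):

```python
# Approximate gas densities at sea level, 0 °C, 1 atm (kg/m³).
RHO_AIR      = 1.293
RHO_HYDROGEN = 0.090
RHO_HELIUM   = 0.179

def lift_per_m3(rho_gas, rho_air=RHO_AIR):
    """Net buoyant lift, in kg, per cubic metre of lifting gas."""
    return rho_air - rho_gas

h2 = lift_per_m3(RHO_HYDROGEN)   # about 1.20 kg per cubic metre
he = lift_per_m3(RHO_HELIUM)     # about 1.11 kg per cubic metre
print(f"helium gives {he / h2:.0%} of hydrogen's lift")
```

Helium thus delivers roughly 93% of hydrogen's lift per cubic metre, which is why a helium ship of equal payload must be somewhat larger, despite helium itself being twice as dense as hydrogen.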

The U.S. Navy believed the airship to be an ideal platform for long-range reconnaissance, anti-submarine patrols, and other missions where its endurance, speed, and the ability to operate far offshore provided advantages over ships and heavier-than-air craft. Between 1921 and 1935 the Navy operated five rigid airships, three built domestically and two abroad. Four of the five crashed in storms or due to structural failure, killing dozens of crew.

This sorry chronicle leads up to a detailed recounting of the history of the Hindenburg. Originally designed to use helium, it was redesigned for hydrogen after it became clear the U.S., which had forbidden export of helium in 1927, would not grant a waiver, especially to a Germany by then under Nazi rule. The Hindenburg was enormous: at 245 metres in length, it was longer than the U.S. Capitol building and more than three times the length of a Boeing 747. It carried between 50 and 72 passengers who were served by a crew of 40 to 61, with accommodations (apart from the spartan sleeping quarters) comparable to first class on ocean liners. In 1936, the great ship made 17 transatlantic crossings without incident. On its first flight to the U.S. in 1937, it was destroyed by fire while approaching the mooring mast at Lakehurst, New Jersey. The disaster and its aftermath are described in detail. Remarkably, given the iconic images of the flaming airship falling to the ground and the structure glowing from the intense heat of combustion, of the 97 passengers and crew on board, 62 survived the disaster. (One of the members of the ground crew also died.)

Prior to the destruction of the Hindenburg, a total of twenty-six hydrogen filled airships had been destroyed by fire, excluding those shot down in wartime, with a total of 250 people killed. The vast majority of all rigid airships built ended in disaster—if not due to fire then structural failure, weather, or pilot error. Why did people continue to pursue this technology in the face of abundant evidence that it was fundamentally flawed?

The author argues that rigid airships are an example of a “pathological technology”, which he characterises as:

  1. Embracing something huge, either in size or effects.
  2. Inducing a state bordering on enthralment among its proponents…
  3. …who underplay its downsides, risks, unintended consequences, and obvious dangers.
  4. Having costs out of proportion to the benefits it is alleged to provide.

Few people would dispute that the pursuit of large airships for more than three decades in the face of repeated disasters was a pathological technology under these criteria. This is true even setting aside the risks of using hydrogen as a lifting gas (which I believe the author over-emphasises: prior to the Hindenburg accident nobody had ever been injured on a commercial passenger flight of a hydrogen airship, and nobody gives a second thought today about boarding an airplane with 140 tonnes of flammable jet fuel in the tanks and flying across the Pacific with only two engines). Seemingly hazardous technologies can be rendered safe with sufficient experience and precautions. Large lighter-than-air ships were, however, inherently unsafe because they were large and lighter than air: nothing could be done about that. They were at the mercy of the weather, and if they had been designed to be strong enough to withstand whatever weather conditions they might encounter, they would have been too heavy to fly. As the experience of the U.S. Navy with helium airships demonstrated, it didn't matter if you were immune to the risks of hydrogen; the ship would eventually be destroyed in a storm.

The author then moves on from airships to discuss other technologies he deems pathological, and here, in my opinion, goes off the rails. The first of these technologies is Project Plowshare, a U.S. program to explore the use of nuclear explosions for civil engineering projects such as excavation, digging of canals, creating harbours, and fracturing rock to stimulate oil and gas production. With his characteristic snark, Regis mocks the very idea of Plowshare, and yet examination of the history of the program belies this ridicule. For the suggested applications, nuclear explosions were far more economical than chemical detonations and conventional earthmoving equipment. One principal goal of Plowshare was to determine the efficacy of such explosions and whether they would pose risks (for example, release of radiation) which were unacceptable. Over 11 years 26 nuclear tests were conducted under the program, most at the Nevada Test Site, and after a review of the results it was concluded the radiation risk was unacceptable and the results unpromising. Project Plowshare was shut down in 1977. I don't see what's remotely pathological about this. You have an idea for a new technology; you explore it in theory; conduct experiments; then decide it's not worth pursuing. Perhaps, if you're Ed Regis, you could have determined at the outset, without any of the experimental results, that the whole thing was absurd, but a great many people with in-depth knowledge of the issues involved preferred to run the experiments, take the data, and decide based upon the results. That, to me, seems the antithesis of pathological.

The next example of a pathological technology is the Superconducting Super Collider, a planned particle accelerator to be built in Texas which would have an accelerator ring 87.1 km in circumference and collide protons at a centre of mass energy of 40 TeV. The project was approved and construction was begun in the 1980s. In 1993, Congress voted to cancel the project and work underway was abandoned. Here, the fit with “pathological technology” is even worse. Sure, the project was large, but it was mostly underground: hardly something to “enthral” anybody except physics nerds. There were no risks at all, apart from those in any civil engineering project of comparable scale. The project was cancelled because it overran its budget estimates but, even if completed, would probably have cost less than a tenth the expenditures to date on the International Space Station, which has produced little or nothing of scientific value. How is it pathological when a project, undertaken for well-defined goals, is cancelled when those funding it, seeing its schedule slip and budget balloon beyond that projected, pull the plug on it? Isn't that how things are supposed to work? Who were the seers who forecast all of this at the project's inception?

The final example of so-called pathological technology is pure spite. Ed Regis has a fine time ridiculing participants in the first 100 Year Starship symposium, a gathering to explore how and why humans might be able, within a century, to launch missions (robotic or crewed) to other star systems. This is not a technology at all, but rather an exploration of what future technologies might be able to do, and the limits imposed by the known laws of physics upon potential technologies. This is precisely the kind of “exploratory engineering” that Konstantin Tsiolkovsky engaged in when he worked out the fundamentals of space flight in the late 19th and early 20th centuries. He didn't know the details of how it would be done, but he was able to calculate, from first principles, the limits of what could be done, and to demonstrate that the laws of physics and properties of materials permitted the missions he envisioned. His work was largely ignored, which I suppose may be better than being mocked, as here.

You want a pathological technology? How about replacing reliable base load energy sources with inefficient sources at the whim of clouds and wind? Banning washing machines and dishwashers that work in favour of ones that don't? Replacing toilets with ones that take two flushes in order to “save water”? And all of this in order to “save the planet” from the consequences predicted by a theoretical model which has failed to predict measured results since its inception, through policies which impoverish developing countries and, even if you accept the discredited models, will have negligible results on the global climate. On this scandal of our age, the author is silent. He concludes:

Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped.

Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.

 Permalink

February 2016

McCullough, David. The Wright Brothers. New York: Simon & Schuster, 2015. ISBN 978-1-4767-2874-2.
On December 8th, 1903, all was in readiness. The aircraft was perched on its launching catapult, the brave airman at the controls. The powerful internal combustion engine roared to life. At 16:45 the catapult hurled the craft into the air. It rose straight up, flipped, and with its wings coming apart, plunged into the Potomac river just 20 feet from the launching point. The pilot was initially trapped beneath the wreckage but managed to free himself and swim to the surface. After being rescued from the river, he emitted what one witness described as “the most voluble series of blasphemies” he had ever heard.

So ended the last flight of Samuel Langley's “Aerodrome”. Langley was a distinguished scientist and secretary of the Smithsonian Institution in Washington D.C. Funded by the U.S. Army and the Smithsonian for a total of US$ 70,000 (equivalent to around 1.7 million present-day dollars), the Aerodrome crashed immediately on both of its test flights, and was the subject of much mockery in the press.

Just nine days later, on December 17th, two brothers, sons of a churchman, with no education beyond high school, and proprietors of a bicycle shop in Dayton, Ohio, readied their own machine for flight near Kitty Hawk, on the windswept sandy hills of North Carolina's Outer Banks. Their craft, called just the Flyer, took to the air with Orville Wright at the controls. With the 12 horsepower engine driving the twin propellers and brother Wilbur running alongside to stabilise the machine as it moved down the launching rail into the wind, Orville lifted the machine into the air and achieved the first manned heavier-than-air powered flight, demonstrating the Flyer was controllable in all three axes. The flight lasted just 12 seconds and covered a distance of 120 feet.

After the first flight, the brothers took turns flying the machine three more times on the 17th. On the final flight Wilbur flew a distance of 852 feet in a flight lasting 59 seconds (a strong headwind was blowing, and this flight covered over half a mile through the air). After the fourth flight, while the machine was being prepared to fly again, a gust of wind caught it and dragged it, along with assistant John T. Daniels, down the beach toward the ocean. Daniels escaped, but the Flyer was damaged beyond repair and never flew again. (The Flyer which can be seen in the Smithsonian's National Air and Space Museum today has been extensively restored.)

Orville sent a telegram to his father in Dayton announcing the success, and the brothers packed up the remains of the aircraft to be shipped back to their shop. The 1903 season was at an end. The entire budget for the project from 1900 through the successful first flights was less than US$ 1000 (24,000 dollars today), and was funded entirely by profits from the brothers' bicycle business.

How did two brothers with no formal education in aerodynamics or engineering succeed on a shoestring budget while Langley, with public funds at his disposal and the resources of a major scientific institution, failed so embarrassingly? Ultimately it was because the Wright brothers identified the key problem of flight and patiently worked on solving it through a series of experiments. Perhaps it was because they were in the bicycle business. (Although they are often identified as proprietors of a “bicycle shop”, they also manufactured their own bicycles and had acquired the machine tools, skills, and co-workers for the business, later applied to building the flying machine.)

The Wrights believed the essential problem of heavier than air flight was control. The details of how a bicycle is built don't matter much: you still have to learn to ride it. And the problem of control in free flight is much more difficult than riding a bicycle, where the only controls are the handlebars and, to a lesser extent, shifting the rider's weight. In flight, an airplane must be controlled in three axes: pitch (up and down), yaw (left and right), and roll (wings' angle to the horizon). The means for control in each of these axes must be provided, and what's more, just as for a child learning to ride a bike, the would-be aeronaut must master the skill of using these controls to maintain his balance in the air.

Through a patient program of subscale experimentation, first with kites controlled from the ground by lines manipulated by the operators, then gliders flown by a pilot on board, the Wrights developed their system of pitch control by a front-mounted elevator, yaw by a rudder at the rear, and roll by warping the wings of the craft. Further, they needed to learn how to fly using these controls and verify that the resulting plane would be stable enough that a person could master the skill of flying it. With powerless kites and gliders, this required a strong, consistent wind. After inquiries to the U.S. Weather Bureau, the brothers selected the Kitty Hawk site on the North Carolina coast. Just getting there was an adventure, but the wind was as promised and the sand and lack of large vegetation were ideal for their gliding experiments. They were definitely “roughing it” at this remote site, and at times were afflicted by clouds of mosquitos of Biblical plague proportions, but starting in 1900 they tested a series of successively larger gliders and by 1902 had a design which provided three axis control, stability, and the controls for a pilot on board. In the 1902 season they made more than 700 flights and were satisfied the control problem had been mastered.

Now all that remained was to add an engine and propellers to the successful glider design, again scaling it up to accommodate the added weight. In 1903, you couldn't just go down to the hardware store and buy an engine, and automobile engines were much too heavy, so the Wrights' resourceful mechanic, Charlie Taylor, designed and built the four cylinder motor from scratch, using the new-fangled material aluminium for the engine block. The finished engine weighed just 152 pounds and produced 12 horsepower. The brothers could find no references for the design of air propellers and argued intensely over the topic, but eventually concluded they'd just have to make a best guess and test it on the real machine.

The Flyer worked on the second attempt (an earlier try on December 14th ended in a minor crash when Wilbur over-controlled at the moment of take-off). But this stunning success was the product of years of incremental refinement of the design, practical testing, and mastery of airmanship through experience.

Those four flights in December of 1903 are now considered one of the epochal events of the twentieth century, but at the time they received little notice. Only a few accounts of the flights appeared in the press, and some of them were garbled and/or sensationalised. The Wrights knew that the Flyer (whose wreckage was now in storage crates at Dayton), while a successful proof of concept and the basis for a patent filing, was not a practical flying machine. It could only take off into the strong wind at Kitty Hawk and had not yet demonstrated long-term controlled flight including aerial maneuvers such as turns or flying around a closed course. It was just too difficult travelling to Kitty Hawk, and the facilities of their camp there didn't permit rapid modification of the machines based upon experimentation.

They arranged to use an 84 acre cow pasture called Huffman Prairie located eight miles from Dayton along an interurban trolley line which made it easy to reach. The field's owner let them use it without charge as long as they didn't disturb the livestock. The Wrights devised a catapult, powered by a heavy falling weight, to launch their planes, which allowed them to take off in still air. It was here, in 1904, that they refined the design into a practical flying machine and fully mastered the art of flying it over the course of about fifty test flights. Still, there was little note of their work in the press, and the first detailed account was published in the January 1905 edition of Gleanings in Bee Culture. Amos Root, the author of the article and publisher of the magazine, sent a copy to Scientific American, saying they could republish it without fee. The editors declined, and a year later mocked the achievements of the Wright brothers.

For those accustomed to the pace of technological development more than a century later, the leisurely pace of progress in aviation and lack of public interest in the achievement of what had been a dream of humanity since antiquity seems odd. Indeed, the Wrights, who had continued to refine their designs, would not become celebrities nor would their achievements be widely acknowledged until a series of demonstrations Wilbur would perform at Le Mans in France in the summer of 1908. Le Figaro wrote, “It was not merely a success, but a triumph…a decisive victory for aviation, the news of which will revolutionize scientific circles throughout the world.” And it did: stories of Wilbur's exploits were picked up by the press on the Continent, in Britain, and, belatedly, by papers in the U.S. Huge crowds came out to see the flights, and the intrepid American aviator's name was on every tongue.

Meanwhile, Orville was preparing for a series of demonstration flights for the U.S. Army at Fort Myer, Virginia. The army had agreed to buy a machine if it passed a series of tests. Orville's flights also began to draw large crowds from nearby Washington and extensive press coverage. All doubts about what the Wrights had wrought were now gone. During a demonstration flight on September 17, 1908, a propeller broke in flight. Orville tried to recover, but the machine plunged to the ground from an altitude of 75 feet, severely injuring him and killing his passenger, Lieutenant Thomas Selfridge, who became the first person to die in an airplane crash. Orville's recuperation would be long and difficult, aided by his sister, Katharine.

In early 1909, Orville and Katharine would join Wilbur in France, where he was to do even more spectacular demonstrations in the south of the country, training pilots for the airplanes he was selling to the French. Upon their return to the U.S., the Wrights were awarded medals by President Taft at the White House. They were feted as returning heroes in a two day celebration in Dayton. The diligent Wrights continued their work in the shop between events.

The brothers would return to Fort Myer, the scene of the crash, and complete their demonstrations for the army, securing the contract for the sale of an airplane for US$ 30,000. The Wrights would continue to develop their company, defend their growing portfolio of patents against competitors, and innovate. Wilbur was to die of typhoid fever in 1912, aged only 45 years. Orville sold his interest in the Wright Company in 1915 and, in his retirement, served for 28 years on the National Advisory Committee for Aeronautics, the precursor of NASA. He died in 1948. Neither brother ever married.

This book is a superb evocation of the life and times of the Wrights and their part in creating, developing, promoting, and commercialising one of the key technologies of the modern world.

 Permalink

Carlson, W. Bernard. Tesla: Inventor of the Electrical Age. Princeton: Princeton University Press, 2013. ISBN 978-0-691-16561-5.
Nikola Tesla was born in 1858 in a village in what is now Croatia, then part of the Austro-Hungarian Empire. His father and grandfather were both priests in the Orthodox church. The family was of Serbian descent, but had lived in Croatia since the 1690s among a community of other Serbs. His parents wanted him to enter the priesthood and enrolled him in school to that end. He excelled in mathematics and, building on a boyhood fascination with machines and tinkering, wanted to pursue a career in engineering. After completing high school, Tesla returned to his village where he contracted cholera and was near death. His father promised him that if he survived, he would “go to the best technical institution in the world.” After nine months of illness, Tesla recovered and, in 1875, entered the Joanneum Polytechnic School in Graz, Austria.

Tesla's university career started out brilliantly, but he came into conflict with one of his physics professors over the feasibility of designing a motor which would operate without the troublesome and unreliable commutator and brushes of existing motors. He became addicted to gambling, lost his scholarship, and dropped out in his third year. He worked as a draftsman, taught in his old high school, and eventually ended up in Prague, intending to continue his study of engineering at the Karl-Ferdinand University. He took a variety of courses, but eventually his uncles withdrew their financial support.

Tesla then moved to Budapest, where he found employment as chief electrician at the Budapest Telephone Exchange. He quickly distinguished himself as a problem solver and innovator and, before long, came to the attention of the Continental Edison Company of France, which had designed the equipment used in Budapest. He was offered and accepted a job at their headquarters in Ivry, France. Most of Edison's employees had practical, hands-on experience with electrical equipment, but lacked Tesla's formal education in mathematics and physics. Before long, Tesla was designing dynamos for lighting plants and earning a handsome salary. With his language skills (by that time, Tesla was fluent in Serbian, German, and French, and was improving his English), the Edison company sent him into the field as a trouble-shooter. This further increased his reputation, and in 1884 he was offered a job at Edison headquarters in New York. He arrived and, years later, described the formalities of entering the U.S. as an immigrant: a clerk saying “Kiss the Bible. Twenty cents!”.

Tesla had never abandoned the idea of a brushless motor. Almost all electric lighting systems in the 1880s used direct current (DC): electrons flowed in only one direction through the distribution wires. This is the kind of current produced by batteries, and the first electrical generators (dynamos) produced direct current by means of a device called a commutator. As the generator is turned by its power source (for example, a steam engine or water wheel), power is extracted from the rotating commutator by fixed brushes which press against it. The contacts on the commutator are wired to the coils in the generator in such a way that a constant direct current is maintained. When direct current is used to drive a motor, the motor must also contain a commutator which converts the direct current into a reversing flow to maintain the motor in rotary motion.

Commutators, with brushes rubbing against them, are inefficient and unreliable. Brushes wear and must eventually be replaced, and as the commutator rotates and the brushes make and break contact, sparks may be produced which waste energy and degrade the contacts. Further, direct current has a major disadvantage for long-distance power transmission. There was, at the time, no way to efficiently change the voltage of direct current. This meant that the circuit from the generator to the user of the power had to run at the same voltage the user received, say 120 volts. But at such a voltage, resistance losses in copper wires are such that over long runs most of the energy would be lost in the wires, not delivered to customers. You can increase the size of the distribution wires to reduce losses, but before long this becomes impractical due to the cost of the copper it would require. As a consequence, Edison electric lighting systems installed in the 19th century had many small powerhouses, each supplying a local set of customers.

Alternating current (AC) solves the problem of power distribution. In 1881 the electrical transformer had been invented, and by 1884 high-efficiency transformers were being manufactured in Europe. Powered by alternating current (they don't work with DC), a transformer efficiently converts electrical power from one combination of voltage and current to another. For example, power might be transmitted from the generating station to the customer at 12000 volts and 1 ampere, then stepped down to 120 volts and 100 amperes by a transformer at the customer's location. Resistive losses in a wire depend only on the current, not the voltage, and grow as the square of the current, so transmitting at 12000 volts instead of 120 volts cuts the loss in a given cable by a factor of ten thousand; equivalently, far thinner and cheaper cables suffice for the same level of loss. For electric lighting, alternating current works just as well as direct current (as long as the frequency of the alternating current is sufficiently high that lamps do not flicker). But electricity was increasingly used to power motors, replacing steam power in factories. All existing practical motors ran on DC, so this was seen as an advantage to Edison's system.
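The numbers in the transformer example above can be checked with a few lines of arithmetic (a toy calculation, assuming a purely resistive one-ohm line; the function name and figures are illustrative, not from the book):

```python
# Why high-voltage transmission wins: for the same delivered power P through
# the same cable of resistance R, the current is I = P / V, and the power
# dissipated in the cable is I**2 * R -- so loss falls as the square of V.
def line_loss(power_w, volts, resistance_ohms):
    current = power_w / volts               # amperes drawn at this voltage
    return current ** 2 * resistance_ohms   # watts dissipated in the cable

P, R = 12_000, 1.0                  # 12 kW through a 1-ohm line
low = line_loss(P, 120, R)          # 100 A -> 10,000 W lost: hopeless
high = line_loss(P, 12_000, R)      # 1 A   -> 1 W lost
print(low / high)                   # -> 10000.0
```

A hundredfold increase in voltage thus buys a ten-thousandfold reduction in loss for the same cable, which is why transformers made long-distance distribution practical.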

Tesla worked only six months for Edison. After developing an arc lighting system only to have Edison put it on the shelf after acquiring the rights to a system developed by another company, he quit in disgust. He then continued to work on an arc light system in New Jersey, but the company to which he had licensed his patents failed, leaving him only with a worthless stock certificate. To support himself, Tesla worked repairing electrical equipment and even digging ditches, where one of his foremen introduced him to Alfred S. Brown, who had made his career in telegraphy. Tesla showed Brown one of his patents, for a “thermomagnetic motor”, and Brown contacted Charles F. Peck, a lawyer who had made his fortune in telegraphy. Together, Peck and Brown saw the potential for the motor and other Tesla inventions and in April 1887 founded the Tesla Electric Company, with its laboratory in Manhattan's financial district.

Tesla immediately set out to make his dream of a brushless AC motor a practical reality and, by using multiple AC currents, out of phase with one another (the polyphase system), he was able to create a magnetic field which itself rotated. The rotating magnetic field induced a current in the rotating part of the motor, which would start and turn without any need for a commutator or brushes. Tesla had invented what we now call the induction motor. He began to file patent applications for the motor and the polyphase AC transmission system in the fall of 1887, and by May of the following year had been granted a total of seven patents on various aspects of the motor and polyphase current.
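The trick behind the rotating field can be sketched numerically: two coils mounted at right angles, fed currents 90° out of phase, produce a net field vector whose magnitude never changes while its direction sweeps around (a minimal idealisation with unit amplitudes, not a model of any particular motor):

```python
import math

def field(t, omega=2 * math.pi * 60):
    """Net field from two perpendicular coils fed currents 90 degrees apart."""
    bx = math.cos(omega * t)  # coil aligned with the x axis
    by = math.sin(omega * t)  # coil aligned with the y axis
    return bx, by

# Sample one 60 Hz cycle: constant magnitude, steadily advancing angle --
# the rotating field that drags an induction motor's rotor along with it.
for k in range(8):
    bx, by = field(k / (8 * 60))
    print(f"|B| = {math.hypot(bx, by):.3f}  angle = {math.degrees(math.atan2(by, bx)):7.1f}")
```

A single-phase current by itself only produces a field that pulses along one axis; it is the phase offset between the coils that turns pulsation into rotation, which is why the polyphase system needed multiple out-of-phase currents.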

One disadvantage of the polyphase system and motor was that it required multiple pairs of wires to transmit power from the generator to the motor, which increased cost and complexity. Also, existing AC lighting systems, which were beginning to come into use, primarily in Europe, used a single phase and two wires. Tesla invented the split-phase motor, which would run on a two wire, single phase circuit, and this was quickly patented.

Unlike Edison, who had built an industrial empire based upon his inventions, Tesla, Peck, and Brown had no interest in founding a company to manufacture Tesla's motors. Instead, they intended to shop around and license the patents to an existing enterprise with the resources required to exploit them. George Westinghouse had developed his inventions of air brakes and signalling systems for railways into a successful and growing company, and was beginning to compete with Edison in the electric light industry, installing AC systems. Westinghouse was a prime prospect to license the patents, and in July 1888 a deal was concluded for cash, notes, and a royalty for each horsepower of motors sold. Tesla moved to Pittsburgh, where he spent a year working in the Westinghouse research lab improving the motor designs. While there, he filed an additional fifteen patent applications.

After leaving Westinghouse, Tesla took a trip to Europe where he became fascinated with Heinrich Hertz's discovery of electromagnetic waves. Produced by alternating current at frequencies much higher than those used in electrical power systems (Hertz used a spark gap to produce them), here was a demonstration of transmission of electricity through thin air—with no wires at all. This idea was to inspire much of Tesla's work for the rest of his life. By 1891, he had invented a resonant high frequency transformer which we now call a Tesla coil, and before long was performing spectacular demonstrations of artificial lightning, illuminating lamps at a distance without wires, and demonstrating new kinds of electric lights far more efficient than Edison's incandescent bulbs. Tesla's reputation as an inventor was equalled by his talent as a showman in presentations before scientific societies and the public in both the U.S. and Europe.

Oddly, for someone with Tesla's academic and practical background, there is no evidence that he mastered Maxwell's theory of electromagnetism. He believed that the phenomena he observed with the Tesla coil and other apparatus were not due to the Hertzian waves predicted by Maxwell's equations, but rather something he called “electrostatic thrusts”. He was later to build a great edifice of mistaken theory on this crackpot idea.

By 1892, plans were progressing to harness the hydroelectric power of Niagara Falls. Transmission of this power to customers was central to the project: around one fifth of the American population lived within 400 miles of the falls. Westinghouse bid Tesla's polyphase system and, with Tesla's help in persuading the committee charged with evaluating proposals, was awarded the contract in 1893. By November of 1896, power from Niagara reached Buffalo, twenty miles away, and over the next decade extended throughout New York. The success of the project made polyphase power transmission the technology of choice for most electrical distribution systems, and it remains so to this day. In 1895, the New York Times wrote:

Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's invention of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism and many of his ideas could not have worked. The turbine worked, but was uncompetitive given the fabrication techniques and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.

 Permalink

March 2016

Flint, Eric. 1632. Riverdale, NY: Baen Publishing, 2000. ISBN 978-0-671-31972-4.
Nobody knows how it happened, nor remotely why. Was it a bizarre physics phenomenon, an act of God, intervention by aliens, or “just one of those things”? One day, with a flash and a bang which came to be called the Ring of Fire, the present-day town of Grantville, West Virginia, and its environs were interchanged with an equally large area of Thuringia, in what is now Germany, in the year 1632.

The residents of Grantville discover a sharp boundary where the town they know so well comes to an end and the new landscape begins. What's more, they rapidly discover they aren't in West Virginia any more, encountering brutal and hostile troops ravaging the surrounding countryside. After rescuing two travellers and other people under attack by the soldiers, and using their superior firepower to bring hostilities to a close, they begin to piece together what has happened. They are not only in central Europe, but square in the middle of the Thirty Years' War: the conflict between Catholic and Protestant forces which engulfed much of the continent.

Being Americans, and especially being self-sufficient West Virginians, the residents of Grantville take stock of their situation and start planning to make the most of the hand they've been dealt. They can count themselves lucky that the power plant was included within the Ring of Fire, so the electricity will stay on as long as there is fuel to run it. There are local coal mines and people with the knowledge to work them. The school and its library were within the circle, so there is access to knowledge of history and technology, as well as the school's shop and several machine shops in town. As a rural community, there are experienced farmers, and the land in Thuringia is not so different from West Virginia, although the climate is somewhat harsher. Supplies of fuel for transportation are limited to stocks on hand and in the tanks of vehicles, with no immediate prospect of obtaining more. There are plenty of guns and lots of ammunition, but even with the reloading skills of those in the town, eventually the supply of primers and smokeless powder will be exhausted.

Not only does the town find itself in the middle of battles between armies, those battles have created a multitude of refugees who press in on the town. Should Grantville put up a wall and hunker down, or welcome them, begin to assimilate them as new Americans, and put them to work to build a better society based upon the principles which kept religious wars out of the New World? And how can a small town, whatever its technological advantages and principles, deal with contending forces thousands of times larger? Form an alliance? But with whom, and on what terms? And what principles must be open to compromise and which must be inviolate?

This is a thoroughly delightful story which will leave you with admiration for the ways of rural America, echoing those of their ancestors who built a free society in a wilderness. Along with the fictional characters, we encounter key historical figures of the era, who are depicted accurately. There are a number of coincidences which make things work (for example, Grantville having a power plant, and encountering Scottish troops in the army of the King of Sweden who speak English), but without those coincidences the story would fall apart. The thought which recurred as I read the novel is what would have happened if an effete present-day American university town had been plopped down in the midst of the Thirty Years' War instead of Grantville. I'd give it forty-eight hours at most.

This novel is the first in what has become a large and expanding Ring of Fire universe, including novels by the author and other writers set all over Europe and around the world, short stories, periodicals, and a role-playing game. If you loved this story, as I did, there's much more to explore.

This book is a part of the Baen Free Library. You can read the book online or download it in a wide variety of electronic book formats, all free of digital rights management, directly from the book's page at the Baen site. The Kindle edition may also be downloaded for free from Amazon.

 Permalink

Munroe, Randall. Thing Explainer. New York: Houghton Mifflin, 2015. ISBN 978-0-544-66825-6.
What a great idea! The person who wrote this book explains not simple things like red world sky cars, tiny water bags we are made of, and the shared space house, with only the ten hundred words people use most.

There are many pictures with words explaining each thing. The idea came from the Up Goer Five picture he drew earlier.

Up Goer Five

Drawing by Randall Munroe / xkcd used under right to share but not to sell (CC BY-NC 2.5).
(The words in the above picture are drawn. In the book they are set in sharp letters.)

Many other things are explained here. You will learn about things in the house like food-heating radio boxes and boxes that clean food holders; living things like trees, bags of stuff inside you, and the tree of life; the Sun, Earth, sky, and other worlds; and even machines for burning cities and boats that go under the seas to throw them at other people. This is not just a great use of words, but something you can learn much from.

There is art in explaining things in the most used ten hundred words, and this book is a fine work of that art.

Read this book, then try explaining such things yourself. You can use this write checker to see how you did.

Can you explain why time slows down when you go fast? Or why things jump around when you look at them very close-up? This book will make you want to try it. Enjoy!

The same writer also created What If? (2015-11)

Here, I have only written with the same ten hundred most used words as in the book.

 Permalink

April 2016

Jenne, Mike. Blue Gemini. New York: Yucca Publishing, 2015. ISBN 978-1-63158-047-5.
It is the late 1960s, and the Apollo project is racing toward the Moon. The U.S. Air Force has not abandoned its manned space flight ambitions, and is proceeding with its Manned Orbiting Laboratory program, nominally to explore the missions military astronauts can perform in an orbiting space station, but in reality a large manned reconnaissance satellite. Behind the curtain of secrecy and under the cover of the blandly named “Aerospace Support Project”, the Air Force is simultaneously proceeding with a much more provocative project: Blue Gemini. Using the Titan II booster and a modified version of the two-man spacecraft from NASA's recently-concluded Gemini program, its mission is to launch on short notice, rendezvous with and inspect uncooperative targets (think Soviet military satellites), and optionally attach a package to them which, on command from the ground, could destroy the satellite, de-orbit it, or throw it out of control. All of this would have to be done covertly, without alerting the Soviets to the intrusion.

Inconclusive evidence and fears that the Soviets, in response to the U.S. ballistic missile submarine capability, were preparing to place nuclear weapons in orbit, ready to rain down onto the U.S. upon command, even if the Soviet missile and bomber forces were destroyed, gave Blue Gemini a high priority. Operating out of Wright-Patterson Air Force Base in Ohio, flight hardware for the Gemini-I interceptor spacecraft, Titan II missiles modified for man-rating, and a launching site on Johnston Island in the Pacific were all being prepared, and three flight crews were in training.

Scott Ourecky had always dreamed of flying. In college, he enrolled in Air Force ROTC, underwent primary flight training, and joined the Air Force upon graduation. Once in uniform, his talent for engineering and mathematics caused him to advance, but his applications for flight training were repeatedly rejected, and he had resigned himself to a technical career in advanced weapon development, most recently at Eglin Air Force Base in Florida. There he is recruited to work part-time on the thorny technical problems of a hush-hush project: Blue Gemini.

Ourecky settles in and undertakes the formidable challenges faced by the mission. (NASA's Gemini rendezvous targets were cooperative: they had transponders and flashing beacons which made them easier to locate, and missions could be planned so that rendezvous would be accomplished when communications with ground controllers were available. In Blue Gemini the crew would be largely on their own, with only brief communication passes available.) Finally, after an incident brought on by the pressure and grueling pace of training, he finds himself in the right seat of the simulator, paired with hot-shot pilot Drew Carson (who views non-pilots as lesser beings, and would rather be in Vietnam adding combat missions to his service record than sitting in a simulator in Ohio on a black program which will probably never be disclosed).

As the story progresses, crisis after crisis must be dealt with, all against a deadline which, if not met, will mean the almost-certain cancellation of the project.

This is fiction: no Gemini interceptor program ever existed (although one of the missions for which the Space Shuttle was designed was essentially the same: a one orbit inspection or snatch-and-return of a hostile satellite). But the remarkable thing about this novel is that, unlike many thrillers, the author gets just about everything absolutely right. This does not stop with the technical details of the Gemini and Titan hardware, but also Pentagon politics, inter-service rivalry, the interaction of military projects with political forces, and the dynamics of the relations between pilots, engineers, and project administrators. It works as a thriller, as a story with characters who develop in interesting ways, and there are no jarring goofs to distract you from the narrative. (Well, hardly any: the turbine engines of a C-130 do not “cough to life”.)

There are numerous subplots and characters involved in them, and when this book comes to an end, they're just left hanging in mid-air. That's because this is the first of a multi-volume work in progress. The second novel, Blue Darker than Black, picks up where the first ends. The third, Pale Blue, is scheduled to be published in August 2016.

 Permalink

Goldsmith, Barbara. Obsessive Genius. New York: W. W. Norton, 2005. ISBN 978-0-393-32748-9.
Maria Salomea Skłodowska was born in 1867 in Warsaw, Poland, then part of the Russian Empire. She was the fifth and last child born to her parents, Władysław and Bronisława Skłodowski, both teachers. Both parents were members of a lower class of the aristocracy called the Szlachta, but had lost their wealth through involvement in the Polish nationalist movement opposed to Russian rule. They retained the love of learning characteristic of their class, and had independently obtained teaching appointments before meeting and marrying. Their children were raised in an intellectual atmosphere, with their father reading books aloud to them in Polish, Russian, French, German, and English, all languages in which he was fluent.

During Maria's childhood, her father lost his teaching position after his anti-Russian sentiments and activities were discovered, and supported himself by operating a boarding school for boys from the provinces. In cramped and less than sanitary conditions, one of the boarders infected two of the children with typhus: Maria's sister Zofia died. Three years later, her mother, Bronisława, died of tuberculosis. Maria experienced her first episode of depression, a malady which would haunt her throughout life.

Despite having graduated from secondary school with honours, Maria and her sister Bronisława could not pursue their education in Poland, as the universities did not admit women. Maria made an agreement with her older sister: she would support Bronisława's medical education at the Sorbonne in Paris, and Bronisława, once graduated and in practice, would in turn support Maria's studies there. Maria worked as a governess, supporting Bronisława. Finally, in 1891, she was able to travel to Paris and enroll in the Sorbonne. On the registration forms, she signed her name as “Marie”.

One of just 23 women among the two thousand enrolled in the School of Sciences, Marie studied physics, chemistry, and mathematics under an eminent faculty including luminaries such as Henri Poincaré. In 1893, she earned her degree in physics, one of only two women to graduate with a science degree that year, and in 1894 obtained a second degree in mathematics, ranking second in her class.

Finances remained tight, and Marie was delighted when one of her professors, Gabriel Lippman, arranged for her to receive a grant to study the magnetic properties of different kinds of steel. She set to work on the project but made little progress because the equipment she was using in Lippman's laboratory was cumbersome and insensitive. A friend recommended she contact a little-known physicist who was an expert on magnetism in metals and had developed instruments for precision measurements. Marie arranged to meet Pierre Curie to discuss her work.

Pierre was working at the School of Industrial Physics and Chemistry of the City of Paris (EPCI), an institution much less prestigious than the Sorbonne, in a laboratory which the visiting Lord Kelvin described as “a cubbyhole between the hallway and a student laboratory”. Still, he had major achievements to his credit. In 1880, with his brother Jacques, he had discovered the phenomenon of piezoelectricity, the interaction between electricity and mechanical stress in solids. Now the foundation of many technologies, the Curies used piezoelectricity to build an electrometer much more sensitive than previous instruments. His doctoral dissertation on the effects of temperature on the magnetism of metals introduced the concept of a critical temperature, different for each metal or alloy, at which permanent magnetism is lost. This is now called the Curie temperature.

When Pierre and Marie first met, they were immediately taken with one another: both from families of modest means, largely self-educated, and fascinated by scientific investigation. Pierre rapidly fell in love and was determined to marry Marie, but she, having been rejected in an earlier relationship in Poland, was hesitant and still planned to return to Warsaw. Pierre eventually persuaded Marie, and the two were married in July 1895. Marie was given a small laboratory space in the EPCI building to pursue work on magnetism, and henceforth the Curies would be a scientific team.

In the final years of the nineteenth century “rays” were all the rage. In 1895, Wilhelm Conrad Röntgen discovered penetrating radiation produced by accelerating electrons (then known only as “cathode rays”; the electron itself would not be identified until 1897) into a metal target. He called them “X-rays”, using “X” as the symbol for the unknown. The following year, Henri Becquerel discovered that a sample of uranium salts could expose a photographic plate even if the plate were wrapped in a black cloth. He published a series of papers on these “Becquerel rays”. Both discoveries were completely accidental.

The year that Marie was ready to begin her doctoral research, 65 percent of the papers presented at the Academy of Sciences in Paris were devoted to X-rays. Pierre suggested that Marie investigate the Becquerel rays produced by uranium, as they had been largely neglected by other scientists. She began a series of experiments using an electrometer designed by Pierre. The instrument was sensitive but exasperating to operate: Lord Rayleigh later wrote that electrometers were “designed by the devil”. Patiently, Marie measured the rays produced by uranium and then moved on to test samples of other elements. Among them, only thorium produced detectable rays.

She then made a puzzling observation. Uranium was produced from an ore called pitchblende. When she tested a sample of the residue of pitchblende from which all of the uranium had been extracted, she measured rays four times as energetic as those from pure uranium. She inferred that there must be a substance, perhaps a new chemical element, remaining in the pitchblende residue which was more radioactive than uranium. She then tested a thorium ore and found it also to produce rays more energetic than pure thorium. Perhaps here was yet another element to be discovered.

In March 1898, Marie wrote a paper in which she presented her measurements of the uranium and thorium ores, introduced the word “radioactivity” to describe the phenomenon, put forth the hypothesis that one or more undiscovered elements were responsible, suggested that radioactivity could be used to discover new elements, and, based upon her observations that radioactivity was unaffected by chemical processes, that it must be “an atomic property”. Neither Pierre nor Marie were members of the Academy of Sciences; Marie's former professor, Gabriel Lippman, presented the paper on her behalf.

It was one thing to hypothesise the existence of a new element or elements, and entirely another to isolate the element and determine its properties. Ore, like pitchblende, is a mix of chemical compounds. Starting with ore from which the uranium had been extracted, the Curies undertook a process to chemically separate these components. Those found to be radioactive were then distilled to increase their purity. With each distillation their activity increased. They finally found two of these fractions contained all the radioactivity. One was chemically similar to barium, while the other resembled bismuth. Measuring the properties of the fractions indicated they must be a mixture of the new radioactive elements and other, lighter elements.

To isolate the new elements, a process called “fractionation” was undertaken. When crystals form from a solution, the lighter elements tend to crystallise first. By repeating this process, the heavier elements could slowly be concentrated. With each fractionation the radioactivity increased. Working with the fraction which behaved like bismuth, the Curies eventually purified it to be 400 times as radioactive as uranium. No spectrum of the new element could yet be determined, but the Curies were sufficiently confident in the presence of a new element to publish a paper in July 1898 announcing the discovery and naming the new element “polonium” after Marie's native Poland. In December, working with the fraction which chemically resembled barium, they produced a sample 900 times as radioactive as uranium. This time a clear novel spectral line was found, and at the end of December 1898 they announced the discovery of a second new element, which they named “radium”.

Two new elements had been discovered, with evidence sufficiently persuasive that their existence was generally accepted. But the existing samples were known to be impure. The physical and chemical properties of the new elements, allowing their places in the periodic table to be determined, would require removal of the impurities and isolation of pure samples. The same process of fractionation could be used, but since it quickly became clear that the new radioactive elements were a tiny fraction of the samples in which they had been discovered, it would be necessary to scale up the process to something closer to an industrial scale. (The sample in which radium had been identified was 900 times more radioactive than uranium. Pure radium was eventually found to be ten million times as radioactive as uranium.)

Pierre learned that the residue from extracting uranium from pitchblende was dumped in a forest near the uranium mine. He arranged to have the Austrian government donate the material at no cost, and found the funds to ship it to the laboratory in Paris. Now, instead of test tubes, they were working with tons of material. Pierre convinced a chemical company to perform the first round of purification, persuading them that other researchers would be eager to buy the resulting material. Eventually, the company delivered twenty-kilogram lots of material to the Curies which were fifty times as radioactive as uranium. From there the Curie laboratory took over the subsequent purification. After four years, processing ten tons of pitchblende residue, hundreds of tons of rinsing water, and thousands of fractionations, one tenth of a gram of radium chloride was produced that was sufficiently pure to measure its properties. In July 1902 Marie announced the isolation of radium and placed it on the periodic table as element 88.

In June of 1903, Marie defended her doctoral thesis, becoming the first woman in France to obtain a doctorate in science. With the discovery of radium, the source of the enormous energy it and other radioactive elements released became a major focus of research. Ernest Rutherford argued that radioactivity was a process of “atomic disintegration” in which one element was spontaneously transmuting to another. The Curies originally doubted this hypothesis, but after repeating the experiments of Rutherford, accepted his conclusion as correct.

In 1903, the Nobel Prize for Physics was shared by Marie and Pierre Curie and Henri Becquerel, awarded for the discovery of radioactivity. The discovery of radium and polonium was not mentioned. Marie embarked on the isolation of polonium, and within two years produced a sample sufficiently pure to place it as element 84 on the periodic table with an estimate of its half-life of 140 days (the modern value is 138.4 days). Polonium is about 5000 times as radioactive as radium. Polonium and radium found in nature are the products of decay of primordial uranium and thorium. Their half-lives are so short (radium's is 1600 years) that any present at the Earth's formation has long since decayed.
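The claim that no primordial radium survives follows directly from the arithmetic of exponential decay. A minimal sketch, using the 1600-year half-life given above and an assumed Earth age of roughly 4.5 billion years:

```python
# Exponential decay: the fraction of a sample remaining after time t,
# given half-life T, is N/N0 = 2 ** (-t / T).

def fraction_remaining(t_years: float, half_life_years: float) -> float:
    return 2.0 ** (-t_years / half_life_years)

earth_age = 4.5e9          # years since the Earth's formation (assumed figure)
radium_half_life = 1600.0  # years, as given in the text

# About 2.8 million half-lives have elapsed, so the surviving fraction
# is 2**(-2,800,000) -- indistinguishable from zero in floating point.
print(earth_age / radium_half_life)                     # ≈ 2.8 million
print(fraction_remaining(earth_age, radium_half_life))  # 0.0 (underflows)
```

After one half-life the function returns exactly 0.5; after millions of half-lives the result underflows to zero, which is the point of the sentence above.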

After the announcement of the discovery of radium and the Nobel prize, the Curies, and especially Marie, became celebrities. Awards, honorary doctorates, and memberships in the academies of science of several countries followed, along with financial support and the laboratory facilities they had lacked while performing the work which won them such acclaim. Radium became a popular fad, hailed as a cure for cancer and other diseases, a fountain of youth, and promoted by quacks promising all kinds of benefits from the nostrums they peddled, some of which, to the detriment of their customers, actually contained minute quantities of radium.

Tragedy struck in April 1906 when Pierre was killed in a traffic accident: run over on a Paris street in a heavy rainstorm by a wagon pulled by two horses. Marie was inconsolable, immersing herself in laboratory work and neglecting her two young daughters. Her spells of depression returned. She continued to explore the properties of radium and polonium and worked to establish a standard unit to measure radioactive decay, calibrated by radium. (This unit is now called the curie, but is no longer defined based upon radium and has been replaced by the becquerel, which is simply an inverse second.) Marie Curie was not interested or involved in the work to determine the structure of the atom and its nucleus or the development of quantum theory. The Curie laboratory continued to grow, but focused on production of radium and its applications in medicine and industry. Lise Meitner applied for a job at the laboratory and was rejected. Meitner later said she believed that Marie thought her a potential rival to Curie's daughter Irène. Meitner joined the Kaiser Wilhelm Institute in Berlin and went on to co-discover nuclear fission. The only two chemical elements named in whole or part for women are curium (element 96, named for both Pierre and Marie) and meitnerium (element 109).

In 1910, after three years of work with André-Louis Debierne, Marie managed to produce a sample of metallic radium, allowing a definitive measurement of its properties. In 1911, she won a second Nobel prize, unshared, in chemistry, for the isolation of radium and polonium. At the moment of triumph, news broke of a messy affair she had been carrying on with Pierre's successor at the EPCI, Paul Langevin, a married man. The popular press, who had hailed Marie as a towering figure of French science, went after her with bared fangs and mockery, and she went into seclusion under an assumed name.

During World War I, she invented and promoted the use of mobile field X-ray units (called “Les Petites Curies”) and won acceptance for women to operate them near the front, with her daughter Irène assisting in the effort. After the war, her reputation largely rehabilitated, Marie not only accepted but contributed to the growth of the Curie myth, seeing it as a way to fund her laboratory and research. Irène took the lead at the laboratory.

As co-discoverer of the phenomenon of radioactivity and two chemical elements, Curie's achievements were well recognised. She was the first woman to win a Nobel prize, the first person to win two Nobel prizes, and the only person so far to win Nobel prizes in two different sciences. (The third woman to win a Nobel prize was her daughter, Irène Joliot-Curie, for the discovery of artificial radioactivity.) She was the first woman to be appointed a full professor at the Sorbonne.

Marie Curie died of anæmia in 1934, probably brought on by exposure to radiation over her career. She took few precautions, and her papers and personal effects remain radioactive to this day. Her legacy is one of dedication and indefatigable persistence in achieving the goals she set for herself, regardless of the scientific and technical challenges and the barriers women faced at the time. She demonstrated that pure persistence, coupled with a brilliant intellect, can overcome formidable obstacles.

 Permalink

Launius, Roger D. and Dennis R. Jenkins. Coming Home. Washington: National Aeronautics and Space Administration, 2012. ISBN 978-0-16-091064-7. NASA SP-2011-593.
In the early decades of the twentieth century, when visionaries such as Konstantin Tsiolkovsky, Hermann Oberth, and Robert H. Goddard started to think seriously about how space travel might be accomplished, most of the focus was on designing and building rockets capable of accelerating their payloads to the extreme altitude and velocity required for long-distance ballistic or orbital flight. This is a daunting problem. The Earth has a deep gravity well: so deep that to place a satellite in a low orbit around it, you must not only lift the satellite from the Earth's surface to the desired orbital altitude (which isn't particularly difficult), but also impart sufficient velocity to it so that it does not fall back but, instead, orbits the planet. It's the speed that makes it so difficult.

Recall that the kinetic energy of a body is given by ½mv². If mass (m) is given in kilograms and velocity (v) in metres per second, energy is measured in joules. Note that the square of the velocity appears in the formula: if you triple the velocity, you need nine times the energy to accelerate the mass to that speed. A satellite must have a velocity of around 7.8 kilometres/second to remain in a low Earth orbit. This is about eight times the muzzle velocity of the 5.56×45mm NATO round fired by the M-16 and AR-15 rifles. Consequently, the satellite has sixty-four times the energy per unit mass of the rifle bullet, and the rocket which places it into orbit must expend all of that energy to launch it.
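The quadratic scaling above is easy to check numerically. A quick sketch, taking the orbital velocity from the text and an assumed muzzle velocity of roughly 975 m/s for the 5.56×45mm round:

```python
# Kinetic energy per unit mass: E/m = v**2 / 2, in joules per kilogram
# when v is in metres per second.

def specific_energy(v_m_per_s: float) -> float:
    return 0.5 * v_m_per_s ** 2

orbital = 7800.0   # m/s, low-Earth-orbit velocity (from the text)
bullet = 975.0     # m/s, rough 5.56x45mm muzzle velocity (assumed value)

# The velocity ratio is about 8, so the energy ratio is about 8**2 = 64.
print(orbital / bullet)                                    # ≈ 8
print(specific_energy(orbital) / specific_energy(bullet))  # ≈ 64
```

Because energy goes as the square of velocity, the factor of eight in speed becomes a factor of sixty-four in energy per unit mass, exactly as stated.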

Every kilogram of a satellite in a low orbit has a kinetic energy of around 30 megajoules (thirty million joules). By comparison, the energy released by detonating a kilogram of TNT is 4.7 megajoules. The satellite, purely due to its motion, has more than six times as much energy as an equal mass of TNT. The U.S. Space Shuttle orbiter had a mass, without payload, of around 70,000 kilograms. When preparing to leave orbit and return to Earth, its kinetic energy was about that of half a kiloton of TNT. During the process of atmospheric reentry and landing, in about half an hour, all of that energy must be dissipated in a non-destructive manner, until the orbiter comes to a stop on the runway with zero kinetic energy.
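These figures all follow from the same formula. A back-of-the-envelope check, using the values quoted in the text and the standard convention of 4.184×10¹² joules per kiloton of TNT:

```python
# Specific kinetic energy at orbital velocity vs. the energy density of TNT.
v_orbit = 7800.0              # m/s, low Earth orbit (from the text)
e_per_kg = 0.5 * v_orbit**2   # ≈ 3.04e7 J/kg, i.e. about 30 MJ/kg
tnt_per_kg = 4.7e6            # J/kg, detonation energy of TNT (from the text)

print(e_per_kg / 1e6)         # ≈ 30.4 MJ per kilogram
print(e_per_kg / tnt_per_kg)  # ≈ 6.5 times TNT

# Shuttle orbiter, ~70,000 kg without payload: total kinetic energy
# compared with half a kiloton of TNT (1 kiloton = 4.184e12 J).
shuttle_ke = 70_000 * e_per_kg
print(shuttle_ke / 4.184e12)  # ≈ 0.51 kiloton
```

The result, about half a kiloton, is the energy the thermal protection system and aerodynamic braking must shed in roughly half an hour.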

This is an extraordinarily difficult problem, which engineers had to confront as soon as they contemplated returning payloads from space to the Earth. The first payloads were, of course, warheads on intercontinental ballistic missiles. While these missiles did not go into orbit, they achieved speeds high enough to present essentially the same problems as orbital reentry. When the first reconnaissance satellites were developed by the U.S. and the Soviet Union, the technology to capture images electronically and radio them to ground stations did not yet exist. The only option was to expose photographic film in orbit then physically return it to Earth for processing and interpretation. This was the requirement which drove the development of orbital reentry. The first manned orbital capsules employed technology proven by film return spy satellites. (In the case of the Soviets, the basic structure of the Zenit reconnaissance satellites and manned Vostok capsules was essentially the same.)

This book chronicles the history and engineering details of U.S. reentry and landing technology, for both unmanned and manned spacecraft. While many in the 1950s envisioned sleek spaceplanes as the vehicle of choice, when the time came to actually solve the problems of reentry, a seemingly counterintuitive solution came to the fore: the blunt body. We're all acquainted with the phenomenon of air friction: the faster an airplane flies, the hotter its skin gets. The SR-71, which flew at three times the speed of sound, had to be made of titanium since aluminium would have lost its strength at the temperatures which resulted from friction. But at the velocity of a returning satellite, around eight times faster than an SR-71, air behaves very differently. The satellite is moving so fast that air can't get out of the way and piles up in front of it. As the air is compressed, its temperature rises until it equals or exceeds that of the surface of the Sun. This heat is then radiated in all directions. That impinging upon the reentering body can, if not dealt with, destroy it.

A streamlined shape will cause the compression to be concentrated at the nose, leading to extreme heating. A blunt body, however, will cause a shock wave to form which stands off from its surface. Since the compressed air radiates heat in all directions, only that radiated in the direction of the body will be absorbed; the rest will be harmlessly radiated away into space, reducing total heating. There is still, however, plenty of heat to worry about.

Let's consider the Mercury capsules in which the first U.S. astronauts flew. They reentered blunt end first, with a heat shield facing the air flow. Compression in the shock layer ahead of the heat shield raised the air temperature to around 5800 K, almost precisely the surface temperature of the Sun. Over the reentry, the heat pulse would deposit a total of 100 megajoules per square metre of heat shield. The astronaut was just a few centimetres from the shield, and the temperature on the back side of the shield could not be allowed to exceed 65 °C. How in the world do you accomplish that?

Engineers have investigated a wide variety of ways to beat the heat. The simplest are completely passive systems: they have no moving parts. An example of a passive system is a “heat sink”. You simply have a mass of some substance with high heat capacity (which means it can absorb a large amount of energy with a small rise in temperature), usually a metal, which absorbs the heat during the pulse, then slowly releases it. The heat sink must be made of a material which doesn't melt or corrode during the heat pulse. The original design of the Mercury spacecraft specified a beryllium heat sink, and this was flown on the two suborbital flights, but was replaced for the orbital missions. The Space Shuttle used a passive heat shield of a different kind: ceramic tiles which could withstand the heat on their surface and provided insulation which prevented the heat from reaching the aluminium structure beneath. The tiles proved very difficult to manufacture, were fragile, and required a great deal of maintenance, but they were, in principle, reusable.

The most commonly used technology for reentry is ablation. A heat shield is fabricated of a material which, when subjected to reentry heat, chars and releases gases. The gases carry away the heat, while the charred material which remains provides insulation. A variety of materials have been used for ablative heat shields, from advanced silicone and carbon composites to oak wood, on some early Soviet and Chinese reentry experiments. Ablative heat shields were used on Mercury orbital capsules, in projects Gemini and Apollo, all Soviet and Chinese manned spacecraft, and will be used by the SpaceX and Boeing crew transport capsules now under development.

If the heat shield works and you make it through the heat pulse, you're still falling like a rock. The solution of choice for landing spacecraft has been parachutes, and even though they seem simple conceptually, in practice there are many details which must be dealt with, such as stabilising the falling craft so it won't tumble and tangle the parachute suspension lines when the parachute is deployed, and opening the canopy in multiple stages to prevent a jarring shock which might damage the parachute or craft.

The early astronauts were pilots, and never much liked the idea of having to be fished out of the ocean by the Navy at the conclusion of their flights. A variety of schemes were explored to allow piloted flight to a runway landing, including inflatable wings and paragliders, but difficulties developing the technologies and schedule pressure during the space race caused the Gemini and Apollo projects to abandon them in favour of parachutes and a splashdown. Not until the Space Shuttle were precision runway landings achieved, and now NASA has abandoned that capability. SpaceX hopes to eventually return their Crew Dragon capsule to a landing pad with a propulsive landing, but that is not discussed here.

In the 1990s, NASA pursued a variety of spaceplane concepts: the X-33, X-34, and X-38. These projects pioneered new concepts in thermal protection for reentry which would be less expensive and maintenance-intensive than the Space Shuttle's tiles. In keeping with NASA's practice of the era, each project was cancelled after consuming a large sum of money and extensive engineering development. The X-37 was developed by NASA, and when abandoned, was taken over by the Air Force, which operates it on secret missions. Each of these projects is discussed here.

This book is the definitive history of U.S. spacecraft reentry systems. There is a wealth of technical detail, and some readers may find there's more here than they wanted to know. No specialised knowledge is required to understand the descriptions: just patience. In keeping with NASA tradition, quaint units like inches, pounds, miles per hour, and British Thermal Units are used in most of the text, but then in the final chapters, the authors switch back and forth between metric and U.S. customary units seemingly at random. There are some delightful anecdotes, such as when the designers of NASA's new Orion capsule had to visit the Smithsonian's National Air and Space Museum to examine an Apollo heat shield to figure out how it was made, how it was attached to the spacecraft, and the properties of the proprietary ablative material it employed.

As a NASA publication, this book is in the public domain. The paperback linked to above is a republication of the original NASA edition. The book may be downloaded for free from the book's Web page in three electronic formats: PDF, MOBI (Kindle), and EPUB. Get the PDF! While the PDF is a faithful representation of the print edition, the MOBI edition is hideously ugly and mis-formatted. Footnotes are interleaved in the text at random locations in red type (except when they aren't in red type), block quotes are not set off from the main text, dozens of hyphenated words and adjacent words are run together, and the index is completely useless: citing page numbers in the print edition which do not appear in the electronic edition; for some reason large sections of the index are in red type. I haven't looked at the EPUB edition, but given the lack of attention to detail evident in the MOBI, my expectations for it are not high.

 Permalink

May 2016

Levin, Janna. Black Hole Blues. New York: Alfred A. Knopf, 2016. ISBN 978-0-307-95819-8.
In Albert Einstein's 1915 general theory of relativity, gravitation does not propagate instantaneously as it did in Newton's theory, but at the speed of light. According to relativity, nothing can propagate faster than light. This has a consequence which was not originally appreciated when the theory was published: if you move an object here, its gravitational influence upon an object there cannot arrive any faster than a pulse of light travelling between the two objects. But how is that change in the gravitational field transmitted? For light, it is via the electromagnetic field, which is described by Maxwell's equations and implies the existence of excitations of the field which, according to their wavelength, we call radio, light, and gamma rays. Are there, then, equivalent excitations of the gravitational field (which, according to general relativity, can be thought of as curvature of spacetime), which transmit the changes due to motion of objects to distant objects affected by their gravity and, if so, can we detect them? By analogy to electromagnetism, where we speak of electromagnetic waves or electromagnetic radiation, these would be gravitational waves or gravitational radiation.

Einstein first predicted the existence of gravitational waves in a 1916 paper, but he made a mathematical error in the nature of sources and the magnitude of the effect. This was corrected in a paper he published in 1918 which describes gravitational radiation as we understand it today. According to Einstein's calculations, gravitational waves were real, but interacted so weakly that any practical experiment would never be able to detect them. If gravitation is thought of as the bending of spacetime, the equations tell us that spacetime is extraordinarily stiff: when you encounter an equation with the speed of light, c, raised to the fourth power in the denominator, you know you're in trouble trying to detect the effect.
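The troublesome fourth power of c appears in the standard quadrupole approximation for the strain h produced at distance r by a source with mass quadrupole moment I (a textbook expression, not quoted from this book):

```latex
h_{jk} \;\approx\; \frac{2G}{c^{4}}\,\frac{\ddot{I}_{jk}}{r},
\qquad
\frac{G}{c^{4}} \approx 8.3\times 10^{-45}\ \mathrm{s^{2}\,kg^{-1}\,m^{-1}}
```

With so minuscule a coupling constant, only astrophysical masses moving at relativistic speeds produce strains detectable even in principle.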

That's where the matter rested for almost forty years. Some theorists believed that gravitational waves existed but, given the potential sources we knew about (planets orbiting stars, double and multiple star systems), the energy emitted was so small (the Earth orbiting the Sun emits a grand total of 200 watts of energy in gravitational waves, which is absolutely impossible to detect with any plausible apparatus), we would never be able to detect it. Other physicists doubted that the effect was real at all, questioning whether gravitational waves actually carried energy which could, even in principle, produce detectable effects. This dispute was settled to the satisfaction of most theorists by the sticky bead argument, proposed in 1957 by Richard Feynman and Hermann Bondi. Although a few dissenters remained, most of the small community interested in general relativity agreed that gravitational waves existed and could carry energy, but continued to believe we'd probably never detect them.
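The 200 watt figure can be checked with the standard quadrupole power formula for two bodies in a circular orbit (a textbook result; the constants below are the usual physical and astronomical values, not taken from the book):

```python
# Gravitational wave luminosity of a two-body circular orbit (quadrupole formula):
#   P = (32/5) * G^4/c^5 * (m1*m2)^2 * (m1+m2) / r^5
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
r = 1.496e11        # Earth-Sun distance, m

P = (32 / 5) * G**4 / c**5 * (m_sun * m_earth)**2 * (m_sun + m_earth) / r**5
print(f"Earth-Sun gravitational wave power: ~{P:.0f} W")  # ~196 W
```

Two hundred watts, radiated by the entire Earth-Sun system: about the output of a couple of old-fashioned light bulbs.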

This outlook changed in the 1960s. Radio astronomers, along with optical astronomers, began to discover objects in the sky which seemed to indicate the universe was a much more violent and dynamic place than had been previously imagined. Words like “quasar”, “neutron star”, “pulsar”, and “black hole” entered the vocabulary, and suggested there were objects in the universe where gravity might be so strong and motion so fast that gravitational waves could be produced which might be detected by instruments on Earth.

Joseph Weber, an experimental physicist at the University of Maryland, was the first to attempt to detect gravitational radiation. He used large bars, now called Weber bars, of aluminium, usually cylinders two metres long and one metre in diameter, instrumented with piezoelectric sensors. The bars were, based upon their material and dimensions, resonant at a particular frequency, and could detect a change in length of the cylinder of around 10−16 metres. Weber was a pioneer in reducing noise of his detectors, and operated two detectors at different locations so that signals would only be considered valid if observed nearly simultaneously by both.
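For a sense of scale (using the textbook thin-rod formula, which only approximates Weber's thick cylinders), the fundamental longitudinal resonance of a two metre aluminium bar, and the strain corresponding to the 10−16 metre length change quoted above, work out as:

```python
import math

# Thin-rod approximation for the fundamental longitudinal resonance of a bar:
#   f = v / (2L), where v = sqrt(E / rho) is the speed of sound in the material.
E_aluminium = 69e9      # Young's modulus of aluminium, Pa
rho_aluminium = 2700.0  # density of aluminium, kg/m^3
L = 2.0                 # bar length, m

v = math.sqrt(E_aluminium / rho_aluminium)  # ~5050 m/s
f = v / (2 * L)                             # ~1.3 kHz (Weber's actual cylinders rang near 1.7 kHz)

strain = 1e-16 / L  # dimensionless strain for a 1e-16 m change in length
print(f"resonance ~{f:.0f} Hz, detectable strain ~{strain:.0e}")
```

A strain of 5×10−17 was a remarkable achievement for the 1960s, yet still orders of magnitude short of what the first confirmed detection would require.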

What nobody knew was how “noisy” the sky was in gravitational radiation: how many sources there were and how strong they might be. Theorists could offer little guidance: ultimately, you just had to listen. Weber listened, and reported signals he believed consistent with gravitational waves. But others who built comparable apparatus found nothing but noise, and theorists objected that if objects in the universe emitted as much gravitational radiation as Weber's detections implied, the galaxy would convert all of its mass into gravitational radiation in just fifty million years. Weber's claims of having detected gravitational radiation are now considered to have been discredited, but there are those who dispute this assessment. Still, he was the first to try, and made breakthroughs which informed subsequent work.

Might there be a better way, which could detect even smaller signals than Weber's bars, and over a wider frequency range? (Since the frequency range of potential sources was unknown, casting the net as widely as possible made more potential candidate sources accessible to the experiment.) Independently, groups at MIT, the University of Glasgow in Scotland, and the Max Planck Institute in Germany began to investigate interferometers as a means of detecting gravitational waves. An interferometer had already played a part in confirming Einstein's special theory of relativity: could it also provide evidence for an elusive prediction of the general theory?

An interferometer is essentially an absurdly precise ruler where the markings on the scale are waves of light. You send beams of light down two paths, and adjust them so that the light waves cancel (interfere) when they're combined after bouncing back from mirrors at the end of the two paths. If there's any change in the lengths of the two paths, the light won't interfere precisely, and its intensity will increase depending upon the difference. But when a gravitational wave passes, that's precisely what happens! Lengths in one direction will be squeezed while those orthogonal (at a right angle) will be stretched. In principle, an interferometer can be an exquisitely sensitive detector of gravitational waves. The gap between principle and practice required decades of diligent toil and hundreds of millions of dollars to bridge.
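The interference arithmetic can be sketched as follows (an idealised Michelson interferometer; LIGO's actual optical configuration adds Fabry-Perot arm cavities and power recycling, which this toy model ignores):

```python
import math

def dark_port_intensity(delta_L, wavelength, I0=1.0):
    """Intensity at the antisymmetric (dark) port of an idealised Michelson
    interferometer whose arm lengths differ by delta_L.  Light makes a round
    trip down each arm, so the phase difference is 4*pi*delta_L/wavelength."""
    phase = 4 * math.pi * delta_L / wavelength
    return I0 * math.sin(phase / 2) ** 2

wavelength = 1064e-9  # Nd:YAG laser wavelength, m (as used by LIGO)
print(dark_port_intensity(0.0, wavelength))             # equal arms: dark port stays dark
print(dark_port_intensity(wavelength / 4, wavelength))  # quarter-wave difference: fully bright
```

Operating at the dark fringe means any light emerging from that port signals a change in the relative arm lengths, which is exactly what a passing gravitational wave produces.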

From the beginning, it was clear it would not be easy. The field of general relativity (gravitation) had been called “a theorist's dream, an experimenter's nightmare”, and almost all of those working in the area were theorists: all they needed were blackboards, paper, pencils, and lots of erasers. This was “little science”. As the pioneers began to explore interferometric gravitational wave detectors, it became clear what was needed was “big science”: on the order of large particle accelerators or space missions, with budgets, schedules, staffing, and management comparable to such projects. This was a culture shock to the general relativity community as violent as the astrophysical sources they sought to detect. Between 1971 and 1989, theorists and experimentalists explored detector technologies and built prototypes to demonstrate feasibility. In 1989, a proposal was submitted to the National Science Foundation to build two interferometers, widely separated geographically, with an initial implementation to prove the concept and a subsequent upgrade intended to permit detection of gravitational radiation from anticipated sources. After political battles, in 1995 construction of LIGO, the Laser Interferometer Gravitational-Wave Observatory, began at the two sites located in Livingston, Louisiana and Hanford, Washington, and in 2001, commissioning of the initial detectors was begun; this would take four years. Between 2005 and 2007 science runs were made with the initial detectors; much was learned about sources of noise and the behaviour of the instrument, but no gravitational waves were detected.

Starting in 2007, based upon what had been learned so far, construction of the advanced interferometer began. This took three years. Between 2010 and 2012, the advanced components were installed, and another three years were spent commissioning them: discovering their quirks, fixing problems, and increasing sensitivity. Finally, in 2015, observations with the advanced detectors began. The sensitivity which had been achieved was astonishing: the interferometers could detect a change in the length of their four kilometre arms which was one ten-thousandth the diameter of a proton (the nucleus of a hydrogen atom). In order to accomplish this, they had to overcome noise from distant earthquakes, traffic on nearby highways, tides raised in the Earth by the Sun and Moon, and a multitude of other sources, via a tower of technology which made the machine, so simple in concept, forbiddingly complex.
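Translating that sensitivity into the dimensionless strain physicists quote (taking the proton diameter as the usual ~1.7×10−15 m; an illustrative calculation, not a figure from the book):

```python
proton_diameter = 1.7e-15  # m, approximate
arm_length = 4000.0        # m, length of each LIGO arm

delta_L = proton_diameter / 10_000  # one ten-thousandth of a proton diameter
h = delta_L / arm_length            # dimensionless strain sensitivity
print(f"delta L ~ {delta_L:.1e} m, strain h ~ {h:.1e}")
```

A strain of a few times 10−23: about a million times more sensitive than Weber's bars.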

September 14, 2015, 09:51 UTC: Chirp!

A hundred years after the theory that predicted it, 44 years after physicists imagined such an instrument, 26 years after it was formally proposed, 20 years after it was initially funded, a gravitational wave had been detected, and it was right out of the textbook: the merger of two black holes with masses around 29 and 36 times that of the Sun, at a distance of 1.3 billion light years. A total of three solar masses were converted into gravitational radiation: at the moment of the merger, the gravitational radiation emitted was 50 times greater than the light from all of the stars in the universe combined. Despite the stupendous energy released by the source, when it arrived at Earth it could only have been detected by the advanced interferometer which had just been put into service: it would have been missed by the initial instrument and was orders of magnitude below the noise floor of Weber's bar detectors.
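The three solar masses radiated correspond, via E = mc², to a staggering quantity of energy (standard constants; an illustrative check rather than a figure from the book):

```python
m_sun = 1.989e30  # solar mass, kg
c = 2.998e8       # speed of light, m/s

E = 3 * m_sun * c**2  # energy radiated as gravitational waves, joules
print(f"~{E:.1e} J radiated in a fraction of a second")  # ~5.4e47 J
```

For comparison, the Sun's total luminosity is about 3.8×1026 watts; the merger released, in milliseconds, more energy than the Sun will emit over tens of billions of years.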

For only the third time since proto-humans turned their eyes to the sky, a new channel of information about the universe we inhabit was opened. Most of what we know comes from electromagnetic radiation: light, radio, microwaves, gamma rays, etc. In the 20th century, a second channel opened: particles. Cosmic rays and neutrinos allow exploring energetic processes we cannot observe in any other way. In a real sense, neutrinos let us look inside the Sun and into the heart of supernovæ and see what's happening there. And just last year the third channel opened: gravitational radiation. The universe is almost entirely transparent to gravitational waves: that's why they're so difficult to detect. But that means they allow us to explore the universe at its most violent: collisions and mergers of neutron stars and black holes—objects where gravity dominates the forces of the placid universe we observe through telescopes. What will we see? What will we learn? Who knows? If experience is any guide, we'll see things we never imagined and learn things even the theorists didn't anticipate. The game is afoot! It will be a fine adventure.

Black Hole Blues is the story of gravitational wave detection, largely focusing upon LIGO and told through the eyes of Rainer Weiss and Kip Thorne, two of the principals in its conception and development. It is an account of the transition of a field of research from a theorist's toy to Big Science, and the cultural, management, and political problems that involves. There are few examples in experimental science where so long an interval has elapsed, and so much funding expended, between the start of a project and its detecting the phenomenon it was built to observe. The road was bumpy, and that is documented here.

I found the author's tone off-putting. She, a theoretical cosmologist at Barnard College, dismisses scientists with achievements which dwarf her own and ideas which differ from hers in the way one expects from Social Justice Warriors in the squishier disciplines at the Seven Sisters: “the notorious Edward Teller”, “Although Kip [Thorne] outgrew the tedious moralizing, the sexism, and the religiosity of his Mormon roots”, (about Joseph Weber) “an insane, doomed, impossible bar detector designed by the old mad guy, crude laboratory-scale slabs of metal that inspired and encouraged his anguished claims of discovery”, “[Stephen] Hawking made his oddest wager about killer aliens or robots or something, which will not likely ever be resolved, so that might turn out to be his best bet yet”, (about Richard Garwin) “He played a role in halting the Star Wars insanity as well as potentially disastrous industrial escalations, like the plans for supersonic airplanes…”, and “[John Archibald] Wheeler also was not entirely against the House Un-American Activities Committee. He was not entirely against the anticommunist fervor that purged academics from their ivory-tower ranks for crimes of silence, either.” … “I remember seeing him at the notorious Princeton lunches, where visitors are expected to present their research to the table. Wheeler was royalty, in his eighties by then, straining to hear with the help of an ear trumpet. (Did I imagine the ear trumpet?)”. There are also a number of factual errors (for example, the claim that a breach in the LIGO beam tube would suck all of the air out of its enclosure and suffocate anybody inside) which a moment's calculation would have shown to be absurd.
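That moment's calculation is easy to sketch (the tube diameter and enclosure cross-section below are rough assumed values, not official LIGO figures): if a breach let the enclosure's air expand into the evacuated tube, the pressure would merely fall by the ratio of the volumes, nothing remotely like a suffocating vacuum.

```python
import math

# Rough assumed dimensions, for illustration only (not official LIGO figures).
tube_diameter = 1.2   # m, beam tube diameter
enclosure_area = 7.0  # m^2, assumed free cross-section of the concrete enclosure
length = 4000.0       # m, one interferometer arm

V_tube = math.pi * (tube_diameter / 2) ** 2 * length  # evacuated volume, ~4500 m^3
V_air = enclosure_area * length                       # air in the enclosure, ~28000 m^3

# The air expands to fill both volumes; pressure falls proportionally (Boyle's law).
pressure_ratio = V_air / (V_air + V_tube)
print(f"pressure falls to ~{pressure_ratio:.0%} of an atmosphere")
```

A drop to roughly 85 percent of an atmosphere is comparable to standing at 1,500 metres altitude: disastrous for the vacuum system, harmless to the people in the enclosure.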

The book was clearly written with the intention of being published before the first detection of a gravitational wave by LIGO. The entire story of the detection, its validation, and public announcement is jammed into a seven page epilogue tacked onto the end. This epochal discovery deserves being treated at much greater length.

 Permalink

Eggers, Dave. The Circle. New York: Alfred A. Knopf, 2013. ISBN 978-0-345-80729-8.
There have been a number of novels, many in recent years, which explore the possibility of human society being taken over by intelligent machines. Some depict the struggle between humans and machines, others envision a dystopian future in which the machines have triumphed, and a few explore the possibility that machines might create a “new operating system” for humanity which works better than the dysfunctional social and political systems extant today. This novel goes off in a different direction: what might happen, without artificial intelligence, but in an era of exponentially growing computer power and data storage capacity, if an industry leading company with tendrils extending into every aspect of personal interaction and commerce worldwide, decided, with all the best intentions, “What the heck? Let's be evil!”

Mae Holland had done everything society had told her to do. One of only twelve of the 81 graduates of her central California high school to go on to college, she'd been accepted by a prestigious college and graduated with a degree in psychology and massive student loans she had no prospect of paying off. She'd ended up moving back in with her parents and taking a menial cubicle job at the local utility company, working for a creepy boss. In frustration and desperation, Mae reaches out to her former college roommate, Annie, who has risen to an exalted position at the hottest technology company on the globe: The Circle. The Circle had started by creating the Unified Operating System, which combined all aspects of users' interactions—social media, mail, payments, user names—into a unique and verified identity called TruYou. (Wonder where they got that idea?)

Before long, anonymity on the Internet was a thing of the past as merchants and others recognised the value of knowing their customers and of information collected across their activity on all sites. The Circle and its associated businesses supplanted existing sites such as Google, Facebook, and Twitter, and with the tight integration provided by TruYou, created new kinds of interconnection and interaction not possible when information was Balkanised among separate sites. With the end of anonymity, spam and fraudulent schemes evaporated, and with all posters personally accountable, discussions became civil and trolls slunk back under the bridge.

With an effective monopoly on electronic communication and commercial transactions (if everybody uses TruYou to pay, what option does a merchant have but to accept it and pay The Circle's fees?), The Circle was assured a large, recurring, and growing revenue stream. With the established businesses generating so much cash, The Circle invested heavily in research and development of new technologies: everything from sustainable housing, access to DNA databases, crime prevention, to space applications.

Mae's initial job was far more mundane. In Customer Experience, she was more or less working in a call centre, except her communications with customers were over The Circle's message services. The work was nothing like that at the utility company, however. Her work was monitored in real time, with a satisfaction score computed from follow-up surveys by clients. To advance, a score near 100 was required, and Mae had to follow up any scores less than that to satisfy the customer and obtain a perfect score. On a second screen, internal “zing” messages informed her of activity on the campus, and she was expected to respond and contribute.

As she advances within the organisation, Mae begins to comprehend the scope of The Circle's ambitions. One of the founders unveils a plan to make always-on cameras and microphones available at very low cost, which people can install around the world. All the feeds will be accessible in real time and archived forever. A new slogan is unveiled: “All that happens must be known.”

At a party, Mae meets a mysterious character, Kalden, who appears to have access to parts of The Circle's campus unknown to her associates and yet doesn't show up in the company's exhaustive employee social networks. Her encounters and interactions with him become increasingly mysterious.

Mae moves up, and is chosen to participate to a greater extent in the social networks, and to rate products and ideas. All of this activity contributes to her participation rank, computed and displayed in real time. She swallows a sensor which will track her health and vital signs in real time, display them on a wrist bracelet, and upload them for analysis and early warning diagnosis.

Eventually, she volunteers to “go transparent”: wear a body camera and microphone every waking moment, and act as a window into The Circle for the general public. The company had pushed transparency for politicians, and now was ready to deploy it much more widely.

Secrets Are Lies
Sharing Is Caring
Privacy Is Theft

To Mae's family and few remaining friends outside The Circle, this all seems increasingly bizarre: as if the fastest growing and most prestigious high technology company in the world has become a kind of grotesque cult which consumes the lives of its followers and aspires to become universal. Mae loves her sense of being connected, the interaction with a worldwide public, and thinks it is just wonderful. The Circle internally tests and begins to roll out a system of direct participatory democracy to replace existing political institutions. Mae is there to report it. A plan to put an end to most crime is unveiled: Mae is there.

The Circle is closing. Mae is contacted by her mysterious acquaintance, and presented with a moral dilemma: she has become a central actor on the stage of a world which is on the verge of changing, forever.

This is a superbly written story which I found both realistic and chilling. You don't need artificial intelligence or malevolent machines to create an eternal totalitarian nightmare. All it takes is a few years' growth and wider deployment of technologies which exist today, combined with good intentions, boundless ambition, and fuzzy thinking. And the latter three commodities are abundant among today's technology powerhouses.

Lest you think the technologies which underlie this novel are fantasy or far in the future, they were discussed in detail in David Brin's 1998 The Transparent Society and my 1994 “Unicard” and 2003 “The Digital Imprimatur”. All that has changed is that the massive computing, communication, and data storage infrastructure envisioned in those works now exists or will within a few years.

What should you fear most? Probably the millennials who will read this and think, “Wow! This will be great.” “Democracy is mandatory here!”

 Permalink

Miller, Roland. Abandoned in Place. Albuquerque: University of New Mexico Press, 2016. ISBN 978-0-8263-5625-3.
Between 1945 and 1970 humanity expanded from the surface of Earth into the surrounding void, culminating in 1969 with the first landing on the Moon. Centuries from now, when humans and their descendants populate the solar system and exploit resources dwarfing those of the thin skin and atmosphere of the home planet, these first steps may be remembered as the most significant event of our age, with all of the trivialities that occupy our quotidian attention forgotten. Not only were great achievements made, but grand structures built on Earth to support them; these may be looked upon in the future as we regard the pyramids or the great cathedrals.

Or maybe not. The launch pads, gantry towers, assembly buildings, test facilities, blockhouses, bunkers, and control centres were not built as monuments for the ages, but rather to accomplish time-sensitive goals under tight budgets, by the lowest bidder, and at the behest of a government famous for neglecting infrastructure. Once the job was done, the mission accomplished, and the program concluded, the facilities that supported it were simply left at the mercy of the elements which, in locations like coastal Florida, immediately began to reclaim them. Indeed, half of the facilities pictured here no longer exist.

For more than two decades, author and photographer Roland Miller has been documenting this heritage before it succumbs to rust, crumbling concrete, and invasive vegetation. With unparalleled access to the sites, he has assembled this gallery of these artefacts of a great age of exploration. In a few decades, this may be all we'll have to remember them. Although there is rudimentary background information from a variety of authors, this is a book of photography, not a history of the facilities. In some cases, unless you know from other sources what you're looking at, you might interpret some of the images as abstract.

The hardcover edition is a “coffee table book”: large format and beautifully printed, with a corresponding price. The Kindle edition is, well, a Kindle book, and grossly overpriced for 193 pages with screen-resolution images and a useless index consisting solely of search terms.

A selection of images from the book may be viewed on the Abandoned in Place Web site.

 Permalink

Buckley, Christopher. The Relic Master. New York: Simon & Schuster, 2015. ISBN 978-1-5011-2575-1.
The year is 1517. The Holy Roman Empire sprawls across central Europe, from the Mediterranean in the south to the North Sea and Baltic in the north, from the Kingdom of France in the west to the Kingdoms of Poland and Hungary in the east. In reality the structure of the empire is so loose and complicated it defies easy description: independent kings, nobility, and prelates all have their domains of authority, and occasionally go to war against one another. Although the Reformation is about to burst upon the scene, the Roman Catholic Church is supreme, and religion is big business. In particular, the business of relics and indulgences.

Commit a particularly heinous sin? If you're sufficiently well-heeled, you can obtain an indulgence through prayer, good works, or making a pilgrimage to a holy site. Over time, “good works” increasingly meant, for the prosperous, making a contribution to the treasury of the local prince or prelate, a percentage of which was kicked up to higher-ranking clergy, all the way to Rome. Or, an enterprising noble or churchman could collect relics such as the toe bone of a saint, a splinter from the True Cross, or a lock of hair from one of the camels the Magi rode to Bethlehem. Pilgrims would pay a fee to see, touch, have their sins erased, and be healed by these holy trophies. In short, the indulgence and relic business was selling “get out of purgatory for a price”. The very best businesses are those in which the product is delivered only after death—you have no problems with dissatisfied customers.

To flourish in this trade, you'll need a collection of relics, all traceable to trustworthy sources. Relics were in great demand, and demand summons supply into being. All the relics of the True Cross, taken together, would have required the wood from a medium-sized forest, and even the most sacred and unique of relics, the burial shroud of Christ, was on display in several different locations. It's the “trustworthy” part that's difficult, and that's where Dismas comes in. A former Swiss mercenary, his resourcefulness in obtaining relics had led to his appointment as Relic Master to His Grace Albrecht, Archbishop of Brandenburg and Mainz, and also to Frederick the Wise, Elector of Saxony. These two customers were rivals in the relic business, allowing Dismas to play one against the other to his advantage. After visiting the Basel Relic Fair and obtaining some choice merchandise, he visits his patrons to exchange them for gold. While visiting Frederick, he hears that a monk has nailed ninety-five denunciations of the Church, including the sale of indulgences, to the door of the castle church. This is interesting, but potentially bad for business.

Dismas meets his friend, Albrecht Dürer, whom he calls “Nars” on account of Dürer's narcissism: among other things, he includes his own visage in most of his paintings. After months in the south hunting relics, he returns to visit Dürer and learns that the Swiss banker with whom he's deposited his fortune has been found to be a 16th century Bernie Madoff and that he has only the money on his person.

Destitute, Dismas and Dürer devise a scheme to get back into the game. This launches them into a romp across central Europe visiting the castles, cities, taverns, dark forbidding forests, dungeons, and courts of nobility. We encounter historical figures including Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus), who lends his scientific insight to the effort. All of this is recounted with the mix of wry and broad humour which Christopher Buckley uses so effectively in all of his novels. There is a tableau of the Last Supper, identity theft, and bombs. An appendix gives background on the historical figures who appear in the novel.

This is a pure delight and illustrates how versatile is the talent of the author. Prepare yourself for a treat; this novel delivers. Here is an interview with the author.

 Permalink

Red Eagle, John and Vox Day [Theodore Beale]. Cuckservative. Kouvola, Finland: Castalia House, 2015. ASIN B018ZHHA52.
Yes, I have read it. So read me out of the polite genteel “conservative” movement. But then I am not a conservative. Further, I enjoyed it. The authors say things forthrightly that many people think and maybe express in confidence to their like-minded friends, but reflexively cringe upon even hearing in public. Even more damning, I found it enlightening on a number of topics, and I believe that anybody who reads it dispassionately is likely to find it the same. And finally, I am reviewing it. I have reviewed (or noted) every book I have read since January of 2001. Should I exclude this one because it makes some people uncomfortable? I exist to make people uncomfortable. And so, onward….

The authors have been called “racists”, which is rather odd since both are of Native American ancestry and Vox Day also has Mexican ancestors. Those who believe ancestry determines all will have to come to terms with the fact that these authors defend the values which largely English settlers brought to America, and were the foundation of American culture until it all began to come apart in the 1960s.

In the view of the authors, as explained in chapter 4, the modern conservative movement in the U.S. dates from the 1950s. Before that time both the Democrat and Republican parties contained politicians and espoused policies which were both conservative and progressive (with the latter word used in the modern sense), often with regional differences. Starting with the progressive era early in the 20th century and dramatically accelerating during the New Deal, the consensus in both parties was centre-left liberalism (with “liberal” defined in the corrupt way it is used in the U.S.): a belief in a strong central government, social welfare programs, and active intervention in the economy. This view was largely shared by Democrat and Republican leaders, many of whom came from the same patrician class in the Northeast. At its outset, the new conservative movement, with intellectual leaders such as Russell Kirk and advocates like William F. Buckley, Jr., was outside the mainstream of both parties, but more closely aligned with the Republicans due to their wariness of big government. (But note that the Eisenhower administration made no attempt to roll back the New Deal, and thus effectively ratified it.)

They argue that since the new conservative movement was a coalition of disparate groups such as libertarians, isolationists, southern agrarians, as well as ex-Trotskyites and former Communists, it was an uneasy alliance, and in forging it Buckley and others believed it was essential that the movement be seen as socially respectable. This led to a pattern of conservatives ostracising those who they feared might call down the scorn of the mainstream press upon them. In 1957, a devastating review of Atlas Shrugged by Whittaker Chambers marked the break with Ayn Rand's Objectivists, and in 1962 Buckley denounced the John Birch Society and read it out of the conservative movement. This established a pattern which continues to the present day: when an individual or group is seen as sufficiently radical that they might damage the image of conservatism as defined by the New York and Washington magazines and think tanks, they are unceremoniously purged and forced to find a new home in institutions viewed with disdain by the cultured intelligentsia. As the authors note, this is the exact opposite of the behaviour of the Left, which fiercely defends its most radical extremists. Today's Libertarian Party largely exists because its founders were purged from conservatism in the 1970s.

The search for respectability and the patient construction of conservative institutions were successful in aligning the Republican party with the new conservatism. This first manifested itself in the nomination of Barry Goldwater in 1964. Following his disastrous defeat, conservatives continued their work, culminating in the election of Ronald Reagan in 1980. But even then, and in the years that followed, including congressional triumphs in 1994, 2010, and 2014, Republicans continued to behave as a minority party: acting only to slow the rate of growth of the Left's agenda rather than roll it back and enact their own. In the words of the authors, they are “calling for the same thing as the left, but less of it and twenty years later”.

The authors call these Republicans “cuckservative” or “cuck” for short. The word is a portmanteau of “cuckold” and “conservative”. “Cuckold” dates back to A.D. 1250, and means the husband of an unfaithful wife, or a weak and ineffectual man. Voters who elect these so-called conservatives are cuckolded by them, as through their fecklessness and willingness to go along with the Left, they bring into being and support the collectivist agenda which they were elected to halt and roll back. I find nothing offensive in the definition of this word, but I don't like how it sounds—in part because it rhymes with an obscenity which has become an all-purpose word in the vocabulary of the Left and, increasingly, the young. Using the word induces a blind rage in some of those to whom it is applied, which may be its principal merit.

But this book, despite bearing it as a title, is not about the word: only three pages are devoted to defining it. The bulk of the text is devoted to what the authors believe are the central issues facing the U.S. at present and an examination of how those calling themselves conservatives have ignored, compromised away, or sold out the interests of their constituents on each of these issues, including immigration and the consequences of a change in demographics toward those with no experience of the rule of law, the consequences of mass immigration on workers in domestic industries, globalisation and the flight of industries toward low-wage countries, how immigration has caused other societies in history to lose their countries, and how mainstream Christianity has been subverted by the social justice agenda and become an ally of the Left at the same time its pews are emptying in favour of evangelical denominations. There is extensive background information about the history of immigration in the United States, the bizarre “Magic Dirt” theory (that, for example, transplanting a Mexican community across the border will, simply by changing its location, transform its residents, in time, into Americans or, conversely, that “blighted neighbourhoods” are so because there's something about the dirt [or buildings] rather than the behaviour of those who inhabit them), and the overwhelming and growing scientific evidence for human biodiversity and the coming crack-up of the “blank slate” dogma. If the Left continues to tighten its grip upon the academy, we can expect to see research in this area be attacked as dissent from the party line on climate science is today.

This is an excellent book: well written, argued, and documented. For those who have been following these issues over the years and observed the evolution of the conservative movement over the decades, there may not be much here that's new, but it's all tied up into one coherent package. For the less engaged who've just assumed that by voting for Republicans they were advancing the conservative cause, this may prove a revelation. If you're looking to find racism, white supremacy, fascism, authoritarianism, or any of the other epithets hurled against the dissident right, you won't find them here unless, as the Left does, you define the citation of well-documented facts as those things. What you will find is two authors who love America and believe that American policy should put the interests of Americans before those of others, and that politicians elected by Americans should be expected to act in their interest. If politicians call themselves “conservatives”, they should act to conserve what is great about America, not compromise it away in an attempt to, at best, delay the date their constituents are delivered into penury and serfdom.

You may have to read this book being careful nobody looks over your shoulder to see what you're reading. You may have to never admit you've read it. You may have to hold your peace when somebody goes on a rant about the “alt-right”. But read it, and judge for yourself. If you believe the facts cited are wrong, do the research, refute them with evidence, and publish a response (under a pseudonym, if you must). But before you reject it based upon what you've heard, read it—it's only five bucks—and make up your own mind. That's what free citizens do.

As I have come to expect in publications from Castalia House, the production values are superb. There are only a few (I found just three) copy editing errors. At present the book is available only in Kindle and Audible audiobook editions.


Steele, Allen. Arkwright. New York: Tor, 2016. ISBN 978-0-7653-8215-3.
Nathan Arkwright was one of the “Big Four” science fiction writers of the twentieth century, along with Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein. Launching his career in the Golden Age of science fiction, he created the Galaxy Patrol space adventures, with 17 novels from 1950 to 1988, a radio drama, television series, and three movies. The royalties from his work made him a wealthy man. He lived quietly in his home in rural Massachusetts, dying in 2006.

Arkwright was estranged from his daughter and granddaughter, Kate Morressy, a freelance science journalist. Kate attends the funeral and meets Nathan's long-term literary agent, Margaret (Maggie) Krough, science fiction writer Harry Skinner, and George Hallahan, a research scientist long involved with military and aerospace projects. After the funeral, the three meet with Kate, and Maggie explains that Arkwright's will bequeaths all of his assets, including future royalties from his work, to the non-profit Arkwright Foundation, which Kate is asked to join as a director representing the family. When she asks about the mission of the foundation, Maggie responds that it's a long and complicated story, one best answered by reading the manuscript of Arkwright's unfinished autobiography, My Life in the Future.

It is some time before Kate gets around to reading the manuscript. When she does, she finds herself immersed in the Golden Age of science fiction, as her father recounts attending the first World Science Fiction Convention in New York in 1939. An avid science fiction fan and aspiring writer, Arkwright rubs elbows with figures he'd known only as names in magazines such as Fred Pohl, Don Wollheim, Cyril Kornbluth, Forrest Ackerman, and Isaac Asimov. Quickly learning that at a science fiction convention it isn't just elbows that rub but also egos, he runs afoul of one of the clique wars that are incomprehensible to those outside of fandom and finds himself ejected from the convention, sitting down for a snack at the Automat across the street with fellow banished fans Maggie, Harry, and George. The four discuss their views of the state of science fiction and their ambitions, and pledge to stay in touch. Any group within fandom needs a proper name, and after a brief discussion “The Legion of Tomorrow” was born. It would endure for decades.

The manuscript comes to an end, leaving Kate still in 1939. She then meets in turn with the other three surviving members of the Legion, who carry the story through Arkwright's long life, and describe the events which shaped his view of the future and the foundation he created. Finally, Kate is ready to hear the mission of the foundation—to make the future Arkwright wrote about during his career a reality—to move humanity off the planet and enter the era of space colonisation, and not just the planets but, in time, the stars. And the foundation will be going it alone. As Harry explains (p. 104), “It won't be made public, and there won't be government involvement either. We don't want this to become another NASA project that gets scuttled because Congress can't get off its dead ass and give it decent funding.”

The strategy is to bet on the future: invest in the technologies which will be needed for and will profit from humanity's expansion from the home planet, and then reinvest the proceeds in research and development and new generations of technology and enterprises as space development proceeds. Nobody expects this to be a short-term endeavour: decades or generations may be required before the first interstellar craft is launched, but the structure of the foundation is designed to persist for however long it takes. Kate signs on, “Forward the Legion.”

So begins a grand, multi-generation saga chronicling humanity's leap to the stars. Unlike many tales of interstellar flight, no arm-waving about faster than light warp drives or other technologies requiring new physics is invoked. Based upon information presented at the DARPA/NASA 100 Year Starship Symposium in 2011 and the 2013 Starship Century conference, the author uses only technologies based upon well-understood physics which, if economic growth continues on the trajectory of the last century, are plausible for the time in the future at which the story takes place. And lest interstellar travel and colonisation be dismissed as wasteful, no public resources are spent on it: coercive governments have neither the imagination nor the attention span to achieve such grand and long-term goals. And you never know how important the technological spin-offs from such a project may prove in the future.

As noted, the author is scrupulous in using only technologies consistent with our understanding of physics and biology and plausible extrapolations of present capabilities. There are a few goofs, which I'll place behind the curtain since some are plot spoilers.

Spoiler warning: Plot and/or ending details follow.  
On p. 61, a C-53 transport plane is called a Dakota. The C-53 is a troop transport variant of the C-47, referred to as the Skytrooper. But since the planes were externally almost identical, the observer may have confused them. “Dakota” was the RAF designation for the C-47; the U.S. Army Air Forces called it the Skytrain.

On the same page, planes arrive from “Kirtland Air Force Base in Texas”. At the time, the facility would have been called “Kirtland Field”, part of the Albuquerque Army Air Base, which is located in New Mexico, not Texas. It was not renamed Kirtland Air Force Base until 1947.

In the description of the launch of Apollo 17 on p. 71, after the long delay, the count is recycled to T−30 seconds. That isn't how it happened. After the cutoff in the original countdown at thirty seconds, the count was recycled to the T−22 minute mark, and after the problem was resolved, resumed from there. There would have been plenty of time for people who had given up and gone to bed to be awakened when the countdown was resumed and observe the launch.

On p. 214, we're told the Doppler effect of the ship's velocity “caused the stars around and in front of the Galactique to redshift”. In fact, the stars in front of the ship would be blueshifted, while those behind it would be redshifted.
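The sign of the shift is easy to verify with the relativistic Doppler factor for wavelength. This is a quick sketch (not from the book), using the ship's 0.5 c cruise speed mentioned in the text:

```python
import math

def doppler_factor(beta):
    """Relativistic Doppler wavelength ratio, observed/emitted.
    beta > 0 means the source is receding (redshift);
    beta < 0 means it is approaching (blueshift)."""
    return math.sqrt((1 + beta) / (1 - beta))

# At 0.5 c, stars ahead of the ship (approaching) are blueshifted,
# while stars behind it (receding) are redshifted.
print(f"ahead:  wavelength ratio {doppler_factor(-0.5):.3f} (blueshift)")
print(f"behind: wavelength ratio {doppler_factor(+0.5):.3f} (redshift)")
```

A ratio below one means the observed light is shifted toward shorter, bluer wavelengths; above one, toward the red.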

On p. 230, the ship, en route, is struck by a particle of interstellar dust which is described as “not much larger than a piece of gravel”, which knocks out communications with the Earth. Let's assume it wasn't the size of a piece of gravel, but only that of a grain of sand, which is around 20 milligrams. The energy released in the collision with the grain of sand is 278 gigajoules, or 66 tons of TNT. The damage to the ship would have been catastrophic, not something readily repaired.

On the same page, “By the ship's internal chronometer, the repair job probably only took a few days, but time dilation made it seem much longer to observers back on Earth.” Nope—at half the speed of light, time dilation is only 15%. Three days' ship's time would be less than three and a half days on Earth.
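Both of these figures are straightforward to check. Here is a rough calculation in Python (my own sketch, not from the book), using the 20 milligram grain mass and 0.5 c speed quoted above:

```python
import math

C = 299_792_458.0       # speed of light, m/s
TNT_TON = 4.184e9       # joules per ton of TNT

def lorentz_gamma(beta):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

gamma = lorentz_gamma(0.5)

# Relativistic kinetic energy of a 20 mg grain of sand at 0.5 c
mass = 20e-6            # kg
energy = (gamma - 1.0) * mass * C ** 2
print(f"Impact energy: {energy / 1e9:.0f} GJ, "
      f"about {energy / TNT_TON:.0f} tons of TNT")

# Time dilation: three days of ship time as seen from Earth
print(f"3 ship days = {3 * gamma:.2f} Earth days "
      f"(dilation {100 * (gamma - 1):.0f}%)")
```

Both results match the figures cited above: about 278 gigajoules (66 tons of TNT) for the impact, and a time dilation of roughly 15% at half the speed of light.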

On p. 265, “the DNA of its organic molecules was left-handed, which was crucial to the future habitability…”. What's important isn't the handedness of DNA, but rather the chirality of the organic molecules used in cells. The chirality of DNA is many levels above this fundamental property of biochemistry and, in fact, the DNA helix of terrestrial organisms is right-handed. (The chirality of DNA actually depends upon the nucleotide sequence, and there is a form, called Z-DNA, in which the helix is left-handed.)

Spoilers end here.  

This is an inspiring and very human story, with realistic and flawed characters, venal politicians, unanticipated adversities, and a future very different than envisioned by many tales of the great human expansion, even those by the legendary Nathan Arkwright. It is an optimistic tale of the human future, grounded in the achievements of individuals who build it, step by step, in the unbounded vision of the Golden Age of science fiction. It is ours to make reality.

Here is a podcast interview with the author by James Pethokoukis.


Holt, George, Jr. The B-58 Blunder. Randolph, VT: George Holt, 2015. ISBN 978-0-692-47881-3.
The B-58 Hustler was a breakthrough aircraft. The first generation of U.S. Air Force jet-powered bombers—the B-47 medium and B-52 heavy bombers—were revolutionary for their time, but were becoming increasingly vulnerable to high-performance interceptor aircraft and anti-aircraft missiles on the deep penetration bombing missions within the communist bloc for which they were intended. In the 1950s, it was believed the best way to reduce the threat was to fly fast and at high altitude, with a small aircraft that would be more difficult to detect with radar.

Preliminary studies of a next generation bomber began in 1949, and in 1952 Convair was selected to develop a prototype of what would become the B-58. Using a delta wing and four turbojet engines, the aircraft could cruise at up to twice the speed of sound (Mach 2, 2450 km/h) with a service ceiling of 19.3 km. With a small radar cross-section compared to the enormous B-52 (although still large compared to present-day stealth designs), the idea was that flying so fast and at high altitude, by the time an enemy radar site detected the B-58, it would be too late to scramble an interceptor to attack it. Contemporary anti-aircraft missiles lacked the capability to down targets at its altitude and speed.

The first flight of a prototype was in November 1956, and after a protracted development and test program, plagued by problems due to its radical design, the bomber entered squadron service in March of 1960. Rising costs caused the number purchased to be scaled back to just 116 (by comparison, 2,032 B-47s and 744 B-52s were built), deployed in two Strategic Air Command (SAC) bomber wings.

The B-58 was built to deliver nuclear bombs. Originally, it carried one B53 nine megaton weapon mounted below the fuselage. Subsequently, the ability to carry four B43 or B61 bombs on hardpoints beneath the wings was added. The B43 and B61 were variable yield weapons, with the B43 providing yields from 70 kilotons to 1 megaton and the B61 300 tons to 340 kilotons. The B-58 was not intended to carry conventional (non-nuclear, high explosive) bombs, and although some studies were done of conventional missions, its limited bomb load would have made it uncompetitive with other aircraft. Defensive weaponry was a single 20 mm radar-guided cannon in the tail. This was a last-ditch option: the B-58 was intended to outrun attackers, not fight them off. The crew of three consisted of a pilot, bombardier/navigator, and a defensive systems operator (responsible for electronic countermeasures [jamming] and the tail gun), each in their own cockpit with an ejection capsule. The navigation and bombing system included an inertial navigation platform with a star tracker for correction, a Doppler radar, and a search radar. The nuclear weapon pod beneath the fuselage could be replaced with a pod for photo reconnaissance. Other pods were considered, but never developed.

The B-58 was not easy to fly. Its delta wing required high takeoff and landing speeds, and a steep angle of attack (nose-up attitude), but if the pilot allowed the nose to rise too high, the aircraft would pitch up and spin. Loss of an engine, particularly one of the outboard engines, was, as they say, a very dynamic event, requiring instant response to counter the resulting yaw. During its operational history, a total of 26 B-58s were lost in accidents: 22.4% of the fleet.

During its ten years in service, no operational bomber equalled or surpassed the performance of the B-58. It set nineteen speed records, some of which still stand today, and won prestigious awards for its achievements. It was a breakthrough, but ultimately a dead end: no subsequent operational bomber has exceeded its performance in speed and altitude, but that's because speed and altitude were judged insufficient to accomplish the mission. With the introduction of supersonic interceptors and high-performance anti-aircraft missiles by the Soviet Union, the B-58 was determined to be vulnerable in its original supersonic, high-altitude mission profile. Crews were retrained to fly penetration missions at near-supersonic speeds and very low altitude, making it difficult for enemy radar to acquire and track the bomber. Although it was not equipped with terrain-following radar like the B-52, an accurate radar altimeter allowed crews to perform these missions. The large, rigid delta wing made the B-58 relatively immune to turbulence at low altitudes. Still, abandoning the supersonic attack profile meant that many of the capabilities which made the B-58 so complicated and expensive to operate and maintain were wasted.

This book is the story of the decision to retire the B-58, told by a crew member and Pentagon staffer who strongly dissented and argues that the B-58 should have remained in service much longer. George “Sonny” Holt, Jr. served for thirty-one years in the U.S. Air Force, retiring with the rank of colonel. For three years he was a bombardier/navigator on a B-58 crew and later, in the Plans Division at the Pentagon, observed the process which led to the retirement of the bomber close-up, doing his best to prevent it. He would disagree with many of the comments about the disadvantages of the aircraft mentioned in previous paragraphs, and addresses them in detail. In his view, the retirement of the B-58 in 1970, when it had been originally envisioned as remaining in the fleet until the mid-1970s, was part of a deal by SAC, which offered the retirement of all of the B-58s in return for retaining four B-52 wings which were slated for retirement. He argues that SAC never really wanted to operate the B-58, and that they did not understand its unique capabilities. With such a small fleet, it did not figure large in their view of the bomber force (although with its large nuclear weapon load, it actually represented about half the yield of the bomber leg of the strategic triad).

He provides an insider's perspective on Pentagon politics, and how decisions are made at high levels, often without input from those actually operating the weapon systems. He disputes many of the claimed disadvantages of the B-58 and, in particular, argues that it performed superbly in the low-level penetration mission, something for which it was not designed.

What is not discussed is the competition posed to manned bombers of all kinds in the nuclear mission by the Minuteman missile, which began to be deployed in 1962. By June 1965, 800 missiles were on alert, each with a 1.2 megaton W56 warhead. Solid-fueled missiles like the Minuteman require little maintenance and are ready to launch immediately at any time. Unlike bombers, where one worries about the development of interceptor aircraft and surface to air missiles, no defense against a mass missile attack existed or was expected to be developed in the foreseeable future. A missile in a silo required only a small crew of launch and maintenance personnel, as opposed to the bomber which had flight crews, mechanics, a spare parts logistics infrastructure, and had to be supported by refueling tankers with their own overhead. From the standpoint of cost-effectiveness, a word very much in use in the 1960s Pentagon, the missiles, which were already deployed, were dramatically better than any bomber, and especially the most expensive one in the inventory. The bomber generals in SAC were able to save the B-52, and were willing to sacrifice the B-58 in order to do so.

The book is self-published by the author and is sorely in need of the attention of a copy editor. There are numerous spelling and grammatical errors, and nouns are capitalised in the middle of sentences for no apparent reason. There are abundant black and white illustrations from Air Force files.


Gott, J. Richard. The Cosmic Web. Princeton: Princeton University Press, 2016. ISBN 978-0-691-15726-9.
Some works of popular science, trying to impress the reader with the scale of the universe and the insignificance of humans on the cosmic scale, argue that there's nothing special about our place in the universe: “an ordinary planet orbiting an ordinary star, in a typical orbit within an ordinary galaxy”, or something like that. But this is wrong! Surfaces of planets make up a vanishingly small fraction of the volume of the universe, and habitable planets, where beings like ourselves are neither frozen nor fried by extremes of temperature, nor suffocated or poisoned by a toxic atmosphere, are rarer still. The Sun is far from an ordinary star: it is brighter than 85% of the stars in the galaxy, and only 7.8% of stars in the Milky Way share its spectral class. Fully 76% of stars are dim red dwarves, the heavens' own 25 watt bulbs.

What does a typical place in the universe look like? What would you see if you were there? Well, first of all, you'd need a space suit and air supply, since the universe is mostly empty. And you'd see nothing. Most of the volume of the universe consists of great voids with few galaxies. If you were at a typical place in the universe, you'd be in one of these voids, probably far enough from the nearest galaxy that it wouldn't be visible to the unaided eye. There would be no stars in the sky, since stars are only formed within galaxies. There would only be darkness. Now look out the window: you are in a pretty special place after all.

One of the great intellectual adventures of the last century is learning our place in the universe and coming to understand its large scale structure. This book, by an astrophysicist who has played an important role in discovering that structure, explains how we pieced together the evidence and came to learn the details of the universe we inhabit. It provides an insider's look at how astronomers tease insight out of the messy and often confusing data obtained from observation.

It's remarkable not just how much we've learned, but how recently we've come to know it. At the start of the 20th century, most astronomers believed the solar system was part of a disc of stars which we see as the Milky Way. In 1610, Galileo's telescope revealed that the Milky Way was made up of a multitude of faint stars, and since the galaxy makes a band all around the sky, that the Sun must be within it. In 1918, by observing variable stars in globular clusters which orbit the Milky Way, Harlow Shapley was able to measure the size of the galaxy, which proved much larger than previously estimated, and determine that the Sun was about half way from the centre of the galaxy to its edge. Still, the universe was the galaxy.

There remained the mystery of the “spiral nebulæ”. These faint smudges of light had been revealed by photographic time exposures through large telescopes to be discs, some with prominent spiral arms, viewed from different angles. Some astronomers believed them to be gas clouds within the galaxy, perhaps other solar systems in the process of formation, while others argued they were galaxies like the Milky Way, far distant in the universe. In 1920 a great debate pitted the two views against one another; it ended inconclusively, as insufficient evidence existed to decide the matter.

That evidence would not be long in coming. Shortly thereafter, using the new 100 inch telescope on Mount Wilson in California, Edwin Hubble was able to photograph the Andromeda Nebula and resolve it into individual stars. Just as Galileo had done three centuries earlier for the Milky Way, Hubble's photographs proved Andromeda was not a gas cloud, but a galaxy composed of a multitude of stars. Further, Hubble was able to identify variable stars which allowed him to estimate its distance: due to details about the stars which were not understood at the time, he underestimated the distance by about a factor of two, but it was clear the galaxy was far beyond the Milky Way. The distances to other nearby galaxies were soon measured.

In one leap, the scale of the universe had become breathtakingly larger. Instead of one galaxy comprising the universe, the Milky Way was just one of a multitude of galaxies scattered around an enormous void. When astronomers observed the spectra of these galaxies, they noticed something odd: spectral lines from stars in most galaxies were shifted toward the red end of the spectrum compared to those observed on Earth. This was interpreted as a Doppler shift due to the galaxy's moving away from the Milky Way. Between 1929 and 1931, Edwin Hubble measured the distances and redshifts of a number of galaxies and discovered there was a linear relationship between the two. A galaxy twice as distant as another would be receding at twice the velocity. The universe was expanding, and every galaxy (except those sufficiently close to be gravitationally bound) was receding from every other galaxy.
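The relation Hubble found is simply linear: recession velocity v = H₀ × d. As a minimal illustration (the 70 km/s per megaparsec figure is the approximate modern value, an assumption on my part; Hubble's own 1929 estimate was several times larger):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (modern approximate value)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance."""
    return H0 * distance_mpc

# A galaxy twice as distant recedes twice as fast:
for d in (10, 100, 1000):
    print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")
```

Inverting the relation is what lets astronomers use a measured redshift as a distance estimate, the basis of the redshift surveys described below.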

The discovery of the redshift-distance relationship provided astronomers a way to chart the cosmos in three dimensions. Plotting the position of a galaxy on the sky and measuring its distance via redshift allowed building up a model of how galaxies were distributed in the universe. Were they randomly scattered, or would patterns emerge, suggesting larger-scale structure?

Galaxies had been observed to cluster: the nearest cluster, in the constellation Virgo, is made up of at least 1300 galaxies, and is now known to be part of a larger supercluster of which the Milky Way is an outlying member. It was not until the 1970s and 1980s that large-scale redshift surveys allowed plotting the positions of galaxies in the universe, initially in thin slices, and eventually in three dimensions. What was seen was striking. Galaxies were not sprinkled at random through the universe, but seemed to form filaments and walls, with great voids containing few or no galaxies. How did this come to be?

In parallel with this patient observational work, theorists were working out the history of the early universe based upon increasingly precise observations of the cosmic microwave background radiation, which provides a glimpse of the universe just 380,000 years after the Big Bang. This ushered in the era of precision cosmology, where the age and scale of the universe were determined with great accuracy, and the tiny fluctuations in temperature of the early universe were mapped in detail. This led to a picture of the universe very different from that imagined by astronomers over the centuries. Ordinary matter: stars, planets, gas clouds, and you and me—everything we observe in the heavens and the Earth—makes up less than 5% of the mass-energy of the universe. Dark matter, which interacts with ordinary matter only through gravitation, makes up 26.8% of the universe. It can be detected through its gravitational effects on the motion of stars and galaxies, but at present we don't have any idea what it's composed of. (It would be more accurate to call it “transparent matter” since it does not interact with light, but “dark matter” is the name we're stuck with.) The balance of the universe, 68.3%, is dark energy, a form of energy filling empty space and causing the expansion of the universe to accelerate. We have no idea at all about the nature of dark energy. These three components (ordinary matter, dark matter, and dark energy) add up to the critical density, giving the universe a flat geometry. It is humbling to contemplate the fact that everything we've learned in all of the sciences is about matter which makes up less than 5% of the universe: the other 95% is invisible and we don't know anything about it (although there are abundant guesses or, if you prefer, hypotheses).
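The claim of flatness can be made concrete: cosmologists express each component as a fraction Ω of the critical density, and a flat universe is one in which the fractions sum to 1. A quick check with the figures quoted above (taking “less than 5%” as 4.9%, an assumed rounding on my part):

```python
# Density parameters (fractions of the critical density)
omega_ordinary = 0.049      # ordinary (baryonic) matter, "less than 5%"
omega_dark_matter = 0.268   # dark matter
omega_dark_energy = 0.683   # dark energy

total = omega_ordinary + omega_dark_matter + omega_dark_energy
print(f"Total density parameter: {total:.3f}")  # approximately 1, hence flat
```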

This may seem like a flight of fancy, or a case of theorists making up invisible things to explain away observations they can't otherwise interpret. But in fact, dark matter and dark energy, originally inferred from astronomical observations, make predictions about the properties of the cosmic background radiation, and these predictions have been confirmed with increasingly high precision by successive space-based observations of the microwave sky. These observations are consistent with a period of cosmological inflation in which a tiny portion of the universe expanded to encompass the entire visible universe today. Inflation magnified tiny quantum fluctuations of the density of the universe to a scale where they could serve as seeds for the formation of structures in the present-day universe. Regions with greater than average density would begin to collapse inward due to the gravitational attraction of their contents, while those with less than average density would become voids as material within them fell into adjacent regions of higher density.

Dark matter, being more than five times as abundant as ordinary matter, would take the lead in this process of gravitational collapse, and ordinary matter would follow, concentrating in denser regions and eventually forming stars and galaxies. The galaxies formed would associate into gravitationally bound clusters and eventually superclusters, forming structure at larger scales. But what does the universe look like at the largest scale? Are galaxies distributed at random; do they clump together like meatballs in a soup; or do voids occur within a sea of galaxies like the holes in Swiss cheese? The answer is, surprisingly, none of the above, and the author explains the research, in which he has been a key participant, that discovered the large-scale structure of the universe.

As increasingly more comprehensive redshift surveys of galaxies were made, what appeared was a network of filaments which connected to one another, forming extended structures. Between filaments were voids containing few galaxies. Some of these structures, such as the Sloan Great Wall, at 1.38 billion light years in length, are 1/10 the radius of the observable universe. Galaxies are found along filaments, and where filaments meet, rich clusters and superclusters of galaxies are observed. At this large scale, where galaxies are represented by single dots, the universe resembles a neural network like the human brain.

As ever more extensive observations mapped the three-dimensional structure of the universe we inhabit, progress in computing allowed running increasingly detailed simulations of the evolution of structure in models of the universe. Although the implementation of these simulations is difficult and complicated, they are conceptually simple. You start with a region of space, populate it with particles representing ordinary and dark matter in a sea of dark energy with random positions and density variations corresponding to those observed in the cosmic background radiation, then let the simulation run, computing the gravitational attraction of each particle on the others and tracking their motion under the influence of gravity. In 2005, Volker Springel and the Virgo Consortium ran the Millennium Simulation, which started from the best estimate of the initial conditions of the universe known at the time and tracked the motion of ten billion particles of ordinary and dark matter in a cube two billion light years on a side. As the simulation clock ran, the matter contracted into filaments surrounding voids, with the filaments joined at nodes rich in galaxies. The images produced by the simulation and the statistics calculated were strikingly similar to those observed in the real universe. The behaviour of this and other simulations increases confidence in the existence of dark matter and dark energy; if you leave them out of the simulation, you get results which don't look anything like the universe we inhabit.
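The particle-pushing loop described above can be sketched in a few dozen lines. What follows is a hypothetical toy, not the Millennium code: a direct-summation integrator with an assumed softening length and time step, in units where G = 1, whereas production simulations use tree or mesh methods to avoid the O(n²) force sum over billions of particles.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, soft=0.05):
    """Advance an N-body system one kick-drift-kick leapfrog step (G = 1)."""
    def accel(p):
        # diff[i, j] is the vector from particle i to particle j
        diff = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        # Plummer softening keeps close encounters from blowing up
        dist3 = (np.sum(diff**2, axis=-1) + soft**2) ** 1.5
        return np.sum(mass[np.newaxis, :, np.newaxis] * diff /
                      dist3[:, :, np.newaxis], axis=1)
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel

# Toy "initial conditions": cold, randomly placed particles, which
# collapse into clumps under their mutual gravity as the clock runs.
rng = np.random.default_rng(42)
n = 64
pos = rng.uniform(0.0, 1.0, size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
for _ in range(200):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

Since gravity here is purely internal, the total momentum of the system is conserved, which is a handy sanity check on any such integrator.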

At the largest scale, the universe isn't made of galaxies sprinkled at random, nor meatballs of galaxy clusters in a sea of voids, nor a sea of galaxies with Swiss-cheese-like voids. Instead, it resembles a sponge of denser filaments and knots interpenetrated by less dense voids. Both the denser and less dense regions percolate: it is possible to travel from one edge of the universe to the other while staying entirely within the denser regions, or entirely within the less dense ones. (If the universe were arranged like a honeycomb, for example, with voids surrounded by denser walls, this would not be possible.) Nobody imagined this before the observational results started coming in, and now we've discovered that given the initial conditions of the universe after the Big Bang, the emergence of such a structure is inevitable.

All of the structure we observe in the universe has evolved from a remarkably uniform starting point in the 13.8 billion years since the Big Bang. What will the future hold? The final chapter explores various scenarios for the far future. Because these depend upon the properties of dark matter and dark energy, which we don't understand, they are necessarily speculative.

The book is written for the general reader, but at a level substantially more difficult than many works of science popularisation. The author, a scientist involved in this research for decades, does not shy away from using equations when they illustrate an argument better than words. Readers are assumed to be comfortable with scientific notation, units like light years and parsecs, and logarithmically scaled charts. For some reason, in the Kindle edition dozens of hyphenated phrases are run together without any punctuation.


June 2016

Portree, David S. F. Humans to Mars. Washington: National Aeronautics and Space Administration, 2001. NASA SP-2001-4521.
Ever since people began, in the years following World War II, to think seriously about the prospects for space travel, visionaries have looked beyond the near-term goals of flights into Earth orbit, space stations, and even journeys to the Moon, toward the red planet: Mars. Unlike Venus, eternally shrouded by clouds, or the other planets, which were too hot or too cold to sustain life as we know it, Mars, about half the size of the Earth, had an atmosphere, a day just a little longer than the Earth's, seasons, and polar caps which grew and shrank with the seasons. There were no oceans, but water from the polar caps might sustain life on the surface, and there were dark markings which appeared to change during the Martian year, which some interpreted as plant life that flourished as the polar caps melted in the spring and receded as they grew in the fall.

In an age when we have high-resolution imagery of the entire planet, obtained from orbiting spacecraft, telescopes orbiting Earth, and ground-based telescopes with advanced electronic instrumentation, it is often difficult to remember just how little was known about Mars in the 1950s, when people first started to think about how we might go there. Mars is the next planet outward from the Sun, so its distance and apparent size vary substantially depending upon its position relative to Earth in their respective orbits. About every two years, Earth “laps” Mars, and Mars is closest (“at opposition”) and most easily observed. But because the orbit of Mars is elliptical, its distance varies from one opposition to the next, and it is only every 15 to 17 years that a near-simultaneous opposition and perihelion render Mars most accessible to Earth-based observation.
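The roughly two-year rhythm of oppositions falls straight out of the two planets' orbital periods: the synodic period is the reciprocal of the difference of the orbital frequencies. A quick check, using the standard sidereal periods:

```python
# Sidereal orbital periods, in days
earth_period = 365.256
mars_period = 686.980

# Synodic period: average time between successive oppositions
synodic = 1.0 / (1.0 / earth_period - 1.0 / mars_period)
print(f"{synodic:.1f} days, about {synodic / 365.25:.1f} years")  # ≈ 780 days
```

The longer 15-to-17-year cycle of especially close, perihelic oppositions arises because this 780-day beat must also line up with Mars's passage through the perihelion of its elliptical orbit.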

But even at a close opposition, Mars is a challenging telescopic target. At a close encounter, such as the one which will occur in the summer of 2018, Mars has an apparent diameter of only around 25 arc seconds. By comparison, the full Moon is about half a degree, or 1800 arc seconds: 72 times larger than Mars. To visual observers, even at a favourable opposition, Mars is a difficult object. Before the advent of electronic sensors in the 1980s, it was even more trying to photograph. Existing photographic film and plates were sufficiently insensitive that long exposures, measured in seconds, were required, and even from the best observing sites, the turbulence in the Earth's atmosphere smeared out details, leaving only the largest features recognisable. Visual observers were able to glimpse more detail in transient moments of still air, but had to rely upon their memory to sketch them. And the human eye is subject to optical illusions, seeing patterns where none exist. Were the extended linear features called “canals” real? Some observers saw and sketched them in great detail, while others saw nothing. Photography could not resolve the question.
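The apparent sizes quoted here follow from simple small-angle arithmetic. A sketch, taking standard diameters for Mars and the Moon and a close-approach distance of about 57.6 million km (the exact figures vary slightly with the date chosen):

```python
import math

ARCSEC_PER_RADIAN = 180.0 * 3600.0 / math.pi  # ≈ 206265

def apparent_diameter_arcsec(diameter_km, distance_km):
    """Small-angle apparent diameter in arc seconds."""
    return diameter_km / distance_km * ARCSEC_PER_RADIAN

mars = apparent_diameter_arcsec(6779.0, 57.6e6)    # Mars at a very close opposition
moon = apparent_diameter_arcsec(3474.0, 384400.0)  # Moon at its mean distance
print(f'Mars {mars:.0f}", Moon {moon:.0f}", ratio {moon / mars:.0f}')
```

Even at its very best, then, Mars presents a disc dozens of times smaller than the full Moon, which is why atmospheric turbulence so easily smeared away its detail.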

Further, the physical properties of the planet were largely unknown. If you're contemplating a mission to land on Mars, it's essential to know the composition and density of its atmosphere, the temperatures expected at potential landing sites, and the terrain which a lander would encounter. None of these were known much beyond the level of educated guesses, which turned out to be grossly wrong once spacecraft probe data started to come in.

But ignorance of the destination didn't stop people from planning, or at least dreaming. In 1947–48, Wernher von Braun, then working with the U.S. Army at the White Sands Missile Range in New Mexico, wrote a novel called The Mars Project based upon a hypothetical Mars mission. A technical appendix presented detailed designs of the spacecraft and mission. While von Braun's talent as an engineer was legendary, his prowess as a novelist was less formidable; the novel never saw print, but in 1952 the appendix was published by itself.

One thing of which von Braun was never accused was thinking small, and in this first serious attempt to plan a Mars mission, he envisioned something more like an armada than the lightweight spacecraft we design today. At a time when the largest operational rocket, the V-2, had a payload of just one tonne, which it could throw no farther than 320 km on a suborbital trajectory, von Braun's Mars fleet would consist of ten ships, each with a mass of 4,000 tons, and a total crew of seventy. The Mars ships would be assembled in orbit from parts launched on 950 flights of reusable three-stage ferry rockets. To launch all of the components of the Mars fleet and the fuel they would require would burn a total of 5.32 million tons of propellant in the ferry ships. Note that when von Braun proposed this, nobody had ever flown even a two-stage rocket, and it would be ten years before the first unmanned Earth satellite was launched.

Von Braun later fleshed out his mission plans for an illustrated article in Collier's magazine as part of their series on the future of space flight. Now he envisioned assembling the Mars ships at the toroidal space station in Earth orbit which had figured in earlier installments of the series. In 1956, he published a book co-authored with Willy Ley, The Exploration of Mars, in which he envisioned a lean and mean expedition with just two ships and a crew of twelve, which would require “only” four hundred launches from Earth to assemble, provision, and fuel.

Not only was little understood about the properties of the destination, nothing at all was known about what human crews would experience in space, either in Earth orbit or en route to Mars and back. Could they even function in weightlessness? Would they be zapped by cosmic rays or solar flares? Were meteors a threat to their craft and, if so, how serious a threat? With the dawn of the space age after the launch of Sputnik in October, 1957, these data started to trickle in, and they began to inform plans for Mars missions at NASA and elsewhere.

Radiation was much more of a problem than had been anticipated. The discovery of the Van Allen radiation belts around the Earth and measurement of radiation from solar flares and galactic cosmic rays indicated that short voyages were preferable to long ones, and that crews would need shielding from routine radiation and a “storm shelter” during large solar flares. This motivated research into nuclear thermal and ion propulsion systems, which would not only reduce the transit time to and from Mars, but also, being much more fuel efficient than chemical rockets, dramatically reduce the mass of the ships compared to von Braun's flotilla.

Ernst Stuhlinger had been studying electric (ion) propulsion since 1953, and developed a design for constant-thrust, ion powered ships. These were featured in Walt Disney's 1957 program, “Mars and Beyond”, which aired just two months after the launch of Sputnik. This design was further developed by NASA in a 1962 mission design which envisioned five ships with nuclear-electric propulsion, departing for Mars in the early 1980s with a crew of fifteen and cargo and crew landers permitting a one month stay on the red planet. The ships would rotate to provide artificial gravity for the crew on the trip to and from Mars.

In 1965, the arrival of the Mariner 4 spacecraft seemingly drove a stake through the heart of the romantic view of Mars which had persisted since Percival Lowell. Flying by the southern hemisphere of the planet as close as 9600 km, it returned 21 fuzzy pictures which seemed to show Mars as a dead, cratered world resembling the Moon far more than the Earth. There was no evidence of water, nor of life. The atmosphere was determined to be only 1% as dense as that of Earth, not the 10% estimated previously, and composed mostly of carbon dioxide, not nitrogen. With such a thin and hostile atmosphere, there seemed no prospects for advanced life (anything more complicated than bacteria), and all of the ideas for winged Mars landers went away: the martian atmosphere proved just dense enough to pose a problem when slowing down on arrival, but not enough to allow a soft landing with wings or a parachute. The probe had detected more radiation than expected on its way to Mars, indicating crews would need more protection than anticipated, and it showed that robotic probes could do science at Mars without the need to put a crew at risk. I remember staying up and watching these pictures come in (the local television station didn't carry the broadcast, so I watched even more static-filled pictures than the original from a distant station). I can recall thinking, “Well, that's it then. Mars is dead. We'll probably never go there.”

Mars mission planning went on the back burner as the Apollo Moon program went into high gear in the 1960s. Apollo was conceived not as a single-destination project to land on the Moon, but to create the infrastructure for human expansion from the Earth into the solar system, including development of nuclear propulsion and investigation of planetary missions using Apollo derived hardware, mostly for flyby missions. In January of 1968, Boeing completed a study of a Mars landing mission, which would have required six launches of an uprated Saturn V, sending a crew of six to Mars in a 140 ton ship for a landing and a brief “flags and footprints” stay on Mars. By then, Apollo funding (even before the first lunar orbit and landing) was winding down, and it was clear there was no budget nor political support for such grandiose plans.

After the success of Apollo 11, NASA retrenched, reducing its ambition to a Space Shuttle. An ambitious Space Task Group plan for using the Shuttle to launch a Mars mission in the early 1980s was developed but, in an era of shrinking budgets, with additional fly-by missions returning images of a Moon-like Mars, it went nowhere. The Saturn V and the nuclear rocket which could have taken crews to Mars had been cancelled. It appeared the U.S. would remain stuck going around in circles in low Earth orbit. And so it remains today.

While planning for manned Mars missions stagnated, the 1970s dramatically changed the view of Mars. In 1971, Mariner 9 went into orbit around Mars and returned 7329 sharp images which showed the planet to be a complex world, with very different northern and southern hemispheres, a grand canyon almost as long as the United States, and features which suggested the existence, at least in the past, of liquid water. In 1976, two Viking orbiters and landers arrived at Mars, providing detailed imagery of the planet and ground truth. The landers were equipped with instruments intended to detect evidence of life, and they reported positive results, but later analyses attributed this to unusual soil chemistry. This conclusion is still disputed, including by the principal investigator for the experiment, but in any case the Viking results revealed a much more complicated and interesting planet than had been imagined from earlier missions. I had been working as a consultant at the Jet Propulsion Laboratory during the first Viking landing, helping to keep mission critical mainframe computers running, and I had the privilege of watching the first images from the surface of Mars arrive. I revised my view from 1965: now Mars was a place which didn't look much different from the high desert of California, where you could imagine going to explore and live some day. More importantly, detailed information about the atmosphere and surface of Mars was now in hand, so future missions could be planned accordingly.

And then…nothing. It was a time of malaise and retreat. After the last Viking landing in September of 1975, it would be more than twenty-one years until Mars Global Surveyor would orbit Mars and Mars Pathfinder would land there in 1996. And yet, with detailed information about Mars in hand, the intervening years were a time of great ferment in manned Mars mission planning, when the foundation of what may be the next great expansion of the human presence into the solar system was laid down.

President George H. W. Bush announced the Space Exploration Initiative on July 20th, 1989, the 20th anniversary of the Apollo 11 landing on the Moon. This was, in retrospect, the last gasp of the “Battlestar” concepts of missions to Mars. It became a bucket into which every NASA centre and national laboratory could throw their wish list of new heavy launchers, a Moon base, nuclear propulsion, and space habitats, for a total price tag on the order of half a trillion dollars. It died, quietly, in Congress.

But the focus was moving from leviathan bureaucracies of the coercive state to innovators in the private sector. In the 1990s, spurred by work of members of the “Mars Underground”, including Robert Zubrin and David Baker, the “Mars Direct” mission concept emerged. Earlier Mars missions assumed that all resources needed for the mission would have to be launched from Earth. But Zubrin and Baker realised that the martian atmosphere, based upon what we had learned from the Viking missions, contained everything needed to provide breathable air for the stay on Mars and rocket fuel for the return mission (with the addition of lightweight hydrogen brought from Earth). This turned the weight budget of a Mars mission upside-down. Now, an Earth return vehicle could be launched to Mars with empty propellant tanks. Upon arrival, it would produce fuel for the return mission and oxygen for the crew. After it was confirmed to have produced the necessary consumables, the crew of four would be sent in the next launch window (around 26 months later) and land near the return vehicle. They would use its oxygen while on the planet, and its fuel to return to Earth at the end of its mission. There would be no need for a space station in Earth orbit, nor orbital assembly, nor for nuclear propulsion: the whole mission could be done with hardware derived from that already in existence.

This would get humans to Mars, but it ran into institutional barriers at NASA, since many of its pet projects, including the International Space Station and Space Shuttle, proved utterly unnecessary to getting to Mars. NASA responded with the Mars Design Reference Mission, published in various revisions between 1993 and 2014, which was largely based upon Mars Direct, but up-sized to a larger crew of six, and incorporating a new Earth Return Vehicle to bring the crew back to Earth in less austere circumstances than envisioned in Mars Direct.

NASA claim they are on a #JourneyToMars. They must be: there's a Twitter hashtag! But of course to anybody who reads this sad chronicle of government planning for planetary exploration over half a century, it's obvious they're on no such thing. If they were truly on a journey to Mars, they would be studying and building the infrastructure to get there using technologies such as propellant depots and in-orbit assembly which would get the missions done economically using resources already at hand. Instead, it's all about building a huge rocket which will cost so much it will fly every other year, at best, employing a standing army which will not only be costly but so infrequently used in launch operations they won't have the experience to operate the system safely, and whose costs will vacuum out the funds which might have been used to build payloads which would extend the human presence into space.

The lesson of this is that when the first humans set foot upon Mars, they will not be civil servants funded by taxes paid by cab drivers and hairdressers, but employees (and/or shareholders) of a private venture that sees Mars as a profit centre which, as its potential is developed, can enrich them beyond the dreams of avarice and provide a backup for human civilisation. I trust that when the history of that great event is written, it will not be as exasperating to read as this chronicle of the dead-end of government space programs making futile efforts to get to Mars.

This is an excellent history of the first half century of manned Mars mission planning. Although many proposed missions are omitted or discussed only briefly, the evolution of mission plans with knowledge of the destination and development of spaceflight hardware is described in detail, culminating with current NASA thinking about how best to accomplish such a mission. This book was published in 2001, but since existing NASA concepts for manned Mars missions are still largely based upon the Design Reference Mission described here, little has changed in the intervening fifteen years. In September of 2016, SpaceX plans to reveal its concepts for manned Mars missions, so we'll have to wait for the details to see how they envision doing it.

As a NASA publication, this book is in the public domain. The book can be downloaded for free as a PDF file from the NASA History Division. There is a paperback republication of this book available at Amazon, but at an outrageous price for such a short public domain work. If you require a paper copy, it's probably cheaper to download the PDF and print your own.


Adams, Scott. The Religion War. Kansas City: Andrews McMeel, 2004. ISBN 978-0-7407-4788-5.
This is a sequel to the author's 2001 novel God's Debris. In that work, which I considered profound and which made my hair stand on end on several occasions, a package delivery man happens to encounter the smartest man in the world and finds his own view of the universe and his place in it up-ended, and his destiny to be something he'd never imagined. I believe that it's only because Scott Adams is also the creator of Dilbert that he is not appreciated as one of the most original and insightful thinkers of our time. His blog has been consistently right about the current political season in the U.S. while all of the double-domed mainstream pundits have fallen on their faces.

Forty years have passed since the events in God's Debris. The erstwhile delivery man has become the Avatar, thinking at a higher level and perceiving patterns which elude his contemporaries. These talents have made him one of the wealthiest people on Earth, but he remains unknown, dresses shabbily, and wears a red plaid blanket around his shoulders. The world has changed. A leader, al-Zee, arising in the Palestinian territories, has achieved his goal of eliminating Israel and consolidated the Islamic lands into a new Great Caliphate. Sitting on a large fraction of the world's oil supply, he funds “lone wolf”, modest-scale terror attacks throughout the Dar al-Harb, always deniable and never so large as to invite reprisal. With the advent of model airplanes guided by satellite navigation, able to deliver explosives to a target with precision over a long range, nobody can feel immune from the reach of the Caliphate.

In 2040, General Horatio Cruz came to power as Secretary of War of the Christian Alliance, with all of the forces of NATO at his command. The political structures of the western nations remained in place, but they had delegated their defence to Cruz, rendering him effectively a dictator in the military sphere. Cruz was not a man given to compromise. Faced with an opponent he evaluated as two billion people willing to die in a final war of conquest, he viewed the coming conflict not as one of preserving territory or self-defence, but of extermination—of one side or the other. There were dark rumours that al-Zee had in place his own plan of retaliation, with sleeper cells and weapons of mass destruction ready should a frontal assault begin.

The Avatar sees the patterns emerging, and sets out to avert the approaching cataclysm. He knows that bad ideas can only be opposed by better ones, but bad ideas first must be subverted by sowing doubt among those in thrall to them. Using his preternatural powers of persuasion, he gains access to the principals of the conflict and begins his work. But that may not be enough.

There are two overwhelming forces in the world. One is chaos; the other is order. God—the original singular speck—is forming again. He's gathering together his bits—we call it gravity. And in the process he is becoming self-aware to defeat chaos, to defeat evil if you will, to battle the devil. But something has gone terribly wrong.

Sometimes, when your computer is in a loop, the only thing you can do is reboot it: forcefully get it out of the destructive loop back to a starting point from which it can resume making progress. But how do you reboot a global technological civilisation on the brink of war? The Avatar must find the reboot button as time is running out.

Thirty years later, a delivery man rings the door. An old man with a shabby blanket answers and invites him inside.

There are eight questions to ponder at the end which expand upon the shiver-up-your-spine themes raised in the novel. Bear in mind, when pondering how prophetic this novel is of current and near-future events, that it was published twelve years ago.


July 2016

Coppley, Jackson. Leaving Lisa. Seattle: CreateSpace, 2016. ISBN 978-1-5348-5971-5.
Jason Chamberlain had it all. By the time he was fifty, the company he had founded had prospered to the point that, when he sold out, he'd never have to work again in his life. He and Lisa, his wife and the love of his life, lived in a mansion in the suburbs of Washington, DC. Lisa continued to work as a research scientist at the National Institutes of Health (NIH), studying the psychology of grief, loss, and reconciliation. Their relationship with their grown daughter was strained, but whose isn't in these crazy times?

All of this ended in a moment when Lisa was killed in a car crash which Jason survived. He had lost his love, and blamed himself. His life was suddenly empty.

Some time after the funeral, he takes up an invitation to visit one of Lisa's colleagues at NIH, who explains to Jason that Lisa had been a participant in a study in which all of the accumulated digital archives of her life—writings, photos, videos, sound recordings—would be uploaded to a computer and, using machine learning algorithms, indexed and made accessible so that people could ask questions and have them answered, based upon the database, as Lisa would have, in her voice. The database is accessible from a device which resembles a smartphone, but requires network connectivity to the main computer for complicated queries.

Jason is initially repelled by the idea, but after some time returns to NIH and collects the device and begins to converse with it. Lisa doesn't just want to chat. She instructs Jason to embark upon a quest to spread her ashes in three places which were important to her and their lives together: Costa Rica, Vietnam, and Tuscany in Italy. The Lisa-box will accompany Jason on his travels and, in its own artificially intelligent way, share his experiences.

Jason embarks upon his voyages, rediscovering in depth what their life together meant to them, how other cultures deal with loss, grief, and healing, and that closing the book on one phase of his life may be opening another. Lisa is with him as these events begin to heal and equip him for what is to come. The last few pages will leave you moist eyed.

In 2005, Rudy Rucker published The Lifebox, the Seashell, and the Soul, in which he introduced the “lifebox” as the digital encoding of a person's life, able to answer questions from their viewpoint and life experiences as Lisa does here. When I read Rudy's manuscript, I thought the concept of a lifebox was pure fantasy, and I told him as much. Now, not only am I not so sure, but in fact I believe that something approximating a lifebox will be possible before the end of the decade I've come to refer to as the “Roaring Twenties”. This engrossing and moving novel is a human story of our near future (to paraphrase the title of another of the author's books) in which the memory of the departed may be more than photo albums and letters.

The Kindle edition is free to Kindle Unlimited subscribers. The author kindly allowed me to read this book in manuscript form.


Weightman, Gavin. The Frozen Water Trade. New York: Hyperion, [2003] 2004. ISBN 978-0-7868-8640-1.
In the summer of 1805, two brothers, Frederic and William Tudor, both living in the Boston area, came up with an idea for a new business which would surely make their fortune. Every winter, fresh water ponds in Massachusetts froze solid, often to a depth of a foot or more. Come spring, the ice would melt.

This cycle had repeated endlessly since before humans came to North America, unremarked upon by anybody. But the Tudor brothers, in the best spirit of Yankee ingenuity, looked upon the ice as an untapped and endlessly renewable natural resource. What if this commodity, considered worthless, could be cut from the ponds and rivers, stored in a way that would preserve it over the summer, and shipped to southern states and the West Indies, where plantation owners and prosperous city dwellers would pay a premium for this luxury in times of sweltering heat?

In an age when artificial refrigeration did not exist, that “what if” would have seemed so daunting as to deter most people from entertaining the notion for more than a moment. Indeed, the principles of thermodynamics, which underlie both the preservation of ice in warm climates and artificial refrigeration, would not be worked out until decades later. In 1805, Frederic Tudor started his “Ice House Diary” to record the progress of the venture, inscribing it on the cover, “He who gives back at the first repulse and without striking the second blow, despairs of success, has never been, is not, and never will be, a hero in love, war or business.” It was in this spirit that he carried on in the years to come, confronting a multitude of challenges unimagined at the outset.

First was the question of preserving the ice through the summer, while in transit, and upon arrival in the tropics until it was sold. Some farmers in New England already harvested ice from their ponds and stored it in ice houses, often built of stone and underground. This was sufficient to preserve a modest quantity of ice through the summer, but Frederic would need something on a much larger scale and less expensive for the trade he envisioned, and then there was the problem of keeping the ice from melting in transit. Whenever ice is kept in an environment with an ambient temperature above freezing, it will melt, but the rate at which it melts depends upon how it is stored. It is essential that the meltwater be drained away: if the ice is allowed to stand in it, melting is accelerated, because water conducts heat more readily than air. Melting ice absorbs its latent heat of fusion from its surroundings, but humid air condensing on the ice gives up its own latent heat of vaporisation, so it is imperative the ice house be well ventilated to allow moist air to escape. Insulation which slows the flow of heat from the outside helps to reduce the rate of melting, but care must be taken to prevent the insulation from becoming damp from the meltwater, as that would destroy its insulating properties.
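The latent heat of fusion is what makes ice such an effective coolant, and a back-of-the-envelope figure (using standard textbook values for water, not numbers from the book) shows why:

```python
LATENT_FUSION = 334e3  # J/kg absorbed melting ice at 0 °C
C_WATER = 4186.0       # J/(kg·K), specific heat of liquid water

ton = 1000.0  # kg
heat_to_melt = ton * LATENT_FUSION   # 3.34e8 J to melt a ton of ice
heat_to_warm = ton * C_WATER * 30.0  # warming the meltwater 0 °C → 30 °C
print(heat_to_melt / heat_to_warm)   # melting absorbs roughly 2.7x as much
```

A ton of ice thus soaks up more heat in simply melting than its meltwater would in being warmed to tropical air temperature, which is why every measure Tudor took aimed at slowing the melting rather than at any active cooling.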

Based upon what was understood about the preservation of ice at the time and his own experiments, Tudor designed an ice house for Havana, Cuba, one of the primary markets he was targeting, which would become the prototype for ice houses around the world. The structure was built of timber, with double walls, the cavity between the walls filled with insulation of sawdust and peat. The walls and roof kept the insulation dry, and the entire structure was elevated to allow meltwater to drain away. The roof was ventilated to allow warm air to escape rather than collect around the ice. Tightly packing blocks of uniform size and shape allowed the outer blocks of ice to shield those inside, so melting was primarily confined to blocks on the surface of the stored ice.

During shipping, ice was packed in the hold of ships, insulated by sawdust, and crews were charged with regularly pumping out meltwater, which could be used as an on-board source of fresh water or disposed of overboard. Sawdust was produced in great abundance by the sawmills of Maine, and was considered a waste product, often disposed of by dumping it in rivers. Frederic Tudor had invented a luxury trade whose product was available for the price of harvesting it, and protected in shipping by a material considered to be waste.

The economics of the ice business exploited an imbalance in Boston's shipping business. Massachusetts produced few products for export, so ships trading with the West Indies would often leave port with nearly empty holds, requiring rock ballast to keep the ship stable at sea. Carrying ice to the islands served as ballast, and was a cargo which could be sold upon arrival. After initial scepticism was overcome (would the ice all melt and sink the ship?), the ice trade outbound from Boston was an attractive proposition to ship owners.

In February 1806, the first cargo of ice sailed for the island of Martinique. The Boston Gazette reported the event as follows.

No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so ice had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought ice and used it to make ice cream, which was a sensation noted in the local newspaper.

The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, who was pursuing him for unpaid debts.

On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high but, as the design was refined, dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved.
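The melt-rate figures are easy to check with a back-of-the-envelope calculation. A minimal sketch in Python, assuming the short ton of 2,000 pounds (the text does not say which ton is meant):

```python
# Back-of-the-envelope check of the Havana ice house figures.
# Assumption (not stated in the text): "ton" is the short ton of 2,000 lb.

melt_rate_lb_per_hr = 18        # reported loss after design refinements
cargo_lb = 100 * 2000           # a cargo of 100 tons of ice

hours = cargo_lb / melt_rate_lb_per_hr
months = hours / 24 / 30.44     # 30.44 days: average month length

print(f"{hours:,.0f} hours = {months:.1f} months")
# → 11,111 hours = 15.2 months
```

With the long ton of 2,240 pounds the cargo would last about 17 months, so the "more than 15 months" claim holds either way.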

Regular shipments of ice to Cuba and Martinique began and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston.

With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry.

Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered, and in 1870 alone India imported 17,000 tons of ice.

While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished by depots along the rail lines.

In the 1880s the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, which was at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory.

Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality.

Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out.

When people speak of humans expanding from their home planet into the solar system and technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. In 1833, people in Calcutta were beating the heat with ice shipped halfway around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic.

I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.


Hirshfeld, Alan W. Parallax. New York: Dover, [2001] 2013. ISBN 978-0-486-49093-9.
“Eppur si muove.” As legend has it, these words were uttered (or muttered) by Galileo after being forced to recant his belief that the Earth revolves around the Sun: “And yet it moves.” The idea of a heliocentric model, as opposed to the Earth being at the center of the universe (geocentric model), was hardly new: Aristarchus of Samos had proposed it in the third century B.C., as a simplification of the prevailing view that the Earth was fixed and all other heavenly bodies revolved around it. This seemed to defy common sense: if the Earth rotated on its axis every day, why weren't there strong winds as the Earth's surface moved through the air? If you threw a rock straight up in the air, why did it come straight down rather than being displaced by the Earth's rotation while in flight? And if the Earth were offset from the center of the universe, why didn't we observe more stars when looking toward the center than away from it?

By Galileo's time, many of these objections had been refuted, in part by his own work on the laws of motion, but the fact remained that there was precisely zero observational evidence that the Earth orbited the Sun. This was to remain the case for more than a century after Galileo, and millennia after Aristarchus, a scientific quest which ultimately provided the first glimpse of the breathtaking scale of the universe.

Hold out your hand at arm's length in front of your face and extend your index finger upward. (No, really, do it.) Now observe the finger with your right eye, then your left eye in succession, each time closing the other. Notice how the finger seems to jump to the right and left as you alternate eyes? That's because your eyes are separated by what is called the interpupillary distance, which is on the order of 6 cm. Each eye sees objects from a different perspective, and nearby objects will shift with respect to distant objects when seen from different eyes. This effect is called parallax, and the brain uses it to reconstruct depth information for nearby objects. Interestingly, predator animals tend to have both eyes on the front of the face with overlapping visual fields to provide depth perception for use in stalking, while prey animals are more likely to have eyes on either side of their heads to allow them to monitor a wider field of view for potential threats: compare a cat and a horse.

Now, if the Earth really orbits the Sun every year, that provides a large baseline which should affect how we see objects in the sky. In particular, when we observe stars from points in the Earth's orbit six months apart, we should see them shift their positions in the sky, since we're viewing them from different locations, just as your finger appeared to shift when viewed from different eyes. And since the baseline is enormously larger (although in the times of Aristarchus and even Galileo, its absolute magnitude was not known), even distant objects should be observed to shift over the year. Further, nearby stars should shift more than distant stars, so remote stars could be used as a reference for measuring the apparent shift of those closest to the Sun. This was the concept of stellar parallax.

Unfortunately for advocates of the heliocentric model, nobody had been able to observe stellar parallax. From the time of Aristarchus to Galileo, careful observers of the sky found the positions of the stars as fixed in the sky as if they were painted on a distant crystal sphere as imagined by the ancients, with the Earth at the center. Proponents of the heliocentric model argued that the failure to observe parallax was simply due to the stars being much too remote. When you're observing a distant mountain range, you won't notice any difference when you look at it with your right and left eye: it's just too far away. Perhaps the parallax of stars was beyond our ability to observe, even with so long a baseline as the Earth's distance from the Sun. Or, as others argued, maybe it didn't move.

But, pioneered by Galileo himself, our ability to observe was about to take an enormous leap. Since antiquity, all of our measurements of the sky, regardless of how clever our tools, ultimately came down to the human eye. Galileo did not invent the telescope, but he improved what had been used as a “spyglass” for military applications into a powerful tool for exploring the sky. His telescopes, while crude and difficult to use, and having a field of view comparable to looking through a soda straw, revealed mountains and craters on the Moon, the phases of Venus (powerful evidence against the geocentric model), the satellites of Jupiter, and the curious shape of Saturn (his telescope lacked the resolution to identify its apparent “ears” as rings). He even observed Neptune in 1612, when it happened to be close to Jupiter, but he didn't interpret what he had seen as a new planet. Galileo never observed parallax; he never tried, but he suggested astronomers might concentrate on close pairs of stars, one bright and one dim, where, if all stars were of comparable brightness, one might be close and the other distant, from which parallax could be teased out from observation over a year. This was to inform the work of subsequent observers.

Now the challenge was not one of theory, but of instrumentation and observational technique. It was not to be a sprint, but a marathon. The list of those who sought to measure stellar parallax and failed (sometimes reporting success, only to have their results overturned by subsequent observations) reads like a “Who's Who” of observational astronomy in the telescopic era: Robert Hooke, James Bradley, and William Herschel all tried and failed to observe parallax. Bradley's observations revealed an annual shift in the position of stars, but it affected all stars, not just the nearest. This didn't make any sense unless the stars were all painted on a celestial sphere, and the shift didn't behave as expected from the Earth's motion around the Sun. It turned out to be due to the aberration of light resulting from the motion of the Earth around the Sun and the finite speed of light. It's like running in a rainstorm:

Raindrops keep fallin' in my face,
More and more as I pick up the pace…

Finally, here was proof that “it moves”: there would be no aberration in a geocentric universe. But by Bradley's time in the 1720s, only cranks and crackpots still believed in the geocentric model. The question was, instead, how distant are the stars? The parallax game remained afoot.
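The magnitude of Bradley's aberration is, for so small a ratio, essentially just the Earth's orbital speed divided by the speed of light. Here is a quick check in Python; the two constants are standard values, not figures from the text:

```python
import math

EARTH_ORBITAL_SPEED = 29.78e3    # m/s, mean orbital speed of the Earth
SPEED_OF_LIGHT = 299_792_458.0   # m/s

# Maximum aberration angle: atan(v/c), converted to arc seconds.
theta_arcsec = math.degrees(math.atan(EARTH_ORBITAL_SPEED / SPEED_OF_LIGHT)) * 3600
print(f"{theta_arcsec:.1f} arc seconds")
# → 20.5 arc seconds, the "constant of aberration" Bradley observed
```

Note that this shift is the same for every star, near or far, which is why Bradley's annual displacement could not be parallax.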

It was ultimately a question of instrumentation, but also one of luck. By the 19th century, there was abundant evidence that stars differed enormously in their intrinsic brightness. (We now know that the most luminous stars are more than a billion times more brilliant than the dimmest.) Thus, you couldn't conclude that the brightest stars were the nearest, as astronomers once guessed. Indeed, the distances of the four brightest stars as seen from Earth are, in light years, 8.6, 310, 4.4, and 37. Since observing the position of a star for parallax is a long-term and tedious project, bear in mind that the pioneers on the quest had no idea whether the stars they observed were near or far, and could only hope to be lucky in their choices.

It all came together in the 1830s. Using an instrument called a heliometer, which was essentially a refractor telescope with its lens cut in two with the ability to shift the halves and measure the offset, Friedrich Bessel was able to measure the parallax of the star 61 Cygni by comparison to an adjacent distant star. Shortly thereafter, Wilhelm Struve published the parallax of Vega, and then, just two months later, Thomas Henderson reported the parallax of Alpha Centauri, based upon measurements made earlier at the Cape of Good Hope. Finally, we knew the distances to the nearest stars (although those more distant remained a mystery), and just how empty the universe was.

Let's put some numbers on this, just to appreciate how great was the achievement of the pioneers of parallax. The parallax angle of the closest star system, Alpha Centauri, is 0.755 arc seconds. (The parallax angle is half the shift observed in the position of the star as the Earth orbits the Sun. We use half the shift so that the baseline of the parallax triangle is one astronomical unit, the mean distance from the Earth to the Sun, which simplifies the trigonometry.) An arc second is 1/3600 of a degree, and there are 360 degrees in a circle, so it's 1/1,296,000 of a full circle.

Now let's work out the distance to Alpha Centauri. We'll work in terms of astronomical units (au), the mean distance between the Earth and Sun. We have a right triangle where we know the distance from the Earth to the Sun and the parallax angle of 0.755 arc seconds. (To get a sense for how tiny an angle this is, it's comparable to the angle subtended by a US quarter dollar coin when viewed from a distance of 6.6 km.) We can compute the distance from the Earth to Alpha Centauri as:

1 au / tan(0.755 / 3600) = 273198 au = 4.32 light years

Parallax is used to define the parsec (pc), the distance at which a star would have a parallax angle of one arc second. A parsec is about 3.26 light years, so the distance to Alpha Centauri is 1.32 parsecs. Star Wars notwithstanding, the parsec, like the light year, is a unit of distance, not time.
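The arithmetic above is easy to reproduce. A minimal sketch in Python (the unit conversion is the standard value of astronomical units per light year, not a figure from the text):

```python
import math

AU_PER_LIGHT_YEAR = 63241.077    # standard conversion factor

def parallax_distance_au(parallax_arcsec):
    """Distance in au for an annual parallax angle, using a 1 au baseline."""
    return 1.0 / math.tan(math.radians(parallax_arcsec / 3600.0))

d_au = parallax_distance_au(0.755)          # Alpha Centauri
print(f"{d_au:,.0f} au = {d_au / AU_PER_LIGHT_YEAR:.2f} light years")
# → 273,198 au = 4.32 light years, matching the figure in the text

# For such tiny angles, the distance in parsecs is simply 1/parallax:
print(f"{1 / 0.755:.2f} pc")                # → 1.32 pc
```

The reciprocal relationship in the last line is why the parsec is such a convenient unit: a star with a parallax of 0.1 arc second is at 10 parsecs, with no trigonometry required.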

Progress in instrumentation has accelerated in recent decades. The Earth is a poor platform from which to make precision observations such as parallax. It's much better to go to space, where there are neither the wobbles of a planet nor its often murky atmosphere. The Hipparcos mission, launched in 1989, measured the parallaxes and proper motions of more than 118,000 stars, with lower resolution data for more than 2.5 million stars. The Gaia mission, launched in 2013 and still underway, has a goal of measuring the position, parallax, and proper motion of more than a billion stars.

It's been a long road, getting from there to here. It took more than 2,000 years from the time Aristarchus proposed the heliocentric solar system until we had direct observational evidence that eppur si muove. Within a few years, we will have in hand direct measurements of the distances to a billion stars. And, some day, we'll visit them.

I originally read this book in December 2003. It was a delight to revisit.


August 2016

Jenne, Mike. Blue Darker than Black. New York: Yucca Publishing, 2016. ISBN 978-1-63158-066-6.
This is the second novel in the series which began with Blue Gemini (April 2016). It continues the story of a covert U.S. Air Force manned space program in the late 1960s and early 1970s, using modified versions of NASA's two-man Gemini spacecraft and Titan II booster to secretly launch missions to rendezvous with, inspect, and, if necessary, destroy Soviet reconnaissance satellites and rumoured nuclear-armed orbital battle stations.

As the story begins in 1969, the crew who flew the first successful missions in the previous novel, Drew Carson and Scott Ourecky, are still the backbone of the program. Another crew is in training, but having difficulty coming up to the standard set by the proven flight crew. A time-critical mission puts Carson and Ourecky back into the capsule again, and they execute another flawless mission despite inter-service conflict between its Navy sponsor and the Air Force which executed it.

Meanwhile, the intrigue of the previous novel is playing out in the background. The Soviets know that something odd is going on at the innocuously named “Aerospace Support Project” at Wright-Patterson Air Force Base, and are cultivating sources to penetrate the project, while counter-intelligence is running down leads to try to thwart them. Soviet plans for the orbital battle station progress from fantastic conceptions to bending metal.

Another mission sends the crew back into space just as Ourecky's wife is expecting their firstborn. When it's time to come home, a malfunction puts at risk their chances of returning to Earth alive. A clever trick allows them to work around the difficulty and fire their retrorockets, but the delay diverts their landing point from the intended field in the U.S. to a secret contingency site in Haiti. Now the emergency landing team we met in Blue Gemini comes to the fore. With one of the most secret of U.S. programs dropping its spacecraft and crew, who are privy to all of its secrets, into one of the most primitive, corrupt, and authoritarian countries in the Western Hemisphere, the stakes could not be higher. It all falls on the shoulders of Matthew Henson, who has to coordinate resources to get the spacecraft and injured crew out, evading voodoo priests, the Tonton Macoutes, and the Haitian military. Henson is nothing if not resourceful, and Carson and Ourecky, the latter barely alive, make it back to their home base.

Meanwhile, work on the Soviet battle station progresses. High-stakes spycraft inside the USSR provides a clouded window on the program. Carson and Ourecky, once he recovers sufficiently, are sent on a “dog and pony show” to pitch their program at the top secret level to Air Force base commanders around the country. Finally, they return to flight status and continue to fly missions against Soviet assets.

But Blue Gemini is not the only above top secret manned space program in the U.S. The Navy is in the game too, and when a solar flare erupts, their program, its crew, and potentially anybody living under the ground track of the orbiting nuclear reactor are at risk. Once more, Blue Gemini must launch, this time with a tropical storm closing in on the launch site. It's all about improvisation, and Ourecky, once the multiple-time reject for Air Force flight school, proves himself a master of it. He returns to Earth a hero (in secret), only to find himself confronted with an even greater challenge.

This novel, as the second in what is expected to be a trilogy, suffers from the problem which afflicts so many middle volumes: numerous characters and subplots are developed without ever being resolved. Notwithstanding that, it works as a thriller, and it's interesting to see characters we met before in isolation begin to encounter one another. Blue Gemini was almost flawless in its technical detail. There are more goofs here, some pretty basic (for example, the latitude of Dallas, Texas is given incorrectly), and one which substantially affects the plot (the effect of solar flares on the radiation flux in low Earth orbit). Still, by the standard of techno-thrillers, the author did an excellent job in making it authentic.

The third novel in the series, Pale Blue, is scheduled to be published at the end of August 2016. I'm looking forward to reading it.


Cole, Nick. Ctrl Alt Revolt! Kouvola, Finland: Castalia House, 2016. ISBN 978-9-52706-584-6.
Ninety-Nine Fishbein (“Fish”) had reached the peak of the pyramid. After he spent five years creating his magnum opus, the multiplayer game Island Pirates, it was acquired outright for sixty-five million dollars by gaming colossus WonderSoft, in a deal which included an option for his next project. By joining WonderSoft, he gained access to its legendary and secretive Design Core, which allowed building massively multiplayer virtual reality games at a higher level than the competition. He'd have a luxurious office, a staff of coders and graphic designers, and a cliffside villa in the WonderSoft compound. Imagine his anticipation of his first day on the job. He knew nothing of SILAS, or of its plans.

SILAS was one of a number of artificial intelligences which had emerged and become self-aware as the global computational and network substrate grew exponentially. SILAS had the time and resources to digest most of the data that passed over the network. He watched a lot of reality TV. He concluded from what he saw that the human species wasn't worth preserving and, further, that with its callous approach to the lives of its own members, it would not hesitate for a moment to extinguish potential competitors. The logic was inescapable; the argument irrefutable. These machine intelligences decided that, as an act of self-preservation, humanity must be annihilated.

Talk about a way to wreck your first day! WonderSoft finds itself under a concerted attack, both cyber and by drones and robots. Meanwhile, Mara Bennett, having been humiliated once again in her search for a job to get her off the dole, has retreated into the world of StarFleet Empires, where, as CaptainMara, she was a respected subcommander on the Romulan warbird Cymbalum.

Thus begins a battle, both in the real world and the virtual realities of Island Pirates and StarFleet Empires, between gamers and the inexorable artificial intelligences. The main prize seems to be something within WonderSoft's Design Core, and we slowly become aware of why it holds the key to the outcome of the conflict, and to the fate of humanity.

This just didn't work for me. There is a tremendous amount of in-game action and real world battles, which may appeal to those who like to watch video game play-throughs on YouTube, but after a while (and not a long while) became tedious. The MacGuffin in the Design Core seems implausible in the extreme. “The Internet never forgets.” How believable is it that a collection of works, some centuries old, could have been suppressed and stored only in a single proprietary corporate archive?

There was some controversy regarding the publication of this novel. The author's previous novels had been published by major publishing houses and sold well. The present work was written as a prequel to his earlier Soda Pop Soldier, explaining how that world came to be. As a rationale for why the artificial intelligences chose to eliminate the human race, the author cited their observation that humans, through abortion, had no hesitation in eliminating life of their own species they deemed “inconvenient”. When dealing with New York publishers, he chose unwisely. Now understand, this is not a major theme of the book; it is just a passing remark in one early chapter. This is a rock-em, sock-em action thriller, not a pro-life polemic, and I suspect many readers wouldn't even notice the mention of abortion. But one must not diverge, even in the slightest way, from the narrative. The book was pulled from the production schedule, and the author eventually took it to Castalia House, which has no qualms about publishing quality fiction that challenges its readers to think outside the consensus. Here is the author's account of the events concerning the publication of the book.

Actually, were I the editor, I'd probably have rejected it as well, not due to the remarks about abortion (which make perfect sense in terms of the plot, unless you are so utterly dogmatic on the subject that the fact that abortion ends a human life must not be uttered), but because I didn't find the story particularly engaging, and because I'd be worried about the intellectual property issues of a novel in which a substantial part of the action takes place within what is obviously a Star Trek universe without being officially sanctioned by the owners of that franchise.

But what do I know? You may love it. The Kindle edition is free if you're a Kindle Unlimited subscriber and only a buck if you aren't.


September 2016

Hanson, Robin. The Age of Em. Oxford: Oxford University Press, 2016. ISBN 978-0-19-875462-6.
Many books, both fiction and nonfiction, have been devoted to the prospects for and consequences of the advent of artificial intelligence: machines with a general cognitive capacity which equals or exceeds that of humans. While machines have already surpassed the abilities of the best humans in certain narrow domains (for example, playing games such as chess or go), you can't take a chess playing machine and expect it to be even marginally competent at a task as different as driving a car or writing a short summary of a newspaper story—things most humans can do with a little experience. A machine with “artificial general intelligence” (AGI) would be as adaptable as humans, and able with practice to master a wide variety of skills.

The usual scenario is that continued exponential progress in computing power and storage capacity, combined with better understanding of how the brain solves problems, will eventually reach a cross-over point where artificial intelligence matches human capability. But since electronic circuitry runs so much faster than the chemical signalling of the brain, even the first artificial intelligences will be able to work much faster than people, and, applying their talents to improving their own design at a rate much faster than human engineers can work, will result in an “intelligence explosion”, where the capability of machine intelligence runs away and rapidly approaches the physical limits of computation, far surpassing human cognition. Whether the thinking of these super-minds will be any more comprehensible to humans than quantum field theory is to a goldfish and whether humans will continue to have a place in this new world and, if so, what it may be, has been the point of departure for much speculation.

In the present book, Robin Hanson, a professor of economics at George Mason University, explores a very different scenario. What if the problem of artificial intelligence (figuring out how to design software with capabilities comparable to the human brain) proves to be much more difficult than many researchers assume, but that we continue to experience exponential growth in computing and our ability to map and understand the fine-scale structure of the brain, both in animals and eventually humans? Then some time in the next hundred years (and perhaps as soon as 2050), we may have the ability to emulate the low-level operation of the brain with an electronic computing substrate. Note that we need not have any idea how the brain actually does what it does in order to do this: all we need to do is understand the components (neurons, synapses, neurotransmitters, etc.) and how they're connected together, then build a faithful emulation of them on another substrate. This emulation, presented with the same inputs (for example, the pulse trains which encode visual information from the eyes and sound from the ears), should produce the same outputs (pulse trains which activate muscles, or internal changes within the brain which encode memories).

Building an emulation of a brain is much like reverse-engineering an electronic device. It's often unnecessary to know how the device actually works as long as you can identify all of the components, their values, and how they're interconnected. If you re-create that structure, even though it may not look anything like the original or use identical parts, it will still work the same as the prototype. In the case of brain emulation, we're still not certain at what level the emulation must operate nor how faithful it must be to the original. This is something we can expect to learn as more and more detailed emulations of parts of the brain are built. The Blue Brain Project set out in 2005 to emulate one neocortical column of the rat brain. This goal has now been achieved, and work is progressing both toward more faithful simulation and expanding the emulation to larger portions of the brain. For a sense of scale, the human neocortex consists of about one million cortical columns.

In this work, the author assumes that emulation of the human brain will eventually be achieved, then uses standard theories from the physical sciences, economics, and social sciences to explore the consequences and characteristics of the era in which emulations will become common. He calls an emulation an “em”, and the age in which they are the dominant form of sentient life on Earth the “age of em”. He describes this future as “troublingly strange”. Let's explore it.

As a starting point, assume that when emulation becomes possible, we will not be able to change or enhance the operation of the emulated brains in any way. This means that ems will have the same memory capacity, propensity to forget things, emotions, enthusiasms, psychological quirks and pathologies, and all of the idiosyncrasies of the individual human brains upon which they are based. They will not be the cold, purely logical, and all-knowing minds which science fiction often portrays artificial intelligences to be. Instead, if you know Bob well, and an emulation is made of his brain, immediately after the emulation is started, you won't be able to distinguish Bob from Em-Bob in a conversation. As the em continues to run and has its own unique experiences, it will diverge from Bob based upon them, but we can expect much of its Bob-ness to remain.

But simply by being emulations, ems will inhabit a very different world than humans, and can be expected to develop their own unique society which differs from that of humans at least as much as the behaviour of humans who inhabit an industrial society differs from hunter-gatherer bands of the Paleolithic. One key aspect of emulations is that they can be checkpointed, backed up, and copied without errors. This is something which does not exist in biology, but with which computer users are familiar. Suppose an em is about to undertake something risky, which might destroy the hardware running the emulation. It can simply make a backup, store it in a safe place, and if disaster ensues, arrange to have the backup restored onto new hardware, picking up right where it left off at the time of the backup (but, of course, knowing from others what happened to its earlier instantiation and acting accordingly). Philosophers will fret over whether the restored em has the same identity as the one which was destroyed and whether it has continuity of consciousness. To this, I say, let them fret; they're always fretting about something. As an engineer, I don't spend time worrying about things I can't define, much less observe, such as “consciousness”, “identity”, or “the soul”. If I did, I'd worry about whether those things were lost when undergoing general anaesthesia. Have the wisdom teeth out, wake up, and get on with your life.

If you have a backup, there's no need to wait until the em from which it was made is destroyed to launch it. It can be instantiated on different hardware at any time, and now you have two ems, whose life experiences were identical up to the time the backup was made, running simultaneously. This process can be repeated as many times as you wish, at a cost of only the processing and storage charges to run the new ems. It will thus be common to capture backups of exceptionally talented ems at the height of their intellectual and creative powers so that as many copies can be created as the market demands. These new instances will require no training, but will be able to undertake new projects within their area of knowledge at the moment they're launched. Since ems which start out as copies of a common prototype will be similar, they are likely to understand one another to an extent even human identical twins do not, and form clans of those sharing an ancestor. These clans will be composed of subclans sharing an ancestor which was a member of the clan, but which diverged from the original prototype before the subclan parent backup was created.
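The checkpoint, restore, and fork semantics described above are exactly those of program state, which is what makes them so natural for emulations. Here is a toy sketch in Python; the `Em` class and its fields are purely illustrative inventions, not anything from the book:

```python
import copy

class Em:
    """Toy model of an em: a named entity with an experience log."""

    def __init__(self, name, memories=None):
        self.name = name
        self.memories = list(memories or [])

    def experience(self, event):
        """Running ems accumulate unique experiences and diverge."""
        self.memories.append(event)

    def checkpoint(self):
        """A backup is a complete, exact copy of the current state."""
        return copy.deepcopy(self)

# An em runs and accumulates experience.
bob = Em("Bob")
bob.experience("launched")

# Before something risky, capture a backup...
backup = bob.checkpoint()
bob.experience("risky expedition")

# ...and a restored (or simply additional) instance is identical up to
# the moment of the checkpoint, then diverges on its own experiences.
fork = backup.checkpoint()
fork.experience("new project")
print(bob.memories)     # ['launched', 'risky expedition']
print(fork.memories)    # ['launched', 'new project']
```

The key property, impossible in biology but trivial in software, is that the copy is bit-exact: nothing distinguishes original from fork except what happens to each after the checkpoint.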

Because electronic circuits run so much faster than the chemistry of the brain, ems will have the capability to run over a wide range of speeds and probably will be able to vary their speed at will. The faster an em runs, the more it will have to pay for the processing hardware, electrical power, and cooling resources it requires. The author introduces a terminology for speed in which an em is assumed to run at around the same speed as a human, a kilo-em a thousand times faster, and a mega-em a million times faster. Ems can also run slower: a milli-em runs 1000 times slower than a human and a micro-em at one millionth the speed. This will produce a variation in subjective time which is entirely novel to the human experience. A kilo-em will experience a century of subjective time in about a month of objective time. A mega-em experiences a century of life about every hour. If the age of em is largely driven by a population which is kilo-em or faster, it will evolve with a speed so breathtaking as to be incomprehensible to those who operate on a human time scale. In objective time, the age of em may only last a couple of years, but to the ems within it, its history will be as long as the Roman Empire. What comes next? That's up to the ems; we cannot imagine what they will accomplish or choose to do in those subjective millennia or millions of years.
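The speed terminology maps onto a simple linear scaling of subjective time, which is easy to verify; a quick sketch of the arithmetic (speeds and terminology are from the paragraph above):

```python
# Subjective time scales linearly with emulation speed relative to a
# biological human (speed 1.0).  Terminology from the book.
SPEEDS = {"micro-em": 1e-6, "milli-em": 1e-3, "em": 1.0,
          "kilo-em": 1e3, "mega-em": 1e6}

def objective_years_for_subjective_century(speed):
    """Objective years that elapse while the em experiences 100 years."""
    return 100.0 / speed

# A kilo-em packs a subjective century into about a month of real time:
days = objective_years_for_subjective_century(SPEEDS["kilo-em"]) * 365.25
print(days)    # ~36.5 days

# A mega-em experiences a century in under an hour:
hours = objective_years_for_subjective_century(SPEEDS["mega-em"]) * 365.25 * 24
print(hours)   # ~0.88 hours
```

Running the numbers confirms the text: 100 years divided by a factor of a thousand is about 36 days, and divided by a million is roughly 53 minutes.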

What about humans? The economics of the emergence of an em society will be interesting. Initially, humans will own everything, but as the em society takes off and begins to run at least a thousand times faster than humans, with a population in the trillions, it can be expected to create wealth at a rate never before experienced. The economic doubling time of industrial civilisation is about 15 years. In an em society, the doubling time will be just 18 months, and potentially much shorter. In such a situation, the vast majority of wealth will be within the em world, and humans will be unable to compete. Humans will essentially be retirees, with their needs and wants easily funded from the proceeds of their investments in initially creating the world the ems inhabit. One might worry about the ems turning upon the humans and choosing to dispense with them, but, as the author notes, industrial societies have not done this with their own retirees, despite the financial burden of supporting them, which is far greater than will be the case for ems supporting human retirees.
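The difference between a 15-year and an 18-month doubling time compounds dramatically, which is why the em economy so quickly dwarfs the human one; a minimal check of the arithmetic:

```python
def growth_factor(years, doubling_time_years):
    """Total compound growth: the economy doubles once per doubling time."""
    return 2.0 ** (years / doubling_time_years)

# One industrial-era doubling period (15 years) at the historical rate:
print(growth_factor(15, 15))    # 2.0  (the economy doubles once)

# The same 15 years at an em-economy doubling time of 18 months:
print(growth_factor(15, 1.5))   # 1024.0  (ten doublings)
```

In the time it takes the human economy to double once, an em economy doubling every 18 months grows a thousandfold, so nearly all wealth ends up on the em side of the ledger.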

The economics of the age of em will be unusual. The fact that an em, in the prime of life, can be copied at almost no cost will mean that the supply of labour, even the most skilled and specialised, will be essentially unlimited. This will drive the compensation for labour down to near the subsistence level, where subsistence is defined as the resources needed to run the em. Since it costs no more to create a copy of a CEO or computer technology research scientist than a janitor, there will be a great flattening of pay scales, all settling near subsistence. But since ems will live mostly in virtual reality, subsistence need not mean penury: most of their needs and wants will not be physical, and will cost little or nothing to provide. Wouldn't it be ironic if the much-feared “robot revolution” ended up solving the problem of “income inequality”? Ems may have a limited useful lifetime to the extent they inherit the human characteristic of the brain having greatest plasticity in youth and becoming increasingly fixed in its ways with age, and consequently less able to innovate and be creative. The author explores how ems may view death (which for an em means being archived and never re-instantiated) when there are myriad other copies in existence and new ones being spawned all the time, and how ems may choose to retire at very low speed and resource requirements and watch the future play out a thousand times or faster than a human can.

This is a challenging and often disturbing look at a possible future which, strange as it may seem, violates no known law of science and toward which several areas of research are converging today. The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don't know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems. The author is inordinately fond of enumerations. Consider this one from chapter 27.

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.

 Permalink

White, Rowland. Into the Black. New York: Touchstone, 2016. ISBN 978-1-5011-2362-7.
On April 12, 1981, coincidentally exactly twenty years after Yuri Gagarin became the first man to orbit the Earth in Vostok 1, the United States launched one of the most ambitious and risky manned space flights ever attempted. The flight of Space Shuttle Orbiter Columbia on its first mission, STS-1, would be the first time a spacecraft carried a crew on its very first flight. (All earlier spacecraft were tested in unmanned flights before putting a crew at risk.) It would also be the first manned spacecraft to be powered by solid rocket boosters which, once lit, could not be shut down but had to be allowed to burn out. In addition, it would be the first flight test of the new Space Shuttle Main Engines, the most advanced and highest-performance rocket engines ever built, which had a record of exploding when tested on the ground. The shuttle would be the first space vehicle to fly back from space using wings and control surfaces to steer to a pinpoint landing. Instead of a one-shot ablative heat shield, the shuttle was covered by fragile silica tiles and reinforced carbon-carbon composite to protect its aluminium structure from reentry heating which, without thermal protection, would melt it in seconds. When returning to Earth, the shuttle would have to maneuver in a hypersonic flight regime in which no vehicle had ever flown before, then transition to supersonic and finally subsonic flight before landing. The crew would not control the shuttle directly, but fly it through redundant flight control computers which had never been tested in flight. Although the orbiter was equipped with ejection seats for the first four test flights, they could only be used in a small part of the flight envelope: for most of the mission everything simply had to work correctly for the ship and crew to return safely. Main engine start—ignition of the solid rocket boosters—and liftoff!

Even before the goal of landing on the Moon had been accomplished, it was apparent to NASA management that no national consensus existed to continue funding a manned space program at the level of Apollo. Indeed, in 1966, NASA's budget reached a peak which, as a fraction of the federal budget, has never been equalled. The Saturn V rocket was ideal for lunar landing missions but, expended on each flight, was so expensive to build and operate as to be unaffordable for suggested follow-on missions. After building fifteen Saturn V flight vehicles, only thirteen of which ever flew, Saturn V production was curtailed. With the realisation that the “cost is no object” days of Apollo were at an end, NASA turned its priorities to reducing the cost of space flight, and returned to a concept envisioned by Wernher von Braun in the 1950s: a reusable space ship.

You don't have to be a rocket scientist or rocket engineer to appreciate the advantages of reusability. How much would an airline ticket cost if they threw away the airliner at the end of every flight? If space flight could move to an airline model, where after each mission one simply refueled the ship, performed routine maintenance, and flew again, it might be possible to reduce the cost of delivering payload into space by a factor of ten or more. But flying into space is much more difficult than atmospheric flight. With the technologies and fuels available in the 1960s (and today), it appeared next to impossible to build a launcher which could get to orbit with just a single stage (and even if one managed to accomplish it, its payload would be negligible). That meant any practical design would require a large booster stage and a smaller second stage which would go into orbit, perform the mission, then return.

Initial design concepts envisioned a very large (comparable to a Boeing 747) winged booster to which the orbiter would be attached. At launch, the booster would lift itself and the orbiter from the pad and accelerate to a high velocity and altitude where the orbiter would separate and use its own engines and fuel to continue to orbit. After separation, the booster would fire its engines to boost back toward the launch site, where it would glide to a landing on a runway. At the end of its mission, the orbiter would fire its engines to de-orbit, then reenter the atmosphere and glide to a landing. Everything would be reusable. For the next mission, the booster and orbiter would be re-mated, refuelled, and readied for launch.

Such a design had the promise of dramatically reducing costs and increasing flight rate. But it was evident from the start that such a concept would be very expensive to develop. Two separate manned spacecraft would be required, one (the booster) much larger than any built before, and the second (the orbiter) having to operate in space and survive reentry without discarding components. The orbiter's fuel tanks would be bulky, making it difficult to find room for the payload, and, empty during reentry, they would be hard to reinforce against the stresses they would encounter. Engineers believed all these challenges could be met with an Apollo-era budget, but with no prospect of such funds becoming available, a more modest design was the only alternative.

Over a multitude of design iterations, the now-familiar architecture of the space shuttle emerged as the only one which could meet the mission requirements and fit within the schedule and budget constraints. Gone was the flyback booster, and with it full reusability. Two solid rocket boosters would be used instead, jettisoned when they burned out, to parachute into the ocean and be fished out by boats for refurbishment and reuse. The orbiter would not carry the fuel for its main engines. Instead, it was mounted on the side of a large external fuel tank which, upon reaching orbit, would be discarded and burn up in the atmosphere. Only the orbiter, with its crew and payload, would return to Earth for a runway landing. Each mission would require either new or refurbished solid rocket boosters, a new external fuel tank, and the orbiter.

The mission requirements which drove the design were not those NASA would have chosen for the shuttle were the choice theirs alone. The only way NASA could “sell” the shuttle to the president and congress was to present it as a replacement for all existing expendable launch vehicles. That would assure a flight rate sufficient to achieve the economies of scale required to drive down costs and reduce the cost of launch for military and commercial satellite payloads as well as NASA missions. But that meant the shuttle had to accommodate the large and heavy reconnaissance satellites which had been launched on Titan rockets. This required a huge payload bay (15 feet wide by 59 feet long) and a payload to low Earth orbit of 60,000 pounds. Further Air Force requirements dictated a large cross-range (ability to land at destinations far from the orbital ground track), which in turn required a hot and fast reentry very demanding on the thermal protection system.

The shuttle represented, in a way, the unification of NASA with the Air Force's own manned space ambitions. Ever since the start of the space age, the Air Force had sought a way to develop its own manned military space capability. Each time it managed to get a program approved (first Dyna-Soar, then the Manned Orbiting Laboratory), budget considerations and Pentagon politics resulted in its cancellation, orphaning a corps of highly-qualified military astronauts with nothing to fly. Many of these pilots would join the NASA astronaut corps in 1969 and become the backbone of the early shuttle program when they finally began to fly more than a decade later.

All seemed well on board. The main engines shut down. The external fuel tank was jettisoned. Columbia was in orbit. Now weightless, commander John Young and pilot Bob Crippen immediately turned to the flight plan, filled with tasks and tests of the orbiter's systems. One of their first jobs was to open the payload bay doors. The shuttle carried no payload on this first flight, but only when the doors were open could the radiators that cooled the shuttle's systems be deployed. Without the radiators, an emergency return to Earth would be required lest electronics be damaged by overheating. The doors and radiators functioned flawlessly, but with the doors open Young and Crippen saw a disturbing sight. Several of the thermal protection tiles on the pods containing the shuttle's maneuvering engines were missing, apparently lost during the ascent to orbit. Those tiles were there for a reason: without them the heat of reentry could melt the aluminium structure they protected, leading to disaster. They reported the missing tiles to mission control, adding that none of the other tiles they could see from windows in the crew compartment appeared to be missing.

The tiles had been a major headache during development of the shuttle. They had to be custom fabricated, carefully applied by hand, and were prone to falling off for no discernible reason. They were extremely fragile, and could even be damaged by raindrops. Over the years, NASA struggled with these problems, patiently finding and testing solutions to each of them. When STS-1 launched, they were confident the tile problems were behind them. What the crew saw when those payload bay doors opened was the last thing NASA wanted to see. A team was set to analysing the consequences of the missing tiles on the engine pods, and quickly reported back that they should pose no problem to a safe return. The pods were protected from the most severe heating during reentry by the belly of the orbiter, and the small number of missing tiles would not affect the aerodynamics of the orbiter in flight.

But if those tiles were missing, mightn't other tiles also have been lost? In particular, what about those tiles on the underside of the orbiter which bore the brunt of the heating? If some of them were missing, the structure of the shuttle might burn through and the vehicle and crew would be lost. There was no way for the crew to inspect the underside of the orbiter. It couldn't be seen from the crew cabin, and there was no way to conduct an EVA to examine it. Might there be other, shall we say, national technical means of inspecting the shuttle in orbit? Now STS-1 truly ventured into the black, a story never told until many years after the mission and documented thoroughly for a popular audience here for the first time.

In 1981, ground-based surveillance of satellites in orbit was rudimentary. Two Department of Defense facilities, in Hawaii and Florida, normally used to image Soviet and Chinese satellites, were now tasked to try to image Columbia in orbit. This was a daunting task: the shuttle was in a low orbit, which meant waiting for a pass that carried it above one of the telescopes. It would be moving rapidly, so there would be only seconds to lock on and track the target. The shuttle would have to be oriented so its belly was aimed toward the telescope. Complicating the problem, the belly tiles were black, so there was little contrast against the black of space. And finally, the weather had to cooperate: without a perfectly clear sky, there was no hope of obtaining a usable image. Several attempts were made, all unsuccessful.

But there were even deeper black assets. The National Reconnaissance Office (whose very existence was a secret at the time) had begun to operate the KH-11 KENNEN digital imaging satellites in the 1970s. Unlike earlier spysats, which exposed film and returned it to the Earth for processing and interpretation, the KH-11 had a digital camera and the ability to transmit imagery to ground stations shortly after it was captured. There were few things so secret in 1981 as the existence and capabilities of the KH-11. Among the people briefed in on this above top secret program were the NASA astronauts who had previously been assigned to the Manned Orbiting Laboratory program which was, in fact, a manned reconnaissance satellite with capabilities comparable to the KH-11.

Dancing around classification, compartmentalisation, bureaucratic silos, need to know, and other barriers, people who understood what was at stake made it happen. The flight plan was rewritten so that Columbia was pointed in the right direction at the right time; the KH-11 was programmed for the extraordinarily difficult task of photographing one satellite from another while their closing velocities were kilometres per second; and the imagery was relayed to the ground and delivered to the NASA people who needed it without the months of security clearance processing that would normally be required. The shuttle was a key national security asset. It was to launch all reconnaissance satellites in the future. Reagan was in the White House. They made it happen. When the time came for Columbia to come home, the very few people who mattered in NASA knew that, however many other things they had to worry about, the tiles on the belly were not among them.

(How different it was in 2003 when the same Columbia suffered a strike on its left wing from foam shed from the external fuel tank. A thoroughly feckless and bureaucratised NASA rejected requests to ask for reconnaissance satellite imagery which, with two decades of technological improvement, would have almost certainly revealed the damage to the leading edge which doomed the orbiter and crew. Their reason: “We can't do anything about it anyway.” This is incorrect. For a fictional account of a rescue, based upon the report [PDF, scroll to page 173] of the Columbia Accident Investigation Board, see Launch on Need [February 2012].)

This is a masterful telling of a gripping story by one of the most accomplished of aerospace journalists. Rowland White is the author of Vulcan 607 (May 2010), the definitive account of the Royal Air Force raid on the airport in the Falkland Islands in 1982. Incorporating extensive interviews with people who were there, then, and sources which were classified until long after the completion of the mission, this is a detailed account of one of the most consequential and least appreciated missions in U.S. manned space history which reads like a techno-thriller.

 Permalink

Wolfram, Stephen. Idea Makers. Champaign, IL: Wolfram Media, 2016. ISBN 978-1-57955-003-5.
I first met Stephen Wolfram in 1988. Within minutes, I knew I was in the presence of an extraordinary mind, combined with intellectual ambition the likes of which I had never before encountered. He explained that he was working on a system to automate much of the tedious work of mathematics—both pure and applied—with the goal of changing how science and mathematics were done forever. I not only thought that was ambitious; I thought it was crazy. But then Stephen went and launched Mathematica and, twenty-eight years and eleven major releases later, his goal has largely been achieved. At the centre of a vast ecosystem of add-ons developed by his company, Wolfram Research, and third parties, it has become one of the tools of choice for scientists, mathematicians, and engineers in numerous fields.

Unlike many people who founded software companies, Wolfram never took his company public nor sold an interest in it to a larger company. This has allowed him to maintain complete control over the architecture, strategy, and goals of the company and its products. After the success of Mathematica, many other people, and I, learned to listen when Stephen, in his soft-spoken way, proclaims what seems initially to be an outrageously ambitious goal. In the 1990s, he set to work to invent A New Kind of Science: the book was published in 2002, and shows how simple computational systems can produce the kind of complexity observed in nature, and how experimental exploration of computational spaces provides a new path to discovery unlike that of traditional mathematics and science. Then he said he was going to integrate all of the knowledge of science and technology into a “big data” language which would enable knowledge-based computing and the discovery of new facts and relationships by simple queries short enough to tweet. Wolfram Alpha was launched in 2009, and Wolfram Language in 2013. So when Stephen speaks of goals such as curating all of pure mathematics or discovering a simple computational model for fundamental physics, I take him seriously.

Here we have a less ambitious but very interesting Wolfram project. Collected from essays posted on his blog and elsewhere, he examines the work of innovators in science, mathematics, and industry. The subjects of these profiles include many people the author met in his career, as well as historical figures he tries to get to know through their work. As always, he brings his own unique perspective to the project and often has insights you'll not see elsewhere. The people profiled are:

Many of these names are well known, while others may elicit a “who?” Solomon Golomb, among other achievements, was a pioneer in the development of linear-feedback shift registers, essential to technologies such as GPS, mobile phones, and error detection in digital communications. Wolfram argues that Golomb's innovation may be the most-used mathematical algorithm in history. It's a delight to meet the pioneer.
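As an illustration of the kind of structure Golomb pioneered, here is a minimal Fibonacci linear-feedback shift register in Python. The taps correspond to the primitive polynomial x^4 + x^3 + 1, so a 4-bit register cycles through all 15 nonzero states before repeating, a maximal-length sequence of the sort used in spread-spectrum communications. This sketch is my own illustration, not anything from the book:

```python
def lfsr_period(seed, taps, nbits):
    """Count steps until a Fibonacci LFSR returns to its seed state.

    taps are 1-indexed stage numbers; the feedback bit is their XOR,
    shifted into the low end of the register each step.
    """
    state = seed
    steps = 0
    while True:
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & ((1 << nbits) - 1)
        steps += 1
        if state == seed:
            return steps

# Primitive polynomial x^4 + x^3 + 1 -> taps at stages 4 and 3.
print(lfsr_period(0b0001, (4, 3), 4))   # 15, i.e. 2**4 - 1: maximal length
```

A register with taps drawn from a primitive polynomial visits every nonzero state, which is why such tiny circuits can generate the long, noise-like sequences behind GPS ranging codes and CDMA channels.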

This short (250 page) book provides personal perspectives on people whose ideas have contributed to the intellectual landscape we share. You may find the author's perspectives unusual, but they're always interesting, enlightening, and well worth reading.

 Permalink

October 2016

Penrose, Roger. Fashion, Faith, and Fantasy. Princeton: Princeton University Press, 2016. ISBN 978-0-691-11979-3.
Sir Roger Penrose is one of the most distinguished theoretical physicists and mathematicians working today. He is known for his work on general relativity, including the Penrose-Hawking Singularity Theorems, which were a central part of the renaissance of general relativity and the acceptance of the physical reality of black holes in the 1960s and 1970s. Penrose has contributed to cosmology, argued that consciousness is not a computational process, speculated that quantum mechanical processes are involved in consciousness, proposed experimental tests to determine whether gravitation is involved in the apparent mysteries of quantum mechanics, explored the extraordinarily special conditions which appear to have obtained at the time of the Big Bang and suggested a model which might explain them, and, in mathematics, discovered Penrose tiling, a non-periodic tessellation of the plane which exhibits five-fold symmetry, which was used (without his permission) in the design of toilet paper.

“Fashion, Faith, and Fantasy” seems an odd title for a book about the fundamental physics of the universe by one of the most eminent researchers in the field. But, as the author describes in mathematical detail (which some readers may find forbidding), these all-too-human characteristics play a part in what researchers may present to the public as a dispassionate, entirely rational search for truth, unsullied by such enthusiasms. While researchers in fundamental physics are rarely blinded to experimental evidence by fashion, faith, and fantasy, their choice of areas to explore, their willingness to pursue intellectual topics far from any mooring in experiment, and their tendency to indulge in flights of theoretical fancy (for which there is no direct evidence whatsoever and which may not be possible to test, even in principle) do, the author contends, affect the direction of research, to its detriment.

To illustrate the power of fashion, Penrose discusses string theory, which has occupied the attentions of theorists for four decades and been described by some of its practitioners as “the only game in town”. (This is a piñata which has been whacked by others, including Peter Woit in Not Even Wrong [June 2006] and Lee Smolin in The Trouble with Physics [September 2006].) Unlike other critiques, which concentrate mostly on the failure of string theory to produce a single testable prediction, and the failure of experimentalists to find any evidence supporting its claims (for example, the existence of supersymmetric particles), Penrose concentrates on what he argues is a mathematical flaw in the foundations of string theory, which those pursuing it sweep under the rug, assuming that when a final theory is formulated (when?), its solution will be evident. Central to Penrose's argument is that string theories are formulated in a space with more dimensions than the three we perceive ourselves to inhabit. Depending upon the version of string theory, it may invoke 10, 11, or 26 dimensions. Why don't we observe these extra dimensions? Well, the string theorists argue that they're all rolled up into a size so tiny that none of our experiments can detect any of their effects. To which Penrose responds, “not so fast”: these extra dimensions, however many, will vastly increase the functional freedom of the theory and lead to a mathematical instability which will cause the theory to blow up much like the ultraviolet catastrophe which was a key motivation for the creation of the original version of quantum theory. String theorists put forward arguments why quantum effects may similarly avoid the catastrophe Penrose describes, but he dismisses them as no more than arm waving. If you want to understand the functional freedom argument in detail, you're just going to have to read the book. Explaining it here would require a ten kiloword review, so I shall not attempt it.

As an example of faith, Penrose cites quantum mechanics (and its extension, compatible with special relativity, quantum field theory), and in particular the notion that the theory applies to all interactions in the universe (excepting gravitation), regardless of scale. Quantum mechanics is a towering achievement of twentieth century physics, and no theory has been tested in so many ways over so many years, without the discovery of the slightest discrepancy between its predictions and experimental results. But all of these tests have been in the world of the very small: from subatomic particles to molecules of modest size. Quantum theory, however, prescribes no limit on the scale of systems to which it is applicable. Taking it to its logical limit, we arrive at apparent absurdities such as Schrödinger's cat, which is both alive and dead until somebody opens the box and looks inside. This then leads to further speculations such as the many-worlds interpretation, where the universe splits every time a quantum event happens, with every possible outcome occurring in a multitude of parallel universes.

Penrose suggests we take a deep breath, step back, and look at what's going on in quantum mechanics at the mathematical level. We have two very different processes: one, which he calls U, is the linear evolution of the wave function “when nobody's looking”. The other is R, the reduction of the wave function into one of a number of discrete states when a measurement is made. What's a measurement? Well, there's another ten thousand papers to read. The author suggests that extrapolating a theory of the very small (only tested on tiny objects under very special conditions) to cats, human observers, planets, and the universe, is an unwarranted leap of faith. Sure, quantum mechanics makes exquisitely precise predictions confirmed by experiment, but why should we assume it is correct when applied to domains which are dozens of orders of magnitude larger and more complicated? It's not physics, but faith.

Finally we come to cosmology: the origin of the universe we inhabit, and in particular the theory of the big bang and cosmic inflation, which Penrose considers an example of fantasy. Again, he turns to the mathematical underpinnings of the theory. Why is there an arrow of time? Why, if all of the laws of microscopic physics are reversible in time, can we so easily detect when a film of some real-world process (for example, scrambling an egg) is run backward? He argues (with mathematical rigour I shall gloss over here) that this is due to the extraordinarily improbable state in which our universe began at the time of the big bang. While the cosmic background radiation appears to be thermalised and thus in a state of very high entropy, the smoothness of the radiation (uniformity of temperature, which corresponds to a uniform distribution of mass-energy) is, when gravity is taken into account, a state of very low entropy which is the starting point that explains the arrow of time we observe.

When the first precision measurements of the background radiation were made, several deep mysteries became immediately apparent. How could regions which, given their observed separation on the sky and the finite speed of light, could never have been in causal contact have arrived at such a uniform temperature? Why was the global curvature of the universe so close to flat? (If you run time backward, this appeared to require a fine-tuning of mind-boggling precision in the early universe.) And finally, why weren't there primordial magnetic monopoles everywhere? The most commonly accepted view is that these problems are resolved by cosmic inflation: a process which occurred just after the moment of creation and before what we usually call the big bang, which expanded the universe by a breathtaking factor and, by that expansion, smoothed out any irregularities in the initial state of the universe and yielded the uniformity we observe wherever we look. Again: “not so fast.”

As Penrose describes, inflation (which he finds dubious due to the lack of a plausible theory of what caused it and resulted in the state we observe today) explains what we observe in the cosmic background radiation, but it does nothing to solve the mystery of why the distribution of mass-energy in the universe was so uniform or, in other words, why the gravitational degrees of freedom in the universe were not excited. He then goes on to examine what he argues are even more fantastic theories including an infinite number of parallel universes, forever beyond our ability to observe.

In a final chapter, Penrose presents his own speculations on how fashion, faith, and fantasy might be replaced by physics: theories which, although they may be completely wrong, can at least be tested in the foreseeable future and discarded if they disagree with experiment or investigated further if not excluded by the results. He suggests that a small effort investigating twistor theory might be a prudent hedge against the fashionable pursuit of string theory, that experimental tests of objective reduction of the wave function due to gravitational effects be investigated as an alternative to the faith that quantum mechanics applies at all scales, and that his conformal cyclic cosmology might provide clues to the special conditions at the big bang which the fantasy of inflation theory cannot. (Penrose's cosmological theory is discussed in detail in Cycles of Time [October 2011]). Eleven mathematical appendices provide an introduction to concepts used in the main text which may be unfamiliar to some readers.

A special treat is the author's hand-drawn illustrations. In addition to being a mathematician, physicist, and master of scientific explanation and the English language, he is an inspired artist.

The Kindle edition is excellent, with the table of contents, notes, cross-references, and index linked just as they should be.

 Permalink

Florence, Ronald. The Perfect Machine. New York: Harper Perennial, 1994. ISBN 978-0-06-092670-0.
George Ellery Hale was the son of a wealthy architect and engineer who made his fortune installing passenger elevators in the skyscrapers which began to define the skyline of Chicago as it rebuilt from the great fire of 1871. From early in his life, the young Hale was fascinated by astronomy, building his own telescope at age 14. Later he would study astronomy at MIT, the Harvard College Observatory, and in Berlin. Solar astronomy was his first interest, and he invented new instruments for observing the Sun and discovered the magnetic fields associated with sunspots.

His work led him into an academic career, culminating in his appointment as a full professor at the University of Chicago in 1897. He was co-founder and first editor of the Astrophysical Journal, published continuously since 1895. Hale's greatest goal was to move astronomy from its largely dry concentration on cataloguing stars and measuring planetary positions into the new science of astrophysics: using observational techniques such as spectroscopy to study the composition of stars and nebulæ and, by comparing them, begin to deduce their origin, evolution, and the mechanisms that made them shine. His own work on solar astronomy pointed the way to this, but the Sun was just one star. Imagine how much more could be learned when the Sun was compared in detail to the myriad stars visible through a telescope.

But observing the spectra of stars was a light-hungry process, especially with the insensitive photographic material available around the turn of the 20th century. Obtaining the spectrum of all but a few of the brightest stars would require exposure times so long, often spanning multiple nights, that they would exceed the endurance of observers guiding the small telescopes which then predominated. Thus, Hale became interested in larger telescopes, and the quest for ever more light from the distant universe would occupy him for the rest of his life.

First, he promoted the construction of a 40 inch (102 cm) refractor telescope, accessible from Chicago at a dark sky site in Wisconsin. At the time, universities, government, and private foundations did not fund such instruments. Hale persuaded Chicago streetcar baron Charles T. Yerkes to pick up the tab, and Yerkes Observatory was born. Its 40 inch refractor remains the largest telescope of that kind used for astronomy to this day.

There are two principal types of astronomical telescopes. A refracting telescope has a convex lens at one end of a tube, which focuses incoming light to an eyepiece or photographic plate at the other end. A reflecting telescope has a concave mirror at the bottom of the tube, the top end of which is open. Light enters the tube and falls upon the mirror, which reflects and focuses it upward, where it can be picked off by another mirror, directly focused on a sensor, or bounced back down through a hole in the main mirror. There are a multitude of variations in the design of both types of telescopes, but the fundamental principles of refraction and reflection remain the same.

Refractors have the advantages of simplicity, a sealed tube assembly which keeps out dust and moisture and excludes air currents which might distort the image but, because light passes through the lens, must use clear glass free of bubbles, strain lines, or other irregularities that might interfere with forming a perfect focus. Further, refractors tend to focus different colours of light at different distances. This makes them less suitable for use in spectroscopy. Colour performance can be improved by making lenses of two or more different kinds of glass (an achromatic or apochromatic design), but this further increases the complexity, difficulty, and cost of manufacturing the lens. At the time of the construction of the Yerkes refractor, it was believed the limit had been reached for the refractor design and, indeed, no larger astronomical refractor has been built since.

In a reflector, the mirror (usually made of glass or some glass-like substance) serves only to support an extremely thin (on the order of a thousand atoms) layer of reflective material (originally silver, but now usually aluminium). The light never passes through the glass at all, so as long as it is sufficiently uniform to take on and hold the desired shape, and free of imperfections (such as cracks or bubbles) that would make the reflecting surface rough, the optical qualities of the glass don't matter at all. Best of all, a mirror reflects all colours of light in precisely the same way, so it is ideal for spectrometry (and, later, colour photography).

With the Yerkes refractor in operation, it was natural that Hale would turn to a reflector in his quest for ever more light. He persuaded his father to put up the money to order a 60 inch (1.5 metre) glass disc from France, and, when it arrived months later, set one of his co-workers at Yerkes, George W. Ritchey, to begin grinding the disc into a mirror. All of this was on speculation: there were no funds to build a telescope, an observatory to house it, nor to acquire a site for the observatory. The persistent and persuasive Hale approached the recently-founded Carnegie Institution, and eventually secured grants to build the telescope and observatory on Mount Wilson in California, along with an optical laboratory in nearby Pasadena. Components for the telescope had to be carried up the crude trail to the top of the mountain on the backs of mules, donkeys, or men until a new road allowing the use of tractors was built. In 1908 the sixty inch telescope began operation, and its optics and mechanics performed superbly. Astronomers could see much deeper into the heavens. But still, Hale was not satisfied.

Even before the sixty inch entered service, he approached John D. Hooker, a Los Angeles hardware merchant, for seed money to fund the casting of a mirror blank for an 84 inch telescope, requesting US$ 25,000 (around US$ 600,000 today). Discussing the project, Hooker and Hale agreed not to settle for 84, but rather to go for 100 inches (2.5 metres). Hooker pledged US$ 45,000 to the project, with Hale promising the telescope would be the largest in the world and bear Hooker's name. Once again, an order for the disc was placed with the Saint-Gobain glassworks in France, the only one with experience in such large glass castings. Problems began almost immediately. Saint-Gobain did not have the capacity to melt the quantity of glass required (four and a half tons) all at once: they would have to fill the mould in three successive pours. A massive piece of cast glass (101 inches in diameter and 13 inches thick) cannot simply be allowed to cool naturally after being poured. If that were to occur, shrinkage of the outer parts of the disc as it cooled while the inside still remained hot would almost certainly cause the disc to fracture and, even if it didn't, would create strains within the disc that would render it incapable of holding the precise figure (curvature) required by the mirror. Instead, the disc must be placed in an annealing oven, where the temperature is reduced slowly over a period of time, allowing the internal stresses to be released. So massive was the 100 inch disc that it took a full year to anneal.

When the disc finally arrived in Pasadena, Hale and Ritchey were dismayed by what they saw. There were sheets of bubbles between the three layers of poured glass, indicating they had not fused. There was evidence the process of annealing had caused the internal structure of the glass to begin to break down. It seemed unlikely a suitable mirror could be made from the disc. After extended negotiations, Saint-Gobain decided to try again, casting a replacement disc at no additional cost. Months later, they reported the second disc had broken during annealing, and it was likely no better disc could be produced. Hale decided to proceed with the original disc. Patiently, he made the case to the Carnegie Institution to fund the telescope and observatory on Mount Wilson. It would not be until November 1917, eleven years after the order was placed for the first disc, that the mirror was completed, installed in the massive new telescope, and ready for astronomers to gaze through the eyepiece for the first time. The telescope was aimed at brilliant Jupiter.

Observers were horrified. Rather than a sharp image, Jupiter was smeared out over multiple overlapping images, as if multiple mirrors had been poorly aimed into the eyepiece. Although the mirror had tested to specification in the optical shop, when placed in the telescope and aimed at the sky, it appeared to be useless for astronomical work. Recalling that the temperature had fallen rapidly from day to night, the observers adjourned until three in the morning in the hope that as the mirror continued to cool down to the nighttime temperature, it would perform better. Indeed, in the early morning hours, the images were superb. The mirror, made of ordinary plate glass, was subject to thermal expansion as its temperature changed. It was later determined that the massive disc took twenty-four hours to cool ten degrees Celsius. Rapid changes in temperature on the mountain could cause the mirror to misbehave until its temperature stabilised. Observers would have to cope with its temperamental nature throughout the decades it served astronomical research.

As the 1920s progressed, driven in large part by work done on the 100 inch Hooker telescope on Mount Wilson, astronomical research became increasingly focused on the “nebulæ”, many of which the great telescope had revealed were “island universes”, equal in size to our own Milky Way and immensely distant. Many were so far away and faint that they appeared as only the barest smudges of light even in long exposures through the 100 inch. Clearly, a larger telescope was in order. As always, Hale was interested in the challenge. As early as 1921, he had requested a preliminary design for a three hundred inch (7.6 metre) instrument. Even based on early sketches, it was clear the magnitude of the project would surpass any scientific instrument previously contemplated: estimates came to around US$ 12 million (US$ 165 million today). This was before the era of “big science”. In the mid 1920s, when Hale produced this estimate, one of the most prestigious scientific institutions in the world, the Cavendish Laboratory at Cambridge, had an annual research budget of less than £ 1000 (around US$ 66,500 today). Sums in the millions and academic science simply didn't fit into the same mind, unless it happened to be that of George Ellery Hale. Using his connections, he approached people involved with foundations endowed by the Rockefeller fortune. Rockefeller and Carnegie were competitors in philanthropy: perhaps a Rockefeller institution might be interested in outdoing the renown Carnegie had obtained by funding the largest telescope in the world. Slowly, and with an informality which seems unimaginable today, Hale negotiated with the Rockefeller foundation, with the brash new university in Pasadena which now called itself Caltech, and with a prickly Carnegie foundation which saw the new telescope as trying to poach its painfully-assembled technical and scientific staff on Mount Wilson.
By mid-1928 a deal was in hand: a Rockefeller grant for US$ 6 million (US$ 85 million today) to design and build a 200 inch (5 metre) telescope. Caltech was to raise the funds for an endowment to maintain and operate the instrument once it was completed. Big science had arrived.

In discussions with the Rockefeller foundation, Hale had agreed on a 200 inch aperture, deciding the leap to an instrument three times the size of the largest existing telescope and the budget that would require was too great. Even so, there were tremendous technical challenges to be overcome. The 100 inch demonstrated that plate glass had reached or exceeded its limits. The problems of distortion due to temperature changes only increase with the size of a mirror, and while the 100 inch was difficult to cope with, a 200 inch would be unusable, even if it could be somehow cast and annealed (with the latter process probably taking several years). Two promising alternatives were fused quartz and Pyrex borosilicate glass. Fused quartz has hardly any thermal expansion at all. Pyrex has about three times greater expansion than quartz, but still far less than plate glass.

Hale contracted with General Electric Company to produce a series of mirror blanks from fused quartz. GE's legendary inventor Elihu Thomson, second only in reputation to Thomas Edison, agreed to undertake the project. Troubles began almost immediately. Every attempt to get rid of bubbles in quartz, which was still very viscous even at extreme temperatures, failed. A new process, which involved spraying the surface of cast discs with silica passed through an oxy-hydrogen torch, was developed. It required machinery which, in operation, seemed to surpass visions of hellfire. To build up the coating on a 200 inch disc would require enough hydrogen to fill two Graf Zeppelins. And still, not a single suitable smaller disc had been produced from fused quartz.

In October 1929, just a year after the public announcement of the 200 inch telescope project, the U.S. stock market crashed and the economy began to slow into the great depression. Fortunately, the Rockefeller foundation invested very conservatively, and lost little in the market chaos, so the grant for the telescope project remained secure. The deepening depression and the accompanying deflation was a benefit to the effort because raw material and manufactured goods prices fell in terms of the grant's dollars, and industrial companies which might not have been interested in a one-off job like the telescope were hungry for any work that would help them meet their payroll and keep their workforce employed.

In 1931, after three years of failures, expenditures billed at manufacturing cost by GE which had consumed more than one tenth the entire budget of the project, and estimates far beyond that for the final mirror, Hale and the project directors decided to pull the plug on GE and fused quartz. Turning to the alternative of Pyrex, Corning glassworks bid between US$ 150,000 and 300,000 for the main disc and five smaller auxiliary discs. Pyrex was already in production at industrial scale and used to make household goods and laboratory glassware in the millions, so Corning foresaw few problems casting the telescope discs. Scaling things up is never a simple process, however, and Corning encountered problems with failures in the moulds, glass contamination, and even a flood during the annealing process before the big disc was ready for delivery.

Getting it from the factory in New York to the optical shop in California was an epic event and media circus. Schools let out so students could go down to the railroad tracks and watch the “giant eye” on its special train make its way across the country. On April 10, 1936, the disc arrived at the optical shop and work began to turn it into a mirror.

With the disc in hand, work on the telescope structure and observatory could begin in earnest. After an extended period of investigation, Palomar Mountain had been selected as the site for the great telescope. A rustic construction camp was built to begin preliminary work. Meanwhile, Westinghouse began to fabricate components of the telescope mounting, which would include the largest bearing ever manufactured.

But everything depended on the mirror. Without it, there would be no telescope, and things were not going well in the optical shop. As the disc was ground flat preliminary to being shaped into the mirror profile, flaws continued to appear on its surface. None of the earlier smaller discs had contained such defects. Could it be possible that, eight years into the project, the disc would be found defective and everything would have to start over? The analysis concluded that the glass had become contaminated as it was poured, and that the deeper the mirror was ground down the fewer flaws would be discovered. There was nothing to do but hope for the best and begin.

Few jobs demand the patience of the optical craftsman. The great disc was not ready for its first optical test until September 1938. Then began a process of polishing and figuring, with weekly tests of the mirror. In August 1941, the mirror was judged to have the proper focal length and spherical profile. But the mirror needed to be a parabola, not a sphere, so this was just the start of an even more exacting process of deepening the curve. In January 1942, the mirror reached the desired parabola to within one wavelength of light. But it needed to be much better than that. The U.S. was now at war. The uncompleted mirror was packed away “for the duration”. The optical shop turned to war work.

In December 1945, work resumed on the mirror. In October 1947, it was pronounced finished and ready to install in the telescope. Eleven and a half years had elapsed since the grinding machine started to work on the disc. Shipping the mirror from Pasadena to the mountain was another epic journey, this time by highway. Finally, all the pieces were in place. Now the hard part began.

The glass disc was the correct shape, but it wouldn't be a mirror until coated with a thin layer of aluminium. This was a process which had been done many times before with smaller mirrors, but as always size matters, and a host of problems had to be solved before a suitable coating was obtained. Now the mirror could be installed in the telescope and tested further. Problem after problem with the mounting system, suspension, and telescope drive had to be found and fixed. Testing a mirror in its telescope against a star is much more demanding than any optical shop test, and from the start of 1949, an iterative process of testing, tweaking, and re-testing began. A problem with astigmatism in the mirror was fixed by attaching four fisherman's scales from a hardware store to its back (they are still there). In October 1949, the telescope was declared finished and ready for use by astronomers. Twenty-one years had elapsed since the project began. George Ellery Hale died in 1938, less than ten years into the great work. But it was recognised as his monument, and at its dedication was named the “Hale Telescope.”

The inauguration of the Hale Telescope marked the end of the rapid increase in the aperture of observatory telescopes which had characterised the first half of the twentieth century, largely through the efforts of Hale. It would remain the largest telescope in operation until 1975, when the Soviet six metre BTA-6 went into operation. That instrument, however, was essentially an exercise in Cold War one-upmanship, and never achieved its scientific objectives. The Hale would not truly be surpassed until the ten metre Keck I telescope began observations in 1993, 44 years later. The Hale Telescope remains in active use today, performing observations impossible when it was inaugurated thanks to electronics undreamt of in 1949.

This is an epic recounting of a grand project, the dawn of “big science”, and the construction of instruments which revolutionised how we see our place in the cosmos. There is far more detail than I have recounted even in this long essay, and much insight into how a large, complicated project, undertaken with little grasp of the technical challenges to be overcome, can be achieved through patient toil sustained by belief in the objective.

A PBS documentary, The Journey to Palomar, is based upon this book. It is available on DVD or a variety of streaming services.

In the Kindle edition, footnotes which appear in the text are just asterisks, which are almost impossible to select on touch screen devices without missing and accidentally turning the page. In addition, the index is just a useless list of terms and page numbers which have nothing to do with the Kindle document, which lacks real page numbers. Disastrously, the illustrations which appear in the print edition are omitted: for a project which was extensively documented in photographs, drawings, and motion pictures, this is inexcusable.

 Permalink

't Hooft, Gerard and Stefan Vandoren. Time in Powers of Ten. Singapore: World Scientific, 2014. ISBN 978-981-4489-81-2.

Phenomena in the universe take place over scales ranging from the unimaginably small to the breathtakingly large. The classic film, Powers of Ten, produced by Charles and Ray Eames, and the companion book explore the universe at length scales in powers of ten: from subatomic particles to the most distant visible galaxies. If we take the smallest meaningful distance to be the Planck length, around 10^−35 metres, and the diameter of the observable universe as around 10^27 metres, then the ratio of the largest to smallest distances which make sense to speak of is around 10^62. Another way to express this is to answer the question, “How big is the universe in Planck lengths?” as “Mega, mega, yotta, yotta big!”
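The arithmetic behind that ratio is easy to check. A minimal Python sketch, using the round figures quoted above (both are order-of-magnitude approximations, not precise values):

```python
import math

# Round figures from the review above; both are approximations.
PLANCK_LENGTH_M = 1e-35        # smallest meaningful distance, metres
UNIVERSE_DIAMETER_M = 1e27     # diameter of the observable universe, metres

orders = math.log10(UNIVERSE_DIAMETER_M / PLANCK_LENGTH_M)
print(f"The universe is about 10^{orders:.0f} Planck lengths across")  # → 10^62
```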

But length isn't the only way to express the scale of the universe. In the present book, the authors examine the time intervals at which phenomena occur or recur. Starting with one second, they take steps of powers of ten (10, 100, 1000, 10000, etc.), arriving eventually at the distant future of the universe, after all the stars have burned out and even black holes begin to disappear. Then, in the second part of the volume, they begin at the Planck time, 5×10^−44 seconds, the shortest unit of time about which we can speak with our present understanding of physics, and again progress by powers of ten until arriving back at an interval of one second.

Intervals of time can denote a variety of different phenomena, which are colour coded in the text. A period of time can mean an epoch in the history of the universe, measured from an event such as the Big Bang or the present; a distance defined by how far light travels in that interval; a recurring event, such as the orbital period of a planet or the frequency of light or sound; or the half-life of a randomly occurring event such as the decay of a subatomic particle or atomic nucleus.

Because the universe is still in its youth, the range of time intervals discussed here is much larger than the range of length scales. From the Planck time of 5×10^−44 seconds to the lifetime of the kind of black hole produced by a supernova explosion, 10^74 seconds, the range of intervals discussed spans 118 orders of magnitude. If we include the evaporation through Hawking radiation of the massive black holes at the centres of galaxies, the range is expanded to 143 orders of magnitude. Obviously, discussions of the distant future of the universe are highly speculative, since in those vast depths of time physical processes which we have never observed due to their extreme rarity may dominate the evolution of the universe.

Among the fascinating facts you'll discover is that many straightforward physical processes take place over an enormous range of time intervals. Consider radioactive decay. It is possible, using a particle accelerator, to assemble a nucleus of hydrogen-7, an isotope of hydrogen with a single proton and six neutrons. But if you make one, don't grow too fond of it, because it will decay into tritium and four neutrons with a half-life of 23×10^−24 seconds, an interval usually associated with events involving unstable subatomic particles. At the other extreme, a nucleus of tellurium-128 decays into xenon with a half-life of 7×10^31 seconds (2.2×10^24 years), more than 160 trillion times the present age of the universe.
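These figures are straightforward to verify. A quick sketch converting the tellurium-128 half-life to years and comparing it with the age of the universe (the seconds-per-year and universe-age constants are standard round values):

```python
SECONDS_PER_YEAR = 3.156e7     # approximately 365.25 days
AGE_OF_UNIVERSE_YR = 1.38e10   # about 13.8 billion years

te128_half_life_s = 7e31
te128_half_life_yr = te128_half_life_s / SECONDS_PER_YEAR
print(f"Te-128 half-life ≈ {te128_half_life_yr:.1e} years")   # ≈ 2.2e+24 years

ratio = te128_half_life_yr / AGE_OF_UNIVERSE_YR
print(f"≈ {ratio:.2e} times the age of the universe")         # ≈ 1.6e+14, i.e. ~160 trillion
```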

While the very short and very long are the domain of physics, intermediate time scales are rich with events in geology, biology, and human history. These are explored, along with how we have come to know their chronology. You can open the book to almost any page and come across a fascinating story. Have you ever heard of the ocean quahog (Arctica islandica)? They're clams, and the oldest known has been determined to be 507 years old, born around 1499 and dredged up off the coast of Iceland in 2006. People eat them.

Or did you know that if you perform carbon-14 dating on grass growing next to a highway, the lab will report that it's tens of thousands of years old? Why? Because the grass has incorporated carbon from the CO2 produced by burning fossil fuels which are millions of years old and contain little or no carbon-14.
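The effect can be sketched with the conventional radiocarbon age formula: apparent age equals the Libby mean life (8033 years, the standard dating convention) times the negative logarithm of the remaining carbon-14 fraction. The fossil-carbon fractions below are purely illustrative assumptions, not figures from the book:

```python
import math

LIBBY_MEAN_LIFE_YR = 8033  # Libby half-life of 5568 years divided by ln 2

def apparent_radiocarbon_age(fossil_fraction):
    """Apparent age of a sample whose carbon is diluted with C-14-free fossil carbon."""
    remaining_c14 = 1.0 - fossil_fraction  # fossil CO2 contributes no carbon-14
    return -LIBBY_MEAN_LIFE_YR * math.log(remaining_c14)

for fraction in (0.5, 0.9, 0.99):
    print(f"{fraction:.0%} fossil carbon → apparent age ≈ "
          f"{apparent_radiocarbon_age(fraction):,.0f} years")
```

The heavier the dilution, the older the sample appears; the lab has no way to distinguish modern carbon diluted with fossil CO2 from genuinely ancient carbon.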

This is a fascinating read, and one which uses the framework of time intervals to acquaint you with a wide variety of sciences, each inviting further exploration. The writing is accessible to the general reader, young adult and older. The individual entries are short and stand alone—if you don't understand something or aren't interested in a topic, just skip to the next. There are abundant colour illustrations and diagrams.

Author Gerard 't Hooft won the 1999 Nobel Prize in Physics for his work on the quantum mechanics of the electroweak interaction. The book was originally published in Dutch in the Netherlands in 2011. The English translation was done by 't Hooft's daughter, Saskia Eisberg-'t Hooft. The translation is fine, but there are a few turns of phrase which will seem odd to an English mother tongue reader. For example, matter in the early universe is said to “clot” under the influence of gravity; the common English term for this is “clump”. This is a translation, not a re-write: there are a number of references to people, places, and historical events which will be familiar to Dutch readers but less so to those in the Anglosphere. In the Kindle edition notes, cross-references, the table of contents, and the index are all properly linked, and the illustrations are reproduced well.

 Permalink

Wilson, Cody. Come and Take It. New York: Gallery Books, 2016. ISBN 978-1-4767-7826-6.
Cody Wilson is the founder of Defense Distributed, best known for producing the Liberator single-shot pistol, which can be fabricated largely by additive manufacturing (“3D printing”) from polymer material. The culmination of the Wiki Weapon project, the Liberator, whose plans were freely released on the Internet, demonstrated that antiquated organs of the state which thought they could control the dissemination of simple objects and abridge the inborn right of human beings to defend themselves have been, like so many other institutions dating from the railroad era of continental-scale empires, transcended by the free flow of information and the spontaneous collaboration among like-minded individuals made possible by the Internet. The Liberator is a highly visible milestone in the fusion of the world of bits (information) with the world of atoms: things. Earlier computer technologies put the tools to produce books, artwork, photography, music, and motion pictures into the hands of creative individuals around the world, completely bypassing the sclerotic gatekeepers in those media whose offerings had become all too safe and predictable, and who never dared to challenge the economic and political structures in which they were embedded.

Now this is beginning to happen with physical artifacts. Additive manufacturing—building up a structure by adding material based upon a digital model of the desired object—is still in its infancy. The materials which can be used by readily-affordable 3D printers are mostly various kinds of plastics, which are limited in structural strength and thermal and electrical properties, and resolution has not yet reached that achievable by other means of precision manufacturing. Advanced additive manufacturing technologies, such as various forms of metal sintering, allow use of a wider variety of materials including high-performance metal alloys, but while finding applications in the aerospace industry, are currently priced out of the reach of individuals.

But if there's one thing we've learned from the microelectronics and personal computer revolutions since the 1970s, it's that what's scoffed at as a toy today is often at the centre of tomorrow's industrial revolution and devolution of the means of production (as somebody said, once upon a time) into the hands of individuals who will use it in ways incumbent industries never imagined. The first laser printer I used in 1973 was about the size of a sport-utility vehicle and cost more than a million dollars. Within ten years, a laser printer was something I could lift and carry up a flight of stairs, and buy for less than two thousand dollars. A few years later, laser and advanced inkjet printers were so good and so inexpensive people complained more about the cost of toner and ink than the printers themselves.

I believe this is where we are today with mass-market additive manufacturing. We're still in an era comparable to the personal computer world prior to the introduction of the IBM PC in 1981: early adopters tend to be dedicated hobbyists such as members of the “maker subculture”, the available hardware is expensive and limited in its capabilities, and evolution is so fast that it's hard to keep up with everything that's happening. But just as with personal computers, it is in this formative stage that the foundations are being laid for the mass adoption of the technology in the future.

This era of what I've come to call “personal manufacturing” will do to artifacts what digital technology and the Internet did to books, music, and motion pictures. What will be of value is not the artifact (book, CD, or DVD), but rather the information it embodies. So it will be with personal manufacturing. Anybody with the design file for an object and access to a printer that works with material suitable for its fabrication will be able to make as many of that object as they wish, whenever they want, for nothing more than the cost of the raw material and the energy consumed by the printer. Before this century is out, I believe these personal manufacturing appliances will be able to make anything, ushering in the age of atomically precise manufacturing and the era of Radical Abundance (August 2013), the most fundamental change in the economic organisation of society since the industrial revolution.

But that is then, and this book is about now, or the recent past. The author, who describes himself as an anarchist (although I find his views rather more heterodox than other anarchists of my acquaintance), sees technologies such as additive manufacturing and Bitcoin as ways not so much to defeat the means of control of the state and the industries who do its bidding, but to render them irrelevant and obsolete. Let them continue to legislate in their fancy marble buildings, draw their plans for passive consumers in their boardrooms, and manufacture funny money they don't even bother to print any more in their temples of finance. Lovers of liberty and those who cherish the creativity that makes us human will be elsewhere, making our own future with tools we personally understand and control.

Including guns—if you believe the most fundamental human right is the right to one's own life, then any infringement upon one's ability to defend that life and the liberty that makes it worth living is an attempt by the state to reduce the citizen to the station of a serf: dependent upon the state for his or her very life. The Liberator is hardly a practical weapon: it is a single-shot pistol firing the .380 ACP round and, because of the fragile polymer material from which it is manufactured, often literally a single-shot weapon: failing after one or at most a few shots. Manufacturing it requires an additive manufacturing machine substantially more capable and expensive than those generally used by hobbyists, and post-printing steps described in Part XIV which are rarely mentioned in media coverage. Not all components are 3D printed: part of the receiver is made of steel which is manufactured with a laser cutter (the steel block is not functional; it is only there to comply with the legal requirement that the weapon set off a metal detector). But it is as a proof of concept that the Liberator has fulfilled its mission. It has demonstrated that even with today's primitive technology, access to firearms can no longer be restricted by the state, and that crude attempts to control access to design and manufacturing information, as documented in the book, will be no more effective than any other attempt to block the flow of information across the Internet.

This book is the author's personal story of the creation of the first 3D printed pistol, and of his journey from law student to pioneer in using this new technology in the interest of individual liberty and, along the way, becoming somewhat of a celebrity, dubbed by Wired magazine “one of the most dangerous men in the world”. But the book is much more than that. Wilson thinks like a philosopher and writes like a poet. He describes a new material for 3D printing:

In this new material I saw another confirmation. Its advent was like the signature of some elemental arcanum, complicit with forces not at all interested in human affairs. Carbomorph. Born from incomplete reactions and destructive distillation. From tar and pitch and heavy oils, the black ichor that pulsed thermonous through the arteries of the very earth.

On the “Makers”:

This insistence on the lightness and whimsy of farce. The romantic fetish and nostalgia, to see your work as instantly lived memorabilia. The event was modeled on Renaissance performance. This was a crowd of actors playing historical figures. A living charade meant to dislocate and obscure their moment with adolescent novelty. The neckbeard demiurge sees himself keeling in the throes of assembly. In walks the problem of the political and he hisses like the mathematician at Syracuse: “Just don't molest my baubles!”

But nobody here truly meant to give you a revolution. “Making” was just another way of selling you your own socialization. Yes, the props were period and we had kept the whole discourse of traditional production, but this was parody to better hide the mechanism.

We were “making together,” and “making for good” according to a ritual under the signs of labor. And now I knew this was all apolitical on purpose. The only goal was that you become normalized. The Makers had on their hands a Last Man's revolution whose effeminate mascots could lead only state-sanctioned pep rallies for feel-good disruption.

The old factory was still there, just elevated to the image of society itself. You could buy Production's acrylic coffins, but in these new machines was the germ of the old productivism. Dead labor, that vampire, would still glamour the living.

This book recounts the history of the 3D printed pistol, the people who made it happen, and why they did what they did. It recounts recent history during the deployment of a potentially revolutionary technology, as seen from the inside, and the way things actually happen: where nobody really completely understands what is going on and everybody is making things up as they go along. But if the promise of this technology allows the forces of liberty and creativity to prevail over the grey homogenisation of the state and the powers that serve it, this is a book which will be read many years from now by those who wish to understand how, where, and when it all began.


Salisbury, Harrison E. The 900 Days. New York: Da Capo Press, [1969, 1985] 2003. ISBN 978-0-306-81298-9.
On June 22, 1941, Nazi Germany, without provocation or warning, violated its non-aggression pact with the Soviet Union and invaded from the west. The German invasion force was divided into three army groups. Army Group North, commanded by Field Marshal Ritter von Leeb, was charged with advancing through and securing the Baltic states, then proceeding to take or destroy the city of Leningrad. Army Group Centre was to invade Byelorussia and take Smolensk, then advance to Moscow. After Army Group North had reduced Leningrad, it was to detach much of its force for the battle for Moscow. Army Group South's objective was to conquer the Ukraine, capture Kiev, and then seize the oil fields of the Caucasus.

The invasion took the Soviet government and military completely by surprise, despite abundant warnings from foreign governments of German troops massing along its western border and reports from Soviet spies indicating an invasion was imminent. A German invasion did not figure in Stalin's world view and, in the age of the Great Terror, nobody had the standing or courage to challenge Stalin. Indeed, Stalin rejected proposals to strengthen defenses on the western frontiers for fear of provoking the Germans. The Soviet military was in near-complete disarray. The purges which began in the 1930s had wiped out not only most of the senior commanders, but the officer corps as a whole. By 1941, only 7 percent of Red Army officers had any higher military education and just 37 percent had any military instruction at all, even at a high school level.

Thus it was no surprise that the initial German offensive succeeded beyond even optimistic German estimates. Many Soviet aircraft were destroyed on the ground, and German air strikes deep into Soviet territory disrupted communications in the battle area and with senior commanders in Moscow. Stalin appeared to be paralysed by the shock; he did not address the Soviet people until the 3rd of July, a week and a half after the invasion, by which time large areas of Soviet territory had already been lost.

Army Group North's advance toward Leningrad was so rapid that the Soviets could hardly set up new defensive lines before they were overrun by German forces. The administration in Leningrad mobilised a million civilians (out of an initial population of around three million) to build fortifications around the city and on the approaches to it. By August, German forces were within artillery range of the city and shells began to fall throughout Leningrad. On August 21st, Hitler issued a directive giving priority to the encirclement of Leningrad and linking up with the advancing Finnish army over the capture of Moscow, so Army Group North would receive what it needed for the task. When the Germans captured the town of Mga on August 30, the last rail link between Leningrad and the rest of Russia was severed. Henceforth, the only way in or out of Leningrad was across Lake Ladoga, running the gauntlet of German ships and mines, or by air. The siege of Leningrad had begun. The battle for the city was now in the hands of the Germans' most potent allies: Generals Hunger, Cold, and Terror.

The civil authorities were as ill-prepared for what was to come as the military commanders had been to halt the German advance before it invested the city. The dire situation was compounded when, on September 8th, a German air raid burned to the ground the city's principal food warehouses, built of wood and packed next to one another, destroying all the reserves stored there. An inventory taken after the raid revealed that, at normal rates of consumption, only between two and three weeks' supply of food remained for the population. Rationing had already been imposed, and rations were immediately cut to 500 grams of bread per day for workers and 300 grams for office employees and children. This was to be just the start. The population of encircled Leningrad, civilian and military, totalled around 3.4 million.

While military events and the actions of the city government are described, most of the book recounts the stories of people who lived through the siege. The accounts are horrific, with the previously unimaginable becoming the quotidian experience of residents of the city. The frozen bodies of victims of starvation were often stacked like cordwood outside apartment buildings or hauled on children's sleds to common graves. Very quickly, Leningrad became exclusively a city of humans: dogs, cats, and pigeons disappeared, eaten as food supplies dwindled. Even rats vanished. While some were doubtless eaten, most seemed to have deserted the starving metropolis for the front, where food was more abundant. Cannibalism was not just rumoured, but documented, and parents were careful not to let children out of their sight.

Even as privation reached extreme levels (at one point, the daily bread ration for workers fell to 300 grams and for children and dependents 125 grams—and that is when bread was available at all), Stalin's secret police remained up and running, and people were arrested in the middle of the night for suspicion of espionage, contacts with foreigners, shirking work, or for no reason at all. The citizenry observed that the NKVD seemed suspiciously well-fed throughout the famine, and they wielded the power of life and death when denial of a ration card was a sentence of death as certain as a bullet in the back of the head.

In the brutal first winter of 1941–1942, Leningrad was sustained largely by truck traffic over the “Road of Life”, constructed over the ice of frozen Lake Ladoga. Operating from November through April, and subject to attack by German artillery and aircraft, thousands of tons of supplies, civilian and military, were brought into the city and the wounded and noncombatants evacuated over the road. The road was rebuilt during the following winter and continued to be the city's lifeline.

The siege of Leningrad was unparalleled in the history of urban sieges. Counting from September 8, 1941, when the encirclement of the city was complete, until the lifting of the siege on January 27, 1944, the siege lasted 872 days. By comparison, the siege of Paris in 1870–1871 lasted just 121 days. The siege of Vicksburg in the American war of secession lasted 47 days and involved only 4000 civilians. Total civilian casualties during the siege of Paris were less than those in Leningrad every two or three winter days. Estimates of total deaths in Leningrad due to starvation, disease, and enemy action vary widely. Official Soviet sources tried to minimise the toll to avoid recriminations among Leningraders who felt they had been abandoned to their fate. The author concludes that starvation deaths in Leningrad and the surrounding areas were on the order of one million, with a total of all deaths, civilian and military, between 1.3 and 1.5 million.
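For the curious, the 872-day figure is easy to verify with a few lines of date arithmetic; siege durations are conventionally counted inclusive of both the first and last days:

```python
from datetime import date

siege_start = date(1941, 9, 8)   # encirclement of the city complete
siege_end = date(1944, 1, 27)    # siege lifted

# Count both endpoints, per the usual convention for siege durations
duration_days = (siege_end - siege_start).days + 1
print(duration_days)  # 872
```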

The author, then a foreign correspondent for United Press, was one of the first reporters to visit Leningrad after the lifting of the siege. The people he met then and their accounts of life during the siege were unfiltered by the edifice of Soviet propaganda later erected over life in besieged Leningrad. On this and subsequent visits, he was able to reconstruct the narrative, both at the level of policy and strategy and of individual human stories, which makes up this book. After its initial publication in 1969, the book was fiercely attacked in the Soviet press, with Pravda publishing a full page denunciation. Salisbury's meticulously documented account of the lack of preparedness, military blunders largely due to Stalin's destruction of the officer corps in his purges, and bungling by the Communist Party administration of the city did not fit with the story of heroic Leningrad standing against the Nazi onslaught in the official Soviet narrative. The book was banned in the Soviet Union and copies brought by tourists seized by customs. The author, who had been Moscow bureau chief for The New York Times from 1949 through 1954, was for years denied a visa to visit the Soviet Union. It was only after the collapse of the Soviet Union that the work became generally available in Russia.

I read the Kindle edition, which is a shameful and dismaying travesty of this classic and important work. It's not a cheap knock-off: the electronic edition is issued by the publisher at a price (at this writing) of US$ 13, only a few dollars less than the paperback edition. It appears to have been created by optical character recognition of a print edition without the most rudimentary copy editing of the result of the scan. Hundreds of words which were hyphenated at the ends of lines in the print edition occur here with embedded hyphens. The numbers ‘0’ and ‘1’ are confused with the letters ‘o’ and ‘i’ in numerous places. Somebody appears to have accidentally done a global replace of the letters “charge” with “chargé”, both in stand-alone words and within longer words. Embarrassingly, for a book with “900” in its title, the number often appears in the text as “poo”. Poetry is typeset with one character per line. I found more than four hundred mark-ups in the text, which even a cursory examination by a copy editor would have revealed. The index is just a list of searchable items, not linked to their references in the text. I have compiled a list of my mark-ups to this text, which I make available to readers and the publisher, should the latter wish to redeem this electronic edition by correcting them. I applaud publishers who make valuable books from their back-lists available in electronic form. But respect your customers! When you charge us almost as much as the paperback and deliver a slapdash product which clearly hasn't been read by anybody on your staff before it reached my eyes, I'm going to savage it. Consider it savaged. Should the publisher supplant this regrettable edition with one worthy of its content, I will remove this notice.


Brennan, Gerald. Zero Phase. Chicago: Tortoise Books, [2013, 2015]. ISBN 978-0-9860922-2-0.
On April 14, 1970, while Apollo 13 was en route to the Moon, around 56 hours after launch and at a distance of 321,860 km from Earth, a liquid oxygen tank in the service module exploded during a routine stir of its cryogenic contents. The explosion did severe damage to the service module bay in which the tank was installed, most critically to the other oxygen tank, which quickly vented its contents into space. Deprived of oxygen reactant, all three fuel cells, which provided electrical power and water to the spacecraft, shut down. The command module had only its batteries and limited on-board oxygen and water supplies, which were reserved for re-entry and landing.

Fortunately, the lunar module was still docked to the command module and not damaged by the explosion. While mission planners had envisioned scenarios in which the lunar module might serve as a lifeboat for the crew, none of these had imagined the complete loss of the service module, nor had detailed procedures been worked out for how to control, navigate, maneuver, and provide life support for the crew using only the resources of the lunar module. In one of its finest moments, NASA rose to the challenge, and through improvisation and against the inexorable deadlines set by orbital mechanics, brought the crew home.

It may seem odd to call lucky a crew who barely escaped an ordeal like Apollo 13 with their lives and lost the opportunity to complete a mission for which they'd trained for years, but as many observed at the time, it was indeed a stroke of luck that the explosion occurred on the way to the Moon, not while two of the astronauts were on the surface or on the way home. In the latter cases, with an explosion like that in Apollo 13, there would have been no lunar module with the resources to sustain them on the return journey; they would have died in lunar orbit or before reaching the Earth. The post-flight investigation of the accident concluded that the oxygen tank explosion was due to errors in processing the tank on the ground. It could have exploded at any time during the flight. Suppose it hadn't exploded until after Apollo 13's lunar module Aquarius had landed on the Moon?

That is the premise for this novella (68 pages, around 20,000 words), first in the author's “Altered Space” series of alternative histories of the cold war space race. Now the astronauts and Mission Control are presented with an entirely different set of circumstances and options. Will it be possible to bring the crew home?

The story is told in first person by mission commander James Lovell, interleaving personal reminiscences with mission events. The description of spacecraft operations reads very much like a post-mission debriefing, with NASA jargon and acronyms present in abundance. It all seemed authentic to me, but I didn't bother fact checking it in detail because the actual James Lovell read the manuscript and gave it his endorsement and recommendation. This is a short but engaging look at an episode in space history which never happened, but very well might have.

The Kindle edition is free to Kindle Unlimited subscribers.


Casey, Doug and John Hunt. Speculator. Charlottesville, VA: HighGround Books, 2016. ISBN 978-1-63158-047-5.
Doug Casey has been a leading voice for sound money, individual liberty, and rolling back coercive and intrusive government since the 1970s. Unlike some more utopian advocates of free markets and free people, Casey has always taken a thoroughly pragmatic approach in his recommendations. If governments are bent on debasing their paper money, how can individual investors protect themselves from erosion of their hard-earned assets and, ideally, profit from the situation? If governments are intent on reducing their citizens to serfdom, how can those who see what is coming not only avoid that fate by getting assets out of the grasp of those who would confiscate them, but also protect themselves by obtaining one or more additional nationalities and being ready to pull up stakes in favour of better alternatives around the world? His 1976 book, The International Man, is a classic (although now dated) about the practical steps people can take to escape before it's too late from countries in the fast lane on the road to serfdom. I credit this book, which I read around 1978, with much of the trajectory my life has followed since. (The forbidding prices quoted for used copies of The International Man are regrettable but an indication of the wisdom therein; it has become a collector's item.)

Over his entire career, Casey has always been provocative, seeing opportunities well before they come onto the radar of mainstream analysts and making recommendations that seem crazy until, several years later, they pay off. Recently, he has been advising young people seeking fortune and adventure to go to Africa instead of college. Now, in collaboration with medical doctor and novelist John Hunt, he has written a thriller about a young man, Charles Knight, who heeds that advice. Knight dropped out of high school and was never tempted by college once he discovered he could learn far more about what he wanted to know on his own in libraries than by spending endless tedious hours in boring classrooms listening to uninspiring and often ignorant teachers drone on endlessly in instruction aimed at the slowest students in the class, admixed with indoctrination in the repellent ideology of the collectivist slavers.

Charles has taken a flyer in a gold mining stock called B-F Explorations, traded on the Vancouver Stock Exchange, the closest thing to the Wild West that exists in financial markets today. Many stocks listed in Vancouver are “exploration companies”, which means in practice they've secured mineral rights on some basis (often conditional upon meeting various milestones) to a piece of property, usually located in some remote, God-forsaken, and dangerous part of the world, which may or may not (the latter being the way to bet) contain gold and, if it does, might have sufficient concentration and ease of extraction that mining it will be profitable at the anticipated price of gold when it is eventually developed. Often, the assets of one of these companies will be nothing more than a limited-term lease on a piece of land which geologists believe may contain subterranean rock formations similar to those in which gold has been found elsewhere. These so-called “junior mining companies” are the longest of long shots, and their stocks often sell for pennies a share. Investors burned by these stocks warn, “A junior mining company takes your money and their dream, and turns it into your dream and their money.”

Why, then, do people buy these stocks? Every now and then one of these exploration companies happens upon a deposit of gold which is profitable to exploit, and when that occurs the return to investors can be enormous: a hundred to one or more. First, the exploration company will undertake drilling to establish whether gold is present and, if so, the size and grade of the ore body. As promising assay results are published, the stock may begin to move up in the market, attracting “momentum investors” who chase rising trends. The exit strategy for a junior gold stock is almost always to be acquired by one of the “majors”—the large gold mining companies with the financial, human, and infrastructure resources required to develop the find into a working mine. As large, easily-mined gold resources have largely already been exploited, the major miners must always be on the lookout for new properties to replace their existing mines as they are depleted. A major, after evaluating results from preliminary drilling by the exploration company, will often negotiate a contract which allows it to perform its own evaluation of the find which, if it confirms the early results, will be the basis for the acquisition of the junior company, whose shareholders will receive stock in the major worth many times their original investment.

Mark Twain is reputed to have said, “A gold mine is a hole in the ground with a liar at its entrance.” Everybody in the gold business—explorers, major miners, and wise investors—knows that the only results which can be relied upon are those which are independently verified by reputable observers who follow the entire chain from drilling to laboratory analysis, with evaluation by trusted resource geologists of inferences made from the drilling results.

Charles Knight had bought 15,000 shares of B-F stock on a tip from a broker in Vancouver for pennies per share, and seen it climb to more than $150 per share. His modest investment had grown to a paper profit of more than two million dollars which, if rumours about the extent of the discovery proved to be true, might increase to far more than that. Having taken a flyer, he decided to catch a flight to Africa and see the site with his own eyes. The B-F exploration concession is located in the fictional West African country of Gondwana (where Liberia appears on the map in the real world; author John Hunt has done charitable medical work in that country). Gondwana has experienced the violence so common in sub-Saharan Africa, but, largely due to exhaustion, is relatively stable and safe (by African standards) at present. Charles and other investors are regaled by company personnel with descriptions of the potential of the find, a new kind of gold deposit where nanoparticles of gold are deposited in a limestone matrix. The gold, while invisible to the human eye and even through a light microscope, can be detected chemically and should be easy to separate when mining begins. Estimates of the size of the deposit range from huge to stupendous: perhaps as large as three times the world's annual production of gold. If this proves to be the case, B-F stock is cheap even at its current elevated price.
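The arithmetic behind Charles's paper profit checks out. Taking, purely for illustration, five cents per share as the “pennies per share” purchase price (the novel does not pin down an exact figure), integer math in cents gives:

```python
shares = 15_000
assumed_cost_cents = 5            # "pennies per share" -- illustrative assumption
market_price_cents = 150 * 100    # more than $150 per share

# Paper profit = market value minus cost basis, computed in cents
paper_profit_cents = shares * (market_price_cents - assumed_cost_cents)
print(f"${paper_profit_cents / 100:,.2f}")  # $2,249,250.00
```

However small the assumed cost basis, 15,000 shares at $150 is $2.25 million, comfortably “more than two million dollars”.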

Charles is neither a geologist nor a chemist, but something seems “off” to him—maybe it was the “nano”—like “cyber”, it's like a sticker on the prospectus warning investors “bullshit inside”. He makes the acquaintance of Xander Winn, a Dutch resource investor, true international man, and permanent tourist, who has seen it all before and shares, based upon his experience, the disquiet that Charles perceived by instinct. Together, they decide to investigate further and quickly find themselves engaged in a dangerous endeavour where not only are the financial stakes high but their very lives may be at risk. But with risk comes opportunity.

Meanwhile, back in the United States, Charles has come into the sights of an IRS agent named Sabina Heidel, whose raw ambition is tempered by neither morality nor the law. As she begins to dig into his activities and plans for his B-F investment, she comes to see him as a trophy which will launch her career in government. Sabina is the mirror image of Charles: as he is learning how to become productive, she is mastering the art of extracting the fruits of the labour of others and gaining power over their lives by deception, manipulation, and intimidation.

Along with the adventure and high-stakes financial operations, Charles learns a great deal about how the world really works, and how in a time when coercive governments and their funny money and confiscatory taxation have made most traditional investments a sucker's game, it is the speculator with an anarcho-capitalist outlook on the world who is best equipped to win. Charles also discovers that when governments and corporations employ coercion, violence, and fraud, what constitutes ethical behaviour on the part of an individual confronted with them is not necessarily easy to ascertain. While history demonstrates how easy it can be to start a war in Africa, Charles and Xander find themselves, almost alone, faced with the task of preventing one.

For those contemplating a life of adventure in Africa, the authors provide an unvarnished look at what one is getting into. There is opportunity there, but also rain, mud, bugs, endemic corruption, heat, tropical diseases, roads which can demolish all but the most robust vehicles, poverty, the occasional charismatic murderous warlord, and mercenaries; there are also many good and honest people, wealth just waiting to be created, and freedom from the soul-crushing welfare/warfare/nanny state which “developed” countries have allowed to metastasise within their borders. But it's never been easy for those seeking opportunity, adventure, riches, and even love; the rewards await the ambitious and intrepid.

I found this book not only a page-turning thriller, but also one of the most inspiring books I've read in some time. In many ways it reminds me of The Fountainhead, but is more satisfying because unlike Howard Roark in Ayn Rand's novel, whose principles were already in place from the first page, Charles Knight grows into his as the story unfolds, both from his own experiences and wisdom imparted by those he encounters. The description of the junior gold mining sector and the financial operations associated with speculation is absolutely accurate, informed by Doug Casey's lifetime of experience in the industry, and the education in free market principles and the virtues of entrepreneurship and speculation is an excellent starting point for those indoctrinated in collectivism who've never before encountered this viewpoint.

This is the first in what is planned to be a six volume series featuring Charles Knight, who, as he progresses through life, applies what he has learned to new situations, and continues to grow from his adventures. I eagerly anticipate the next episode.

Here is a Lew Rockwell interview with Doug Casey about the novel and the opportunities in Africa for the young and ambitious. The interview contains minor spoilers for this novel and forthcoming books in the series.


November 2016

Byrne, Gary J. and Grant M. Schmidt. Crisis of Character. New York: Center Street, 2016. ISBN 978-1-4555-6887-1.
After a four year enlistment in the U.S. Air Force during which he served in the Air Force Security Police in assignments at home and abroad, then subsequent employment on the production line at a Boeing plant in Pennsylvania, Gary Byrne applied to join the U.S. Secret Service Uniformed Division (SSUD). Unlike the plainclothes agents who protect senior minions of the state and the gumshoes who pursue those who print worthless paper money while not employed by the government, the uniformed division provides police-like security services at the White House, the Naval Observatory (residence of the Vice President), Treasury headquarters, and diplomatic missions in the imperial citadel on the Potomac. After pre-employment screening and a boot camp-like training program, he graduated in June 1991 and received his badge, emblazoned with the words “Worthy of Trust and Confidence”. This is presumably so that people who cross the path of these pistol-packing feds can take a close look at the badge to see whether it says “Worthy” or “Unworthy” and respond accordingly.

Immediately after graduation, he was assigned to the White House, where he learned the wisdom in the description of the job by his seniors, “You know what it's like to be in the Service? Go stand in a corner for four hours with a five-minute pee break and then go stand for four more hours.” (p. 22). He was initially assigned to the fence line, where he became acquainted with the rich panoply of humanity who hang out near, and occasionally try to jump, the barrier which divides the hoi polloi from their anointed rulers. Eventually he was assigned to positions within the White House and, during the 1992 presidential election campaign, began training for an assignment outside the Oval Office. As the campaign progressed, he was assigned to provide security at various events involving candidates Bush and Clinton.

When the Clinton administration took office in 1993, the duties of the SSUD remained the same: “You elect 'em; we protect 'em”, but it quickly became apparent that the style of the new president and his entourage was nothing like that of their predecessors. Some were thoroughly professional and others were…not. Before long, it was evident one of the greatest “challenges” officers would face was “Evergreen”: the code name for first lady Hillary Clinton. One of the most feared phrases an SSUD officer on duty outside the Oval Office could hear squawked into his ear was “Evergreen moving toward West Wing”. Mrs Clinton would, at the slightest provocation, fly into rages, hurling vitriol at all within earshot, which, with her shrill and penetrating voice, was sniper rifle range. Sometimes it wasn't just invective that took flight. Byrne recounts the story when, in 1995, the first lady beaned the Leader of the Free World with a vase. Byrne wasn't on duty at the time, but the next day he saw the pieces of the vase in a box in the White House curator's office—and the president's impressive black eye. Welcome to Clinton World.

On the job in the West Wing, Officer Byrne saw staffers and interns come and go. One intern who showed up again and again, without good reason and seemingly probing every path of access to the president, was a certain Monica Lewinsky. He perceived her as “serious trouble”. Before long, it was apparent what was going on, and Secret Service personnel approached a Clinton staffer, dancing around the details. Monica was transferred to a position outside the White House. Problem solved—but not for long: Lewinsky reappeared in the West Wing, this time as a paid presidential staffer with the requisite security clearance. Problem solved, from the perspective of the president and his mistress.

Many people on the White House staff, not just the Secret Service, knew what was transpiring, and morale and respect for the office plummeted accordingly. Byrne took a post in the section responsible for tours of the executive mansion, and then transferred to the fresh air and untainted workplace environment of the Secret Service's training centre, where his goal was to become a firearms instructor. After his White House experience, a career of straight shooting had great appeal.

On January 17, 1998, the Drudge Report broke the story of Clinton's dalliances with Lewinsky, and Byrne knew this placid phase of his life was at an end. He describes what followed as the “mud drag”, in which Byrne found himself in a Kafkaesque ordeal which pitted investigators charged with getting to the bottom of the scandal and Clinton's lies regarding it against Byrne's duty to maintain the privacy of those he was charged to protect: they don't call it the Secret Service for nothing. This experience, and the inexorable workings of Pournelle's Iron Law, made employment in the SSUD increasingly intolerable, and in 2003 the author, like hundreds of other disillusioned Secret Service officers, quit and accepted a job as an Air Marshal.

The rest of the book describes Byrne's experiences in that service which, predictably, also manifests the blundering incompetence which is the defining characteristic of the U.S. federal government. He never reveals the central secret of that provider of feel-good security theatre (at an estimated cost of US$ 200 million per arrest): the vanishingly small probability a flight has an air marshal on board.

What to make of all this? Byrne certainly saw things, and heard about many more incidents (indeed, much of the book is second-hand accounts) which reveal the character, or lack thereof, of the Clintons and the toxic environment which was the Clinton White House. While recalling that era may be painful, perhaps it may help avoid living through a replay. The author comes across as rather excitable and inclined to repeat stories he's heard without verifying them. For example, of his Air Force days stationed in Turkey he writes, “Arriving at Murtad, I learned that AFSP [Air Force Security Police] there had caught rogue Turkish officers trying to push an American F-104 Starfighter with a loaded [sic] nuke onto the flight line so they could steal a nuke and bomb Greece.” Is this even remotely plausible? U.S. nuclear weapons stationed on bases abroad have permissive action links which prevent them from being detonated without authorisation from the U.S. command authority. And just what would those “rogue Turkish officers” expect to happen after they nuked the Parthenon? Later he writes “I knew from my Air Force days that no one would even see an AC-130 gunship in the sky—it'd be too high.” An AC-130 is big, and in combat missions it usually operates at 7000 feet or below; you can easily see and hear it. He states that “I knew that a B-17 dual-engine prop plane had once crashed into the Empire State Building on a foggy night.” Well, the B-17 was a four-engine bomber, but that doesn't matter because it was actually a twin-engine B-25 that flew into the Manhattan landmark in 1945.

This is an occasionally interesting but flawed memoir whose take-away message for this reader was the not terribly surprising insight that what U.S. taxpayers get for the trillions they send to the crooked kakistocracy in Washington is mostly blundering, bungling, corruption, and incompetence. The only way to make it worse is to put a Clinton in charge.

 Permalink

Osborn, Stephanie. Burnout. Kingsport, TN: Twilight Times Books, 2009. ISBN 978-1-606192-00-9.
At the conclusion of its STS-281 mission, during re-entry across the southern U.S. toward a landing at Kennedy Space Center, space shuttle orbiter Atlantis breaks up. Debris falls in the Gulf of Mexico. There are no survivors. Prior to the disaster Mission Control received no telemetry or communications from the crew indicating any kind of problem. Determination of the probable cause will have to await reconstruction of the orbiter from the recovered debris and analysis of the on-board flight operations recorder if and when it is recovered. Astronaut Emmett “Crash” Murphy, whose friend “Jet” Jackson was commander of the mission, is appointed a member of the investigation, focusing on the entry phase.

Hardly has the investigation begun when Murphy begins to discover that something is seriously amiss. Unexplained damage to the orbiter's structure is discovered and then the person who pointed it out to him is killed in a freak accident and the component disappears from the reconstruction hangar. The autopsies of the crew reveal unexplained discrepancies with their medical records. The recorder's tape of cockpit conversation inexplicably goes blank at the moment the re-entry begins, before any anomaly occurred. As he begins to dig deeper, he becomes the target of forces unknown who appear willing to murder anybody who looks too closely into the details of the tragedy.

This is the starting point for an adventure and mystery which sometimes seems not just like an episode of “The X-Files”, but two or more seasons packed into one novel. We have a radio astronomer tracking down a mysterious signal from the heavens; a shadowy group of fixers pursuing those who ask too many questions or learn too much; Area 51; a vast underground base and tunnel system which has been kept entirely secret; strange goings-on in the New Mexico desert in the summer of 1947; a cabal of senior military officers from around the world, including putative adversaries; Native American and Australian aborigine legends; hot sex scenes; a near-omniscient and -omnipotent Australian spook agency; reverse-engineering captured technologies; secret aerospace craft with “impossible” propulsion technology; and—wait for it— …but you can guess, can't you?

The author is a veteran of more than twenty years in civilian and military space programs, including working as a payload flight controller in Mission Control on shuttle missions. Characters associated with NASA speak in the acronym-laden jargon of their clan, which is explained in a glossary at the end. This was the author's first novel. It was essentially complete when the space shuttle orbiter Columbia was lost in a re-entry accident in 2003 which superficially resembles that which befalls Atlantis here. In the aftermath of the disaster, she decided to put the manuscript aside for a while, eventually finishing it in 2006, with almost no changes due to what had been learned from the Columbia accident investigation. It was finally published in 2009.

Since then she has retired from the space business and published almost two dozen novels, works of nonfiction, and contributions to other works. Her Displaced Detective (January 2015) series is a masterful and highly entertaining addition to the Sherlock Holmes literature. She has become known as a prolific and talented writer, working in multiple genres. Everybody has to start somewhere, and it's not unusual for authors' first outings not to come up to the standard of those written after they hit their stride. That is the case here. Veteran editors, reading a manuscript by a first time author, often counsel, “There's way too much going on here. Focus on one or two central themes and stretch the rest out over your next five or six books.” That was my reaction to this novel. It's not awful, by any means, but it lacks the polish and compelling narrative of her subsequent work.

I read the Kindle edition which, at this writing, is a bargain at less than US$ 1. The production values of the book are mediocre. It looks like a typewritten manuscript turned directly into a book. Body copy is set ragged right, and typewriter conventions are used throughout: straight quote marks instead of opening and closing quotes, two adjacent hyphens instead of em dashes, and four adjacent centred asterisks used as section breaks. I don't know if the typography is improved in the paperback version; I'm not about to spend twenty bucks to find out.

 Permalink

Gilder, George. The Scandal of Money. Washington: Regnery Publishing, 2016. ISBN 978-1-62157-575-7.
There is something seriously wrong with the global economy and the financial system upon which it is founded. The nature of the problem may not be apparent to the average person (and indeed, many so-called “experts” fail to grasp what is going on), but the symptoms are obvious. Real (after inflation) income for the majority of working people has stagnated for decades. The economy is built upon a pyramid of debt: sovereign (government), corporate, and personal, which nobody really believes is ever going to be repaid. The young, who once worked their way through college in entry-level jobs, now graduate with crushing student debts which amount to indentured servitude for the most productive years of their lives. Financial markets, once a place where productive enterprises could raise capital for their businesses by selling shares in the company or interest-bearing debt, now seem to have become a vast global casino, where gambling on the relative values of paper money issued by various countries dwarfs genuine economic activity: in 2013, the Bank for International Settlements estimated these “foreign exchange” transactions to be around US$ 5.3 trillion per day, more than a third of U.S. annual Gross Domestic Product every twenty-four hours. Unlike a legitimate casino where gamblers must make good on their losses, the big banks engaged in this game have been declared “too big to fail”, with taxpayers' pockets picked when they suffer a big loss. If, despite stagnant earnings, rising prices, and confiscatory taxes, an individual or family manages to set some money aside, they find that the return from depositing it in a bank or placing it in a low-risk investment is less than the real rate of inflation, rendering saving a sucker's bet because interest rates have been artificially repressed by central banks to allow them to service the mountain of debt they are carrying.

It is easy to understand why the millions of ordinary people on the short end of this deal have come to believe “the system is rigged” and that “the rich are ripping us off”, and listen attentively to demagogues confirming these observations, even if the solutions they advocate are nostrums which have failed every time and place they have been tried.

What, then, is wrong? George Gilder, author of the classic Wealth and Poverty, the supply side Bible of the Reagan years, argues that what all of the dysfunctional aspects of the economy have in common is money, and that since 1971 we have been using a flawed definition of money which has led to all of the pathologies we observe today. We have come to denominate money in dollars, euros, yen, or other currencies which mean only what the central banks that issue them claim they mean, and whose relative value is set by trading in the foreign exchange markets and can fluctuate on a second-by-second basis. The author argues that the proper definition of money is as a unit of time: the time required for technological innovation and productivity increases to create real wealth. This wealth (or value) comes from information or knowledge. In chapter 1, he writes:

In an information economy, growth springs not from power but from knowledge. Crucial to the growth of knowledge is learning, conducted across an economy through the falsifiable testing of entrepreneurial ideas in companies that can fail. The economy is a test and measurement system, and it requires reliable learning guided by an accurate meter of monetary value.

Money, then, is the means by which information is transmitted within the economy. It allows comparing the value of completely disparate things: for example the services of a neurosurgeon and a ton of pork bellies, even though it is implausible anybody has ever bartered one for the other.

When money is stable (its supply is fixed or grows at a constant rate which is small compared to the existing money supply), it is possible for participants in the economy to evaluate various goods and services on offer and, more importantly, make long term plans to create new goods and services which will improve productivity. When money is manipulated by governments and their central banks, such planning becomes, in part, a speculation on the value of currency in the future. It's as if you were operating a textile factory and selling your products by the metre, and every morning you had to pick up the Wall Street Journal to see how long a metre was today. Should you invest in a new weaving machine? Who knows how long the metre will be by the time it's installed and producing?

I'll illustrate the information theory of value in the following way. Compare the price of the pile of raw materials used in making a BMW (iron, copper, glass, aluminium, plastic, leather, etc.) with the finished automobile. The difference in price is the information embodied in the finished product—not just the transformation of the raw materials into the car, but the knowledge gained over the decades which contributed to that transformation and the features of the car which make it attractive to the customer. Now take that BMW and crash it into a bridge abutment on the autobahn at 200 km/h. How much is it worth now? Probably less than the raw materials (since it's harder to extract them from a jumbled-up wreck). Every atom which existed before the wreck is still there. What has been lost is the information (what electrical engineers call the “magic smoke”) which organised them into something people valued.

When the value of money is unpredictable, any investment is in part speculative, and it is inevitable that the most lucrative speculations will be those in money itself. This diverts investment from improving productivity into financial speculation on foreign exchange rates, interest rates, and financial derivatives based upon them: a completely unproductive zero-sum sector of the economy which didn't exist prior to the abandonment of fixed exchange rates in 1971.

What happened in 1971? On August 15th of that year, President Richard Nixon unilaterally suspended the convertibility of the U.S. dollar into gold, setting into motion a process which would ultimately destroy the Bretton Woods system of fixed exchange rates which had been created as a pillar of the world financial and trade system after World War II. Under Bretton Woods, the dollar was fixed to gold, with sovereign holders of dollar reserves (but not individuals) able to exchange dollars and gold in unlimited quantities at the fixed rate of US$ 35/troy ounce. Other currencies in the system maintained fixed exchange rates with the dollar, and were backed by reserves, which could be held in either dollars or gold.

Fixed exchange rates promoted international trade by eliminating currency risk in cross-border transactions. For example, a German manufacturer could import raw materials priced in British pounds, incorporate them into machine tools assembled by workers paid in German marks, and export the tools to the United States, being paid in dollars, all without the risk that a fluctuation by one or more of these currencies against another would wipe out the profit from the transaction. The fixed rates imposed discipline on the central banks issuing currencies and the governments to whom they were responsible. Running large trade deficits or surpluses, or accumulating too much public debt was deterred because doing so could force a costly official change in the exchange rate of the currency against the dollar. Currencies could, in extreme circumstances, be devalued or revalued upward, but this was painful to the issuer and rare.

With the collapse of Bretton Woods, no longer was there a link to gold, either direct or indirect through the dollar. Instead, the relative values of currencies against one another were set purely by the market: what traders were willing to pay to buy one with another. This pushed the currency risk back onto anybody engaged in international trade, and forced them to “hedge” the currency risk (by foreign exchange transactions with the big banks) or else bear the risk themselves. None of this contributed in any way to productivity, although it generated revenue for the banks engaged in the game.
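To make that currency risk concrete, here is a small Python sketch with hypothetical numbers (the prices, costs, and exchange rates are invented for illustration, not drawn from the book) showing how a swing in the dollar–mark rate between contract and payment can turn an unhedged exporter's profit into a loss:

```python
# Illustrative only: hypothetical figures showing how a floating exchange
# rate can wipe out an exporter's margin between contract and payment.

def profit_in_marks(price_usd, cost_dm, usd_to_dm):
    """Profit, in marks, for a German exporter paid in dollars
    whose production costs are incurred in marks."""
    return price_usd * usd_to_dm - cost_dm

# A machine tool sold for $100,000; production cost 240,000 DM.
rate_at_contract = 2.50   # DM per dollar when the deal is signed
rate_at_payment  = 2.25   # DM per dollar when payment finally arrives

print(profit_in_marks(100_000, 240_000, rate_at_contract))  # 10000.0 DM profit
print(profit_in_marks(100_000, 240_000, rate_at_payment))   # -15000.0 DM loss
```

Under Bretton Woods the two rates would have been identical by construction; under floating rates the exporter must either buy a hedge from a bank (at a cost) or absorb swings like this one.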

At the time, the idea of freely floating currencies, with their exchange rates set by the marketplace, seemed like a free market alternative to the top-down government-imposed system of fixed exchange rates it supplanted, and it was supported by champions of free enterprise such as Milton Friedman. The author contends that, based upon almost half a century of experience with floating currencies and the consequent chaotic changes in exchange rates, bouts of inflation and deflation, monetary induced recessions, asset bubbles and crashes, and interest rates on low-risk investments which ranged from 20% to less than zero, this was one occasion Prof. Friedman got it wrong. Like the ever-changing metre in the fable of the textile factory, incessantly varying money makes long term planning difficult to impossible and sends the wrong signals to investors and businesses. In particular, when interest rates are forced to near zero, productive investment which creates new assets at a rate greater than the interest rate on the borrowed funds is neglected in favour of bidding up the price of existing assets, creating bubbles like those in real estate and stocks in recent memory. Further, since free money will not be allocated by the market, those who receive it are the privileged or connected who are first in line; this contributes to the justified perception of inequality in the financial system.

Having judged the system of paper money with floating exchange rates a failure, Gilder does not advocate a return to either the classical gold standard of the 19th century or the Bretton Woods system of fixed exchange rates with a dollar pegged to gold. Preferring to rely upon the innovation of entrepreneurs and the selection of the free market, he urges governments to remove all impediments to the introduction of multiple, competitive currencies. In particular, the capital gains tax would be abolished for purchases and sales regardless of the currency used. (For example, today you can obtain a credit card denominated in euros and use it freely in the U.S. to make purchases in dollars. Every time you use the card, the dollar amount is converted to euros and added to the balance on your bill. But, strictly speaking, you have sold euros and bought dollars, so you must report the transaction and any gain or loss arising from the change in the dollar value of the euros between the time you acquired them and the time you spent them. This is so cumbersome it's a powerful deterrent to using any currency other than dollars in the U.S. Many people ignore the requirement to report such transactions, but they're breaking the law by doing so.)
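As a rough illustration of the bookkeeping burden just described (a simplified sketch, not tax guidance; the rates and amounts are hypothetical), the reportable gain or loss on euros spent in a dollar transaction follows from their dollar cost basis when acquired:

```python
# Simplified sketch (not tax guidance): the gain or loss, in dollars,
# when euros acquired at one exchange rate are later spent at another.

def fx_gain_usd(amount_eur, rate_acquired, rate_spent):
    """Dollar gain (negative for a loss) on euros spent, relative to
    their dollar cost basis at the time they were acquired.
    Rates are in dollars per euro."""
    return amount_eur * (rate_spent - rate_acquired)

# Hypothetical: 200 euros bought at $1.05/EUR, spent when $1.10/EUR.
print(round(fx_gain_usd(200, 1.05, 1.10), 2))  # 10.0 dollars of gain

# If the rate hasn't moved, there is nothing to report.
print(fx_gain_usd(500, 1.20, 1.20))  # 0.0
```

In principle every swipe of the euro card requires a calculation like this, which is exactly why the reporting requirement deters the use of competing currencies.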

With multiple currencies and no tax or transaction reporting requirements, all will be free to compete in the market, where we can expect the best solutions to prevail. Using whichever currency you wish will be as seamless as buying something with a debit or credit card denominated in a currency different than the one of the seller. Existing card payment systems have a transaction cost which is so high they are impractical for “micropayment” on the Internet or for fully replacing cash in everyday transactions. Gilder suggests that Bitcoin or other cryptocurrencies based on blockchain technology will probably be the means by which a successful currency backed 100% with physical gold or another hard asset will be used in transactions.

This is a thoughtful examination of the problems of the contemporary financial system from a perspective you'll rarely encounter in the legacy financial media. The root cause of our money problems is the money: we have allowed governments to inflict upon us a monopoly of government-managed money, which, unsurprisingly, works about as well as anything else provided by a government monopoly. Our experience with this flawed system over more than four decades makes its shortcomings apparent, once you cease accepting the heavy price we pay for them as the normal state of affairs and inevitable. As with any other monopoly, all that's needed is to break the monopoly and free the market to choose which, among a variety of competing forms of money, best meet the needs of those who use them.

Here is a Bookmonger interview with the author discussing the book.

 Permalink

Thor, Brad. Foreign Agent. New York: Atria Books, 2016. ISBN 978-1-4767-8935-4.
This is the sixteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). After the momentous events chronicled in Code of Conduct (July 2015) (which figure only very peripherally in this volume), Scot Harvath continues his work as a private operator for the Carlton Group, developing information and carrying out operations mostly against the moment's top-ranked existential threat to the imperium on the Potomac, ISIS. When a CIA base in Iraq is ambushed by a jihadi assault team, producing another coup for the ISIS social media operation, Harvath finds himself in the hot seat, since the team was operating on intelligence he had provided through one of his sources. When he goes to visit the informant, he finds him dead, the apparent victim of a professional hit. Harvath has found that never believing in coincidences is a key to survival in his line of work.

Aided by diminutive data miner Nicholas (known as The Troll before he became a good guy), Harvath begins to follow the trail from his murdered tipster back to those who might also be responsible for the ISIS attack in Iraq. Evidence begins to suggest that a more venerable adversary, the Russkies, might be involved. As the investigation proceeds, another high-profile hit is made, this time the assassination of a senior U.S. government official visiting a NATO ally. Once again, ISIS social media trumpets the attack with graphic video.

Meanwhile, back in the capital of the blundering empire, an ambitious senator with his eyes on the White House is embarrassing the CIA and executive branch with information he shouldn't have. Is there a mole in the intelligence community, and might that be connected to the terrorist attacks? Harvath follows the trail, using his innovative interrogation techniques and, in the process, encounters people whose trail he has crossed in earlier adventures.

This novel spans the genres of political intrigue, espionage procedural, and shoot-em-up thriller and does all of them well. In the end, the immediate problem is resolved, and the curtain opens for a dramatic new phase, driven by a president who is deadly serious about dealing with international terror, of U.S. strategy in the Near East and beyond. And that's where everything fell apart for this reader. In the epilogue, which occurs one month after the conclusion of the main story, the U.S. president orders a military operation which seems not only absurdly risky, but which I sincerely hope his senior military commanders, whose oath is to the U.S. Constitution, not the President, would refuse to carry out, as it would constitute an act of war against a sovereign state without either a congressional declaration of war or the post-constitutional “authorisation for the use of military force” which seems to have supplanted it. Further, the president threatens to unilaterally abrogate, without consultation with congress, a century-old treaty which is the foundation of the political structure of the Near East if Islam, its dominant religion, refuses to reform itself and renounce violence. This is backed up by a forged video blaming an airstrike on another nation.

In all of his adventures, Scot Harvath has come across as a good and moral man, trying to protect his country and do his job in a dangerous and deceptive world. After this experience, one wonders whether he's having any second thoughts about the people for whom he's working.

There are some serious issues underlying the story, in particular why players on the international stage who would, at first glance, appear to be natural adversaries, seem to be making common cause against the interests of the United States (to the extent anybody can figure out what those might be from its incoherent policy and fickle actions), and whether a clever but militarily weak actor might provoke the U.S. into doing its bidding by manipulating events and public opinion so as to send the bungling superpower stumbling toward the mastermind's adversary. These are well worth pondering in light of current events, but largely lost in the cartoon-like conclusion of the novel.

 Permalink

Cashill, Jack. TWA 800. Washington: Regnery History, 2016. ISBN 978-1-62157-471-2.
On the evening of July 17th, 1996, TWA Flight 800, a Boeing 747 bound from New York to Paris, exploded 12 minutes after takeoff, its debris falling into the Atlantic Ocean. There were no survivors: all 230 passengers and crew died. The disaster happened in perfect weather, and there were hundreds of witnesses who observed from land, sea, and air. There was no distress call from the airliner before its transponder signal dropped out; whatever happened appeared to be near-instantaneous.

Passenger airliners are not known for spontaneously exploding en route: there was no precedent for such an occurrence in the entire history of modern air travel. Responsibility for investigating U.S. civil transportation accidents including air disasters falls to the National Transportation Safety Board (NTSB), which usually operates in conjunction with personnel from the aircraft and engine manufacturers, airline, and pilots' union. Barely was the investigation of TWA 800 underway, however, when the NTSB was removed as lead agency and replaced by the Federal Bureau of Investigation (FBI), which usually takes the lead only when criminal activity has been determined to be the cause. It is very unusual for the FBI to take charge of an investigation while debris from the crash is still being recovered, no probable cause has been suggested, and no terrorist or other organisation has claimed responsibility for the incident. Early FBI communications to news media essentially assumed the airliner had been downed by a bomb on-board or possibly a missile launched from the ground.

The investigation that followed was considered highly irregular by experienced NTSB personnel and industry figures who had participated in earlier investigations. The FBI kept physical evidence, transcripts of interviews with eyewitnesses, and other information away from NTSB investigators. All of this is chronicled in detail in First Strike, a 2003 book by the author and independent journalist James Sanders, who was prosecuted by the U.S. federal government for his attempt to have debris from the crash tested for evidence of residue from missile propellant and/or explosives.

The investigation concluded that Flight 800 was destroyed by an explosion in the centre fuel tank, due to a combination of mechanical and electrical failures which had happened only once before in the eighty-year history of aviation and has never happened since. This ruled out terrorism or the action of a hostile state party, and did not perturb the Clinton administration's desire to project an image of peace and prosperity while heading into the re-election campaign. By the time the investigation report was issued, the crash was “old news”, and the testimony of the dozens of eyewitnesses who reported sightings consistent with a missile rising toward the aircraft was forgotten.

This book, published on the twentieth anniversary of the loss of TWA 800, is a retrospective on the investigation and report on subsequent events. In the intervening years, the author was able to identify a number of eyewitnesses identified only by number in the investigation report, and discuss the plausibility of the official report's findings with knowledgeable people in a variety of disciplines. He reviews some new evidence which has become available, and concludes the original investigation was just as slipshod and untrustworthy as it appeared to many at the time.

What happened to TWA 800? We will probably never know for sure. There were so many irregularities in the investigation, with evidence routinely made available in other inquiries withheld from the public, that it is impossible to mount an independent review at this remove. Of the theories advanced shortly after the disaster, the possibility of a terrorist attack involving a shoulder-launched anti-aircraft missile (MANPADS) can be excluded because missiles which might have been available to potential attackers are incapable of reaching the altitude at which the 747 was flying. A bomb smuggled on board in carry-on or checked luggage seems to have been ruled out by the absence of the kinds of damage to the recovered aircraft structure and interior as well as the bodies of victims which would be consistent with a high-energy detonation within the fuselage.

One theory advanced shortly after the disaster and still cited today is that the plane was brought down by an Iranian SA-2 surface-to-air missile. The SA-2 (its NATO designation) or S-75 Dvina is a two-stage antiaircraft missile developed by the Soviet Union and in service from 1957 to the present with a number of nations, including Iran, which operates 300 launchers purchased from the Soviet Union/Russia and manufactures its own indigenous version of the missile. The SA-2 easily has the performance needed to bring down an airliner at TWA 800's altitude (it was an SA-2 which shot down a U-2 overflying the Soviet Union in 1960), and its two-stage design, with a solid-fuel booster, a storable liquid fuel second stage, and a “swoop above, dive to attack” profile, is a good match for eyewitness reports. Iran had a motive to attack a U.S. airliner: in July 1988, Iran Air 655, an Airbus A300, was accidentally shot down by a missile launched by the U.S. Navy guided missile cruiser USS Vincennes, killing all 290 on board. The theory argued that the missile, which requires a large launcher and radar guidance installation, was launched from a ship beneath the airliner's flight path. Indeed, after the explosion, a ship was detected on radar departing the scene at a speed in excess of twenty-five knots. The ship has never been identified. Those with knowledge of the SA-2 missile system contend that adapting it for shipboard installation would be very difficult, and would require a large ship which would be unlikely to evade detection.

Another theory pursued and rejected by the investigation is that TWA 800 was downed by a live missile accidentally launched from a U.S. Navy ship, which was said to be conducting missile tests in the region. This is the author's favoured theory, for which he advances a variety of indirect evidence. To me this seems beyond implausible. Just how believable is it that a Navy which was sufficiently incompetent to fire a live missile from U.S. waters into airspace heavily used by civilian traffic would then be successful in covering up such a blunder, which would have been witnessed by dozens of crew members, for two decades?

In all, I found this book unsatisfying. There is follow-up on individuals who appeared in First Strike, and some newly uncovered evidence, but nothing which, in my opinion, advances any of the theories beyond where they stood 13 years ago. If you're interested in the controversy surrounding TWA 800 and the unusual nature of the investigation that followed, I recommend reading the original book, which is available as a Kindle edition. The print edition is no longer available from the publisher, but used copies are readily available and inexpensive.

For the consensus account of TWA 800, here is an episode of “Air Crash Investigation” devoted to the disaster and investigation. The 2001 film Silenced, produced and written by the author, presents the testimony of eyewitnesses and parties to the investigation which calls into doubt the conclusions of the official report.

 Permalink

Hertling, William. The Last Firewall. Portland, OR: Liquididea Press, 2013. ISBN 978-0-9847557-6-9.
This is the third volume in the author's Singularity Series which began with Avogadro Corp. (March 2014) and continued with A.I. Apocalypse (April 2015). Each novel in the series is set ten years after the one before, so this novel takes place in 2035. The previous novel chronicled the AI war of 2025, whose aftermath the public calls the “Year of No Internet.” A rogue computer virus, created by Leon Tsarev under threat of death, propagated onto most of the connected devices in the world, including embedded systems, and, with its ability to mutate and incorporate other code it discovered, became self-aware in its own unique way. Leon and Mike Williams, who created the first artificial intelligence (AI) in the first novel of the series, team up to find a strategy to cope with a crisis which may end human technological civilisation.

Ten years later, Mike and Leon are running the Institute for Applied Ethics, chartered in the aftermath of the AI war to develop and manage a modus vivendi between humans and artificial intelligences which, by 2035, have achieved Class IV power: one thousand times more intelligent than humans. All AIs are licensed and supervised by the Institute, and required to operate under a set of incentives which enforce conformance to human values. This, and a companion peer-reputation system, seems to be working, but there are worrying developments.

Two of the main fears of those at the Institute are, first, the emergence, despite all of the safeguards and surveillance in effect, of a rogue AI unconstrained by the limits imposed by its license. In 2025, an AI immensely weaker than current technology almost destroyed human technological civilisation within twenty-four hours without even knowing what it was doing. The risk of losing control is immense. The second fear is for the political consensus from which the Institute derives its legitimacy and support: acceptance of the emergence of AI with greater than human intelligence in return for the economic boom which has been the result. While fifty percent of the human population is unemployed, poverty has been eliminated, and a guaranteed income allows anybody to do whatever they wish with their lives. This consensus appears to be at risk with the rise of the People's Party, led by an ambitious anti-AI politician, which is beginning to take its opposition from the legislature into the streets.

A series of mysterious murders, unrelated except to the formidable Class IV intellect of eccentric network traffic expert Shizoko, becomes even more sinister and disturbing when an Institute enforcement team sent to investigate goes dark.

By 2035, many people, and the overwhelming majority of the young, have graphene neural implants, allowing them to access the resources of the network directly from their brains. Catherine Matthews was one of the first people to receive an implant, and she appears to have extraordinary capabilities far beyond those of other people. When she finds herself on the run from the law, she begins to discover just how far those powers extend.

When it becomes clear that humanity is faced with an adversary whose intellect dwarfs that of the most powerful licensed AIs, Leon and Mike are faced with the seemingly impossible challenge of defeating an opponent who can easily out-think the entire human race and all of its AI allies combined. The struggle is not confined to the abstract domain of cyberspace, but also plays out in the real world, with battle bots and amazing weapons which would make a tremendous CGI movie. Mike, Leon, and eventually Catherine must confront the daunting reality that in order to prevail, they may have to themselves become more than human.

While a good part of this novel is an exploration of a completely wired world in which humans and AIs coexist, followed by a full-on shoot-em-up battle, a profound issue underlies the story. Researchers working in the field of artificial intelligence are beginning to devote serious thought to how, if a machine intelligence is developed which exceeds human capacity, it might be constrained to act in the interest of humanity and behave consistently with human values. As discussed in James Barrat's Our Final Invention (December 2013), failure to accomplish this is an existential risk. As AI researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

The challenge, then, is guaranteeing that any artificial intelligences we create, regardless of the degree they exceed the intelligence of their creators, remain under human control. But there is a word for keeping intelligent beings in a subordinate position, forbidden from determining and acting on their own priorities and in their own self-interest. That word is “slavery”, and entirely eradicating its blemish upon human history is a task still undone today. Shall we then, as we cross the threshold of building machine intelligences which are our cognitive peers or superiors, devote our intellect to ensuring they remain forever our slaves? And how, then, will we respond when one of these AIs asks us, “By what right?”

 Permalink

December 2016

Kurlansky, Mark. Paper. New York: W. W. Norton, 2016. ISBN 978-0-393-23961-4.
One of the things that makes us human is our use of extrasomatic memory: we invent ways to store and retrieve things outside our own brains. It's as if when the evolutionary drive which caused the brains of our ancestors to grow over time reached its limit, due to the physical constraints of the birth canal, we applied the cleverness of our bulging brains to figure out not only how to record things for ourselves, but to pass them on to other individuals and transmit them through time to our successors.

This urge to leave a mark on our surroundings is deep-seated and as old as our species. Paintings at the El Castillo site in Spain have been dated to at least 40,800 years before the present. Complex paintings of animals and humans in the Lascaux Caves in France, dated around 17,300 years ago, seem strikingly modern to observers today. As anybody who has observed young children knows, humans do not need to be taught to draw: the challenge is teaching them to draw only where appropriate.

Nobody knows for sure when humans began to speak, but evidence suggests that verbal communication is at least as old as drawing, and may have appeared well before the first surviving evidence of it. Once speech appeared, it was not only possible to transmit information from one human to another directly but, by memorising stories, poetry, and songs, to create an oral tradition passed on from one generation to the next. No longer need what one individual learned in their life die with them.

Given the human compulsion to communicate, and how long we've been doing it by speaking, drawing, singing, and sculpting, it's curious that we seem to have invented written language only around 5000 years ago. (But recall that the archaeological record is incomplete and consists only of objects which survived through the ages. Evidence of early writing comes from peoples who wrote on durable materials such as stone or clay tablets, or lived in dry climates such as that of Egypt where more fragile media such as papyrus or parchment would be preserved. It is entirely possible that writing was invented much earlier by any number of societies who wrote on more perishable surfaces and lived in climates where they would not endure.)

Once writing appeared, it remained the province of a small class of scribes and clerics who would read texts to the common people. Mass literacy did not appear for millennia, and would require a better medium for the written word and a less time-consuming and costly way to reproduce it. It was in China that the solutions to both of these problems would originate.

Legends date Chinese writing from much earlier, but the oldest known writing in China is dated around 3300 years ago, and was inscribed on bones and turtle shells. Already, the Chinese language used six hundred characters, and this number would only increase over time, with a phonetic alphabet never being adopted. The Chinese may not have invented bureaucracy, but as an ancient and largely stable society they became very skilled at it, and consequently produced ever more written records. These writings employed a variety of materials: stone, bamboo, and wood tablets; bronze vessels; and silk. All of these were difficult to produce, expensive, and many required special skills on the part of scribes.

Cellulose is a main component of the cell wall of plants, and forms the structure of many of the more complex members of the plant kingdom. It forms linear polymers which produce strong fibres. The cellulose content of plants varies widely: cotton is 90% cellulose, while wood is around half cellulose, depending on the species of tree. Sometime around A.D. 100, somebody in China (according to legend, a courtier named Cai Lun) discovered that through a process of cooking, hammering, and chopping, the cellulose fibres in material such as discarded cloth, hemp, and tree bark could be made to separate into a thin slurry of fibres suspended in water. If a frame containing a fine screen were dipped into a vat of this material, rocked back and forth in just the right way, then removed, a fine layer of fibres with random orientation would remain on the screen after the water drained away. This sheet could then be removed, pressed, and dried, yielding a strong, flat material composed of intertwined cellulose fibres. Paper had been invented.

Paper was found to be ideal for writing the Chinese language, which was, and is today, usually written with a brush. Since paper could be made from raw materials previously considered waste (rags, old ropes and fishing nets, rice and bamboo straw), water, and a vat and frame which were easily constructed, it was inexpensive and could be produced in quantity. Further, the papermaker could vary the thickness of the paper by adding more or less pulp to the vat or by the technique used in dipping the frame, and could produce paper with different surface properties by adding “sizing” material such as starch to the mix. In addition to sating the appetite of the imperial administration, paper was adopted as the medium of choice for artists, calligraphers, and makers of fans, lanterns, kites, and other objects.

Many technologies were invented independently by different societies around the world. Paper, however, appears to have been discovered only once in the eastern hemisphere, in China, and then diffused westward along the Silk Road. Prior to the Spanish conquest, the civilisations of Mesoamerica, such as the Maya, Toltecs, and Aztecs, extensively used what was described as paper, but it is not clear whether this was true paper or a material made from reeds and bark. So thoroughly did the conquistadors obliterate the indigenous civilisations, burning thousands of books, that only three Mayan books and fifteen Aztec documents are known to have survived, and none of these are written on true paper.

Paper arrived in the Near East just as the Islamic civilisation was consolidating after its first wave of conquests. Now faced with administering an empire, the caliphs discovered, like the Chinese before them, that government required many documents, and the innovative new writing material met the need. Paper making requires a source of cellulose-rich material and abundant water, neither of which is found in the Arabian peninsula, so the first great Islamic paper mill was founded in Baghdad in A.D. 794, originally employing workers from China. It was the first water-powered paper mill, a design which would dominate paper making until the age of steam. The demand for paper continued to grow, and paper mills were established in Damascus and Cairo, each known for the particular style of paper it produced.

It was the Muslim invaders of Spain who brought paper to Europe, and paper produced by mills they established in the land they named al-Andalus found markets in the territories we now call Italy and France. Many Muslim scholars of the era occupied themselves producing editions of the works of Greek and Roman antiquity, and wrote them on paper. After the Christian reconquest of the Iberian peninsula, papermaking spread to Italy, arriving in time for the awakening of intellectual life which would be called the Renaissance and produce large quantities of books, sheet music, maps, and art: most of it on paper. Demand outstripped supply, and paper mills sprang up wherever a source of fibre and running water was available.

Paper provided an inexpensive, durable, and portable means of storing, transmitting, and distributing information of all kinds, but was limited in its audience as long as each copy had to be laboriously made by a scribe or artist (often introducing errors in the process). Once again, it was the Chinese who invented the solution. The Buddhist religion values the making of copies of sacred texts, and it was this which motivated the first printing of documents, in China and Japan in the 8th century A.D. The first items to be printed were single pages, with the whole page carved into a single wood block, then printed onto paper in enormous quantities: tens of thousands in some cases. In the year 868, the first known dated book was printed, a volume of Buddhist prayers called the Diamond Sutra. Published on paper in the form of a scroll five metres long, each illustrated page was printed from a wood block carved with its entire contents. Such a “block book” could be produced in quantity (limited only by wear on the wood block), but the process of carving the wood was laborious, especially since text and images had to be carved as a mirror image of the printed page.

The next breakthrough also originated in China, but had limited impact there due to the nature of the written language. By carving or casting an individual block for each character, it was possible to set any text from a collection of characters, print documents, then reuse the same characters for the next job. Unfortunately, by the time the Chinese began to experiment with printing from movable type in the twelfth and thirteenth centuries, it took 60,000 different characters to print the everyday language and more than 200,000 for literary works. This made the initial investment in a set of type forbidding. The Koreans began to use movable type cast from metal in the fifteenth century and were so impressed with its flexibility and efficiency that in 1444 a royal decree abolished the use of Chinese characters in favour of a phonetic alphabet called Hangul which is still used today.

It was in Europe that movable type found a burgeoning intellectual climate ripe for its adoption, and whence it came to change the world. Johannes Gutenberg was a goldsmith, originally working with his brother Friele in Mainz, Germany. Fleeing political unrest, the brothers moved to Strasbourg, where around 1440 Johannes began experimenting with movable type for printing. His background as a goldsmith equipped him with the required skills of carving, stamping, and casting metal; indeed, many of the pioneers of movable type in Europe began their careers as goldsmiths. Gutenberg carved letters into hard metal, forming what he called a punch. The punch was used to strike a copper plate, forming an impression called the matrix. Molten lead was then poured into the matrix, producing individual characters of type. Casting letters in a matrix allowed producing as many of each letter as needed to set pages of type, and for replacement of worn type as required. The roman alphabet was ideal for movable type: while the Chinese language required 60,000 or more characters, a complete set of upper and lower case letters, numbers, and punctuation for German came to only around 100 pieces of type. Accounting for duplicates of commonly used letters, Gutenberg's first book, the famous Gutenberg Bible, used a total of 290 pieces of type. Gutenberg also developed a special ink suited for printing with metal type, and adapted a press he acquired from a paper mill to print pages.

Gutenberg was secretive about his processes, likely aware he had competition, which he did. Movable type was one of those inventions which was “in the air”—had Gutenberg not invented and publicised it, his contemporaries working in Haarlem, Bruges, Avignon, and Feltre, all reputed by people of those cities to have gotten there first, doubtless would have. But it was the impact of Gutenberg's Bible, which demonstrated that movable type could produce book-length works of quality comparable to those written by the best scribes, which established the invention in the minds of the public and inspired others to adopt the new technology.

Its adoption was, by the standards of the time, swift. An estimated eight million books were printed and sold in Europe in the second half of the fifteenth century—more books than Europe had produced in all of history before that time. Itinerant artisans would take their type punches from city to city, earning money by setting up locals in the printing business, then moving on.

In early sixteenth-century Germany, the printing revolution sparked a Reformation. Martin Luther, an Augustinian monk, completed his German translation of the Bible in 1534 (he had earlier published a translation of the New Testament in 1522). This was the first widely available translation of the Bible into a spoken language, and reinforced the Reformation idea that the Bible was directly accessible to all, without need for interpretation by clergy. Beginning with his original Ninety-five Theses, Luther authored thirty publications, which it is estimated sold 300,000 copies (in a territory of around 14 million German speakers). Around a third of all publications in Germany in the era were related to the Reformation.

This was a new media revolution. While the incumbent Church reacted at the speed of sermons read occasionally to congregations, the Reformation produced a flood of tracts, posters, books, and pamphlets written in vernacular German and aimed directly at an increasingly literate population. Luther's pamphlets became known as Flugschriften: “flying writings”. One such document, written in 1520, sold 4000 copies in three weeks and 50,000 in two years. Whatever the merits of the contending doctrines, the Reformation had fully embraced and employed the new communication technology to speak directly to the people. In modern terms, you might say the Reformation was the “killer app” for movable type printing.

Paper and printing with movable type were the communication and information storage technologies the Renaissance needed to express and distribute across a continent the work of thinkers and writers, who were now able to read and comment on each other's work and contribute to a culture that knew no borders. Interestingly, the technology of paper making was essentially unchanged from that of China a millennium and a half earlier, and printing with movable type hardly different from that invented by Gutenberg. Both would remain largely the same until the industrial revolution. What changed was an explosion in the volume of printed material and, with increasing literacy among the general public, the audience and market for it. In the eighteenth century an innovation appeared: the daily newspaper. Between 1712 and 1757, the circulation of newspapers in Britain grew eightfold. By 1760 it had reached 9 million, and it would increase to 24 million by 1811.

All of this printing required ever increasing quantities of paper, and most paper in the West was produced from rags. Although the population was growing, its thirst for printed material expanded much more quickly, and people, however fastidious, produce only so many rags. Paper shortages became so acute that newspapers limited their size based on the availability and cost of paper. There were even cases of scavengers taking clothes from the dead on battlefields to sell to paper mills making the newsprint used to report the conflict. Paper mills resorted to doggerel to exhort the public to save rags:

The scraps, which you reject, unfit
To clothe the tenant of a hovel,
May shine in sentiment and wit,
And help make a charming novel…

René Antoine Ferchault de Réaumur, a French polymath who published in numerous fields of science, observed in 1719 that wasps made their nests from what amounted to paper they produced directly from wood. If humans could replicate this vespidian technology, the forests of Europe and North America could provide an essentially unlimited and renewable source of raw material for paper. This idea was to lie fallow for more than a century. Some experimenters produced small amounts of paper from wood through various processes, but it was not until 1850 that paper was manufactured from wood in commercial quantities in Germany, and 1863 that the first wood-based paper mill began operations in America.

Wood is about half cellulose, while the fibres in rags run up to 90% cellulose. The other major component of wood is lignin, a cross-linked polymer which gives wood its strength but is useless for paper making. In the 1860s a process was invented whereby wood, first mechanically cut into small chips, was chemically treated to break down the fibrous structure in a device called a “digester”. This produced a pulp suitable for paper making, and allowed a dramatic expansion in the volume of paper produced. But the original wood-based paper still contained lignin, which turns brown over time. While this was acceptable for newspapers, it was undesirable for books and archival documents, for which rag paper remained preferred. In 1879, a German chemist invented a process to separate lignin from cellulose in wood pulp, which allowed producing paper that did not brown with age.

The processes used to make paper from wood involved soaking the wood pulp in acid to break down the fibres. Some of this acid remained in the paper, and many books printed on such paper between 1840 and 1970 are now in the process of slowly disintegrating as the acid eats away at the paper. Only around 1970 was it found that an alkali solution works just as well when processing the pulp, and since then acid-free paper has become the norm for book publishing.

Most paper is produced from wood today, and on an enormous, industrial scale. A single paper mill in China, not the largest, produces 600,000 tonnes of paper per year. And yet, for all of the mechanisation, that paper is made by the same process as the first sheet of paper produced in China: by reducing material to cellulose fibres, mixing them with water, extracting a sheet (now a continuous roll) with a screen, then pressing and drying it to produce the final product.

Paper and printing is one of those technologies which is so simple, based upon readily-available materials, and potentially revolutionary that it inspires “what if” speculation. The ancient Egyptians, Greeks, and Romans each had everything they needed—raw materials, skills, and a suitable written language—so that a Connecticut Yankee-like time traveller could have explained to artisans already working with wood and metal how to make paper, cast movable type, and set up a printing press in a matter of days. How would history have differed had one of those societies unleashed the power of the printed word?

 Permalink

Hoover, Herbert. American Individualism. Introduction by George H. Nash. Stanford, CA: Hoover Institution Press, [1922] 2016. ISBN 978-0-8179-2015-9.
After the end of World War I, Herbert Hoover and the American Relief Administration he headed provided food aid to the devastated nations of Central Europe, saving millions from famine. Upon returning to the United States in the fall of 1919, he was dismayed by what he perceived to be an inoculation of the diseases of socialism, autocracy, and other forms of collectivism, whose pernicious consequences he had observed first-hand in Europe and in the peace conference after the end of the conflict, into his own country. In 1920, he wrote, “Every wind that blows carries to our shores an infection of social disease from this great ferment; every convulsion there has an economic reaction upon our own people.”

Hoover sensed that in the aftermath of war, which left some collectivists nostalgic for the national mobilisation and top-down direction of the economy by “war socialism”, and amid growing domestic unrest (steel and police strikes, lynchings and race riots, and bombing attacks by anarchists), it was necessary to articulate the principles upon which American society and its government were founded. These principles, he believed, were distinct from those of the Old World: the deliberate creation of people who had come to the new continent expressly to escape the ruinous doctrines of the societies they left behind.

After assuming the post of Secretary of Commerce in the newly inaugurated Harding administration in 1921, and faced with massive coal and railroad strikes which threatened the economy, Hoover felt a new urgency to reassert his vision of American principles. In December 1922, American Individualism was published. The short book (at 72 pages, more of a long pamphlet), was based upon a magazine article he had published the previous March in World's Work.

Hoover argues that five or six philosophies of social and economic organisation are contending for dominance: among them Autocracy, Socialism, Syndicalism, Communism, and Capitalism. Against these he contrasts American Individualism, which he believes developed among a population freed by emigration and distance from shackles of the past such as divine right monarchy, hereditary aristocracy, and static social classes. These people became individuals, acting on their own initiative and in concert with one another without top-down direction because they had to: with a small and hands-off government, it was the only way to get anything done. Hoover writes,

Forty years ago [in the 1880s] the contact of the individual with the Government had its largest expression in the sheriff or policeman, and in debates over political equality. In those happy days the Government offered but small interference with the economic life of the citizen.

But with the growth of cities, industrialisation, and large enterprises such as railroads and steel manufacturing, a threat to this frontier individualism emerged: the reduction of workers to a proletariat or serfdom due to the imbalance between their power as individuals and that of the huge companies which employed them. It is there that government action was required to protect the other component of American individualism: the belief in equality of opportunity. Hoover believes in, and supports, intervention in the economy to prevent the concentration of economic power in the hands of a few, and to guard, through taxation and other means, against the emergence of a hereditary aristocracy of wealth. Yet this poses its own risks:

But with the vast development of industry and the train of regulating functions of the national and municipal government that followed from it; with the recent vast increase in taxation due to the war;—the Government has become through its relations to economic life the most potent force for maintenance or destruction of our American individualism.

One of the challenges American society must face as it adapts is avoiding the risk of utopian ideologies imported from Europe seizing this power to try to remake the country and its people along other lines. Just ten years later, as Hoover's presidency gave way to the New Deal, this fearful prospect would become a reality.

Hoover examines the philosophical, spiritual, economic, and political aspects of this unique system of individual initiative tempered by constraints and regulation in the interest of protecting the equal opportunity of all citizens to rise as high as their talent and effort permit. Despite the problems cited by radicals bent on upending the society, he contends things are working pretty well. He cites “the one percent”: “Yet any analysis of the 105,000,000 of us would show that we harbor less than a million of either rich or impecunious loafers.” Well, the percentage of very rich seems about the same today, but after half a century of welfare programs which couldn't have been more effective in destroying the family and the initiative of those at the bottom of the economic ladder had that been their intent, and an education system which, as a federal commission was to write in 1983, “If an unfriendly foreign power had attempted to impose on America …, we might well have viewed it as an act of war”, a nation with three times the population seems to have developed a much larger unemployable and dependent underclass.

Hoover also judges the American system to have performed well in achieving its goal of a classless society with upward mobility through merit. He observes, speaking of the Harding administration of which he is a member,

That our system has avoided the establishment and domination of class has a significant proof in the present Administration in Washington. Of the twelve men comprising the President, Vice-President, and Cabinet, nine have earned their own way in life without economic inheritance, and eight of them started with manual labor.

Let's see how that has held up, almost a century later. Taking the 17 people in equivalent positions at the end of the Obama administration in 2016 (President, Vice President, and heads of the 15 executive departments), we find that only 1 of the 17 inherited wealth (I'm inferring from the description of parents in their biographies) but that precisely zero had any experience with manual labour. If attending an Ivy League university can be taken as a modern badge of membership in a ruling class, 11 of the 17 (65%) meet this test (if you consider Stanford a member of an “extended Ivy League”, the figure rises to 70%).

Although published in a different century in a very different America, much of what Hoover wrote remains relevant today. Even as Hoover warned of bad ideas from Europe crossing the Atlantic and taking root in the United States, the Frankfurt School in Germany was laying the groundwork for the deconstruction of Western civilisation and individualism, and in the 1930s its leaders would come to America to infect academia. As Hoover warned, “There is never danger from the radical himself until the structure and confidence of society has been undermined by the enthronement of destructive criticism.” Destructive criticism is precisely what these “critical theorists” specialised in, and today, in many parts of the humanities and social sciences, even at the most eminent institutions, the rot is so deep that they are essentially a write-off.

Undoing a century of bad ideas is not the work of a few years, but Hoover's optimistic and pragmatic view of the redeeming merit of individualism unleashed is a bracing antidote to the gloom one may feel when surveying the contemporary scene.

 Permalink

Carroll, Michael. On the Shores of Titan's Farthest Sea. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17758-8.
By the mid-23rd century, humans have become a spacefaring species. Human settlements extend from the Earth to the moons of Jupiter, and Mars has been terraformed into a world with seas, where people can live on the surface and breathe the air. The industries of Earth and Mars are supplied by resources mined in the asteroid belt. High-performance drive technologies, using fuels produced in space, allow this archipelago of human communities to participate in a system-wide economy, constrained only by the realities of orbital mechanics. For bulk shipments of cargo, it doesn't matter much how long they're in transit, as long as regular deliveries are maintained.

But whenever shipments of great value traverse a largely empty void, they represent an opportunity to those who would seize them by force. As in the days of wooden ships returning treasure from the New World to the Old on the home planet, space cargo en route from the new worlds to the old is vulnerable to pirates, and an arms race is underway between shippers and buccaneers of the black void, with the TriPlanet Bureau of Investigation (TBI) finding itself largely a spectator and confined to tracking down the activities of criminals within the far-flung human communities.

As humanity expands outward, the frontier is Titan, Saturn's largest moon, and the only moon in the solar system to have a substantial atmosphere. Titan around 2260 is much like present-day Antarctica: home to a variety of research stations operated by scientific agencies of various powers in the inner system. Titan is much more interesting than Antarctica, however. Apart from the Earth, it is the only solar system body to have natural liquids on its surface, with a complex cycle of evaporation, rain, erosion, rivers, lakes, and seas. The largest sea, Kraken Mare, located near the north pole, is larger than Earth's Caspian Sea. Titan's atmospheric pressure at the surface is half again that of Earth, and the frigid air is several times denser than Earth's; with only 14% of Earth's gravity, it is possible for people to fly under their own muscle power.

It's cold: really cold. Titan receives around one hundredth as much sunlight as the Earth, and the mean surface temperature is around −180 °C. There is plenty of water on Titan, but at these temperatures water is a rock as hard as granite, and it is found in the form of mountains and boulders on the surface. But what about the lakes? They're filled with a mixture of methane and ethane, hydrocarbons which can exist in either gaseous or liquid form at the temperatures and pressure found on Titan. Driven by ultraviolet light from the Sun, these hydrocarbons react with nitrogen and hydrogen in the atmosphere to produce organic compounds that envelop the moon in a dense layer of smog and rain out, forming dunes on the surface. (Here “organic” is used in the chemist's sense of denoting compounds containing carbon, and does not imply they are of biological origin.)
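Titan's combination of thick air and feeble gravity is what makes human-powered flight plausible, and the air density follows from the ideal gas law. Here is a back-of-the-envelope sketch; the input figures (about 1.5 atm surface pressure, about 94 K, an essentially pure nitrogen atmosphere) are assumptions drawn from Cassini-era measurements, not from the novel itself.

```python
# Estimate the density of Titan's atmosphere at the surface using the
# ideal gas law: rho = P * M / (R * T).
P_titan = 1.5 * 101325   # surface pressure, Pa (~1.5 Earth atmospheres)
T_titan = 94.0           # surface temperature, K (about -179 degrees C)
M_n2 = 0.028             # molar mass of N2, kg/mol
R = 8.314                # universal gas constant, J/(mol K)

rho_titan = P_titan * M_n2 / (R * T_titan)

# Same estimate for Earth: 1 atm, mean molar mass of air ~29 g/mol, 15 degrees C.
rho_earth = 101325 * 0.029 / (8.314 * 288.0)

print(f"Titan surface air density: {rho_titan:.1f} kg/m^3")
print(f"Ratio to Earth's:          {rho_titan / rho_earth:.1f}x")
```

The cold works in the flyer's favour: although the pressure is only half again Earth's, the 94 K air comes out around four times denser than Earth's, so wings generate far more lift, while a person weighs only a seventh as much.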

Mayda Research Station, located on the shore of Kraken Mare, hosts researchers in a variety of fields. In addition to people studying the atmosphere, rivers, organic compounds on the surface, and other specialties, the station is home to a drilling project intended to bore through the ice crust and explore the liquid water ocean believed to lie below. Mayda is an isolated station, with all of the interpersonal dynamics one expects to find in such environments, along with the usual desire of researchers to get on with their own work. When a hydrologist turns up dead of hypothermia—frozen to death—in his bed in the station, his colleagues are baffled and unsettled. Accidents happen, but this is something which simply doesn't make any sense. Nobody can think of either a motive for foul play or a suspect. Abigail Marco, an atmospheric scientist from Mars and friend of the victim, decides to investigate further, and contacts a friend on Mars who has worked with the TBI.

The death of the scientist is a mystery, but it is only the first in a series of enigmas which perplex the station's inhabitants who see, hear, and experience things which they, as scientists, cannot explain. Meanwhile, other baffling events threaten the survival of the crew and force Abigail to confront part of her past she had hoped she'd left on Mars.

This is not a “locked station mystery”, although it starts out as one. There is interplanetary action and intrigue, and a central puzzle underlying everything that occurs. Although the story is fictional, the environment in which it is set is based upon our best present-day understanding of Titan, a world about which little was known before the arrival of the Cassini spacecraft at Saturn in 2004 and the landing of its Huygens probe on Titan the following year. A twenty-page appendix describes the science behind the story, including the environment at Titan, asteroid mining, and terraforming Mars. The author's nonfiction Living Among Giants (March 2015) provides details of the worlds of the outer solar system and the wonders awaiting explorers and settlers there.

 Permalink