January 2016

Waldman, Jonathan. Rust. New York: Simon & Schuster, 2015. ISBN 978-1-4516-9159-7.
In May of 1980 two activists, protesting the imprisonment of a Black Panther convicted of murder, climbed the Statue of Liberty in New York harbour, planning to unfurl a banner high on the statue. After spending a cold and windy night aloft, they descended and surrendered to the New York Police Department's Emergency Service Unit. Out of fear that the climbers might have damaged the fragile copper cladding of the monument, a comprehensive inspection was undertaken. What was found was shocking.

The structure of the Statue of Liberty was designed by Alexandre-Gustave Eiffel, and consists of an iron frame weighing 135 tons, which supports the 80 ton copper skin. As marine architects know well, a structure using two dissimilar metals such as iron and copper runs a severe risk of galvanic corrosion, especially in an environment such as the sea air of a harbour. If the iron and copper were to come into contact, an electric current would flow across the junction, and the iron would be consumed in the process. Eiffel's design prevented the iron and copper from touching one another by separating them with spacers made of asbestos impregnated with shellac.
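For those curious about the underlying electrochemistry (standard half-reactions, supplied here for reference), the more easily oxidised metal in a galvanic couple, here iron, becomes the anode and dissolves, while the nobler copper serves as the cathode, where dissolved oxygen is reduced:

$$\mathrm{Fe \rightarrow Fe^{2+} + 2e^{-}} \qquad \text{(anode: the iron is consumed)}$$

$$\mathrm{O_2 + 2\,H_2O + 4e^{-} \rightarrow 4\,OH^{-}} \qquad \text{(cathode: on the copper)}$$

All that is needed to close the circuit is an electrolyte, a role which damp, salty harbour air plays all too well.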

What Eiffel didn't anticipate is that over the years superintendents of the statue would decide to “protect” its interior by applying various kinds of paint. By 1980 eight coats of paint had accumulated, almost as thick as the copper skin. The paint trapped water between the skin and the iron frame, and the trapped water acted as an electrolyte, setting galvanic corrosion into action. One third of the rivets in the frame were damaged or missing, and some of the frame's iron ribs had lost two thirds of their material. The asbestos insulators had absorbed water and were long gone. The statue was at risk of structural failure.

A private fund-raising campaign raised US$ 277 million to restore the statue, a restoration which ended up replacing most of its internal structure. On July 4th, 1986, the restored statue was rededicated, marking its 100th anniversary.

Earth, uniquely among known worlds, has an atmosphere with free oxygen, produced by photosynthetic plants. While much appreciated by creatures like ourselves which breathe it, oxygen is a highly reactive gas and combines with many other elements, either violently in fire, or more slowly in reactions such as the rusting of metals. Further, 71% of the Earth's surface is covered by oceans, whose salty water promotes other forms of corrosion all too familiar to owners of boats. This book describes humanity's “longest war”: the battle against the corruption of our works by the inexorable chemical process of corrosion.

Consider an everyday object much more humble than the Statue of Liberty: the aluminium beverage can. The modern can is one of the most highly optimised products of engineering ever created. Around 180 billion cans are produced and consumed every year around the world: four six-packs for every living human being. Reducing the mass of each can by just one gram would result in an annual saving of 180,000 metric tons of aluminium worth almost 300 million dollars at present prices, so a long list of clever tricks has been employed to reduce the mass of cans. But it doesn't matter how light or inexpensive the can is if it explodes, leaks, or changes the flavour of its contents. Coca-Cola, with a pH of 2.75 and a witches’ brew of ingredients, under a pressure of 6 atmospheres, is as corrosive to bare aluminium as battery acid. If the inside of the can were not coated with a proprietary epoxy lining (whose composition depends upon the product being canned, and is carefully guarded by can manufacturers), the Coke would corrode through the thin walls of the can in just three days. The process of scoring the pop-top removes the coating around the score, and risks corrosion and leakage if a can is stored on its side; don't do that.
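The arithmetic behind that saving is straightforward (the per-tonne price below is my inference from the book's figures, roughly the price of aluminium when the book appeared):

$$180\times10^{9}\ \text{cans} \times 1\ \text{g/can} = 1.8\times10^{11}\ \text{g} = 180{,}000\ \text{tonnes}$$

$$180{,}000\ \text{t} \times \text{US\$}\,1{,}600/\text{t} \approx \text{US\$}\,288\ \text{million}$$

which is indeed “almost 300 million dollars”.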

The author takes us on an eclectic tour of the history of corrosion and those who battle it, from the invention of stainless steel, to the inspection of the trans-Alaska oil pipeline by sending a “pig” (essentially a robot submarine equipped with electronic sensors) down its entire length, to the evangelists for galvanizing (zinc coating) steel. We meet Dan Dunmire, the Pentagon's rust czar, who estimates that corrosion costs the military on the order of US$ 20 billion a year and describes how even the most humble of mitigation strategies can have huge payoffs. A new kind of gasket intended to prevent corrosion where radio antennas protrude through the fuselage of aircraft returned 175 times its investment in a single year. Overall return on investment in the projects funded by his office is estimated as fifty to one. We're introduced to the world of the corrosion engineer, a specialty which, while not glamorous, pays well and offers superb job security, since rust will always be with us.

Not everybody we encounter battles rust. Photographer Alyssha Eve Csük has turned corrosion into fine art. Working at the abandoned Bethlehem Steel Works in Pennsylvania, perhaps the rustiest part of the rust belt, she clandestinely scrambles around the treacherous industrial landscape in search of the beauty in corrosion.

This book mixes the science of corrosion with the stories of those who fight it, in the past and today. It is an enlightening and entertaining look into the most mundane of phenomena, but one which affects all the technological works of mankind.


Levenson, Thomas. The Hunt for Vulcan. New York: Random House, 2015. ISBN 978-0-8129-9898-6.
The history of science has been marked by discoveries in which, by observing where nobody had looked before, with new and more sensitive instruments, or at different aspects of reality, new and often surprising phenomena have been detected. But some of the most profound of our discoveries about the universe we inhabit have come from things we didn't observe, but expected to.

By the nineteenth century, one of the most solid pillars of science was Newton's law of universal gravitation. With a single equation a schoolchild could understand, it explained why objects fall, why the Moon orbits the Earth and the Earth and other planets the Sun, the tides, and the motion of double stars. But still, one wonders: is the law of gravitation exactly as Newton described, and does it work everywhere? For example, Newton's gravity gets weaker as the inverse square of the distance between two objects (for example, if you double the distance, the gravitational force is four times weaker [2² = 4]) but has unlimited range: every object in the universe attracts every other object, however weakly, regardless of distance. But might gravity not, say, weaken faster at great distances? If this were the case, the orbits of the outer planets would differ from the predictions of Newton's theory. Comparing astronomical observations to calculated positions of the planets was a way to discover such phenomena.
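For reference, the single equation in question is

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $F$ is the attractive force between masses $m_1$ and $m_2$ separated by distance $r$, and $G$ is the universal gravitational constant. A departure from the exponent 2 at great distances is precisely the sort of thing such comparisons of observation and calculation could reveal.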

In 1781 astronomer William Herschel discovered Uranus, the first planet not known since antiquity. (Uranus is dim but visible to the unaided eye and doubtless had been seen innumerable times, including by astronomers who included it in star catalogues, but Herschel was the first to note its non-stellar appearance through his telescope, originally believing it a comet.) Herschel wasn't looking for a new planet; he was observing stars for another project when he happened upon Uranus. Further observations of the object confirmed that it was moving in a slow, almost circular orbit, around twice the distance of Saturn from the Sun.

Given knowledge of the positions, velocities, and masses of the planets and Newton's law of gravitation, it should be possible to predict the past and future motion of solar system bodies for an arbitrary period of time. Working backward, comparing the predicted influence of bodies on one another with astronomical observations, the masses of the individual planets can be estimated to produce a complete model of the solar system. This great work was undertaken by Pierre-Simon Laplace who published his Mécanique céleste in five volumes between 1799 and 1825. As the middle of the 19th century approached, ongoing precision observations of the planets indicated that all was not proceeding as Laplace had foreseen. Uranus, in particular, continued to diverge from where it was expected to be after taking into account the gravitational influence upon its motion by Saturn and Jupiter. Could Newton have been wrong, and the influence of gravity different over the vast distance of Uranus from the Sun?

In the 1840s two mathematical astronomers, Urbain Le Verrier in France and John Couch Adams in Britain, working independently, investigated the possibility that Newton was right, but that an undiscovered body in the outer solar system was responsible for perturbing the orbit of Uranus. After almost unimaginably tedious calculations (done using tables of logarithms and pencil and paper arithmetic), both Le Verrier and Adams found a solution and predicted where to observe the new planet. Adams failed to persuade astronomers to look for the new world, but Le Verrier prevailed upon an astronomer at the Berlin Observatory to try, and Neptune was duly discovered within one degree (twice the apparent size of the full Moon) of his prediction.

This was Newton triumphant. Not only was the theory vindicated, it had been used, for the first time in history, to predict the existence of a previously unknown planet and tell the astronomers right where to point their telescopes to observe it. The mystery of the outer solar system had been solved. But problems remained much closer to the Sun.

The planet Mercury orbits the Sun every 88 days in an eccentric orbit which never exceeds half the Earth's distance from the Sun. It is a small world, with just 6% of the Earth's mass. As an inner planet, Mercury never appears more than 28° from the Sun, and can best be observed in the morning or evening sky when it is near its maximum elongation from the Sun. (With a telescope, it is possible to observe Mercury in broad daylight.) Flush with his success with Neptune, and rewarded with the post of director of the Paris Observatory, in 1859 Le Verrier turned his attention toward Mercury.

Again, through arduous calculations (by this time Le Verrier had a building full of minions to assist him, but so grueling was the work and so demanding a boss was Le Verrier that during his tenure at the Observatory 17 astronomers and 46 assistants quit) the influence of all of the known planets upon the motion of Mercury was worked out. If Mercury orbited a spherical Sun without other planets tugging on it, the point of its closest approach to the Sun (perihelion) in its eccentric orbit would remain fixed in space. But with the other planets exerting their gravitational influence, Mercury's perihelion should advance around the Sun at a rate of 526.7 arcseconds per century. But astronomers who had been following the orbit of Mercury for decades measured the actual advance of the perihelion as 565 arcseconds per century. This left a discrepancy of 38.3 arcseconds per century, for which there was no explanation. (The modern value, based upon more precise observations over a longer period of time, for the anomalous perihelion precession of Mercury is 43 arcseconds per century.) Although small (recall that there are 1,296,000 arcseconds in a full circle), this anomalous precession was much larger than the margin of error in observations and clearly indicated something was amiss. Could Newton be wrong?

Le Verrier thought not. Just as he had done for the anomalies of the orbit of Uranus, Le Verrier undertook to calculate the properties of an undiscovered object which could perturb the orbit of Mercury and explain the perihelion advance. He found that a planet closer to the Sun (or a belt of asteroids with equivalent mass) would do the trick. Such an object, so close to the Sun, could easily have escaped detection, as it could only be readily observed during a total solar eclipse or when passing in front of the Sun's disc (a transit). Le Verrier alerted astronomers to watch for transits of this intra-Mercurian planet.

On March 26, 1859, Edmond Modeste Lescarbault, a provincial physician in a small town and passionate amateur astronomer, turned his (solar-filtered) telescope toward the Sun. He saw a small dark dot crossing the disc of the Sun, taking one hour and seventeen minutes to transit, just as expected by Le Verrier. He communicated his results to the great man, and after a visit and detailed interrogation, the astronomer certified the doctor's observation as genuine and computed the orbit for the new planet. The popular press jumped upon the story. By February 1860, planet Vulcan was all the rage.

Other observations began to arrive, from both credible and unknown observers. Professional astronomers mounted worldwide campaigns to observe the Sun around the periods of predicted transits of Vulcan. All of the planned campaigns came up empty. Searches for Vulcan became a major focus of solar eclipse expeditions. Unless the eclipse happened to occur when Vulcan was in conjunction with the Sun, it should be readily observable when the Sun was obscured by the Moon. Eclipse expeditions prepared detailed star charts for the vicinity of the Sun to exclude known stars from the search during the fleeting moments of totality. In 1878, an international party of eclipse chasers including Thomas Edison descended on Rawlins, Wyoming, to hunt Vulcan in an eclipse crossing that frontier town. One group spotted Vulcan; others didn't. Controversy and acrimony ensued.

After 1878, most professional astronomers lost interest in Vulcan. The anomalous advance of Mercury's perihelion was mostly set aside as “one of those things we don't understand”, much as astronomers regard dark matter today. In 1915, Einstein published his theory of gravitation: general relativity. It predicted that when objects moved rapidly or gravitational fields were strong, their motion would deviate from the predictions of Newton's theory. Einstein recalled the moment when he performed the calculation of the motion of Mercury in his just-completed theory. It predicted precisely the perihelion advance observed by the astronomers. He said that his heart shuddered in his chest and that he was “beside himself with joy.”
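In modern notation, the general relativistic correction Einstein computed amounts to an extra perihelion advance, per orbit, of

$$\Delta\varphi = \frac{6\pi G M_\odot}{c^{2}\,a\,(1-e^{2})}$$

where $a$ and $e$ are the semi-major axis and eccentricity of the orbit. For Mercury ($a \approx 5.79\times10^{10}$ m, $e \approx 0.206$) this works out to about $5\times10^{-7}$ radian per orbit; multiplied by Mercury's roughly 415 orbits per century, it gives the anomalous 43 arcseconds per century, with no planet Vulcan required.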

Newton was wrong! For the extreme conditions of Mercury's orbit, so close to the Sun, Einstein's theory of gravitation is required to obtain results which agree with observation. There was no need for planet Vulcan, and now it is mostly forgotten. But the episode is instructive as to how confidence in long-accepted theories and wishful thinking can lead us astray when what might be needed is an overhaul of our most fundamental theories. A century hence, which of our beliefs will be viewed as we regard planet Vulcan today?


Ward, Jonathan H. Countdown to a Moon Launch. Cham, Switzerland: Springer International, 2015. ISBN 978-3-319-17791-5.
In the companion volume, Rocket Ranch (December 2015), the author describes the gargantuan and extraordinarily complex infrastructure which was built at the Kennedy Space Center (KSC) in Florida to assemble, check out, and launch the Apollo missions to the Moon and the Skylab space station. The present book explores how that hardware was actually used, following the “processing flow” of the Apollo 11 launch vehicle and spacecraft from the arrival of components at KSC to the moment of launch.

As intricate as the hardware was, it wouldn't have worked, nor would it have been possible to launch flawless mission after flawless mission on time, had it not been for the management tools employed to coordinate every detail of processing. Central to this was PERT (Program Evaluation and Review Technique), a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine and missile systems. PERT breaks down the progress of a project into milestones connected by activities, forming a graph of dependencies. Each activity has an estimated time to completion. A milestone might be, say, the installation of the guidance system into a launch vehicle. That milestone would depend upon the assembly of the components of the guidance system (gyroscopes, sensors, electronics, structure, etc.), each of which would depend upon their own components. Downstream, the integrated test of the launch vehicle would depend upon the installation of the guidance system. Many activities proceed in parallel and only come together when a milestone has them as its mutual dependencies. For example, the processing and installation of rocket engines is completely independent of work on the guidance system until they join at a milestone where an engine steering test is performed.

As a project progresses, the time estimates for the various activities will be confronted with reality: some will be completed ahead of schedule while others will slip due to unforeseen problems or over-optimistic initial forecasts. This, in turn, ripples downstream in the dependency graph, changing the time available for activities if the final completion milestone is to be met. For any given graph at a particular time, there will be a critical path of activities where a schedule slip of any one will delay the completion milestone. Each lower level milestone in the graph has its own critical path leading to it. As milestones are completed ahead or behind schedule, the overall critical path will shift. Knowing the critical path allows program managers to concentrate resources on items along the critical path to avoid, wherever possible, overall schedule slips (with the attendant extra costs).
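To make the critical path concrete, here is a minimal sketch of the calculation on a toy dependency graph, in Python. The activity names and durations are invented for illustration; they bear no relation to NASA's actual networks:

    # Toy PERT network: each activity maps to (duration in days,
    # list of prerequisite activities).
    activities = {
        "assemble_gyros":    (10, []),
        "build_electronics": (12, []),
        "install_guidance":  (5,  ["assemble_gyros", "build_electronics"]),
        "install_engines":   (8,  []),
        "steering_test":     (3,  ["install_guidance", "install_engines"]),
    }

    def critical_path(acts):
        # Earliest finish time of each activity, memoised; the graph
        # is acyclic, so the recursion terminates.
        finish = {}
        def ef(name):
            if name not in finish:
                duration, deps = acts[name]
                finish[name] = duration + max((ef(d) for d in deps), default=0)
            return finish[name]
        end = max(acts, key=ef)
        # Walk back from the final milestone along the latest-finishing
        # prerequisite at each step: that chain is the critical path.
        path = [end]
        while acts[path[-1]][1]:
            path.append(max(acts[path[-1]][1], key=ef))
        return finish[end], list(reversed(path))

    total, path = critical_path(activities)
    print(f"{total} days; critical path: {' -> '.join(path)}")
    # Prints: 20 days; critical path: build_electronics -> install_guidance -> steering_test

Note the asymmetry this exposes: shaving a day from build_electronics pulls in the completion date, while install_engines could slip nine days without delaying anything. Concentrating resources on the former rather than the latter is exactly what knowing the critical path allowed Apollo's managers to do.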

Now all this sounds complicated, and in a project with the scope of Apollo, it is almost bewildering to contemplate. The Launch Control Center was built with four firing rooms. Three were outfitted with all of the consoles to check out and launch a mission, but the fourth, a cavernous room, ended up being used to display and maintain the PERT charts for activities in progress. Three levels of charts were maintained. Level A was used by senior management and contained hundreds of major milestones and activities. Each of these was expanded out into a level B chart which, taken together, tracked in excess of 7000 milestones. These, in turn, were broken down into detail on level C charts, which tracked more than 40,000 activities. The level B and C charts were displayed on more than 400 square metres of wall space in the back room of firing room four. As detailed milestones were completed on the level C charts, changes would propagate through those charts, and any which affected completion dates would propagate upward to the level B and A charts.

Now, here's the most breathtaking thing about this: they did it all by hand! For most of the Apollo program, computer implementations of PERT were not available (or those that existed could not handle this level of detail). (Today, the PERT network for processing of an Apollo mission could be handled on a laptop computer.) There were dozens of analysts and clerks charged with updating the networks, with the processing flow displayed on an enormous board with magnetic strips which could be shifted around by people climbing up and down rolling staircases. Photographers would take pictures of the board which were printed and distributed to managers monitoring project status.

If PERT was essential to coordinating all of the parallel activities in preparing a spacecraft for launch, configuration control was critical to ensure that when the countdown reached T0, everything would work as expected. Just as there was a network of dependencies in the PERT chart, the individual components were tested, subassemblies were tested, assemblies of them were tested, all leading up to an integrated test of the assembled launcher and spacecraft. The successful completion of a test established a tested configuration for the item. Anything which changed that configuration in any way, for example unplugging a cable and plugging it back in, required re-testing to confirm that the original configuration had been restored. (One of the pins in the connector might not have made contact, for instance.) This was all documented by paperwork signed off by three witnesses. The mountain of paper was intimidating; there was even a slide rule calculator for estimating the cost of various kinds of paperwork.
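The discipline is easier to appreciate in miniature. Here is a toy model (entirely my own sketch, with invented names, not NASA's system) of the invariant configuration control enforces: a witnessed test certifies a configuration, and any change whatsoever, however innocent, voids that certification until the test is repeated:

    # Toy model of configuration control: a test freezes a configuration;
    # any change, however trivial, invalidates it until re-tested.
    class ConfiguredItem:
        def __init__(self, name):
            self.name = name
            self.tested = False

        def test(self, witnesses):
            # Sign-off required three witnesses.
            assert len(witnesses) >= 3, "three witnesses required"
            self.tested = True
            print(f"{self.name}: configuration tested and frozen")

        def change(self, description):
            # Even unplugging a cable and re-seating it counts as a change.
            print(f"{self.name}: {description}; re-test required")
            self.tested = False

    guidance = ConfiguredItem("guidance system")
    guidance.test(["Smith", "Jones", "Lee"])
    guidance.change("cable unplugged and plugged back in")
    assert not guidance.tested  # the earlier test no longer counts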

With all of this management superstructure it may seem a miracle that anything got done at all. But, as the end of the decade approached, the level of activity at KSC was relentless (and took a toll upon the workforce, although many recall it as the most intense and rewarding part of their careers). Several missions were processed in parallel: Apollo 11 rolled out to the launch pad while Apollo 10 was still en route to the Moon, and Apollo 12 was being assembled and tested.

To illustrate how all of these systems and procedures came together, the author takes us through the processing of Apollo 11 in detail, starting around six months before launch, when the Saturn V stages and the command, service, and lunar modules arrived independently from the contractors who built them or the NASA facilities where they had been individually tested. The original concept for KSC was that it would be an “operational spaceport” which would assemble pre-tested components into flight vehicles, run integrated system tests, and then launch them in an assembly-line fashion. In reality, the Apollo and Saturn programs never matured to this level, and were essentially development and test projects throughout. Components not only arrived at KSC with “some assembly required”; they often were subject to a blizzard of engineering change orders which required partially disassembling equipment to make modifications, then exhaustive re-tests to verify the previously tested configuration had been restored.

Apollo 11 encountered relatively few problems in processing, so experiences from other missions where problems arose are interleaved to illustrate how KSC coped with contingencies. While Apollo 16 was on the launch pad, a series of mistakes during the testing process damaged a propellant tank in the command module. The only way to repair this was to roll the entire stack back to the Vehicle Assembly Building, remove the command and service modules, return them to the spacecraft servicing building, de-mate them, pull the heat shield from the command module, change out the tank, then put everything back together, re-stack, and roll back to the launch pad. Imagine how many forms had to be filled out. The launch was delayed just one month.

The process of servicing the vehicle on the launch pad is described in detail. Many of the operations, such as filling tanks with toxic hypergolic fuel and oxidiser, which burn on contact, required evacuating the pad of all non-essential personnel and special precautions for those engaged in these hazardous tasks. As launch approached, the hurdles became higher: a Launch Readiness Review and the Countdown Demonstration Test, a full dress rehearsal of the countdown up to the moment before engine start, including fuelling all of the stages of the launch vehicle (and then de-fuelling them after conclusion of the test).

There is a wealth of detail here, including many obscure items I've never encountered before. Consider “Forward Observers”. When the Saturn V launched, most personnel and spectators were kept a safe distance of more than 5 km from the launch pad in case of calamity. But three teams of two volunteers each were stationed at sites just 2 km from the pad. They were charged with observing the first seconds of flight and, if they saw a catastrophic failure (engine explosion or cut-off, hard-over of an engine gimbal, or the rocket veering into the umbilical tower), they would signal the astronauts to fire the launch escape system and abort the mission. If this happened, the observers would then have to dive into crude shelters often frequented by rattlesnakes to ride out the fiery aftermath.

Did you know about the electrical glitch which almost brought the Skylab 2 mission to flaming catastrophe moments after launch? How lapses in handling of equipment and paperwork almost spelled doom for the crew of Apollo 13? Or the time an oxygen leak while fuelling a Saturn V booster caused cars parked near the launch pad to burst into flames? It's all here, and much more. This is an essential book for those interested in the engineering details of the Apollo project and the management miracles which made its achievements possible.


Regis, Ed. Monsters. New York: Basic Books, 2015. ISBN 978-0-465-06594-3.
In 1863, as the American Civil War raged, Count Ferdinand von Zeppelin, an ambitious young cavalry officer from the German kingdom of Württemberg, arrived in America to observe the conflict and learn its lessons for modern warfare. He arranged an audience with President Lincoln, who authorised him to travel among the Union armies. Zeppelin spent a month with General Joseph Hooker's Army of the Potomac. Accustomed to German military organisation, he was unimpressed with what he saw and left to see the sights of the new continent. While visiting Minnesota, he ascended in a tethered balloon and saw the landscape laid out below him like a military topographical map. He immediately grasped the advantage of such an eye in the sky for military purposes. He was impressed.

Upon his return to Germany, Zeppelin pursued a military career, distinguishing himself in the 1870 war with France, although he was considered “a hothead”. It was this characteristic which brought his military career to an abrupt end in 1890. Chafing under what he perceived as stifling leadership by the Prussian officer corps, he wrote directly to the Kaiser to complain. This was a bad career move; the Kaiser “promoted” him into retirement. Adrift, looking for a new career, Zeppelin seized upon controlled aerial flight, particularly for its military applications. And he thought big.

By 1890, France was at the forefront of aviation. By 1885 the first dirigible, La France, had demonstrated aerial navigation over complex closed courses and carried passengers. Built for the French army, it was just a technology demonstrator, but to Zeppelin it revealed a capability with such potential that Germany must not be left behind. He threw his energy into the effort, formed a company, raised the money, and embarked upon the construction of Luftschiff Zeppelin 1 (LZ 1).

Count Zeppelin was not a man to make small plans. Eschewing sub-scale demonstrators or technology-proving prototypes, he went directly to a full scale airship intended to be militarily useful. It was fully 128 metres long, almost two and a half times the size of La France, longer than a football field. Its rigid aluminium frame contained 17 gas bags filled with hydrogen, and it was powered by two gasoline engines. LZ 1 flew just three times. An observer from the German War Ministry reported it to be “suitable neither for military nor for non-military purposes.” Zeppelin's company closed its doors and the airship was sold for scrap.

By 1905, Zeppelin was ready to try again. On its first flight, the LZ 2 lost power and control and had to make a forced landing. Tethered to the ground at the landing site, it was caught by the wind and destroyed; it, too, was sold for scrap. The LZ 3 later flew successfully, and Zeppelin embarked upon construction of the LZ 4, which would be larger still. While attempting a twenty-four-hour endurance flight, it suffered motor failure, landed, and, while tied down, was caught by the wind. Its gas bags rubbed against one another and static electricity ignited the hydrogen, which reduced the airship to smoking wreckage.

Many people would have given up at this point, but not the redoubtable Count. The LZ 5, delivered to the military, was lost when carried away by the wind after an emergency landing and dashed against a hill. LZ 6 burned in its hangar after an engine caught fire. LZ 7, the first civilian passenger airship, crashed into a forest on its first flight and was damaged beyond repair. LZ 8, its replacement, was destroyed by a gust of wind while being walked out of its hangar.

With the outbreak of war in 1914, the airship went into combat. Germany operated 117 airships, using them for reconnaissance and even for bombing targets in England. Of the 117, fully 81 were destroyed, about half due to enemy action and half by the woes which had wrecked so many airships prior to the conflict.

Based upon this stunning record of success, after the end of the Great War, Britain decided to embark in earnest on its own airship program, building even larger airships than Germany's. Results were no better, culminating in the R100 and R101, built to provide passenger and cargo service on routes throughout the Empire. On its maiden flight to India in 1930, R101 crashed and burned in a storm while crossing France, killing 48 of the 54 on board. After the catastrophe, the R100 was retired and sold for scrap.

This did not deter the Americans, who, in addition to their technical prowess and “can do” spirit, had access to helium, produced as a by-product of their natural gas fields. Unlike hydrogen, helium is nonflammable, so the risk of fire, which had destroyed so many airships using hydrogen, was entirely eliminated. Helium does not provide as much lift as hydrogen, but this can be compensated for by increasing the size of the ship. Helium is, however, around fifty times more expensive than hydrogen, and this expense makes managing an airship in flight more difficult: while the commander of a hydrogen airship can freely “valve” gas to reduce lift when required, doing so in a helium ship is forbiddingly expensive and restricted to the most dire of emergencies.
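The lift penalty is easy to quantify (the comparison is mine, using standard sea-level densities, not figures from the book). Net lift per cubic metre is the density of air minus that of the lifting gas:

$$\rho_{\text{air}} - \rho_{\mathrm{H_2}} = 1.225 - 0.090 \approx 1.14\ \text{kg/m}^3$$

$$\rho_{\text{air}} - \rho_{\mathrm{He}} = 1.225 - 0.179 \approx 1.05\ \text{kg/m}^3$$

so helium delivers only about 92% of hydrogen's lift, and a helium ship needs roughly 9% more gas volume to carry the same load.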

The U.S. Navy believed the airship to be an ideal platform for long-range reconnaissance, anti-submarine patrols, and other missions where its endurance, speed, and ability to operate far offshore provided advantages over ships and heavier-than-air craft. Between 1921 and 1935 the Navy operated five rigid airships, three built domestically and two abroad. Four of the five crashed in storms or due to structural failure, killing dozens of crew.

This sorry chronicle leads up to a detailed recounting of the history of the Hindenburg. Originally designed to use helium, it was redesigned for hydrogen after it became clear the U.S., which had forbidden export of helium in 1927, would not grant a waiver, especially to a Germany by then under Nazi rule. The Hindenburg was enormous: at 245 metres in length, it was longer than the U.S. Capitol building and more than three times the length of a Boeing 747. It carried between 50 and 72 passengers who were served by a crew of 40 to 61, with accommodations (apart from the spartan sleeping quarters) comparable to first class on ocean liners. In 1936, the great ship made 17 transatlantic crossings without incident. On its first flight to the U.S. in 1937, it was destroyed by fire while approaching the mooring mast at Lakehurst, New Jersey. The disaster and its aftermath are described in detail. Remarkably, given the iconic images of the flaming airship falling to the ground and the structure glowing from the intense heat of combustion, of the 97 passengers and crew on board, 62 survived the disaster. (One of the members of the ground crew also died.)

Prior to the destruction of the Hindenburg, twenty-six hydrogen-filled airships had already been destroyed by fire, excluding those shot down in wartime, killing a total of 250 people. The vast majority of all rigid airships built ended in disaster—if not due to fire, then to structural failure, weather, or pilot error. Why did people continue to pursue this technology in the face of abundant evidence that it was fundamentally flawed?

The author argues that rigid airships are an example of a “pathological technology”, which he characterises as:

  1. Embracing something huge, either in size or effects.
  2. Inducing a state bordering on enthralment among its proponents…
  3. …who underplay its downsides, risks, unintended consequences, and obvious dangers.
  4. Having costs out of proportion to the benefits it is alleged to provide.

Few people would dispute that the pursuit of large airships for more than three decades in the face of repeated disasters was a pathological technology under these criteria, even setting aside the risks from using hydrogen as a lifting gas (which I believe the author over-emphasises: prior to the Hindenburg accident nobody had ever been injured on a commercial passenger flight of a hydrogen airship, and nobody gives a second thought today about boarding an airplane with 140 tonnes of flammable jet fuel in the tanks and flying across the Pacific with only two engines). Seemingly hazardous technologies can be rendered safe with sufficient experience and precautions. Large lighter than air ships were, however, inherently unsafe because they were large and lighter than air: nothing could be done about that. They were at the mercy of the weather, and had they been designed to be strong enough to withstand whatever weather conditions they might encounter, they would have been too heavy to fly. As the experience of the U.S. Navy with helium airships demonstrated, it didn't matter if you were immune to the risks of hydrogen; the ship would eventually be destroyed in a storm.

The author then moves on from airships to discuss other technologies he deems pathological, and here, in my opinion, goes off the rails. The first of these technologies is Project Plowshare, a U.S. program to explore the use of nuclear explosions for civil engineering projects such as excavation, digging of canals, creating harbours, and fracturing rock to stimulate oil and gas production. With his characteristic snark, Regis mocks the very idea of Plowshare, and yet examination of the history of the program belies this ridicule. For the suggested applications, nuclear explosions were far more economical than chemical detonations and conventional earthmoving equipment. One principal goal of Plowshare was to determine the efficacy of such explosions and whether they would pose risks (for example, release of radiation) which were unacceptable. Over 11 years, 26 nuclear tests were conducted under the program, most at the Nevada Test Site, and after a review of the results it was concluded the radiation risk was unacceptable and the results unpromising. Project Plowshare was shut down in 1977. I don't see what's remotely pathological about this. You have an idea for a new technology; you explore it in theory; conduct experiments; then decide it's not worth pursuing. Now, if you're Ed Regis, maybe you could have determined at the outset, without any of the experimental results, that the whole thing was absurd, but a great many people with in-depth knowledge of the issues involved preferred to run the experiments, take the data, and decide based upon the results. That, to me, seems the antithesis of pathological.

The next example of a pathological technology is the Superconducting Super Collider, a planned particle accelerator to be built in Texas with an accelerator ring 87.1 km in circumference, intended to collide protons at a centre of mass energy of 40 TeV. The project was approved in the late 1980s and construction was underway by the early 1990s. In 1993, Congress voted to cancel the project and work underway was abandoned. Here, the fit with “pathological technology” is even worse. Sure, the project was large, but it was mostly underground: hardly something to “enthral” anybody except physics nerds. There were no risks at all, apart from those in any civil engineering project of comparable scale. The project was cancelled because it overran its budget estimates but, even if completed, would probably have cost less than a tenth of the expenditures to date on the International Space Station, which has produced little or nothing of scientific value. How is it pathological when a project, undertaken for well-defined goals, is cancelled when those funding it, seeing its schedule slip and budget balloon beyond projections, pull the plug on it? Isn't that how things are supposed to work? Who were the seers who forecast all of this at the project's inception?

The final example of so-called pathological technology is pure spite. Ed Regis has a fine time ridiculing participants in the first 100 Year Starship symposium, a gathering to explore how and why humans might be able, within a century, to launch missions (robotic or crewed) to other star systems. This is not a technology at all, but rather an exploration of what future technologies might be able to do, and the limits imposed by the known laws of physics upon potential technologies. This is precisely the kind of “exploratory engineering” that Konstantin Tsiolkovsky engaged in when he worked out the fundamentals of space flight in the late 19th and early 20th centuries. He didn't know the details of how it would be done, but he was able to calculate, from first principles, the limits of what could be done, and to demonstrate that the laws of physics and properties of materials permitted the missions he envisioned. His work was largely ignored, which I suppose may be better than being mocked, as here.

You want a pathological technology? How about replacing reliable base load energy sources with inefficient sources available only at the whim of clouds and wind? Banning washing machines and dishwashers that work in favour of ones that don't? Replacing toilets with ones that take two flushes in order to “save water”? And all of this in order to “save the planet” from the consequences predicted by a theoretical model which has failed to predict measured results since its inception, through policies which impoverish developing countries and which, even if you accept the discredited models, will have a negligible effect on the global climate. On this scandal of our age, the author is silent. He concludes:

Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped.

Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.
