Engineering

Ackroyd, Peter. London Under. New York: Anchor Books, 2011. ISBN 978-0-307-47378-3.
Unlike New York, London grew from a swamp and its structure was moulded by the rivers that fed it. Over millennia, history has accreted in layer after layer as generations built atop the works of their ancestors. Descending into the caverns, buried rivers, sewers, subways, and infrastructure reveals the deep history, architecture and engineering, and legends of a great city.

May 2020 Permalink

Blum, Andrew. Tubes. New York: HarperCollins, 2012. ISBN 978-0-06-199493-7.
The Internet has become a routine fixture in the lives of billions of people, the vast majority of whom have hardly any idea how it works or what physical infrastructure allows them to access and share information almost instantaneously around the globe, abolishing, in a sense, the very concept of distance. And yet the Internet exists—if it didn't, you wouldn't be able to read this. So, if it exists, where is it, and what is it made of?

In this book, the author embarks upon a quest to trace the Internet from that tangle of cables connected to the router behind his couch to the hardware which enables it to communicate with its peers worldwide. The metaphor of the Internet as a cloud—simultaneously everywhere and nowhere—has become commonplace, and yet as the author begins to dig into the details, he discovers the physical Internet is nothing like a cloud: it is remarkably centralised (a large Internet exchange or “peering location” will tend to grow ever larger, since networks want to connect to a place where the greatest number of other networks connect), often grungy (when pulling fibre optic cables through century-old conduits beneath the streets of Manhattan, one's mind turns more to rats than clouds), and anything but decoupled from the details of geography (undersea cables must choose a route which minimises risk of breakage due to earthquakes and damage from ship anchors in shallow water, while taking the shortest route and connecting to the backbone at a location which will provide the lowest possible latency).

The author discovers that while much of the Internet's infrastructure is invisible to the layman, it is populated, for the most part, with people and organisations open and willing to show it off to visitors. As an amateur anthropologist, he surmises that to succeed in internetworking, those involved must necessarily be skilled in networking with one another. A visit to a NANOG gathering introduces him to this subculture and the retail politics of peering.

Finally, when non-technical people speak of “the Internet”, it isn't just the interconnectivity they're thinking of but also the data storage and computing resources accessible via the network. These also have a physical realisation in the form of huge data centres, sited based upon the availability of inexpensive electricity and cooling (a large data centre such as those operated by Google and Facebook may consume on the order of 50 megawatts of electricity and dissipate that amount of heat). While networking people tend to be gregarious bridge-builders, data centre managers view themselves as defenders of a fortress and closely guard the details of their operations from outside scrutiny. When Google was negotiating to acquire the site for their data centre in The Dalles, Oregon, they operated through an opaque front company called “Design LLC”, and required all parties to sign nondisclosure agreements. To this day, if you visit the facility, there's nothing to indicate it belongs to Google; on the second ring of perimeter fencing, there's a sign, in Gothic script, that says “voldemort industries”—don't be evil! (p. 242) (On p. 248 it is claimed that the data centre site is deliberately obscured in Google Maps. Maybe it once was, but as of this writing it is not. From above, apart from the impressive power substation, it looks no more exciting than a supermarket chain's warehouse hub.) The author finally arranges to cross the perimeter, get his retina scanned, and be taken on a walking tour around the buildings from the outside. To cap the visit, he is allowed inside to visit—the lunchroom. The food was excellent. He later visits Facebook's under-construction data centre in the area and encounters an entirely different culture, so perhaps not all data centres are Morlock territory.

The author comes across as a quintessential liberal arts major (which he was) who is alternately amused by the curious people he encounters who understand and work with actual things as opposed to words, and enthralled by the wonder of it all: transcending space and time, everywhere and nowhere, “free” services supported by tens of billions of dollars of power-gobbling, heat-belching infrastructure—oh, wow! He is also a New York collectivist whose knee-jerk reaction is “public, good; private, bad” (notwithstanding that the build-out of the Internet has been almost exclusively a private sector endeavour). He waxes poetic about the city-sponsored (paid for by grants funded by federal and state taxpayers plus loans) fibre network that The Dalles installed which, he claims, lured Google to site its data centre there. The slightest acquaintance with economics or, for that matter, arithmetic, demonstrates the absurdity of this. If you're looking for a site for a multi-billion dollar data centre, what matters is the cost of electricity and the climate (which determines cooling expenses). Compared to the price tag for the equipment inside the buildings, the cost of running a few (or a few dozen) kilometres of fibre is lost in the round-off. In fact, we know, from p. 235 that the 27 kilometre city fibre run cost US$1.8 million, while Google's investment in the data centre is several billion dollars.

These quibbles aside, this is a fascinating look at the physical substrate of the Internet. Even software people well-acquainted with the intricacies of TCP/IP may have only the fuzziest comprehension of where a packet goes after it leaves their site, and how it gets to the ultimate destination. This book provides a tour, accessible to all readers, of where the Internet comes together, and how counterintuitive its physical realisation is compared to how we think of it logically.

In the Kindle edition, end-notes are bidirectionally linked to the text, but the index is just a list of page numbers. Since the Kindle edition does include real page numbers, you can type in the number from the index, but that's hardly as convenient as books where items in the index are directly linked to the text. Citations of Internet documents in the end notes are given as URLs, but not linked; the reader must copy and paste them into a browser's address bar in order to access the documents.

September 2012 Permalink

Carlson, W. Bernard. Tesla: Inventor of the Electrical Age. Princeton: Princeton University Press, 2013. ISBN 978-0-691-16561-5.
Nikola Tesla was born in 1856 in a village in what is now Croatia, then part of the Austro-Hungarian Empire. His father and grandfather were both priests in the Orthodox church. The family was of Serbian descent, but had lived in Croatia since the 1690s among a community of other Serbs. His parents wanted him to enter the priesthood and enrolled him in school to that end. He excelled in mathematics and, building on a boyhood fascination with machines and tinkering, wanted to pursue a career in engineering. After completing high school, Tesla returned to his village where he contracted cholera and was near death. His father promised him that if he survived, he would “go to the best technical institution in the world.” After nine months of illness, Tesla recovered and, in 1875, entered the Joanneum Polytechnic School in Graz, Austria.

Tesla's university career started out brilliantly, but he came into conflict with one of his physics professors over the feasibility of designing a motor which would operate without the troublesome and unreliable commutator and brushes of existing motors. He became addicted to gambling, lost his scholarship, and dropped out in his third year. He worked as a draftsman, taught in his old high school, and eventually ended up in Prague, intending to continue his study of engineering at the Karl-Ferdinand University. He took a variety of courses, but eventually his uncles withdrew their financial support.

Tesla then moved to Budapest, where he found employment as chief electrician at the Budapest Telephone Exchange. He quickly distinguished himself as a problem solver and innovator and, before long, came to the attention of the Continental Edison Company of France, which had designed the equipment used in Budapest. He was offered and accepted a job at their headquarters in Ivry, France. Most of Edison's employees had practical, hands-on experience with electrical equipment, but lacked Tesla's formal education in mathematics and physics. Before long, Tesla was designing dynamos for lighting plants and earning a handsome salary. With his language skills (by that time, Tesla was fluent in Serbian, German, and French, and was improving his English), the Edison company sent him into the field as a trouble-shooter. This further increased his reputation and, in 1884 he was offered a job at Edison headquarters in New York. He arrived and, years later, described the formalities of entering the U.S. as an immigrant: a clerk saying “Kiss the Bible. Twenty cents!”.

Tesla had never abandoned the idea of a brushless motor. Almost all electric lighting systems in the 1880s used direct current (DC): electrons flowed in only one direction through the distribution wires. This is the kind of current produced by batteries, and the first electrical generators (dynamos) produced direct current by means of a device called a commutator. As the generator is turned by its power source (for example, a steam engine or water wheel), power is extracted from the rotating commutator by fixed brushes which press against it. The contacts on the commutator are wired to the coils in the generator in such a way that a constant direct current is maintained. When direct current is used to drive a motor, the motor must also contain a commutator which converts the direct current into a reversing flow to maintain the motor in rotary motion.

Commutators, with brushes rubbing against them, are inefficient and unreliable. Brushes wear and must eventually be replaced, and as the commutator rotates and the brushes make and break contact, sparks may be produced which waste energy and degrade the contacts. Further, direct current has a major disadvantage for long-distance power transmission. There was, at the time, no way to efficiently change the voltage of direct current. This meant that the circuit from the generator to the user of the power had to run at the same voltage the user received, say 120 volts. But at such a voltage, resistance losses in copper wires are such that over long runs most of the energy would be lost in the wires, not delivered to customers. You can increase the size of the distribution wires to reduce losses, but before long this becomes impractical due to the cost of copper it would require. As a consequence, Edison electric lighting systems installed in the 19th century had many small powerhouses, each supplying a local set of customers.

Alternating current (AC) solves the problem of power distribution. In 1881 the electrical transformer had been invented, and by 1884 high-efficiency transformers were being manufactured in Europe. Powered by alternating current (they don't work with DC), a transformer efficiently converts electrical power from one combination of voltage and current to another. For example, power might be transmitted from the generating station to the customer at 12000 volts and 1 ampere, then stepped down to 120 volts and 100 amperes by a transformer at the customer location. Resistive losses in a wire depend upon the square of the current, not the voltage, so for a given level of transmission loss, the cables distributing power at 12000 volts need only about a ten-thousandth as much copper as they would at 120 volts. For electric lighting, alternating current works just as well as direct current (as long as the frequency of the alternating current is sufficiently high that lamps do not flicker). But electricity was increasingly used to power motors, replacing steam power in factories. All existing practical motors ran on DC, so this was seen as an advantage to Edison's system.
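
To make the arithmetic concrete, here is a minimal Python sketch. The 12 kW of delivered power matches the example above; the one-ohm line resistance is an arbitrary illustrative value, not a figure from the book.

```python
# I-squared-R transmission loss for the same delivered power at two voltages.
# Assumed values: 12 kW delivered (as in the example above), 1 ohm of line
# resistance chosen purely for illustration.

def line_loss(power_w, volts, resistance_ohm):
    """Resistive loss in the line for a given delivered power and line voltage."""
    current = power_w / volts              # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

P = 12_000.0   # watts delivered to the customer
R = 1.0        # ohms of line resistance (illustrative)

loss_120 = line_loss(P, 120.0, R)       # 10 000 W lost in the line
loss_12k = line_loss(P, 12_000.0, R)    # 1 W lost in the line

print(loss_120 / loss_12k)  # 10000.0: a 100-fold voltage increase cuts loss 10 000-fold
# Equivalently, for the same loss the high-voltage line can use a conductor
# with one ten-thousandth the cross-section, hence far less copper.
```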

Tesla worked only six months for Edison. After developing an arc lighting system only to have Edison put it on the shelf after acquiring the rights to a system developed by another company, he quit in disgust. He then continued to work on an arc light system in New Jersey, but the company to which he had licensed his patents failed, leaving him only with a worthless stock certificate. To support himself, Tesla worked repairing electrical equipment and even digging ditches, where one of his foremen introduced him to Alfred S. Brown, who had made his career in telegraphy. Tesla showed Brown one of his patents, for a “thermomagnetic motor”, and Brown contacted Charles F. Peck, a lawyer who had made his fortune in telegraphy. Together, Peck and Brown saw the potential for the motor and other Tesla inventions and in April 1887 founded the Tesla Electric Company, with its laboratory in Manhattan's financial district.

Tesla immediately set out to make his dream of a brushless AC motor a practical reality and, by using multiple AC currents, out of phase with one another (the polyphase system), he was able to create a magnetic field which itself rotated. The rotating magnetic field induced a current in the rotating part of the motor, which would start and turn without any need for a commutator or brushes. Tesla had invented what we now call the induction motor. He began to file patent applications for the motor and the polyphase AC transmission system in the fall of 1887, and by May of the following year had been granted a total of seven patents on various aspects of the motor and polyphase current.

One disadvantage of the polyphase system and motor was that it required multiple pairs of wires to transmit power from the generator to the motor, which increased cost and complexity. Also, existing AC lighting systems, which were beginning to come into use, primarily in Europe, used a single phase and two wires. Tesla invented the split-phase motor, which would run on a two wire, single phase circuit, and this was quickly patented.

Unlike Edison, who had built an industrial empire based upon his inventions, Tesla, Peck, and Brown had no interest in founding a company to manufacture Tesla's motors. Instead, they intended to shop around and license the patents to an existing enterprise with the resources required to exploit them. George Westinghouse had developed his inventions of air brakes and signalling systems for railways into a successful and growing company, and was beginning to compete with Edison in the electric light industry, installing AC systems. Westinghouse was a prime prospect to license the patents, and in July 1888 a deal was concluded for cash, notes, and a royalty for each horsepower of motors sold. Tesla moved to Pittsburgh, where he spent a year working in the Westinghouse research lab improving the motor designs. While there, he filed an additional fifteen patent applications.

After leaving Westinghouse, Tesla took a trip to Europe where he became fascinated with Heinrich Hertz's discovery of electromagnetic waves. Produced by alternating current at frequencies much higher than those used in electrical power systems (Hertz used a spark gap to produce them), here was a demonstration of transmission of electricity through thin air—with no wires at all. This idea was to inspire much of Tesla's work for the rest of his life. By 1891, he had invented a resonant high frequency transformer which we now call a Tesla coil, and before long was performing spectacular demonstrations of artificial lightning, illuminating lamps at a distance without wires, and demonstrating new kinds of electric lights far more efficient than Edison's incandescent bulbs. Tesla's reputation as an inventor was equalled by his talent as a showman in presentations before scientific societies and the public in both the U.S. and Europe.

Oddly, for someone with Tesla's academic and practical background, there is no evidence that he mastered Maxwell's theory of electromagnetism. He believed that the phenomena he observed with the Tesla coil and other apparatus were not due to the Hertzian waves predicted by Maxwell's equations, but rather something he called “electrostatic thrusts”. He was later to build a great edifice of mistaken theory on this crackpot idea.

By 1892, plans were progressing to harness the hydroelectric power of Niagara Falls. Transmission of this power to customers was central to the project: around one fifth of the American population lived within 400 miles of the falls. Westinghouse bid Tesla's polyphase system and, with Tesla's help in persuading the committee charged with evaluating proposals, was awarded the contract in 1893. By November of 1896, power from Niagara reached Buffalo, twenty miles away, and over the next decade extended throughout New York. The success of the project made polyphase power transmission the technology of choice for most electrical distribution systems, and it remains so to this day. In 1895, the New York Times wrote:

Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's inventions of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.

February 2016 Permalink

Clark, John D. Ignition! New Brunswick, NJ: Rutgers University Press, 1972. ISBN 978-0-8135-0725-5.
This may be the funniest book about chemistry ever written. In the golden age of science fiction, one recurring theme was the search for a super “rocket fuel” (with “fuel” used to mean “propellant”) which would enable the exploits depicted in the stories. In the years between the end of World War II and the winding down of the great space enterprise with the conclusion of the Apollo project, a small band of researchers (no more than 200 in the U.S., of whom around fifty were lead scientists), many of whom had grown up reading golden age science fiction, found themselves tasked to make their boyhood dreams real—to discover exotic propellants which would allow rockets to accomplish missions envisioned not just by visionaries but also by the hard-headed military men who, for the most part, paid the bills.

Propulsion chemists are a rare and special breed. As Isaac Asimov (who worked with the author during World War II) writes in a short memoir at the start of the book:

Now, it is clear that anyone working with rocket fuels is outstandingly mad. I don't mean garden-variety crazy or merely raving lunatic. I mean a record-shattering exponent of far-out insanity.

There are, after all, some chemicals that explode shatteringly, some that flame ravenously, some that corrode hellishly, some that poison sneakily, and some that stink stenchily. As far as I know, though, only liquid rocket fuels have all these delightful properties combined into one delectable whole.

And yet amazingly, as head of propulsion research at the Naval Air Rocket Test Station and its successor organisation for seventeen years, the author not only managed to emerge with all of his limbs and digits intact, but his laboratory never suffered a single time-lost mishap. This, despite routinely working with substances such as:

Chlorine trifluoride, ClF3, or “CTF” as the engineers insist on calling it, is a colorless gas, a greenish liquid, or a white solid. … It is also quite probably the most vigorous fluorinating agent in existence—much more vigorous than fluorine itself. … It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water—with which it reacts explosively. It can be kept in some of the ordinary structural metals—steel, copper, aluminum, etc.—because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminum keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes. (p. 73)

And ClF3 is pretty benign compared to some of the other dark corners of chemistry into which their research led them. There is extensive coverage of the quest for a high energy monopropellant, the discovery of which would greatly simplify the design of turbomachinery and injectors, and eliminate problems with differential thermal behaviour and mixture ratio over the operating range of an engine which used it. However, the author reminds us:

A monopropellant is a liquid which contains in itself both the fuel and the oxidizer…. But! Any intimate mixture of a fuel and an oxidizer is a potential explosive, and a molecule with one reducing (fuel) end and one oxidizing end, separated by a pair of firmly crossed fingers, is an invitation to disaster. (p. 10)

One gets an excellent sense of just how empirical all of this was. For example, in the quest for “exotic fuel” (which the author defines as “It's expensive, it's got boron in it, and it probably doesn't work.”), straightforward inorganic chemistry suggested that burning a borane with hydrazine, for example:

2B5H9 + 5N2H4 ⟶ 10BN + 19H2

would be a storable propellant with a specific impulse (Isp) of 326 seconds at a combustion chamber temperature of just 2000°K. But this reaction and the calculation of its performance assume equilibrium conditions and, apart from a detonation (something else with which propulsion chemists are well acquainted), there are few environments as far from equilibrium as a rocket combustion chamber. In fact, when you try to fire these propellants in an engine, you discover the reaction products actually include elemental boron and ammonia, which result in disappointing performance. Check another one off the list.

Other promising propellants ran afoul of economic considerations and engineering constraints. The lithium, fluorine, and hydrogen tripropellant system has been measured (not theoretically calculated) to have a vacuum Isp of an astonishing 542 seconds at a chamber pressure of only 500 psi and temperature of 2200°K. (By comparison, the space shuttle main engine has a vacuum Isp of 452.3 sec. with a chamber pressure of 2994 psi and temperature of 3588°K; a nuclear thermal rocket would have an Isp in the 850–1000 sec. range. Recall that the relationship between Isp and mass ratio is exponential.) This level of engine performance makes a single stage to orbit vehicle not only feasible but relatively straightforward to engineer. Unfortunately, there is a catch or, to be precise, a list of catches. Lithium and fluorine are both relatively scarce and very expensive in the quantities which would be required to launch from the Earth's surface. They are also famously corrosive and toxic, and then you have to cope with designing an engine in which two of the propellants are cryogenic fluids and the third is a metal which is solid below 180°C. In the end, the performance (which is breathtaking for a chemical rocket) just isn't worth the aggravation.
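
The exponential relationship mentioned above is the Tsiolkovsky rocket equation. Here is a rough Python sketch comparing the two engines cited in the text; the 9.3 km/s of delta-v to reach low Earth orbit is an assumed round figure, not a number from the book.

```python
# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf).
# Compare the liftoff-to-burnout mass ratios implied by the two vacuum Isp
# figures quoted in the text, for an assumed ~9.3 km/s to reach orbit.
import math

G0 = 9.80665        # m/s^2, standard gravity
DELTA_V = 9300.0    # m/s, assumed total delta-v to low Earth orbit

def mass_ratio(isp_seconds, delta_v=DELTA_V):
    """Initial-to-final mass ratio required to deliver delta_v."""
    return math.exp(delta_v / (isp_seconds * G0))

for name, isp in (("SSME, LOX/LH2", 452.3), ("Li/F2/H2 tripropellant", 542.0)):
    r = mass_ratio(isp)
    dry = 1.0 / r   # structure plus payload as a fraction of liftoff mass
    print(f"{name}: mass ratio {r:.1f}, {dry:.1%} of liftoff mass reaches orbit")
# The extra ~90 seconds of Isp leaves roughly 40% more mass for structure and
# payload, which is why single stage to orbit starts to look straightforward.
```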

In the final chapter, the author looks toward the future of liquid rocket propulsion and predicts, entirely correctly from a perspective four decades removed, that chemical propulsion was likely to continue to use the technologies upon which almost all rockets had settled by 1970: LOX/hydrocarbon for large first stages, LOX/LH2 for upper stages, and N2O4/hydrazine for storable missiles and in-space propulsion. In the end economics won out over the potential performance gains to be had from the exotic (and often far too exciting) propellants the author and his colleagues devoted their careers to exploring. He concludes as follows.

There appears to be little left to do in liquid propellant chemistry, and very few important developments to be anticipated. In short, we propellant chemists have worked ourselves out of a job. The heroic age is over.

But it was great fun while it lasted. (p. 192)

Now if you've decided that you just have to read this book and innocently click on the title above to buy a copy, you may be at as much risk of a heart attack as those toiling in the author's laboratory. This book has been out of print for decades and is considered such a classic, for its unique coverage of the golden age of liquid propellant research, its comprehensive description of the many avenues explored and eventually abandoned, its hands-on chemist-to-chemist presentation of the motivation for projects and the adventures in synthesising and working with these frisky molecules, not to mention the often laugh-out-loud writing, that used copies, when they are available, sell for hundreds of dollars. As I am writing these remarks, seven copies are offered at Amazon at prices ranging from US$300–595. Now, this is a superb book, but it isn't that good!

If, however, you type the author's name and the title of the book into an Internet search engine, you will probably quickly come across a PDF edition consisting of scanned pages of the original book. I'm not going to link to it here, both because I don't link to works which violate copyright as a matter of principle and since my linking to a copy of the PDF edition might increase its visibility and risk of being taken down. I am not one of those people who believes “information wants to be free”, but I also doubt John Clark would have wanted his unique memoir and invaluable reference to be priced entirely beyond the means of the vast majority of those who would enjoy and be enlightened by reading it. In the case of “orphaned works”, I believe the moral situation is ambiguous (consider: if you do spend a fortune for a used copy of an out of print book, none of the proceeds benefit the author or publisher in any way). You make the call.

April 2012 Permalink

Courland, Robert. Concrete Planet. Amherst, NY: Prometheus Books, 2011. ISBN 978-1-61614-481-4.
Visitors to Rome are often stunned when they see the Pantheon and learn it was built almost 19 centuries ago, during the reign of the emperor Hadrian. From the front, the building has a classical style echoed in neo-classical government buildings around the world, but as visitors walk inside, it is the amazing dome which causes them to gasp. At 43.3 metres in diameter, it was the largest dome ever built in its time, and no larger dome has, in all the centuries since, ever been built in the same way. The dome of the Pantheon is a monolithic structure of concrete, whose beauty and antiquity attests to the versatility and durability of this building material which has become a ubiquitous part of the modern world.

To the ancients, who built from mud, stone, and later brick, it must have seemed like a miracle to discover a material which, mixed with water, could be moulded into any form and would harden into stone. Nobody knows how or where it was discovered that by heating natural limestone to a high temperature it could be transformed into quicklime (calcium oxide), a corrosive substance which reacts exothermically with water, solidifying into a hard substance. The author speculates that the transformation of limestone into quicklime due to lightning strikes may have been discovered in Turkey and applied to production of quicklime by a kilning process, but the evidence for this is sketchy. But from the neolithic period, humans discovered how to make floors from quicklime and a binder, and this technology remained in use until the 19th century.

None of these early lime-based mortars could set underwater, and all were vulnerable to attack by caustic chemicals. It was the Romans who discovered that by mixing volcanic ash (pozzolan), which was available to them in abundance from the vicinity of Mt. Vesuvius, with lime, it was possible to create a “hydraulic cement” which could set underwater and was resistant to attack from the elements. In addition to structures like the Pantheon, the Colosseum, roads, and viaducts, Roman concrete was used to build the artificial harbour at Caesarea in Judea, the largest application of hydraulic concrete before the 20th century.

Jane Jacobs has written that the central aspect of a dark age is not that specific things have been forgotten, but that a society has forgotten what it has forgotten. It is indicative of the dark age which followed the fall of the Roman empire that even with the works of the Roman engineers remaining for all to see, the technology of Roman concrete used to build them, hardly a secret, was largely forgotten until the 18th century, when a few buildings were constructed from similar formulations.

It wasn't until the middle of the 19th century that the precursors of modern cement and concrete construction emerged. The adoption of this technology might have been much more straightforward had it not been the case that a central player in it was William Aspdin, a world-class scoundrel whose own crookedness repeatedly torpedoed ventures in which he was involved and which, had he simply been honest and straightforward in his dealings, would have made him a fortune beyond the dreams of avarice.

Even with the rediscovery of waterproof concrete, its adoption was slow in the 19th century. The building of the Thames Tunnel by the great engineers Marc Brunel and his son Isambard Kingdom Brunel was a milestone in the use of concrete, albeit one achieved only after a long series of setbacks and mishaps over a period of 18 years.

Ever since antiquity, and despite numerous formulations, concrete had one common structural property: it was very strong in compression (it resisted forces which tried to crush it), but had relatively little tensile strength (if you tried to pull it apart, it would easily fracture). This meant that concrete structures had to be carefully designed so that the concrete was always kept in compression, which made it difficult to build cantilevered structures or others requiring tensile strength, such as many bridge designs employing iron or steel. In the latter half of the 19th century, a number of engineers and builders around the world realised that by embedding iron or steel reinforcement within concrete, its tensile strength could be greatly increased. The advent of reinforced concrete allowed structures impossible to build with pure concrete. In 1903, the 16-story Ingalls Building in Cincinnati became the first reinforced concrete skyscraper, and the tallest building today, the Burj Khalifa in Dubai, is built from reinforced concrete.

The ability to create structures with the solidity of stone, the strength of steel, in almost any shape a designer can imagine, and at low cost inspired many in the 20th century and beyond, with varying degrees of success. Thomas Edison saw in concrete a way to provide affordable houses to the masses, complete with concrete furniture. It was one of his less successful ventures. Frank Lloyd Wright quickly grasped the potential of reinforced concrete, and used it in many of his iconic buildings. The Panama Canal made extensive use of reinforced concrete, and the Hoover Dam demonstrated that there was essentially no limit to the size of a structure which could be built of it (the concrete of the dam is still curing to this day). The Sydney Opera House illustrated (albeit after large schedule slips, cost overruns, and acrimony between the architect and customer) that just about anything an architect can imagine could be built of reinforced concrete.

To see the Pantheon or Colosseum is to think “concrete is eternal” (although the Colosseum is not in its original condition, this is mostly due to its having been mined for building materials over the centuries). But those structures were built with unreinforced Roman concrete. Just how long can we expect our current structures, built from a different kind of concrete and steel reinforcing bars to last? Well, that's…interesting. Steel is mostly composed of iron, and iron is highly reactive in the presence of water and oxygen: it rusts. You'll observe that water and oxygen are abundant on Earth, so unprotected steel can be expected to eventually crumble into rust, losing its structural strength. This is why steel bridges, for example, must be regularly stripped and repainted to provide a barrier which protects the steel against the elements. In reinforced concrete, it is the concrete itself which protects the steel reinforcement, initially by providing an alkali environment which inhibits rust and then, after the concrete cures, by physically excluding water and the atmosphere from the reinforcement. But, as builders say, “If it ain't cracked, it ain't concrete.” Inevitably, cracks will allow air and water to reach the reinforcement, which will begin to rust. As it rusts, it loses its structural strength and, in addition, expands, which further cracks the concrete and allows more air and moisture to enter. Eventually you'll see the kind of crumbling used to illustrate deteriorating bridges and other infrastructure.

How long will reinforced concrete last? That depends upon the details. Port and harbour facilities in contact with salt water have failed in less than fifty years. Structures in less hostile environments are estimated to have a life of between 100 and 200 years. Now, this may seem like a long time compared to the budget cycle of the construction industry, but eternity it ain't, and when you consider the cost of demolition and replacement of structures such as dams and skyscrapers, it's something to think about. But obviously, if the Romans could build concrete structures which have lasted millennia, so can we. The author discusses alternative formulations of concrete and different kinds of reinforcing which may dramatically increase the life of reinforced concrete construction.

This is an interesting and informative book, but I found the author's style a bit off-putting. In the absence of fact, which is usually the case when discussing antiquity, the author simply speculates. Speculation is always clearly identified, but rather than telling a story about a shaman discovering where lightning struck limestone and spinning it into a legend about the discovery of the manufacture of quicklime, it might be better to say, “nobody really knows how it happened”. Eleven pages are spent discussing the thoroughly discredited theory that the Egyptian pyramids were made of concrete, coming to the conclusion that the theory is bogus. So why mention it? There are a number of typographical errors and a few factual errors (no, the Mesoamericans did not build pyramids “a few of which would equal those in Egypt”).

Still, if you're interested in the origin of the material which surrounds us in the modern world, how it was developed by the ancients, largely forgotten, and then recently rediscovered and used to revolutionise construction, this is a worthwhile read.

October 2015 Permalink

Dartnell, Lewis. The Knowledge. New York: Penguin Press, 2014. ISBN 978-0-14-312704-8.
In one of his first lectures to freshman physics students at Caltech, Richard Feynman posed the question: if everything we had learned were forgotten, and you could transmit only a single sentence to the survivors, what would it be? This book expands upon that idea and attempts to distil the essentials of technological civilisation which might allow rebuilding after an apocalyptic collapse. That doesn't imply re-tracing the course humans followed to get where we are today: for one thing, many of the easily-exploited sources of raw material and energy have been depleted, and for some time survivors will probably be exploiting the ruins of the collapsed civilisation instead of re-starting its primary industries. The author explores the core technologies required to meet basic human needs such as food, shelter, transportation, communication, and storing information, and how they might best be restored. At the centre is the fundamental meta-technology upon which all others are based: the scientific method as a way to empirically discover how things work and apply that knowledge to get things done.

June 2020 Permalink

Dequasie, Andrew. The Green Flame. Washington: American Chemical Society, 1991. ISBN 978-0-8412-1857-4.
The 1950s were a time of things which seem, to our present day safety-obsessed viewpoint, the purest insanity: exploding multi-megaton thermonuclear bombs in the atmosphere, keeping bombers with nuclear weapons constantly in the air waiting for the order to go to war, planning for nuclear powered aircraft, and building up stockpiles of chemical weapons. Amidst all of this madness, motivated by fears that the almost completely opaque Soviet Union might be doing even more crazy things, one of the most remarkable episodes was the boron fuels project, chronicled here from the perspective of a young chemical engineer who, in 1953, joined the effort at Olin Mathieson Chemical Corporation, a contractor developing a pilot plant to furnish boron fuels to the Air Force.

Jet aircraft in the 1950s were notoriously thirsty and, before in-flight refuelling became commonplace, had limited range. Boron-based fuels, which the Air Force called High Energy Fuel (HEF) and the Navy called “zip fuel”, based upon compounds of boron and hydrogen called boranes, were believed to permit planes to deliver range and performance around 40% greater than conventional jet fuel. This bright promise, as is so often the case in engineering, was marred by several catches.

First of all, boranes are extremely dangerous chemicals. Many are pyrophoric: they burst into flame on contact with the air. They are also prone to forming shock-sensitive explosive compounds with any impurities they interact with during processing or storage. Further, they are neurotoxins, easily absorbed by inhalation or contact with the skin, with some having toxicities as great as chemical weapon nerve agents. The instability of the boranes rules them out as fuels, but molecules containing a borane group bonded to a hydrocarbon such as an ethyl, methyl, or propyl group were believed to be sufficiently well-behaved to be usable.

But first, you had to make the stuff, and just about every step in the process involved something which wanted to kill you in one way or another. Not only were the inputs and outputs of the factory highly toxic, the by-products of the process were prone to burst into flames or explode at the slightest provocation, and this gunk regularly needed to be cleaned out from the tanks and pipes. This task fell to the junior staff. As the author notes, “The younger generation has always been the cat's paw of humanity…”.

This book chronicles the harrowing history of the boron fuels project as seen from ground level. Over the seven years the author worked on the project, eight people died in five accidents (however, three of these were workers at another chemical company who tried, on a lark, to make a boron-fuelled rocket which blew up in their faces; this was completely unauthorised by their employer and the government, so it's stretching things to call this an industrial accident). But, the author observes, in that era fatal accidents at chemical plants, even those working with substances less hazardous than boranes, were far from uncommon.

The boron fuels program was cancelled in 1959, and in 1960 the author moved on to other things. In the end, it was the physical characteristics of the fuels and their cost which did in the project. It's one thing for a small group of qualified engineers and researchers to work with a dangerous substance, but another entirely to contemplate airmen in squadron service handling tanker truck loads of fuel which was as toxic as nerve gas. When burned, one of the combustion products was boric oxide, a solid which would coat and corrode the turbine blades in the hot section of a jet engine. In practice, the boron fuel could be used only in the afterburner section of engines, which meant a plane using it would have to have separate fuel tanks and plumbing for turbine and afterburner fuel, adding weight and complexity. The solid products in the exhaust reduced the exhaust velocity, resulting in lower performance than expected from energy considerations, and caused the exhaust to be smoky, rendering the plane more easily spotted. It was calculated, based upon the cost of fuel produced by the pilot plant, that if the XB-70 were to burn boron fuel continuously, the fuel cost would amount to around US$4.5 million (2010 dollars) per hour. Even by the standards of extravagant cold war defence spending, this was hard to justify for what proved to be a small improvement in performance.

While the chemistry and engineering is covered in detail, this book is also a personal narrative which immerses the reader in the 1950s, where a newly-minted engineer, just out of his hitch in the army, could land a job, buy a car, be entrusted with great responsibility on a secret project considered important to national security, and set out on a career full of confidence in the future. Perhaps we don't do such crazy things today (or maybe we do—just different ones), but it's also apparent from opening this time capsule how much we've lost.

I have linked the Kindle edition to the title above, since it is the only edition still in print. You can find the original hardcover and paperback editions from the ISBN, but they are scarce and expensive. The index in the Kindle edition is completely useless: it cites page numbers from the print edition, but no page numbers are included in the Kindle edition.

March 2014 Permalink

Drexler, K. Eric. Radical Abundance. New York: PublicAffairs, 2013. ISBN 978-1-61039-113-9.
Nanotechnology burst into public awareness with the publication of the author's Engines of Creation in 1986. (The author coined the word “nanotechnology” to denote engineering at the atomic scale, fabricating structures with the atomic precision of molecules. A 1974 Japanese paper had used the term “nano-technology”, but with an entirely different meaning.) Before long, the popular media were full of speculation about nanobots in the bloodstream, self-replicating assemblers terraforming planets or mining the asteroids, and a world economy transformed into one in which scarcity, in the sense we know it today, would be transcended. Those inclined to darker speculation warned of “grey goo”—runaway self-replicators which could devour the biosphere in 24 hours, or nanoengineered super weapons.

Those steeped in conventional wisdom scoffed at these “futuristic” notions, likening them to earlier predictions of nuclear power “too cheap to meter” or space colonies, but detractors found it difficult to refute Drexler's arguments that the systems he proposed violated no law of physics and that the chemistry of such structures was well-understood and predicted that, if we figured out how to construct them, they would work. Drexler's argument was reinforced when, in 1992, he published Nanosystems, a detailed technical examination of molecular engineering based upon his MIT Ph.D. dissertation.

As the 1990s progressed, there was an increasing consensus that nanosystems of the kind Drexler envisioned would work if they could be fabricated, but the path from our present-day crude fabrication technologies to atomic precision on the macroscopic scale was unclear. On the other hand, there were a number of potential pathways which might get there, increasing the probability that one or more might work. The situation is not unlike that in the early days of integrated circuits. It was clear from the laws of physics that were it possible to fabricate a billion transistors on a chip they would work, but it was equally clear that a series of increasingly difficult and expensive hurdles would have to be cleared in order to fabricate such a structure. Its feasibility then became a question of whether engineers were clever enough to solve all the problems along the way and if the market for each generation of increasingly complex chips would be large enough to fund the development of the next.

A number of groups around the world, both academic and commercial, began to pursue potential paths toward nanotechnology, laying the foundation for the next step beyond conventional macromolecular chemical synthesis. It seemed like the major impediment to a rapid take-off of nanotechnology akin to that experienced in the semiconductor field was a lack of funding. But, as Eric Drexler remarked to me in a conversation in the 1990s, most of the foundation of nanotechnology was chemistry and “You can buy a lot of chemistry for a billion dollars.”

That billion dollars appeared to be at hand in 2000, when the U.S. created a billion dollar National Nanotechnology Initiative (NNI). The NNI quickly published an implementation plan which clearly stated that “the essence of nanotechnology is the ability to work at the molecular level, atom by atom, to create large structures with fundamentally new molecular organization”. And then it all went south. As is almost inevitable with government-funded science and technology programs, the usual grantmasters waddled up to the trough, stuck their snouts into the new flow of funds, and diverted it toward their research interests which have nothing to do with the mission statement of the NNI. They even managed to redefine “nanotechnology” for their own purposes to exclude the construction of objects with atomic precision. This is not to say that some of the research NNI funds isn't worthwhile, but it's not nanotechnology in the original sense of the word, and doesn't advance toward the goal of molecular manufacturing. (We often hear about government-funded research and development “picking winners and losers”. In fact, such programs pick only losers, since the winners will already have been funded by the productive sector of the economy based upon their potential return.)

In this book Drexler attempts a fundamental reset of the vision he initially presented in Engines of Creation. He concedes the word “nanotechnology” to the hogs at the federal trough and uses “atomically precise manufacturing” (APM) to denote a fabrication technology which, starting from simple molecular feedstocks, can make anything by fabricating and assembling parts in a hierarchical fashion. Just as books, music, and movies have become data files which can be transferred around the globe in seconds, copied at no cost, and accessed by a generic portable device, physical objects will be encoded as fabrication instructions which a generic factory can create as required, constrained only by the requirement that the factory be large enough to assemble the final product. But the same garage-sized factory can crank out automobiles, motorboats, small aircraft, bicycles, computers, furniture, and anything on that scale or smaller just as your laser printer can print any document whatsoever as long as you have a page description of it.

Further, many of these objects can be manufactured using almost exclusively the most abundant elements on Earth, reducing cost and eliminating resource constraints. And atomic precision means that there will be no waste products from the manufacturing process—all intermediate products not present in the final product will be turned back into feedstock. Ponder, for a few moments, the consequences of this for the global economy.

In chapter 5 the author introduces a heuristic for visualising the nanoscale. Imagine the world scaled up in size by a factor of ten million, and time slowed down by the same factor. This scaling preserves properties such as velocity, force, and mass, and allows visualising nanoscale machines as the same size and operating speed as those with which we are familiar. At this scale a single transistor on a contemporary microchip would be about as big as an iPad and the entire chip the size of Belgium. Using this viewpoint, the author acquaints the reader with the realities of the nanoscale and demonstrates that analogues of macroscopic machines, when we figure out how to fabricate them, will work and, because they will operate ten million times faster, will be able to process macroscopic quantities of material on a practical time scale.
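
As a rough numerical illustration of that heuristic, here is a small Python sketch; the feature sizes below are assumed, era-typical values, not figures from the book.

```python
# Drexler's visualisation heuristic: multiply lengths by 1e7 and slow time
# down by the same factor. Feature sizes are assumed, era-typical values.
SCALE = 1e7

transistor_m = 22e-9   # assumed ~22 nm transistor feature size, circa 2013
chip_edge_m = 15e-3    # assumed ~15 mm die edge

print(f"transistor appears {transistor_m * SCALE * 100:.0f} cm across")   # ~22 cm, tablet-sized
print(f"chip appears {chip_edge_m * SCALE / 1000:.0f} km across")         # ~150 km, country-sized

# Velocity is preserved by the scaling: a part moving 1 m/s at the nanoscale
# still appears to move 1 m/s, since lengths and times are both scaled by 1e7.
print((1.0 * SCALE) / (1.0 * SCALE), "m/s")
```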

But can we build them? Here, Drexler introduces the concept of “exploratory engineering”: using the known laws of physics and conservative principles of engineering to explore what is possible. Essentially, there is a landscape of feasibility. One portion is what we have already accomplished, another which is ruled out by the laws of physics. The rest is that which we could accomplish if we could figure out how and could afford it. This is a huge domain—given unlimited funds and a few decades to work on the problem, there is little doubt one could build a particle accelerator which circled the Earth's equator. Drexler cites the work of Konstantin Tsiolkovsky as a masterpiece of exploratory engineering highly relevant to atomically precise manufacturing. By 1903, working alone, he had demonstrated the feasibility of achieving Earth orbit by means of a multistage rocket burning liquid hydrogen and oxygen. Now, Tsiolkovsky had no idea how to build the necessary engines, fuel tanks, guidance systems, launch facilities, etc., but from basic principles he was able to show that no physical law ruled out their construction and that known materials would suffice for them to work. We are in much the same position with APM today.

The tone of this book is rather curious. Perhaps having been burned by his earlier work being sensationalised, the author is reserved to such an extent that on p. 275 he includes a two-paragraph aside urging readers to “curb their enthusiasm”, and much of the text, while discussing what may be the most significant development in human history since the invention of agriculture, often reads like a white paper from the Brookings Institution with half a dozen authors: “Profound changes in national interests will call for a ground-up review of grand strategy. Means and ends, risks and opportunities, the future self-perceived interests of today's strategic competitors—none of these can be taken for granted.” (p. 269)

I am also dismayed to see that Drexler appears to have bought in to the whole anthropogenic global warming scam and repeatedly genuflects to the whole “carbon is bad” nonsense. The acknowledgements include a former advisor to the anti-human World Wide Fund for Nature.

Despite quibbles, if you've been thinking “Hey, it's the 21st century, where's my nanotechnology?”, this is the book to read. It chronicles steady progress on the foundations of APM and multiple paths through which the intermediate steps toward achieving it may be achieved. It is enlightening and encouraging. Just don't get enthusiastic.

August 2013 Permalink

Florence, Ronald. The Perfect Machine. New York: Harper Perennial, 1994. ISBN 978-0-06-092670-0.
George Ellery Hale was the son of a wealthy architect and engineer who made his fortune installing passenger elevators in the skyscrapers which began to define the skyline of Chicago as it rebuilt from the great fire of 1871. From early in his life, the young Hale was fascinated by astronomy, building his own telescope at age 14. Later he would study astronomy at MIT, the Harvard College Observatory, and in Berlin. Solar astronomy was his first interest, and he invented new instruments for observing the Sun and discovered the magnetic fields associated with sunspots.

His work led him into an academic career, culminating in his appointment as a full professor at the University of Chicago in 1897. He was co-founder and first editor of the Astrophysical Journal, published continuously since 1895. Hale's greatest goal was to move astronomy from its largely dry concentration on cataloguing stars and measuring planetary positions into the new science of astrophysics: using observational techniques such as spectroscopy to study the composition of stars and nebulæ and, by comparing them, begin to deduce their origin, evolution, and the mechanisms that made them shine. His own work on solar astronomy pointed the way to this, but the Sun was just one star. Imagine how much more could be learned when the Sun was compared in detail to the myriad stars visible through a telescope.

But observing the spectra of stars was a light-hungry process, especially with the insensitive photographic material available around the turn of the 20th century. Obtaining the spectrum of all but a few of the brightest stars would require exposure times so long that, even accumulated over multiple nights, they would exceed the endurance of observers guiding the small telescopes which then predominated. Thus, Hale became interested in larger telescopes, and the quest for ever more light from the distant universe would occupy him for the rest of his life.

First, he promoted the construction of a 40 inch (102 cm) refractor telescope, accessible from Chicago at a dark sky site in Wisconsin. At the epoch, universities, government, and private foundations did not fund such instruments. Hale persuaded Chicago streetcar baron Charles T. Yerkes to pick up the tab, and Yerkes Observatory was born. Its 40 inch refractor remains the largest telescope of that kind used for astronomy to this day.

There are two principal types of astronomical telescopes. A refracting telescope has a convex lens at one end of a tube, which focuses incoming light to an eyepiece or photographic plate at the other end. A reflecting telescope has a concave mirror at the bottom of the tube, the top end of which is open. Light enters the tube and falls upon the mirror, which reflects and focuses it upward, where it can be picked off by another mirror, directly focused on a sensor, or bounced back down through a hole in the main mirror. There are a multitude of variations in the design of both types of telescopes, but the fundamental principles of refraction and reflection remain the same.

Refractors have the advantages of simplicity and a sealed tube assembly which keeps out dust and moisture and excludes air currents which might distort the image. But because light passes through the lens, they must use clear glass free of bubbles, strain lines, or other irregularities that might interfere with forming a perfect focus. Further, refractors tend to focus different colours of light at different distances, which makes them less suitable for use in spectroscopy. Colour performance can be improved by making lenses of two or more different kinds of glass (an achromatic or apochromatic design), but this further increases the complexity, difficulty, and cost of manufacturing the lens. At the time of the construction of the Yerkes refractor, it was believed the limit had been reached for the refractor design and, indeed, no larger astronomical refractor has been built since.

In a reflector, the mirror (usually made of glass or some glass-like substance) serves only to support an extremely thin (on the order of a thousand atoms) layer of reflective material (originally silver, but now usually aluminium). The light never passes through the glass at all, so as long as it is sufficiently uniform to take on and hold the desired shape, and free of imperfections (such as cracks or bubbles) that would make the reflecting surface rough, the optical qualities of the glass don't matter at all. Best of all, a mirror reflects all colours of light in precisely the same way, so it is ideal for spectrometry (and, later, colour photography).

With the Yerkes refractor in operation, it was natural that Hale would turn to a reflector in his quest for ever more light. He persuaded his father to put up the money to order a 60 inch (1.5 metre) glass disc from France, and, when it arrived months later, set one of his co-workers at Yerkes, George W. Ritchey, to begin grinding the disc into a mirror. All of this was on speculation: there were no funds to build a telescope, an observatory to house it, nor to acquire a site for the observatory. The persistent and persuasive Hale approached the recently-founded Carnegie Institution, and eventually secured grants to build the telescope and observatory on Mount Wilson in California, along with an optical laboratory in nearby Pasadena. Components for the telescope had to be carried up the crude trail to the top of the mountain on the backs of mules, donkeys, or men until a new road allowing the use of tractors was built. In 1908 the sixty inch telescope began operation, and its optics and mechanics performed superbly. Astronomers could see much deeper into the heavens. But still, Hale was not satisfied.

Even before the sixty inch entered service, he approached John D. Hooker, a Los Angeles hardware merchant, for seed money to fund the casting of a mirror blank for an 84 inch telescope, requesting US$ 25,000 (around US$ 600,000 today). Discussing the project, Hooker and Hale agreed not to settle for 84, but rather to go for 100 inches (2.5 metres). Hooker pledged US$ 45,000 to the project, with Hale promising the telescope would be the largest in the world and bear Hooker's name. Once again, an order for the disc was placed with the Saint-Gobain glassworks in France, the only one with experience in such large glass castings. Problems began almost immediately. Saint-Gobain did not have the capacity to melt the quantity of glass required (four and a half tons) all at once: they would have to fill the mould in three successive pours. A massive piece of cast glass (101 inches in diameter and 13 inches thick) cannot simply be allowed to cool naturally after being poured. If that were to occur, shrinkage of the outer parts of the disc as it cooled while the inside still remained hot would almost certainly cause the disc to fracture and, even if it didn't, would create strains within the disc that would render it incapable of holding the precise figure (curvature) required by the mirror. Instead, the disc must be placed in an annealing oven, where the temperature is reduced slowly over a period of time, allowing the internal stresses to be released. So massive was the 100 inch disc that it took a full year to anneal.

When the disc finally arrived in Pasadena, Hale and Ritchey were dismayed by what they saw. There were sheets of bubbles between the three layers of poured glass, indicating they had not fused. There was evidence the process of annealing had caused the internal structure of the glass to begin to break down. It seemed unlikely a suitable mirror could be made from the disc. After extended negotiations, Saint-Gobain decided to try again, casting a replacement disc at no additional cost. Months later, they reported the second disc had broken during annealing, and it was likely no better disc could be produced. Hale decided to proceed with the original disc. Patiently, he made the case to the Carnegie Institution to fund the telescope and observatory on Mount Wilson. It would not be until November 1917, eleven years after the order was placed for the first disc, that the mirror was completed, installed in the massive new telescope, and ready for astronomers to gaze through the eyepiece for the first time. The telescope was aimed at brilliant Jupiter.

Observers were horrified. Rather than a sharp image, Jupiter was smeared out over multiple overlapping images, as if multiple mirrors had been poorly aimed into the eyepiece. Although the mirror had tested to specification in the optical shop, when placed in the telescope and aimed at the sky, it appeared to be useless for astronomical work. Recalling that the temperature had fallen rapidly from day to night, the observers adjourned until three in the morning in the hope that as the mirror continued to cool down to the nighttime temperature, it would perform better. Indeed, in the early morning hours, the images were superb. The mirror, made of ordinary plate glass, was subject to thermal expansion as its temperature changed. It was later determined that the massive disc took twenty-four hours to cool ten degrees Celsius. Rapid changes in temperature on the mountain could cause the mirror to misbehave until its temperature stabilised. Observers would have to cope with its temperamental nature throughout the decades it served astronomical research.

As the 1920s progressed, driven in large part by work done on the 100 inch Hooker telescope on Mount Wilson, astronomical research became increasingly focused on the “nebulæ”, many of which the great telescope had revealed were “island universes”, equal in size to our own Milky Way and immensely distant. Many were so far away and faint that they appeared as only the barest smudges of light even in long exposures through the 100 inch. Clearly, a larger telescope was in order. As always, Hale was interested in the challenge. As early as 1921, he had requested a preliminary design for a three hundred inch (7.6 metre) instrument. Even based on early sketches, it was clear the magnitude of the project would surpass any scientific instrument previously contemplated: estimates came to around US$ 12 million (US$ 165 million today). This was before the era of “big science”. In the mid 1920s, when Hale produced this estimate, one of the most prestigious scientific institutions in the world, the Cavendish Laboratory at Cambridge, had an annual research budget of less than £ 1000 (around US$ 66,500 today). Sums in the millions and academic science simply didn't fit into the same mind, unless it happened to be that of George Ellery Hale. Using his connections, he approached people involved with foundations endowed by the Rockefeller fortune. Rockefeller and Carnegie were competitors in philanthropy: perhaps a Rockefeller institution might be interested in outdoing the renown Carnegie had obtained by funding the largest telescope in the world. Slowly, and with an informality which seems unimaginable today, Hale negotiated with the Rockefeller foundation, with the brash new university in Pasadena which now called itself Caltech, and with a prickly Carnegie foundation who saw the new telescope as trying to poach its painfully-assembled technical and scientific staff on Mount Wilson. By mid-1928 a deal was in hand: a Rockefeller grant for US$ 6 million (US$ 85 million today) to design and build a 200 inch (5 metre) telescope. Caltech was to raise the funds for an endowment to maintain and operate the instrument once it was completed. Big science had arrived.

In discussions with the Rockefeller foundation, Hale had agreed on a 200 inch aperture, deciding the leap to an instrument three times the size of the largest existing telescope and the budget that would require was too great. Even so, there were tremendous technical challenges to be overcome. The 100 inch demonstrated that plate glass had reached or exceeded its limits. The problems of distortion due to temperature changes only increase with the size of a mirror, and while the 100 inch was difficult to cope with, a 200 inch would be unusable, even if it could be somehow cast and annealed (with the latter process probably taking several years). Two promising alternatives were fused quartz and Pyrex borosilicate glass. Fused quartz has hardly any thermal expansion at all. Pyrex has about three times greater expansion than quartz, but still far less than plate glass.
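
To get a rough feel for why the choice of material mattered so much, here is a minimal back-of-the-envelope sketch in Python (the expansion coefficients are typical handbook values I have assumed, not figures from the book):

    # Rough thermal expansion comparison: the 100 inch plate glass mirror versus
    # the proposed 200 inch Pyrex mirror, for the same overnight temperature swing.
    # Coefficients of linear expansion are approximate handbook values, per deg C.
    ALPHA_PLATE_GLASS = 9.0e-6
    ALPHA_PYREX = 3.3e-6

    DELTA_T = 10.0  # degrees Celsius, a plausible night-time temperature change

    for name, alpha, diameter_m in (
        ("100 inch plate glass", ALPHA_PLATE_GLASS, 2.54),
        ("200 inch Pyrex", ALPHA_PYREX, 5.08),
    ):
        change_microns = alpha * diameter_m * DELTA_T * 1e6
        print(f"{name}: diameter changes by roughly {change_microns:.0f} microns")

Under these assumptions, the twice-as-large Pyrex disc actually moves less than the 100 inch plate glass mirror, and since a mirror's figure must be held to a fraction of a wavelength of light (about half a micron), every bit of that reduction matters.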

Hale contracted with General Electric Company to produce a series of mirror blanks from fused quartz. GE's legendary inventor Elihu Thomson, second only in reputation to Thomas Edison, agreed to undertake the project. Troubles began almost immediately. Every attempt to get rid of bubbles in quartz, which was still very viscous even at extreme temperatures, failed. A new process was developed, which involved spraying the surface of cast discs with silica passed through an oxy-hydrogen torch. It required machinery which, in operation, seemed to surpass visions of hellfire. To build up the coating on a 200 inch disc would require enough hydrogen to fill two Graf Zeppelins. And still, not a single suitable smaller disc had been produced from fused quartz.

In October 1929, just a year after the public announcement of the 200 inch telescope project, the U.S. stock market crashed and the economy began to slow into the Great Depression. Fortunately, the Rockefeller foundation invested very conservatively, and lost little in the market chaos, so the grant for the telescope project remained secure. The deepening depression and the accompanying deflation were a benefit to the effort because raw material and manufactured goods prices fell in terms of the grant's dollars, and industrial companies which might not have been interested in a one-off job like the telescope were hungry for any work that would help them meet their payroll and keep their workforce employed.

In 1931, after three years of failures, expenditures billed at manufacturing cost by GE which had consumed more than one tenth the entire budget of the project, and estimates far beyond that for the final mirror, Hale and the project directors decided to pull the plug on GE and fused quartz. Turning to the alternative of Pyrex, Corning glassworks bid between US$ 150,000 and 300,000 for the main disc and five smaller auxiliary discs. Pyrex was already in production at industrial scale and used to make household goods and laboratory glassware in the millions, so Corning foresaw few problems casting the telescope discs. Scaling things up is never a simple process, however, and Corning encountered problems with failures in the moulds, glass contamination, and even a flood during the annealing process before the big disc was ready for delivery.

Getting it from the factory in New York to the optical shop in California was an epic event and media circus. Schools let out so students could go down to the railroad tracks and watch the “giant eye” on its special train make its way across the country. On April 10, 1936, the disc arrived at the optical shop and work began to turn it into a mirror.

With the disc in hand, work on the telescope structure and observatory could begin in earnest. After an extended period of investigation, Palomar Mountain had been selected as the site for the great telescope. A rustic construction camp was built to begin preliminary work. Meanwhile, Westinghouse began to fabricate components of the telescope mounting, which would include the largest bearing ever manufactured.

But everything depended on the mirror. Without it, there would be no telescope, and things were not going well in the optical shop. As the disc was ground flat preliminary to being shaped into the mirror profile, flaws continued to appear on its surface. None of the earlier smaller discs had contained such defects. Could it be possible that, eight years into the project, the disc would be found defective and everything would have to start over? The analysis concluded that the glass had become contaminated as it was poured, and that the deeper the mirror was ground down the fewer flaws would be discovered. There was nothing to do but hope for the best and begin.

Few jobs demand the patience of the optical craftsman. The great disc was not ready for its first optical test until September 1938. Then began a process of polishing and figuring, with weekly tests of the mirror. In August 1941, the mirror was judged to have the proper focal length and spherical profile. But the mirror needed to be a parabola, not a sphere, so this was just the start of an even more exacting process of deepening the curve. In January 1942, the mirror reached the desired parabola to within one wavelength of light. But it needed to be much better than that. The U.S. was now at war. The uncompleted mirror was packed away “for the duration”. The optical shop turned to war work.

In December 1945, work resumed on the mirror. In October 1947, it was pronounced finished and ready to install in the telescope. Eleven and a half years had elapsed since the grinding machine started to work on the disc. Shipping the mirror from Pasadena to the mountain was another epic journey, this time by highway. Finally, all the pieces were in place. Now the hard part began.

The glass disc was the correct shape, but it wouldn't be a mirror until coated with a thin layer of aluminium. This was a process which had been done many times before with smaller mirrors, but as always size matters, and a host of problems had to be solved before a suitable coating was obtained. Now the mirror could be installed in the telescope and tested further. Problem after problem with the mounting system, suspension, and telescope drive had to be found and fixed. Testing a mirror in its telescope against a star is much more demanding than any optical shop test, and from the start of 1949, an iterative process of testing, tweaking, and re-testing began. A problem with astigmatism in the mirror was fixed by attaching four fisherman's scales from a hardware store to its back (they are still there). In October 1949, the telescope was declared finished and ready for use by astronomers. Twenty-one years had elapsed since the project began. George Ellery Hale died in 1938, less than ten years into the great work. But it was recognised as his monument, and at its dedication was named the “Hale Telescope.”

The inauguration of the Hale Telescope marked the end of the rapid increase in the aperture of observatory telescopes which had characterised the first half of the twentieth century, largely through the efforts of Hale. It would remain the largest telescope in operation until 1975, when the Soviet six metre BTA-6 went into operation. That instrument, however, was essentially an exercise in Cold War one-upmanship, and never achieved its scientific objectives. The Hale would not truly be surpassed before the ten metre Keck I telescope began observations in 1993, 44 years after the Hale. The Hale Telescope remains in active use today, performing observations impossible when it was inaugurated thanks to electronics undreamt of in 1949.

This is an epic recounting of a grand project, the dawn of “big science”, and the construction of instruments which revolutionised how we see our place in the cosmos. There is far more detail than I have recounted even in this long essay, and much insight into how a large, complicated project, undertaken with little grasp of the technical challenges to be overcome, can be achieved through patient toil sustained by belief in the objective.

A PBS documentary, The Journey to Palomar, is based upon this book. It is available on DVD or a variety of streaming services.

In the Kindle edition, footnotes which appear in the text are just asterisks, which are almost impossible to select on touch screen devices without missing and accidentally turning the page. In addition, the index is just a useless list of terms and page numbers which have nothing to do with the Kindle document, which lacks real page numbers. Disastrously, the illustrations which appear in the print edition are omitted: for a project which was extensively documented in photographs, drawings, and motion pictures, this is inexcusable.

October 2016 Permalink

Gergel, Max G. Excuse Me Sir, Would You Like to Buy a Kilo of Isopropyl Bromide? Rockford, IL: Pierce Chemical Company, 1979. OCLC 4703212.
Throughout Max Gergel's long career he has been an unforgettable character for all who encountered him in the many rôles he has played: student, bench chemist, instructor of aviation cadets, entrepreneur, supplier to the Manhattan Project, buyer and seller of obscure reagents to a global clientele, consultant to industry, travelling salesman peddling products ranging from exotic halocarbons to roach killer and toilet bowl cleaner, and evangelist persuading young people to pursue careers in chemistry. With family and friends (and no outside capital) he founded Columbia Organic Chemicals, a specialty chemical supplier specialising in halocarbons but, operating on a shoestring, willing to make almost anything a customer was ready to purchase (even Max drew the line, however, when the silver-tongued director of the Naval Research Laboratory tried to persuade him to make pentaborane).

The narrative is as rambling and entertaining as one imagines sharing a couple (or a couple dozen) drinks with Max at an American Chemical Society meeting would have been. He jumps from family to friends to finances to business to professional colleagues to suppliers to customers to nuggets of wisdom for starting and building a business to eccentric characters he has met and worked with to his love life to the exotic and sometimes bone-chilling chemical syntheses he did in his company's rough and ready facilities. Many of Columbia's contracts involved production of moderate quantities (between a kilogram and several 55 gallon drums) of substances previously made only in test tube batches. This “medium scale chemistry”—situated between the laboratory bench and an industrial facility making tank car loads of the stuff—involves as much art (or, failing that, brute force and cunning) as it does science and engineering, and this leads to many of the adventures and misadventures chronicled here. For example, an exothermic reaction may be simple to manage when you're making a few grams of something—the liberated heat is simply conducted through the walls of the test tube and dissipated: at worst you may only need to add the reagent slowly, stir well, and/or place the reaction vessel in a water bath. But when DuPont placed an order for allene in gallon quantities, this posed a problem which Max resolved as follows.

When one treats 1,2,3-Trichloropropane with alkali and a little water the reaction is violent; there is a tendency to deposit the reaction product, the raw materials and the apparatus on the ceiling and the attending chemist. I solved this by setting up duplicate 12 liter flasks, each equipped with double reflux condensers and surrounding each with half a dozen large tubs. In practice, when the reaction “took off” I would flee through the door or window and battle the eruption with water from a garden hose. The contents flying from the flasks were deflected by the ceiling and collected under water in the tubs. I used towels to wring out the contents which separated, shipping the lower level to DuPont. They complained of solids suspended in the liquid, but accepted the product and ordered more. I increased the number of flasks to four, doubled the number of wash tubs and completed the new order.

They ordered a 55 gallon drum. … (p. 127)

All of this was in the days before the EPA, OSHA, and the rest of the suffocating blanket of soft despotism descended upon entrepreneurial ventures in the United States that actually did things and made stuff. In the 1940s and '50s, when Gergel was building his business in South Carolina, he was free to adopt the “whatever it takes” attitude which is the quintessential ingredient for success in start-ups and small business. The flexibility and ingenuity which allowed Gergel not only to compete with the titans of the chemical industry but become a valued supplier to them is precisely what is extinguished by intrusive regulation, which accounts for why sclerotic dinosaurs are so comfortable with it. On the other hand, Max's experience with methyl iodide illustrates why some of these regulations were imposed:

There is no description adequate for the revulsion I felt over handling this musky smelling, high density, deadly liquid. As residue of the toxicity I had chronic insomnia for years, and stayed quite slim. The government had me questioned by Dr. Rotariu of Loyola University for there had been a number of cases of methyl bromide poisoning and the victims were either too befuddled or too dead to be questioned. He asked me why I had not committed suicide which had been the final solution for some of the afflicted and I had to thank again the patience and wisdom of Dr. Screiber. It is to be noted that another factor was our lack of a replacement worker. (p. 130)

Whatever it takes.

This book was published by Pierce Chemical Company and was never, as best I can determine, assigned either an ISBN or Library of Congress catalogue number. I cite it above by its OCLC Control Number. The book is hopelessly out of print, and used copies, when available, sell for forbidding prices. Your only alternative to lay hands on a print copy is an inter-library loan, for which the OCLC number is a useful reference. (I hear members of the write-off generation asking, “What is this ‘library’ of which you speak?”) I found a scanned PDF edition in the library section of the Sciencemadness.org Web site; the scanned pages are sometimes a little gnarly around the bottom, but readable. You will also find the second volume of Gergel's memoirs, The Ageless Gergel, among the works in this collection.

May 2012 Permalink

Goetz, Peter. A Technical History of America's Nuclear Weapons. Unspecified: Independently published, 2020. ISBN Vol. 1 979-8-6646-8488-9, Vol. 2 978-1-7181-2136-2.

This is an encyclopedic history and technical description of United States nuclear weapons, delivery systems, manufacturing, storage, maintenance, command and control, security, strategic and tactical doctrine, and interaction with domestic politics and international arms control agreements, covering the period from the inception of these weapons in World War II through 2020. This encompasses a huge amount of subject matter, and covering it in the depth the author undertakes is a large project, with the two volume print edition totalling 1244 20×25 centimetre pages. The level of detail and scope is breathtaking, especially considering that not so long ago much of the information documented here was among the most carefully-guarded secrets of the U.S. military. You will learn the minutiæ of neutron initiators, which fission primaries were used in what thermonuclear weapons, how the goal of “one-point safety” was achieved, the introduction of permissive action links to protect against unauthorised use of weapons and which weapons used what kind of security device, and much, much more.

If the production quality of this work matched its content, it would be an invaluable reference for anybody interested in these weapons: military historians, students of large-scale government research and development projects, researchers of the Cold War and the nuclear balance of power, and authors setting fiction in that era and wishing to get the details right. Sadly, when it comes to attention to detail, this work, as published in this edition, is woefully lacking—it is both slipshod and shoddy. I was reading it for information, not with the fine-grained attention I devote when proofreading my work or that of others, but in the process I marked 196 errors of fact, spelling, formatting, and grammar, or about one every six printed pages. Now, some of these are just sloppy things (including, of course, misuse of the humble apostrophe) which grate upon the reader but aren't likely to confuse, but others are glaring errors.

Here are some of the obvious errors. Names misspelled or misstated include Jay Forrester, John von Neumann, Air Force Secretary Hans Mark, and Ronald Reagan. In chapter 11, an entire paragraph is duplicated twice in a row. In chapter 9, it is stated that the Little Feller nuclear test in 1962 was witnessed by president John F. Kennedy; in fact, it was his brother, Attorney General Robert F. Kennedy, who observed the test. There is a long duplicated passage at the start of chapter 20, but this may be a formatting error in the Kindle edition. In chapter 29, it is stated that nitrogen tetroxide was the fuel of the Titan II missile—in fact, it was the oxidiser. In chapter 41, the Northrop B-2 stealth bomber is incorrectly attributed to Lockheed in four places. In chapter 42, the Trident submarine-launched missile is referred to as “Titan” on two occasions.

The problem with such a plethora of errors is that when reading about matters with which you aren't acquainted or which you lack the means to check, there's no way to know whether a given statement is correct or nonsense. Before using anything from this book as a source in your own work, I'd advise keeping in mind the Russian proverb, Доверяй, но проверяй—“Trust, but verify”. In this case, I'd go light on the trust and double up on the verification.

In the citation above, I link to the Kindle edition, which is free for Kindle Unlimited subscribers. The print edition is published in two paperbacks, Volume 1 and Volume 2.

January 2021 Permalink

Gordon, John Steele. A Thread Across the Ocean. New York: Harper Perennial, 2002. ISBN 978-0-06-052446-3.
There are inventions, and there are meta-inventions. Many things were invented in the 19th century which contributed to the wealth of the present-day developed world, but there were also concepts which emerged in that era of “anything is possible” ferment which cast even longer shadows. One of the most important is entrepreneurship—the ability of a visionary who sees beyond the horizon of the conventional wisdom to assemble the technical know-how, the financial capital, the managers and labourers to do the work, while keeping all of the balls in the air and fending off the horrific setbacks that any breakthrough technology will necessarily encounter as it matures.

Cyrus W. Field may not have been the first entrepreneur in the modern mold, but he was without doubt one of the greatest. Having started with almost no financial resources and then made his fortune in the manufacture of paper, he turned his attention to telegraphy. Why, in the mid-19th century, should news and information between the Old World and the New move only as fast as sailing ships could convey it, while the telegraph could flash information across continents in seconds? Why, indeed?—Field took a proposal to lay a submarine cable from Newfoundland to the United States (which would cut two days off a transatlantic latency of around two weeks) and pushed it to its logical limit: a cable across the entire Atlantic which could relay information in seconds, linking the continents together in a web of information which was, if low in bandwidth, almost instantaneous compared to dispatches carried on ships.

Field knew next to nothing about electricity, manufacturing of insulated cables thousands of miles long, paying-out mechanisms to lay them on the seabed, or the navigational challenges in carrying a cable from one continent to another. But he was supremely confident that success in the endeavour would enrich those who accomplished it beyond their dreams of avarice, and persuasive in enlisting in the effort not only wealthy backers to pay the bills but also technological savants including Samuel F. B. Morse and William Thomson (later Lord Kelvin), who invented the mirror galvanometer which made the submarine cable viable.

When you try to do something audacious which has never been attempted before, however great the promise, you shouldn't expect to succeed the first time, or the second, or the third…. Indeed, the history of transatlantic cable was one of frustration, dashed hopes, lost investments, derision in the popular press—until it worked. Then it was the wonder of the age. So it has been and shall always be with entrepreneurship.

Today, gigabytes per second flow beneath the oceans through the tubes. Unless you're in continental Eurasia, it's likely these bits reached you through one of them. It all had to start somewhere, and this is the chronicle of how that came to be. This may have been the first time it became evident there was a time value to information: that the news, financial quotes, and messages delivered in minutes instead of weeks were much more valuable than those which arrived long after the fact.

It is also interesting that the laying of the first successful transatlantic cable was almost entirely a British operation. While the American Cyrus Field was the promoter, almost all of the capital, the ships, the manufacture of the cable, and the scientific and engineering expertise in its production and deployment was British.

October 2012 Permalink

Greenberg, Stanley. Time Machines. Munich: Hirmer Verlag, 2011. ISBN 978-3-7774-4041-5.
Should our civilisation collapse due to folly, shortsightedness, and greed, and an extended dark age ensue, in which not only our painfully-acquired knowledge is lost, but even the memory of what we once knew and accomplished forgotten, certainly among the most impressive of the achievements of our lost age when discovered by those who rise from the ruins to try again will be the massive yet delicate apparatus of our great physics experiments. Many, buried deep in the Earth, will survive the chaos of the dark age and beckon to pioneers of the next age of discovery just as the tombs of Egypt did to those in our epoch. Certainly, when the explorers of that distant time first illuminate the great detector halls of our experiments, they will answer, as Howard Carter did when asked by Lord Carnarvon, “Can you see anything?”, “Yes, wonderful things.”

This book is a collection of photographs of these wonderful things, made by a master photographer and printed in a large-format (26×28 cm) coffee-table book. We visit particle accelerators in Japan, the United States, Canada, Switzerland, Italy, and Germany; gravitational wave detectors in the U.S. and Italy; neutrino detectors in Canada, Japan, the U.S., Italy, and the South Pole; and the 3000 km² cosmic ray observatory in Argentina.

This book is mostly about the photographs, not the physics or engineering: the photographs are masterpieces. All are reproduced in monochrome, which emphasises the beautiful symmetries of these machines without the distractions of candy-coloured cable bundles. There is an introduction by particle physicist David C. Cassidy which briefly sketches the motivation for building these cathedrals of science and end notes which provide additional details of the hardware in each photograph, but you don't pay the substantial price of the book for these. The photographs are obviously large format originals (nobody could achieve this kind of control of focus and tonal range with a convenient to use camera) and they are printed exquisitely. The screen is so fine I have difficulty evaluating it even with a high power magnifier, but it looks to me like the book was printed using not just a simple halftone screen but with ink in multiple shades of grey.

The result is just gorgeous. Resist the temptation to casually flip from image to image—immerse yourself in each of them and work out the perspective. One challenge is that it's often difficult to determine the scale of what you're looking at from a cursory glance at the picture. You have to search for something with which you're familiar until it all snaps into scale; this is sometimes difficult and I found the disorientation delightful and ultimately enlightening.

You will learn nothing about physics from this book. You will learn nothing about photography apart from a goal to which to aspire as you master the art. But you will see some of the most amazing creations of the human mind, built in search of the foundations of our understanding of the universe we inhabit, photographed by a master and reproduced superbly, inviting you to linger on every image and wish you could see these wonders with your own eyes.

December 2012 Permalink

Hiltzik, Michael. Colossus. New York: Free Press, 2010. ISBN 978-1-4165-3216-3.
This book, subtitled “Hoover Dam and the Making of the American Century” chronicles the protracted, tangled, and often ugly history which led up to the undertaking, in the depths of the Great Depression, of the largest single civil engineering project ever attempted in the world up to that time, its achievement ahead of schedule and only modestly above budget, and its consequences for the Colorado River basin and the American West, which it continues to profoundly influence to this day.

Ever since the 19th century, visionaries, ambitious politicians, builders and engineers, and more than a few crackpots and confidence men had dreamt of and promoted grand schemes to harness the wild rivers of the American southwest, using their water to make the barren deserts bloom and opening up a new internal frontier for agriculture and (with cheap hydroelectric power) industry. Some of the schemes, and their consequences, were breathtaking. Consider the Alamo Canal, dug in 1900 to divert water from the Colorado River to irrigate the Imperial Valley of California. In 1905, the canal, already silted up by the water of the Colorado, overflowed, creating a flood which submerged more than five hundred square miles of lowlands in southern California, creating the Salton Sea, which is still there today (albeit smaller, due to evaporation and lack of inflow). Just imagine how such an environmental disaster would be covered by the legacy media today. President Theodore Roosevelt, considered a champion of the environment and the West, declined to provide federal assistance to deal with the disaster, leaving it up to the Southern Pacific Railroad, who had just acquired title to the canal, to, as the man said, “plug the hole”.

Clearly, the challenges posed by the notoriously fickle Colorado River, known for extreme floods, heavy silt, and a tendency to jump its banks and establish new watercourses, would require a much more comprehensive and ambitious solution. Further, such a solution would require the assent of the seven states within the river basin: Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming, among the sparsely populated majority of which there was deep distrust that California would exploit the project to loot them of their water for its own purposes. Given the invariant nature of California politicians and subsequent events, such suspicion was entirely merited.

In the 1920s, an extensive sequence of negotiations and court decisions led to the adoption of a compact between the states (actually, under its terms, only six states had to approve it, and Arizona did not until 1944). Commerce Secretary Herbert Hoover played a major part in these negotiations, although other participants dispute that his rôle was as central as he claimed in his memoirs. In December 1928, President Coolidge signed a bill authorising construction of the dam and a canal to route water downstream, and Congress appropriated US$165 million for the project, the largest single federal appropriation in the nation's history to that point.

What was proposed gave pause even to the master builders who came forward to bid on the project: an arch-gravity dam 221 metres high, 379 metres long, and 200 metres wide at its base. Its construction would require 3.25 million cubic yards (2.48 million cubic metres) of concrete, and would be, by a wide margin, the largest single structure ever built by the human species. The dam would create a reservoir containing 35.2 cubic kilometres of water, with a surface area of 640 square kilometres. These kinds of numbers had to bring a sense of “failure is not an option” even to the devil-may-care roughneck engineers of the epoch. Because, if for no other reason, they had a recent example of how the devil might care in the absence of scrupulous attention to detail. Just months before the great Colorado River dam was approved, the St. Francis Dam in California, built with the same design proposed for the new dam, suddenly failed catastrophically, killing more than 600 people downstream. William Mulholland, an enthusiastic supporter of the Colorado dam, had pronounced the St. Francis dam safe just hours before it failed. The St. Francis dam collapse was the worst civil engineering failure in American history and arguably remains so to date. The consequences of a comparable failure of the new dam were essentially unthinkable.

The contract for construction was won by a consortium of engineering firms called the “Six Companies” including names which would be celebrated in twentieth century civil engineering including Kaiser, Bechtel, and Morrison-Knudsen. Work began in 1931, as the Depression tightened its grip upon the economy and the realisation sank in that a near-term recovery was unlikely to occur. With this project one of the few enterprises hiring, a migration toward the job site began, and the labour market was entirely tilted toward the contractors. Living and working conditions at the outset were horrific, and although the former were eventually ameliorated once the company town of Boulder City was constructed, the rate of job-related deaths and injuries remained higher than those of comparable projects throughout the entire construction.

Everything was on a scale which dwarfed the experience of earlier projects. If the concrete for the dam had been poured as one monolithic block, it would have taken more than a century to cure, and the heat released in the process would have caused it to fracture into rubble. So the dam was built of more than thirty thousand blocks of concrete, each about fifty feet square and five feet high, cooled as it cured by chilled water from a refrigeration plant running through more than six hundred miles of cooling pipes embedded in the blocks. These blocks were then cemented into the structure of the dam with grout injected between the interlocking edges of adjacent blocks. And this entire structure had to be engineered to last forever and never fail.

At the ceremony marking the start of construction, Secretary of the Interior Ray Wilbur surprised the audience by referring to the project as “Hoover Dam”—the first time a comparable project had been named after a sitting president, which many thought unseemly, notwithstanding Hoover's involvement in the interstate compact behind the project. After Hoover's defeat by Roosevelt in 1932, the new administration consistently referred to the project as “Boulder Dam” and so commemorated it in a stamp issued on the occasion of the dam's dedication in September 1935. This was a bit curious as well, since the dam was actually built in Black Canyon, since the geological foundations in Boulder Canyon had been found unsuitable to anchor the structure. For years thereafter, Democrats called it “Boulder Dam”, while Republican stalwarts insisted on “Hoover Dam”. In 1947, newly-elected Republican majorities in the U.S. congress passed a bill officially naming the structure after Hoover and, signed by President Truman, so it has remained ever since.

This book provides an engaging immersion in a very different age, in which economic depression was tempered by an unshakable confidence in the future and the benefits to flow from continental scale collective projects, guided by wise men in Washington and carried out by roughnecks risking their lives in the savage environment of the West. The author discusses whether such a project could be accomplished today and concludes that it probably couldn't. (Of course, since all of the rivers with such potential for irrigation and power generation have already been dammed, the question is largely moot, but is relevant for grand scale projects such as solar power satellites, ocean thermal energy conversion, and other engineering works of comparable transformative consequences on the present-day economy.) We have woven such a web of environmental constraints, causes for litigation, and a tottering tower of debt that it is likely that a project such as Hoover Dam, without which the present-day U.S. southwest would not exist in its present form, could never have been carried out today, and certainly not before its scheduled completion date. Those who regard such grand earthworks as hubristic folly (to which the author tips his hat in the final chapters) might well reflect that history records the achievements of those who have grand dreams and bring them into existence, not those who sputter out their lives in courtrooms or trading floors.

December 2010 Permalink

Launius, Roger D. and Dennis R. Jenkins. Coming Home. Washington: National Aeronautics and Space Administration, 2012. ISBN 978-0-16-091064-7. NASA SP-2011-593.
In the early decades of the twentieth century, when visionaries such as Konstantin Tsiolkovsky, Hermann Oberth, and Robert H. Goddard started to think seriously about how space travel might be accomplished, most of the focus was on how rockets might be designed and built which would enable their payloads to be accelerated to reach the extreme altitude and velocity required for long-distance ballistic or orbital flight. This is a daunting problem. The Earth has a deep gravity well: so deep that to place a satellite in a low orbit around it, you must not only lift the satellite from the Earth's surface to the desired orbital altitude (which isn't particularly difficult), but also impart sufficient velocity to it so that it does not fall back but, instead, orbits the planet. It's the speed that makes it so difficult.

Recall that the kinetic energy of a body is given by ½mv². If mass (m) is given in kilograms and velocity (v) in metres per second, energy is measured in joules. Note that the square of the velocity appears in the formula: if you triple the velocity, you need nine times the energy to accelerate the mass to that speed. A satellite must have a velocity of around 7.8 kilometres/second to remain in a low Earth orbit. This is about eight times the muzzle velocity of the 5.56×45mm NATO round fired by the M-16 and AR-15 rifles. Consequently, the satellite has sixty-four times the energy per unit mass of the rifle bullet, and the rocket which places it into orbit must expend all of that energy to launch it.

Every kilogram of a satellite in a low orbit has a kinetic energy of around 30 megajoules (thirty million joules). By comparison, the energy released by detonating a kilogram of TNT is 4.7 megajoules. The satellite, purely due to its motion, has more than six times the energy as an equal mass of TNT. The U.S. Space Shuttle orbiter had a mass, without payload, of around 70,000 kilograms. When preparing to leave orbit and return to Earth, its kinetic energy was about that of half a kiloton of TNT. During the process of atmospheric reentry and landing, in about half an hour, all of that energy must be dissipated in a non-destructive manner, until the orbiter comes to a stop on the runway with kinetic energy zero.
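
For readers who like to check the arithmetic, here is a short Python sketch of the figures quoted above (the orbital velocity, TNT energy, and orbiter mass are those given in the text; the rest is just ½mv²):

    # Orbital kinetic energy, using the figures quoted in the text above.
    V_ORBIT = 7800.0          # low Earth orbit velocity, metres per second
    TNT_J_PER_KG = 4.7e6      # energy released by one kilogram of TNT, joules
    KILOTON_J = 4.184e12      # energy of one kiloton of TNT, joules

    def kinetic_energy(mass_kg, velocity_ms):
        """Kinetic energy in joules: 0.5 * m * v**2."""
        return 0.5 * mass_kg * velocity_ms ** 2

    per_kg = kinetic_energy(1.0, V_ORBIT)
    print(f"Per kilogram in orbit: {per_kg / 1e6:.1f} MJ, "
          f"or {per_kg / TNT_J_PER_KG:.1f} times the same mass of TNT")

    orbiter = kinetic_energy(70_000.0, V_ORBIT)
    print(f"70,000 kg orbiter: about {orbiter / KILOTON_J:.2f} kilotons of TNT equivalent")

Running this gives roughly 30 megajoules per kilogram, a bit more than six times the energy of TNT, and about half a kiloton for the orbiter, matching the figures above.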

This is an extraordinarily difficult problem, which engineers had to confront as soon as they contemplated returning payloads from space to the Earth. The first payloads were, of course, warheads on intercontinental ballistic missiles. While these missiles did not go into orbit, they achieved speeds which were sufficiently fast as to present essentially the same problems as orbital reentry. When the first reconnaissance satellites were developed by the U.S. and the Soviet Union, the technology to capture images electronically and radio them to ground stations did not yet exist. The only option was to expose photographic film in orbit then physically return it to Earth for processing and interpretation. This was the requirement which drove the development of orbital reentry. The first manned orbital capsules employed technology proven by film return spy satellites. (In the case of the Soviets, the basic structure of the Zenit reconnaissance satellites and manned Vostok capsules was essentially the same.)

This book chronicles the history and engineering details of U.S. reentry and landing technology, for both unmanned and manned spacecraft. While many in the 1950s envisioned sleek spaceplanes as the vehicle of choice, when the time came to actually solve the problems of reentry, a seemingly counterintuitive solution came to the fore: the blunt body. We're all acquainted with the phenomenon of air friction: the faster an airplane flies, the hotter its skin gets. The SR-71, which flew at three times the speed of sound, had to be made of titanium since aluminium would have lost its strength at the temperatures which resulted from friction. But at the velocity of a returning satellite, around eight times faster than an SR-71, air behaves very differently. The satellite is moving so fast that air can't get out of the way and piles up in front of it. As the air is compressed, its temperature rises until it equals or exceeds that of the surface of the Sun. This heat is then radiated in all directions. That impinging upon the reentering body can, if not dealt with, destroy it.
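
A crude energy estimate (my own, not the authors') shows why such extreme temperatures are unavoidable: the kinetic energy of the oncoming air, measured in the vehicle's frame, has to go somewhere.

    # If all of the relative kinetic energy of the air went into heating it as an
    # ideal gas, the temperature rise would be v**2 / (2 * cp).  In reality,
    # dissociation and ionisation of the air soak up much of this energy, so
    # actual shock-layer temperatures are far lower, in the thousands of kelvins,
    # comparable to the surface of the Sun rather than tens of thousands.
    V_REENTRY = 7800.0   # metres per second, orbital reentry speed
    CP_AIR = 1005.0      # J/(kg K), specific heat of air at ordinary temperatures

    naive_rise = V_REENTRY ** 2 / (2.0 * CP_AIR)
    print(f"Naive ideal-gas temperature rise: about {naive_rise:,.0f} K")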

A streamlined shape will cause the compression to be concentrated at the nose, leading to extreme heating. A blunt body, however, will cause a shock wave to form which stands off from its surface. Since the compressed air radiates heat in all directions, only that radiated in the direction of the body will be absorbed; the rest will be harmlessly radiated away into space, reducing total heating. There is still, however, plenty of heat to worry about.

Let's consider the Mercury capsules in which the first U.S. astronauts flew. They reentered blunt end first, with a heat shield facing the air flow. Compression in the shock layer ahead of the heat shield raised the air temperature to around 5800 K, almost precisely the surface temperature of the Sun. Over the reentry, the heat pulse would deposit a total of 100 megajoules per square metre of heat shield. The astronaut was just a few centimetres from the shield, and the temperature on the back side of the shield could not be allowed to exceed 65° C. How in the world do you accomplish that?

Engineers have investigated a wide variety of ways to beat the heat. The simplest are completely passive systems: they have no moving parts. An example of a passive system is a “heat sink”. You simply have a mass of some substance with high heat capacity (which means it can absorb a large amount of energy with a small rise in temperature), usually a metal, which absorbs the heat during the pulse, then slowly releases it. The heat sink must be made of a material which doesn't melt or corrode during the heat pulse. The original design of the Mercury spacecraft specified a beryllium heat sink, which was flown on the two suborbital flights but replaced for the orbital missions. The Space Shuttle used a passive heat shield of a different kind: ceramic tiles which could withstand the heat on their surface and provided insulation which prevented the heat from reaching the aluminium structure beneath. The tiles proved very difficult to manufacture, were fragile, and required a great deal of maintenance, but they were, in principle, reusable.
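
As a rough illustration of why heat sinks are heavy (my own sketch, with an assumed handbook specific heat for beryllium, and ignoring re-radiation from the hot face of the shield, which in practice carries away part of the load), consider how much metal it would take to soak up the 100 megajoules per square metre quoted above:

    # Heat sink sizing: Q = m * c * dT, so m = Q / (c * dT) per square metre.
    # The heat load is the orbital Mercury figure quoted above; the specific heat
    # and allowed temperature rises are assumptions for illustration only.
    HEAT_PULSE_J_PER_M2 = 100e6   # joules per square metre
    SPECIFIC_HEAT_BE = 1800.0     # J/(kg K), approximate for beryllium

    for allowed_rise_k in (500.0, 800.0):
        mass_per_m2 = HEAT_PULSE_J_PER_M2 / (SPECIFIC_HEAT_BE * allowed_rise_k)
        print(f"Allowed rise of {allowed_rise_k:.0f} K: "
              f"about {mass_per_m2:.0f} kg of beryllium per square metre")

Even as a rough upper bound, this helps explain why the beryllium heat sink flew only on the brief suborbital Mercury flights, where the heat load was far smaller, and gave way to an ablative shield for orbital missions.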

The most commonly used technology for reentry is ablation. A heat shield is fabricated of a material which, when subjected to reentry heat, chars and releases gases. The gases carry away the heat, while the charred material which remains provides insulation. A variety of materials have been used for ablative heat shields, from advanced silicone and carbon composites to oak wood, on some early Soviet and Chinese reentry experiments. Ablative heat shields were used on Mercury orbital capsules, in projects Gemini and Apollo, all Soviet and Chinese manned spacecraft, and will be used by the SpaceX and Boeing crew transport capsules now under development.

If the heat shield works and you make it through the heat pulse, you're still falling like a rock. The solution of choice for landing spacecraft has been parachutes, and even though they seem simple conceptually, in practice there are many details which must be dealt with, such as stabilising the falling craft so it won't tumble and tangle the parachute suspension lines when the parachute is deployed, and opening the canopy in multiple stages to prevent a jarring shock which might damage the parachute or craft.

The early astronauts were pilots, and never much liked the idea of having to be fished out of the ocean by the Navy at the conclusion of their flights. A variety of schemes were explored to allow piloted flight to a runway landing, including inflatable wings and paragliders, but difficulties developing the technologies and schedule pressure during the space race caused the Gemini and Apollo projects to abandon them in favour of parachutes and a splashdown. Not until the Space Shuttle were precision runway landings achieved, and now NASA has abandoned that capability. SpaceX hopes to eventually return their Crew Dragon capsule to a landing pad with a propulsive landing, but that is not discussed here.

In the 1990s, NASA pursued a variety of spaceplane concepts: the X-33, X-34, and X-38. These projects pioneered new concepts in thermal protection for reentry which would be less expensive and maintenance-intensive than the Space Shuttle's tiles. In keeping with NASA's practice of the era, each project was cancelled after consuming a large sum of money and extensive engineering development. The X-37 was developed by NASA, and when abandoned, was taken over by the Air Force, which operates it on secret missions. Each of these projects is discussed here.

This book is the definitive history of U.S. spacecraft reentry systems. There is a wealth of technical detail, and some readers may find there's more here than they wanted to know. No specialised knowledge is required to understand the descriptions: just patience. In keeping with NASA tradition, quaint units like inches, pounds, miles per hour, and British Thermal Units are used in most of the text, but then in the final chapters, the authors switch back and forth between metric and U.S. customary units seemingly at random. There are some delightful anecdotes, such as when the designers of NASA's new Orion capsule had to visit the Smithsonian's National Air and Space Museum to examine an Apollo heat shield to figure out how it was made, how it was attached to the spacecraft, and what the properties of its proprietary ablative material were.

As a NASA publication, this book is in the public domain. The paperback linked to above is a republication of the original NASA edition. The book may be downloaded for free from the book's Web page in three electronic formats: PDF, MOBI (Kindle), and EPUB. Get the PDF! While the PDF is a faithful representation of the print edition, the MOBI edition is hideously ugly and mis-formatted. Footnotes are interleaved in the text at random locations in red type (except when they aren't in red type), block quotes are not set off from the main text, dozens of hyphenated words and adjacent words are run together, and the index is completely useless: citing page numbers in the print edition which do not appear in the electronic edition; for some reason large sections of the index are in red type. I haven't looked at the EPUB edition, but given the lack of attention to detail evident in the MOBI, my expectations for it are not high.

April 2016 Permalink

Lehto, Steve. Chrysler's Turbine Car. Chicago: Chicago Review Press, 2010. ISBN 978-1-56976-549-4.
There were few things so emblematic of the early 1960s as the jet airliner. Indeed, the period was often referred to contemporarily as the “jet age”, and products from breakfast cereal to floor wax were positioned as modern wonders of that age. Anybody who had experienced travel in a piston powered airliner and then took their first flight in a jet felt that they had stepped into the future: gone were the noise, rattling, and shaking from the cantankerous and unreliable engines that would knock the fillings loose in your teeth, replaced by a smooth whoosh which (although, in the early jets, deafening to onlookers outside) allowed carrying on a normal conversation inside the cabin. Further, notwithstanding some tragic accidents in the early days as pilots became accustomed to the characteristics of the new engines and airframes, it soon became apparent that these new airliners were a great deal safer and more reliable than their predecessors: they crashed a lot less frequently, and flights delayed and cancelled due to mechanical problems became the rare exception rather than something air travellers put up with only because the alternative was so much worse.

So, if the jet age had arrived, and jet power had proven itself to be so superior to the venerable and hideously overcomplicated piston engine, where were the jet cars? This book tells the long and tangled story of just how close we came to having turbine powered automobiles in the 1960s, how a small group of engineers plugging away at problem after problem over twenty years managed to produce an automotive powerplant so clearly superior to contemporary piston engines that almost everybody who drove a vehicle powered by it immediately fell in love and wished they could have one of their own, and ultimately how financial problems and ill-considered government meddling destroyed the opportunity to replace automotive powerplants dependent upon petroleum-based fuels (which, at the time, contained tetraethyl lead) with one which would run on any combustible liquid, emit far less pollution from the tailpipe, run for hundreds of thousands of miles without an oil change or need for a tune-up, start instantly and reliably regardless of the ambient temperature, and run so smoothly and quietly that for the first time passengers were aware of the noise of the tires rolling over the road.

In 1945, George Huebner, who had worked on turboprop aircraft for Chrysler during World War II, returned to the civilian automotive side of the company as war work wound down. A brilliant engineer as well as a natural-born promoter of all things he believed in, himself most definitely included, by 1946 he was named Chrysler's chief engineer and used his position to champion turbine propulsion, which he had already seen was the future in aviation, for automotive applications. The challenges were daunting: turboshaft engines (turbines which delivered power by turning a shaft coupled to the turbine rotor, as used in turboprop airplanes and helicopters) gulped fuel at a prodigious rate, including when at “idle”, took a long time to “spool up” to maximum power, required expensive exotic materials in the high-temperature section of the engine, and had tight tolerances which required parts to be made by costly and low production rate investment casting, which could not produce parts in the quantity, nor at a cost acceptable for a mass market automotive powerplant.

Like all of the great engineers, Huebner was simultaneously stubborn and optimistic: stubborn in his belief that a technology so much simpler and inherently more thermodynamically efficient must eventually prevail, and optimistic that with patient engineering, tackling one problem after another and pursuing multiple solutions in parallel, any challenge could be overcome. By 1963, coming up on the twentieth year of the effort, progress had been made on all fronts to the extent that Huebner persuaded Chrysler management that the time had come to find out whether the driving public was ready to embrace the jet age in their daily driving. In one of the greatest public relations stunts of all time, Chrysler ordered 55 radically styled (for the epoch) bodies from the Ghia shop in Italy, and mated them with turbine drivetrains and chassis in a Michigan factory previously used to assemble taxicabs. Fifty of these cars (the other five being retained for testing and promotional purposes) were loaned, at no charge, for periods of three months each, to a total of 203 drivers and their families. Delivery of one of these loaners became a media event, and the lucky families instant celebrities in their communities: a brief trip to the grocery store would turn into several hours fielding questions about the car and offering rides around the block to gearheads who pleaded for them.

The turbine engines, as turbine engines are wont to do once the bugs have been wrung out, performed superbly. Drivers of the loaner cars put more than a million miles on them with only minor mechanical problems. One car was rear-ended at a stop light, but you can't blame the engine for that. (Well, perhaps the guilty party was transfixed by the striking design of the rear of the car!) Drivers did notice slower acceleration from a stop due to “turbine lag”—the need for the turbine to spool up in RPM from idle—and poorer fuel economy in city driving. Fuel economy on the highway was comparable to contemporary piston engine cars. What few drivers noticed, in the era of four-gallons-a-buck gasoline, was that the turbine could run on just about any fuel you can imagine: unleaded gasoline, kerosene, heating oil, ethanol, methanol, aviation jet fuel, diesel, or any mix thereof. As a stunt, a Chrysler Turbine was filled up with peanut oil at a peanut festival in Georgia, with tequila during a tour through Mexico, and with perfume at a French auto show; in each case the engine ran perfectly on the eccentric fuel (albeit with a distinctive aroma imparted to the exhaust).

So, here we are all these many years later in the twenty-first century. Where are our jet cars? That's an interesting story which illustrates the unintended consequences of well-intended public policy. Just as the turbine engine was being refined and perfected as an automotive power plant, the U.S. government started to obsess about air quality, and decided, in the spirit of the times, to impose detailed mandates upon manufacturers which constrained the design of their products. (As opposed, say, to imposing an excise tax upon vehicles based upon their total emissions and allowing manufacturers to weigh the trade-offs across their entire product line, or leaving it to states and municipalities most affected by pollution to enforce their own standards on vehicles licensed in their jurisdiction.) Since almost every vehicle on the road was piston engine powered, it was inevitable that regulators would draft their standards around the characteristics of that powerplant. In doing so, they neglected to note that the turbine engine already met all of the most stringent emissions standards they then envisioned for piston engines (and in addition, ran on unleaded fuels, completely eliminating the most hazardous emission of piston engines) with a single exception: oxides of nitrogen (NOx). The latter was a challenge for turbine engineers, because the continuous combustion in a turbine provides a longer time for nitrogen to react with oxygen. Engineers were sure they'd be able to find a way to work around this single remaining challenge, having already solved all of the emission problems the piston engine still had to overcome.

But they never got the chance. The government regulations were imposed with such short times for compliance that automakers were compelled to divert all of their research, development, and engineering resources to modifying their existing engines to meet the new standards, which proved to be ever-escalating: once a standard was met, it was made more stringent with another near-future deadline. At Chrysler, the smallest of the Big Three, this hit particularly hard, and the turbine project found its budget and engineering staff cannibalised to work on making ancient engines run rougher, burn more fuel, perform more anæmically, and require more frequent and costly maintenance to satisfy a tailpipe emission standard written into law by commissars in Washington who probably took the streetcar to work. Then the second part of the double whammy hit: the oil embargo and the OPEC cartel hike in the price of oil, which led to federal fuel economy standards, which pulled in the opposite direction from the emissions standards and consumed all resources which might have been devoted to breakthroughs in automotive propulsion which would have transcended the increasingly baroque tweaks to the piston engine. A different time had arrived, and increasingly people who once eagerly awaited the unveiling of the new models from Detroit each fall began to listen to their neighbours who'd bought one of those oddly-named Japanese models and said, “Well, it's tiny and it looks odd, but it costs a whole lot less, goes almost forever on a gallon of gas, and it never, ever breaks”. From the standpoint of the mid-1970s, this began to sound pretty good to a lot of folks, and Detroit, the city and the industry which built it, began its descent from apogee to the ruin it is today.

If we could go back and change a few things in history, would we all be driving turbine cars today? I'm not so sure. At the point the turbine was undone by ill-advised public policy, one enormous engineering hurdle remained, and in retrospect it isn't clear that it could have been overcome. All turbine engines, to the present day, require materials and manufacturing processes which have never been scaled up to the volumes of passenger car manufacturing. The pioneers of the automotive turbine were confident it could be done, but they conceded that it would require at least the investment of building an entire auto plant from scratch, and that is something that Chrysler could not remotely fund at the time. It's much like building a new semiconductor fabrication facility with a new scaling factor, but without the confidence that if it succeeds a market will be there for its products. At the time the Chrysler Turbine cars were tested, Huebner estimated their cost of manufacturing at around US$50,000: roughly half of that the custom-crafted body and the rest the powertrain—the turbine engines were essentially hand-built. Such has been the depreciation of the U.S. dollar that this is equivalent to about a third of a million present-day greenbacks. Then or now, getting this cost down to something the average car buyer could afford was a formidable challenge, and it isn't obvious that the problem could have been solved, even if the resources needed to do so had not been expended complying with emissions and fuel economy diktats.

Further, turbine engines become less efficient as you scale them down—in the turbine world, the bigger the better, and they work best when run at a constant load over a long period of time. Consequently, turbine power would seem optimal for long-haul trucks, which require more power than a passenger car, run at near-constant speed over highways for hours on end, and already run on the diesel fuel which is ideal for turbines. And yet, despite research and test turbine vehicles having been built by manufacturers in the U.S., Britain, and Sweden, the diesel powerplant remains supreme. Truckers and trucking companies understand long-term investment and return, and yet the apparent advantages of the turbine haven't allowed it to gain a foothold in that market. Perhaps the turbine passenger car was one of those great ideas for which, in the final analysis, the numbers just didn't work.

I actually saw one of these cars on the road in 1964, doubtless driven by one of the lucky drivers chosen to test it. There was something sweet about seeing the Jet Car of the Future waiting to enter a congested tunnel while we blew past it in our family Rambler station wagon, but that's just cruel. In the final chapter, we get to vicariously accompany the author on a drive in the Chrysler Turbine owned by Jay Leno, who contributes the foreword to this book.

Mark Olson's turbinecar.com has a wealth of information, photographs, and original documents relating to the Chrysler Turbine Car. The History Channel's documentary, The Chrysler Turbine, is available on DVD.

January 2011 Permalink

Mankins, John C. The Case for Space Solar Power. Houston: Virginia Edition, 2014. ISBN 978-0-9913370-0-2.
As world population continues to grow and people in the developing world improve their standard of living toward the level of residents of industrialised nations, demand for energy will increase enormously. Even taking into account anticipated progress in energy conservation and forecasts that world population will reach a mid-century peak and then stabilise, the demand for electricity alone is forecasted to quadruple in the century from 2000 to 2100. If electric vehicles shift a substantial part of the energy consumed for transportation from hydrocarbon fuels to electricity, the demand for electric power will be greater still.

Providing this electricity in an affordable, sustainable way is a tremendous challenge. Most electricity today is produced by burning fuels such as coal, natural gas, and petroleum; by nuclear fission reactors; and by hydroelectric power generated by dams. Quadrupling electric power generation by any of these means poses serious problems. Fossil fuels may be subject to depletion, pose environmental consequences both in extraction and release of combustion products into the atmosphere, and are distributed unevenly around the world, leading to geopolitical tensions between have and have-not countries. Uranium fission is a technology with few environmental drawbacks, but operating it in a safe manner is very demanding and requires continuous vigilance over the decades-long lifespan of a power station. Further, the risk exists that nuclear material can be diverted for weapons use, especially if nuclear power stations proliferate into areas which are politically unstable. Hydroelectric power is clean, generally reliable (except in the case of extreme droughts), and inexhaustible, but unfortunately most rivers which are suitable for its generation have already been dammed, and potential projects which might be developed are insufficient to meet the demand.

Well, what about those “sustainable energy” projects the environmentalists are always babbling about: solar panels, eagle shredders (wind turbines), and the like? They do generate energy without fuel, but they are not the solution to the problem. In order to understand why, we need to look into the nature of the market for electricity, which is segmented into two components, even though the current flows through the same wires. The first is “base load” power. The demand for electricity varies during the day, from day to day, and seasonally (for example, electricity for air conditioning peaks during the mid-day hours of summer). The base load is the electricity demand which is always present, regardless of these changes in demand. If you look at a long-term plot of electricity demand and draw a line through the troughs in the curve, everything below that line is base load power and everything above it is “peak” power. Base load power is typically provided by the sources discussed in the previous paragraph: hydrocarbon, nuclear, and hydroelectric. Because there is a continuous demand for the power they generate, these plants are designed to run non-stop (with excess capacity to cover stand-downs for maintenance), and may be complicated to start up or shut down. In Switzerland, for example, 56% of base load power is produced from hydroelectric plants and 39% from nuclear fission reactors.
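
To make the distinction concrete, here is a minimal Python sketch of how a demand curve splits into base load and peak components—the base load is just the line drawn through the lowest trough. The 24-hour demand profile is entirely hypothetical, not data from the book.

```python
# A minimal sketch (hypothetical numbers) of splitting a demand curve into base load
# and peak: everything below the lowest trough is base load, the rest is peak.
hourly_demand_mw = [620, 600, 590, 585, 600, 650, 720, 800,
                    850, 870, 880, 900, 910, 905, 890, 870,
                    860, 880, 920, 900, 850, 780, 700, 650]  # hypothetical 24-hour profile

base_load = min(hourly_demand_mw)                 # the "line through the troughs"
peak = [d - base_load for d in hourly_demand_mw]  # everything above that line

print(f"Base load: {base_load} MW (served by always-on plants)")
print(f"Peak demand above base: max {max(peak)} MW, average {sum(peak)/len(peak):.0f} MW")
```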

The balance of electrical demand, peak power, is usually generated by smaller power plants which can be brought on-line and shut down quickly as demand varies. Peaking plants sell their power onto the grid at prices substantially higher than base load plants, which compensates for their less efficient operation and higher capital costs for intermittent operation. In Switzerland, most peak energy is generated by thermal plants which can burn either natural gas or oil.

Now the problem with “alternative energy” sources such as solar panels and windmills becomes apparent: they produce neither base load nor peak power. Solar panels produce electricity only during the day, and when the Sun is not obscured by clouds. Windmills, obviously, only generate when the wind is blowing. Since there is no way to efficiently store large quantities of energy (all existing storage technologies raise the cost of electricity to uneconomic levels), these technologies cannot be used for base load power, since they cannot be relied upon to continuously furnish power to the grid. Neither can they be used for peak power generation, since the times at which they are producing power may not coincide with times of peak demand. That isn't to say these energy sources cannot be useful. For example, solar panels on the roofs of buildings in the American southwest make a tremendous amount of sense since they tend to produce power at precisely the times the demand for air conditioning is greatest. This can smooth out, but not replace, the need for peak power generation on the grid.

If we wish to dramatically expand electricity generation without relying on fossil fuels for base load power, there are remarkably few potential technologies. Geothermal power is reliable and inexpensive, but is only available in a limited number of areas and cannot come close to meeting the demand. Nuclear fission, especially with modern, modular designs, is feasible, but faces formidable opposition from the fear-based community. If nuclear fusion ever becomes practical, we will have a limitless, mostly clean energy source, but after sixty years of research we are still decades away from an operational power plant, and it is entirely possible the entire effort may fail. The liquid fluoride thorium reactor, a technology demonstrated in the 1960s, could provide centuries of energy without the nuclear waste or weapons diversion risks of uranium-based nuclear power, but even if it were developed to industrial scale it's still a “nuclear reactor” and can be expected to stimulate the same hysteria as existing nuclear technology.

This book explores an entirely different alternative. Think about it: once you get above the Earth's atmosphere and sufficiently far from the Earth to avoid its shadow, the Sun provides a steady 1.368 kilowatts per square metre, and will continue to do so, non-stop, for billions of years into the future (actually, the Sun is gradually brightening, so on the scale of hundreds of millions of years this figure will increase). If this energy could be harvested and delivered efficiently to Earth, the electricity needs of a global technological civilisation could be met with a negligible impact on the Earth's environment. With present-day photovoltaic cells, we can convert 40% of incident sunlight to electricity, and wireless power transmission in the microwave band (to which the Earth's atmosphere is transparent, even in the presence of clouds and precipitation) has been demonstrated at 40% efficiency, with 60% end-to-end efficiency expected for future systems.
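
Multiplying out the figures quoted above gives a sense of the power density involved. The following sketch uses only the numbers in this paragraph plus the 2 gigawatt plant size discussed below; it is a back-of-the-envelope check, not a design calculation from the book.

```python
# Back-of-the-envelope sketch using the figures quoted in the text: solar constant,
# photovoltaic conversion, and end-to-end orbit-to-grid transmission efficiency.
solar_constant = 1368.0     # W/m² above the atmosphere
pv_efficiency = 0.40        # present-day cells, per the text
link_efficiency = 0.60      # end-to-end transmission expected for future systems

delivered_per_m2 = solar_constant * pv_efficiency * link_efficiency
print(f"Power delivered to the grid per m² of orbital array: {delivered_per_m2:.0f} W")

plant_output = 2e9          # a notional 2 GW power satellite (see below)
area_km2 = plant_output / delivered_per_m2 / 1e6
print(f"Collector area for 2 GW: about {area_km2:.1f} km²")
```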

Thus, no scientific breakthrough of any kind is required to harvest abundant solar energy which presently streams past the Earth and deliver it to receiving stations on the ground which feed it into the power grid. Since the solar power satellites would generate energy 99.5% of the time (with short outages when passing through the Earth's shadow near the equinoxes, at which time another satellite at a different longitude could pick up the load), this would be base load power, with no fuel source required. It's “just a matter of engineering” to calculate what would be required to build the collector satellite, launch it into geostationary orbit (where it would stay above the same point on Earth), and build the receiver station on the ground to collect the energy beamed down by the satellite. Then, given a proposed design, one can calculate the capital cost to bring such a system into production, its operating cost, the price of power it would deliver to the grid, and the time to recover the investment in the system.

Solar power satellites are not a new idea. In 1968, Peter Glaser published a description of a system with photovoltaic electricity generation and microwave power transmission to an antenna on Earth; in 1973 he was granted U.S. patent 3,781,647 for the system. In the 1970s NASA and the Department of Energy conducted a detailed study of the concept, publishing a reference design in 1979 which envisioned a platform in geostationary orbit with solar arrays measuring 5 by 25 kilometres and requiring a monstrous space shuttle with a payload of 250 metric tons and space factories to assemble the platforms. The design was entirely conventional, using much the same technologies as were later used in the International Space Station (ISS) (but for a structure twenty times its size). Given that the ISS has a cost estimated at US$ 150 billion, NASA's 1979 estimate that a complete, operational solar power satellite system comprising 60 power generation platforms and Earth-based infrastructure would cost (in 2014 dollars) between 2.9 and 8.7 trillion might be considered optimistic. Back then, a trillion dollars was a lot of money, and this study pretty much put an end to serious consideration of solar power satellites in the U.S. for almost two decades. In the late 1990s, NASA, realising that much progress had been made in many of the enabling technologies for space solar power, commissioned a “Fresh Look Study”, which concluded that the state of the art was still insufficiently advanced to make power satellites economically feasible.

In this book, the author, after a 25-year career at NASA, recounts the history of solar power satellites to date and presents a radically new design, SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), which he argues is congruent with 21st century manufacturing technology. There are two fundamental reasons previous cost estimates for solar power satellites have come up with such forbidding figures. First, space hardware is hideously expensive to develop and manufacture. Measured in US$ per kilogram, a laptop computer is around $200/kg, a Boeing 747 $1400/kg, and a smart phone $1800/kg. By comparison, the Space Shuttle Orbiter cost $86,000/kg and the International Space Station around $110,000/kg. Most of the exorbitant cost of space hardware has little to do with the space environment, but is due to its being essentially hand-built in small numbers, and thus never having the benefit of moving down the learning curve as a product is put into mass production nor of automation in manufacturing (which isn't cost-effective when you're only making a few of a product). Second, once you've paid that enormous cost per kilogram for the space hardware, you have to launch it from the Earth into space and transport it to the orbit in which it will operate. For communication satellites which, like solar power satellites, operate in geostationary orbit, current launchers cost around US$ 50,000 per kilogram delivered there. New entrants into the market may substantially reduce this cost, but without a breakthrough such as full reusability of the launcher, it will stay at an elevated level.
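
As a purely illustrative bit of arithmetic, here is what those two cost drivers add up to per kilogram of working hardware in geostationary orbit. The “mass-produced” hardware figure and the future launch price are my own assumptions for comparison, not the author's estimates.

```python
# Illustrative comparison (assumed round numbers, not a design estimate): what a
# kilogram of space hardware costs to build and deliver to geostationary orbit
# today versus with mass-produced modules and cheaper launch.
today = {"hardware_usd_per_kg": 110_000,   # ISS-class, hand-built (figure from the text)
         "launch_usd_per_kg": 50_000}      # current cost to GEO (figure from the text)
future = {"hardware_usd_per_kg": 1_400,    # aircraft-like mass production (747 figure)
          "launch_usd_per_kg": 3_000}      # assumed GEO delivery with reusable launchers

for label, costs in (("today", today), ("mass-produced", future)):
    total = sum(costs.values())
    print(f"{label:>14}: ${total:,} per kg in geostationary orbit")
```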

SPS-ALPHA tackles the high cost of space hardware by adopting a “hyper modular” design, in which the power satellite is composed of huge numbers of identical modules of just eight different types. Each of these modules is on a scale which permits prototypes to be fabricated in facilities no more sophisticated than university laboratories and light enough that they fall into the “smallsat” category, permitting inexpensive tests in the space environment as required. A production power satellite, designed to deliver 2 gigawatts of electricity to Earth, will have almost four hundred thousand of each of three types of these modules, assembled in space by 4,888 robot arm modules, using more than two million interconnect modules. These are numbers where mass production economies kick in: once the module design has been tested and certified you can put it out for bids for serial production. And a factory which invests in making these modules inexpensively can be assured of follow-on business if the initial power satellite is a success, since there will be a demand for dozens or hundreds more once its practicality is demonstrated. None of these modules is remotely as complicated as an iPhone, and once they are made in comparable quantities they shouldn't cost any more. What would an iPhone cost if they only made five of them?
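
The economic logic behind “hyper modular” can be sketched with a standard learning-curve (Wright's law) calculation. The 85% progress ratio assumed here is a textbook value, not a figure from this book; only the module count is from the text.

```python
# Why "hyper modular" matters: a standard learning-curve (Wright's law) sketch.
# Assumes an 85% progress ratio (15% unit-cost reduction per doubling of cumulative
# output), which is a generic textbook figure, not taken from the book.
import math

progress_ratio = 0.85
b = math.log2(progress_ratio)          # Wright's law exponent

first_unit_cost = 1.0                  # normalised to the cost of the first unit
for n in (1, 100, 10_000, 400_000):
    nth_unit_cost = first_unit_cost * n ** b
    print(f"unit #{n:>7,}: cost ≈ {nth_unit_cost:.3f} × first unit")
```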

Modularity also requires the design to be distributed and redundant. There is no single-point failure mode in the system. The propulsion and attitude control module is replicated 200 times in the full design. As modules fail, for whatever cause, they will have minimal impact on the performance of the satellite and can be swapped out as part of routine maintenance. The author estimates that, on an ongoing basis, around 3% of modules will be replaced per year.

The problem of launch cost is addressed indirectly by the modular design. Since the heaviest module (the propulsion module) masses no more than 600 kg and none of the others exceeds 100 kg, they do not require a heavy-lift launcher. Modules can simply be apportioned out among a large number of flights of the most economical launchers available. Construction of a full-scale solar power satellite will require between 500 and 1000 launches per year of a launcher with a capacity in the 10 to 20 metric ton range. This dwarfs the entire global launch industry, and will provide motivation to fund the development of new, reusable, launcher designs and the volume of business to push their cost down the learning curve, with a goal of reducing the cost for launch to low Earth orbit to US$ 300–500 per kilogram. Note that the SpaceX Falcon Heavy, under development with a projected first flight in 2015, is already priced around US$ 1000/kg without reusability of the three core stages, which is expected to be introduced in the future.
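
In round numbers, the scale of that launch campaign works out as follows. The ranges are from the text; the per-kilogram prices are the author's targets, not demonstrated costs.

```python
# Rough scale of the launch campaign described above (ranges from the text; the
# per-kilogram target prices are goals, not demonstrated costs).
launches_per_year = (500, 1000)
payload_tonnes = (10, 20)
target_usd_per_kg = (300, 500)

low_mass = launches_per_year[0] * payload_tonnes[0]      # tonnes/year, low end
high_mass = launches_per_year[1] * payload_tonnes[1]     # tonnes/year, high end
print(f"Mass to orbit: {low_mass:,}–{high_mass:,} tonnes per year")

low_cost = low_mass * 1000 * target_usd_per_kg[0] / 1e9
high_cost = high_mass * 1000 * target_usd_per_kg[1] / 1e9
print(f"Launch spending at target prices: ${low_cost:.1f}–${high_cost:.1f} billion per year")
```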

The author lays out five “Design Reference Missions” which progress from small-scale tests of a few modules in low Earth orbit to a full production power satellite delivering 2 gigawatts to the electrical grid. He estimates a cost of around US$ 5 billion for the pilot plant demonstrator and 20 billion for the first full-scale power satellite. This is not a small sum of money, but is comparable to the approximately US$ 26 billion cost of the Three Gorges Dam in China. Once power satellites start to come on line, each feeding power into the grid with no cost for fuel and modest maintenance expenses (comparable to those for a hydroelectric dam), the initial investment does not take long to be recovered. Further, the power satellite effort will bootstrap the infrastructure for routine, inexpensive access to space, and the power satellite modules can also be used in other space applications (for example, very high power communication satellites).
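
A crude simple-payback sketch, using the US$ 20 billion first-unit figure and 2 GW at 99.5% availability from the text, together with a hypothetical wholesale electricity price (the book's own economic analysis is considerably more detailed than this):

```python
# Simple payback sketch with assumed numbers: US$ 20 billion first-unit cost and
# 2 GW at 99.5% availability from the text; the wholesale electricity prices are
# hypothetical, chosen only to bracket the calculation.
capital_usd = 20e9
power_w = 2e9
availability = 0.995
hours_per_year = 8766

energy_kwh_per_year = power_w / 1000 * hours_per_year * availability
for price_usd_per_kwh in (0.05, 0.10):
    revenue = energy_kwh_per_year * price_usd_per_kwh
    print(f"At ${price_usd_per_kwh:.2f}/kWh: ${revenue/1e9:.2f} B/year, "
          f"simple payback ≈ {capital_usd / revenue:.0f} years")
```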

The most frequently raised objection when power satellites are mentioned is fear that they could be used as a “death ray”. This is, quite simply, nonsense. The microwave power beam arriving at the Earth's surface will have an intensity between 10 and 20% of summer sunlight, so a mirror reflecting the Sun would be a more effective death ray. Extensive tests were done to determine if the beam would affect birds, insects, and aircraft flying through it, and all concluded there was no risk. A power satellite which beamed down its power with a laser could be weaponised, but nobody is proposing that, since it would have problems with atmospheric conditions and cost more than microwave transmission.

This book provides a comprehensive examination of the history of the concept of solar power from space, the various designs proposed over the years and studies conducted of them, and an in-depth presentation of the technology and economic rationale for the SPS-ALPHA system. It presents an energy future which is very different from that which most people envision, provides a way to bring the benefits of electrification to developing regions without any environmental consequences whatever, and ensures a secure supply of electricity for the foreseeable future.

This is a rewarding, but rather tedious, read. Perhaps it's due to the author's 25 years at NASA, but the text is cluttered with acronyms—there are fourteen pages of them defined in a glossary at the end of the book—and busy charts, some of which are difficult to read as reproduced in the Kindle edition. Copy editing is so-so: I noted 28 errors, and I wasn't especially looking for them. The index in the Kindle edition lists page numbers in the print edition which are useless because the electronic edition does not contain page numbers.

June 2014 Permalink

McCullough, David. The Wright Brothers. New York: Simon & Schuster, 2015. ISBN 978-1-4767-2874-2.
On December 8th, 1903, all was in readiness. The aircraft was perched on its launching catapult, the brave airman at the controls. The powerful internal combustion engine roared to life. At 16:45 the catapult hurled the craft into the air. It rose straight up, flipped, and with its wings coming apart, plunged into the Potomac river just 20 feet from the launching point. The pilot was initially trapped beneath the wreckage but managed to free himself and swim to the surface. After being rescued from the river, he emitted what one witness described as “the most voluble series of blasphemies” he had ever heard.

So ended the last flight of Samuel Langley's “Aerodrome”. Langley was a distinguished scientist and secretary of the Smithsonian Institution in Washington D.C. Funded by the U.S. Army and the Smithsonian for a total of US$ 70,000 (equivalent to around 1.7 million present-day dollars), the Aerodrome crashed immediately on both of its test flights, and was the subject of much mockery in the press.

Just nine days later, on December 17th, two brothers, sons of a churchman, with no education beyond high school, and proprietors of a bicycle shop in Dayton, Ohio, readied their own machine for flight near Kitty Hawk, on the windswept sandy hills of North Carolina's Outer Banks. Their craft, called just the Flyer, took to the air with Orville Wright at the controls. With the 12 horsepower engine driving the twin propellers and brother Wilbur running alongside to stabilise the machine as it moved down the launching rail into the wind, Orville lifted the machine into the air and achieved the first manned heavier-than-air powered flight, demonstrating the Flyer was controllable in all three axes. The flight lasted just 12 seconds and covered a distance of 120 feet.

After the first flight, the brothers took turns flying the machine three more times on the 17th. On the final flight Wilbur flew a distance of 852 feet in a flight of 59 seconds (a strong headwind was blowing, and this flight was over half a mile through the air). After the fourth flight, while the machine was being prepared to fly again, a gust of wind caught it and dragged it, along with assistant John T. Daniels, down the beach toward the ocean. Daniels escaped, but the Flyer was damaged beyond repair and never flew again. (The Flyer which can be seen in the Smithsonian's National Air and Space Museum today has been extensively restored.)

Orville sent a telegram to his father in Dayton announcing the success, and the brothers packed up the remains of the aircraft to be shipped back to their shop. The 1903 season was at an end. The entire budget for the project from 1900 through the successful first flights was less than US$ 1000 (24,000 dollars today), and was funded entirely by profits from the brothers' bicycle business.

How did two brothers with no formal education in aerodynamics or engineering succeed on a shoestring budget while Langley, with public funds at his disposal and the resources of a major scientific institution, failed so embarrassingly? Ultimately it was because the Wright brothers identified the key problem of flight and patiently worked on solving it through a series of experiments. Perhaps it was because they were in the bicycle business. (Although they are often identified as proprietors of a “bicycle shop”, they also manufactured their own bicycles and had acquired the machine tools, skills, and co-workers for the business, later applied to building the flying machine.)

The Wrights believed the essential problem of heavier than air flight was control. The details of how a bicycle is built don't matter much: you still have to learn to ride it. And the problem of control in free flight is much more difficult than riding a bicycle, where the only controls are the handlebars and, to a lesser extent, shifting the rider's weight. In flight, an airplane must be controlled in three axes: pitch (up and down), yaw (left and right), and roll (wings' angle to the horizon). The means for control in each of these axes must be provided, and what's more, just as for a child learning to ride a bike, the would-be aeronaut must master the skill of using these controls to maintain his balance in the air.

Through a patient program of subscale experimentation, first with kites controlled from the ground by lines manipulated by the operators, then gliders flown by a pilot on board, the Wrights developed their system of pitch control by a front-mounted elevator, yaw by a rudder at the rear, and roll by warping the wings of the craft. Further, they needed to learn how to fly using these controls and verify that the resulting plane would be stable enough that a person could master the skill of flying it. With powerless kites and gliders, this required a strong, consistent wind. After inquiries to the U.S. Weather Bureau, the brothers selected the Kitty Hawk site on the North Carolina coast. Just getting there was an adventure, but the wind was as promised and the sand and lack of large vegetation were ideal for their gliding experiments. They were definitely “roughing it” at this remote site, and at times were afflicted by clouds of mosquitos of Biblical plague proportions, but starting in 1900 they tested a series of successively larger gliders and by 1902 had a design which provided three-axis control, stability, and the controls for a pilot on board. In the 1902 season they made more than 700 flights and were satisfied the control problem had been mastered.

Now all that remained was to add an engine and propellers to the successful glider design, again scaling it up to accommodate the added weight. In 1903, you couldn't just go down to the hardware store and buy an engine, and automobile engines were much too heavy, so the Wrights' resourceful mechanic, Charlie Taylor, designed and built the four cylinder motor from scratch, using the new-fangled material aluminium for the engine block. The finished engine weighed just 152 pounds and produced 12 horsepower. The brothers could find no references for the design of air propellers and argued intensely over the topic, but eventually concluded they'd just have to make a best guess and test it on the real machine.

The Flyer worked on the second attempt (an earlier try on December 14th ended in a minor crash when Wilbur over-controlled at the moment of take-off). But this stunning success was the product of years of incremental refinement of the design, practical testing, and mastery of airmanship through experience.

Those four flights in December of 1903 are now considered one of the epochal events of the twentieth century, but at the time they received little notice. Only a few accounts of the flights appeared in the press, and some of them were garbled and/or sensationalised. The Wrights knew that the Flyer (whose wreckage was now in storage crates at Dayton), while a successful proof of concept and the basis for a patent filing, was not a practical flying machine. It could only take off into the strong wind at Kitty Hawk and had not yet demonstrated long-term controlled flight including aerial maneuvers such as turns or flying around a closed course. It was just too difficult travelling to Kitty Hawk, and the facilities of their camp there didn't permit rapid modification of the machines based upon experimentation.

They arranged to use an 84 acre cow pasture called Huffman Prairie located eight miles from Dayton along an interurban trolley line which made it easy to reach. The field's owner let them use it without charge as long as they didn't disturb the livestock. The Wrights devised a catapult to launch their planes, powered by a heavy falling weight, which would allow them to take off in still air. It was here, in 1904, that they refined the design into a practical flying machine and fully mastered the art of flying it over the course of about fifty test flights. Still, there was little note of their work in the press, and the first detailed account was published in the January 1905 edition of Gleanings in Bee Culture. Amos Root, the author of the article and publisher of the magazine, sent a copy to Scientific American, saying they could republish it without fee. The editors declined, and a year later mocked the achievements of the Wright brothers.

For those accustomed to the pace of technological development more than a century later, the leisurely pace of progress in aviation and lack of public interest in the achievement of what had been a dream of humanity since antiquity seems odd. Indeed, the Wrights, who had continued to refine their designs, would not become celebrities nor would their achievements be widely acknowledged until a series of demonstrations Wilbur would perform at Le Mans in France in the summer of 1908. Le Figaro wrote, “It was not merely a success, but a triumph…a decisive victory for aviation, the news of which will revolutionize scientific circles throughout the world.” And it did: stories of Wilbur's exploits were picked up by the press on the Continent, in Britain, and, belatedly, by papers in the U.S. Huge crowds came out to see the flights, and the intrepid American aviator's name was on every tongue.

Meanwhile, Orville was preparing for a series of demonstration flights for the U.S. Army at Fort Myer, Virginia. The army had agreed to buy a machine if it passed a series of tests. Orville's flights also began to draw large crowds from nearby Washington and extensive press coverage. All doubts about what the Wrights had wrought were now gone. During a demonstration flight on September 17, 1908, a propeller broke in flight. Orville tried to recover, but the machine plunged to the ground from an altitude of 75 feet, severely injuring him and killing his passenger, Lieutenant Thomas Selfridge, who became the first person to die in an airplane crash. Orville's recuperation would be long and difficult, aided by his sister, Katharine.

In early 1909, Orville and Katharine would join Wilbur in France, where he was to do even more spectacular demonstrations in the south of the country, training pilots for the airplanes he was selling to the French. Upon their return to the U.S., the Wrights were awarded medals by President Taft at the White House. They were feted as returning heroes in a two day celebration in Dayton. The diligent Wrights continued their work in the shop between events.

The brothers would return to Fort Myer, the scene of the crash, and complete their demonstrations for the army, securing the contract for the sale of an airplane for US$ 30,000. The Wrights would continue to develop their company, defend their growing portfolio of patents against competitors, and innovate. Wilbur was to die of typhoid fever in 1912, aged only 45 years. Orville sold his interest in the Wright Company in 1915 and, in his retirement, served for 28 years on the National Advisory Committee for Aeronautics, the precursor of NASA. He died in 1948. Neither brother ever married.

This book is a superb evocation of the life and times of the Wrights and their part in creating, developing, promoting, and commercialising one of the key technologies of the modern world.

February 2016 Permalink

Morton, Oliver. The Planet Remade. Princeton: Princeton University Press, 2015. ISBN 978-0-691-17590-4.
We live in a profoundly unnatural world. Since the start of the industrial revolution, and rapidly accelerating throughout the twentieth century, the actions of humans have begun to influence the flow of energy and materials in the Earth's biosphere on a global scale. Earth's current human population and standard of living are made possible entirely by industrial production of nitrogen-based fertilisers and crop plants bred to efficiently exploit them. Industrial production of fixed (chemically reactive) nitrogen from the atmosphere now substantially exceeds all of that produced by the natural soil bacteria on the planet which, prior to 1950, accounted for almost all of the nitrogen required to grow plants. Fixing nitrogen by the Haber-Bosch process is energy-intensive, and consumes around 1.5 percent of all the world's energy usage and, as a feedstock, 3–5% of natural gas produced worldwide. When we eat these crops, or animals fed from them, we are, in a sense, eating fossil fuels. On the order of four out of five of the nitrogen atoms in your body were fixed in a factory by the Haber-Bosch process. We are the children, not of nature, but of industry.

The industrial production of fertiliser, along with crops tailored to use them, is entirely responsible for the rapid growth of the Earth's population, which has increased from around 2.5 billion in 1950, when industrial fertiliser and “green revolution” crops came into wide use, to more than 7 billion today. This was accompanied not by the collapse into global penury predicted by Malthusian doom-sayers, but rather a broad-based rise in the standard of living, with extreme poverty and malnutrition falling to all-time historical lows. In the lifetimes of many people, including this scribbler, our species has taken over the flow of nitrogen through the Earth's biosphere, replacing a process mediated by bacteria for billions of years with one performed in factories. The flow of nitrogen from atmosphere to soil, to plants and the creatures who eat them, back to soil, sea, and ultimately the atmosphere is now largely in the hands of humans, and their very lives have become dependent upon it.

This is an example of “geoengineering”—taking control of what was a natural process and replacing it with an engineered one to produce a desired outcome: in this case, the ability to feed a much larger population with an unprecedented standard of living. In the case of nitrogen fixation, there wasn't a grand plan drawn up to do all of this: each step made economic sense to the players involved. (In fact, one of the motivations for developing the Haber-Bosch process was not to produce fertiliser, but rather to produce feedstocks for the manufacture of military and industrial explosives, which had become dependent on nitrates obtained from guano imported to Europe from South America.) But the outcome was the same: ours is an engineered world. Those who are repelled by such an intervention in natural processes or who are concerned by possible detrimental consequences of it, foreseen or unanticipated, must come to terms with the reality that abandoning this world-changing technology now would result in the collapse of the human population, with at least half of the people alive today starving to death, and many of the survivors reduced to subsistence in abject poverty. Sadly, one encounters fanatic “greens” who think this would be just fine (and, doubtless, imagining they'd be among the survivors).

Just mentioning geoengineering—human intervention and management of previously natural processes on a global scale—may summon in the minds of many Strangelove-like technological megalomania or the hubris of Bond villains, so it's important to bear in mind that we're already doing it, and have become utterly dependent upon it. When we consider the challenges we face in accommodating a population which is expected to grow to ten billion by mid-century (and, absent catastrophe, this is almost a given: the parents of the ten billion are mostly alive today), who will demand and deserve a standard of living comparable to what they see in industrial economies, and while carefully weighing the risks and uncertainties involved, it may be unwise to rule out other geoengineering interventions to mitigate undesirable consequences of supporting the human population.

In parallel with the human takeover of the nitrogen cycle, another geoengineering project has been underway, also rapidly accelerating in the 20th century, driven both by population growth and industrialisation of previously agrarian societies. For hundreds of millions of years, the Earth also cycled carbon through the atmosphere, oceans, biosphere, and lithosphere. Carbon dioxide (CO₂) was metabolised from the atmosphere by photosynthetic plants, extracting carbon for their organic molecules and producing oxygen released to the atmosphere, then passed along as plants were eaten, returned to the soil, or dissolved in the oceans, where creatures incorporated carbonates into their shells, which eventually became limestone rock and, over geological time, was subducted as the continents drifted, reprocessed far below the surface, and expelled back into the atmosphere by volcanoes. (This is a gross oversimplification of the carbon cycle, but we don't need to go further into it for what follows. The point is that it's something which occurs on a time scale of tens to hundreds of millions of years and on which humans, prior to the twentieth century, had little influence.)

The natural carbon cycle is not leakproof. Only part of the carbon sequestered by marine organisms and immured in limestone is recycled by volcanoes; it is estimated that this loss of carbon will bring the era of multicellular life on Earth to an end around a billion years from now. The carbon in some plants is not returned to the biosphere when they die. Sometimes, the dead vegetation accumulates in dense beds where it is protected against oxidation and eventually forms deposits of peat, coal, petroleum, and natural gas. Other than natural seeps and releases of the latter substances, their carbon is also largely removed from the biosphere. Or at least it was until those talking apes came along….

The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
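
The conversions in that parenthetical can be checked with a few lines of arithmetic. The per-barrel energy content, Earth radius, and albedo used here are my own round figures, not the author's; the “around 4 million exajoules” appears consistent with the sunlight actually absorbed after reflection.

```python
# Sanity checks on the unit conversions above (approximate constants are my own
# assumptions: ~6.1 GJ per barrel of oil, Earth radius 6371 km, ~30% albedo).
import math

EJ = 1e18                                   # joules in an exajoule
barrel_j = 6.1e9                            # energy content of a barrel of oil, J
print(f"1 EJ ≈ {EJ / barrel_j / 1e6:.0f} million barrels of oil")

solar_constant = 1368.0                     # W/m²
earth_radius = 6.371e6                      # m
seconds_per_year = 3.156e7
intercepted = solar_constant * math.pi * earth_radius**2 * seconds_per_year
absorbed = intercepted * (1 - 0.30)         # roughly 30% reflected straight back to space
print(f"Sunlight intercepted: ~{intercepted / EJ / 1e6:.1f} million EJ/year, "
      f"absorbed: ~{absorbed / EJ / 1e6:.1f} million EJ/year")
```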

Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂?

Well, that's interesting. CO₂ is a trace gas in the atmosphere (less than a fiftieth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere as plants take it up as they grow and then release it after they die. The other two processes are much slower. An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth), and increases the root to shoot ratio of the plants, tending to remove CO₂ from the atmosphere where it will be recycled more slowly into the biosphere.

But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels has released CO₂ into the atmosphere, over a time scale negligible in geological terms, in quantities far beyond the ability of natural processes to recycle. For the last half billion years, the CO₂ concentration in the atmosphere has varied between 280 parts per million in interglacial (warm) periods and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentrations are known with reasonable precision in such deep time due to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; they both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet).

But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age. Atmospheric CO₂ is now around 410 parts per million, which is a value not seen in the last half billion years, and it's rising at a rate of 2 parts per million every year, and accelerating as global use of fossil fuels increases. This is a situation which, in the ecosystem, is not only unique in the human experience; it's something which has never happened since the emergence of complex multicellular life in the Cambrian explosion. What does it all mean? What are the consequences? And what, if anything, should we do about it?

(Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.)

We have been increasing the fraction of CO₂ in the atmosphere to levels unseen in the history of complex terrestrial life. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched to up to 1000 parts per million experience no deleterious consequences, and this is more than twice the current fraction in the Earth's atmosphere, and at the current rate of growth, won't be reached for three centuries. The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy absorbed by the Earth from the Sun and that it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity are negligible at this scale.)

Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (Due to the Stefan-Boltzmann law, the energy emitted from a black body rises as the fourth power of its temperature, so it doesn't take a large increase in temperature [measured in degrees Kelvin] to radiate away the extra energy.)
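
A worked example of that energy-balance argument, under simplifying assumptions: the 4 W/m² forcing figure is roughly the value commonly quoted for a doubling of CO₂ (not taken from this book), and the calculation ignores all feedbacks.

```python
# Worked numbers for the energy-balance argument above. The Stefan-Boltzmann law
# says a black body radiates sigma * T**4 per square metre; the ~4 W/m² forcing
# for a CO2 doubling is a commonly quoted value, not from this book, and no
# feedbacks (water vapour, clouds, ice) are included.
sigma = 5.670e-8                       # Stefan-Boltzmann constant, W/m²K⁴
absorbed = 1360 / 4 * (1 - 0.30)       # mean absorbed solar flux, W/m² (~30% albedo assumed)

T_eff = (absorbed / sigma) ** 0.25     # effective radiating temperature
print(f"Mean absorbed flux: {absorbed:.0f} W/m², effective temperature: {T_eff:.0f} K")

delta_forcing = 4.0                    # assumed extra trapped flux, W/m²
delta_T = delta_forcing / (4 * sigma * T_eff**3)   # derivative of sigma*T**4
print(f"Temperature rise to re-radiate {delta_forcing} W/m²: ~{delta_T:.1f} K (no feedbacks)")
```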

So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait—it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth, and reduce the amount of solar radiation it absorbs. How does all of this interact? Well, that's where the global climate models get into the act, and everything becomes very fuzzy in a vast panel of twiddle knobs, all of which interact with one another and few of which are based upon unambiguous measurements of the climate system.

Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle they see on their smartphones of the people who got to the industrial age a century before them, and technocratic rule of the masses by their unelected self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind!

But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂ and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond?

Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly shown to cool the Earth. A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect sufficient sunlight to reduce or even cancel or reverse the effects of global warming. The cost of such a programme would be affordable by a benevolent tech billionaire or wannabe Bond benefactor (“Greenfinger”), and could be implemented in a couple of years. The effect of the veil project would be much less than a volcanic eruption, and would be imperceptible other than making sunsets a bit more colourful.

Marine cloud brightening. By injecting finely-dispersed salt water from the ocean into the atmosphere, the added nucleation sites would brighten low clouds above the ocean, increasing the reflectivity (albedo) of the Earth. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather.

Carbon sequestration. What about taking the carbon dioxide out of the atmosphere? This sounds like a great idea, and appeals to clueless philanthropists like Bill Gates who are ignorant of thermodynamics, but taking out a trace gas is really difficult and expensive. The best place to capture it is where it's densest, such as the flue of a power plant, where the concentration is around 10%. The technology to do this, “carbon capture and sequestration” (CCS), exists, but has not yet been deployed on any full-scale power plant.

Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into marine organisms, it is removed from the biosphere for tens to hundreds of millions of years. What constrains how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? It's iron, which is rare in the oceans. A calculation made in the 1990s suggested that if you added one tonne of iron to the ocean, the bloom of organisms it would spawn would suck a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be maybe a hundred times less, but then iron is cheap and it doesn't cost much to dump it from ships.
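
To put that leverage in perspective, here is the arithmetic with both the original and revised ratios; the annual fossil-carbon emission figure is my own round number, used only to give a sense of scale.

```python
# The leverage argument above, in numbers. The 100,000:1 and ~1,000:1 ratios are
# from the text; the ~10 gigatonnes of carbon emitted per year from fossil fuels
# is an assumed round figure for scale, not from the book.
original_ratio = 100_000               # tonnes of carbon drawn down per tonne of iron (1990s estimate)
revised_ratio = original_ratio / 100   # after subsequent experiments

annual_emissions_tC = 10e9             # assumed fossil-fuel carbon emissions, tonnes/year
for ratio in (original_ratio, revised_ratio):
    iron_needed = annual_emissions_tC / ratio
    print(f"At {ratio:,.0f} tC per tonne of iron: "
          f"{iron_needed/1e6:,.1f} million tonnes of iron per year")
```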

Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while monitoring their effects on the climate, and easily and quickly reversed should they be found to have unintended detrimental consequences. But when thinking about affecting something on the scale of the climate of a planet, there's a tendency to think big, and a number of grand-scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to manage the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are presently considered financially infeasible, but if the cost of space transportation falls dramatically in the future, they might become increasingly attractive. It's worth observing that the cost estimates for such alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists.

We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. As we cope with the consequences of that human action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and provides an alternative to restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions such as vast fields of unsightly bird shredders. But as David Deutsch says in The Beginning of Infinity, while “problems are inevitable”, “problems are soluble.” It is inevitable that the large-scale geoengineering which is the foundation of our developed society—taking over the Earth's natural carbon and nitrogen cycles—will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning these pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.

October 2017 Permalink

Pooley, Charles and Ed LeBouthillier. Microlaunchers. Seattle: CreateSpace, 2013. ISBN 978-1-4912-8111-6.
Many fields of engineering are subject to scaling laws: as you make something bigger or smaller various trade-offs occur, and the properties of materials, cost, or other design constraints set limits on the largest and smallest practical designs. Rockets for launching payloads into Earth orbit and beyond tend to scale well as you increase their size. Because of the cube-square law, the volume of propellant a tank holds increases as the cube of the size while the weight of the tank goes as the square (actually a bit faster, since a larger tank will require more robust walls, but for a rough approximation calling it the square will do). Viable rockets can get very big indeed: the Sea Dragon, although never built, is considered a workable design. With a length of 150 metres and a diameter of 23 metres, it would have more than ten times the first stage thrust of a Saturn V and place 550 metric tons into low Earth orbit.
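
As a sketch of why the cube-square law favours big rockets, consider doubling every linear dimension of a propellant tank, treating tank mass as proportional to wall area (which, as noted above, is only a rough approximation):

    # Toy cube-square scaling for a propellant tank (rough approximation).
    scale = 2.0                      # double every linear dimension
    propellant_factor = scale ** 3   # propellant mass grows with volume: 8x
    tank_mass_factor = scale ** 2    # structural mass grows roughly with wall area: 4x
    print(propellant_factor / tank_mass_factor)   # propellant carried per unit of tank mass doubles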

What about the other end of the scale? How small could a space launcher be, what technologies might be used in it, and what would it cost? Would it be possible to scale a launcher down so that small groups of individuals, from hobbyists to college class projects, could launch their own spacecraft? These are the questions explored in this fascinating and technically thorough book. Little practical work has been done to explore these questions. The smallest launcher to place a satellite in orbit was the Japanese Lambda 4S with a mass of 9400 kg and length of 16.5 metres. The U.S. Vanguard rocket had a mass of 10,050 kg and length of 23 metres. These are, though small compared to the workhorse launchers of today, still big, heavy machines, far beyond the capabilities of small groups of people, and sufficiently dangerous if something goes wrong that they require launch sites in unpopulated regions.

The scale of launchers has traditionally been driven by the mass of the payload they carry to space. Early launchers carried satellites with crude 1950s electronics, while many of their successors were derived from ballistic missiles sized to deliver heavy nuclear warheads. But today, CubeSats have demonstrated that useful work can be done by spacecraft with a volume of one litre and mass of 1.33 kg or less, and the PhoneSat project holds out the hope of functional spacecraft comparable in weight to a mobile telephone. While to date these small satellites have flown as piggy-back payloads on other launches, the availability of dedicated launchers sized for them would increase the number of launch opportunities and provide access to trajectories unavailable to piggy-back payloads.

Just because launchers have tended to grow over time doesn't mean that's the only way to go. In the 1950s and '60s many people expected computers to continue their trend of getting bigger and bigger to the point where there were a limited number of “computer utilities” with vast machines which customers accessed over the telecommunication network. But then came the minicomputer and microcomputer revolutions and today the computing power in personal computers and mobile devices dwarfs that of all supercomputers combined. What would it take technologically to spark a similar revolution in space launchers?

With the smallest successful launchers to date having a mass of around 10 tonnes, the authors choose two weight budgets: 1000 kg on the high end and 100 kg on the low. They divide these budgets into allocations for payload, tankage, engines, fuel, etc. based upon the experience of existing sounding rockets, then explore what technologies exist which might enable such a vehicle to achieve orbital or escape velocity. The 100 kg launcher is a huge technological leap from anything yet flown and probably could be built, if at all, only after gaining experience with earlier generations of light launchers. But then the current state of the art in microchip fabrication would have seemed like science fiction to researchers in the early days of integrated circuits, and it took decades of experience, generation after generation of chips, and many technological innovations to arrive where we are today. Consequently, most of the book focuses on a three-stage launcher with the 1000 kg mass budget, capable of placing a payload of between 150 and 200 grams on an Earth escape trajectory.

The book does not stint on rigour. The reader is introduced to the rocket equation, formulæ for aerodynamic drag, the standard atmosphere, optimisation of mixture ratios, combustion chamber pressure and size, nozzle expansion ratios, and a multitude of other details which make the difference between success and failure. Scaling to the size envisioned here without expensive and exotic materials and technologies requires out-of-the-box thinking, and there is plenty on display here, including using beverage cans for upper stage propellant tanks.
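
At the heart of that rigour is the Tsiolkovsky rocket equation. A minimal illustration, with made-up numbers rather than any taken from the book:

    import math

    # Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)
    g0 = 9.80665     # standard gravity, m/s^2
    isp = 290.0      # assumed specific impulse of a small pressure-fed stage, seconds
    m0 = 1000.0      # stage mass with propellant, kg
    mf = 300.0       # stage mass at burnout, kg
    delta_v = isp * g0 * math.log(m0 / mf)
    print(round(delta_v))   # ~3,400 m/s from this one stage; orbit takes roughly 9,400 m/s with losses, hence staging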

A 1000 kg space launcher appears to be entirely feasible. The question is whether it can be done without the budget of hundreds of millions of dollars and years of development it would certainly take were the problem assigned to an aerospace prime contractor. The authors hold out the hope that it can be done, and observe that hobbyists and small groups can begin working independently on components: engines, tank systems, guidance and navigation, and so on, and then share their work precisely as open source software developers do so successfully today.

This is a field where prizes may work very well to encourage development of the required technologies. A philanthropist might offer, say, a prize of a million dollars for launching a 150 gram communicating payload onto an Earth escape trajectory, and a series of smaller prizes for engines which met the requirements for the various stages, flight-weight tankage and stage structures, etc. That way teams with expertise in various areas could work toward the individual prizes without having to take on the all-up integration required for the complete vehicle.

This is a well-researched and hopeful look at a technological direction few have thought about. The book is well written and includes all of the equations and data an aspiring rocket engineer will need to get started. The text is marred by a number of typographical errors (I counted two dozen) but only one trivial factual error. Although other references are mentioned in the text, a bibliography of works for those interested in exploring further would be a valuable addition. There is no index.

January 2014 Permalink

Portree, David S. F. Humans to Mars. Washington: National Aeronautics and Space Administration, 2001. NASA SP-2001-4521.
Ever since people began, in the years following World War II, to think seriously about the prospects for space travel, visionaries have looked beyond the near-term prospects for flights into Earth orbit, space stations, and even journeys to the Moon, toward the red planet: Mars. Unlike Venus, eternally shrouded by clouds, or the other planets which were too hot or cold to sustain life as we know it, Mars, about half the size of the Earth, had an atmosphere, a day just a little longer than the Earth's, seasons, and polar caps which grew and shrank with the seasons. There were no oceans, but water from the polar caps might sustain life on the surface, and there were dark markings which appeared to change during the Martian year, which some interpreted as plant life that flourished as the polar caps melted in the spring and receded as they grew in the fall.

In an age where we have high-resolution imagery of the entire planet, obtained from orbiting spacecraft, telescopes orbiting Earth, and ground-based telescopes with advanced electronic instrumentation, it is often difficult to remember just how little was known about Mars in the 1950s, when people first started to think about how we might go there. Mars is the next planet outward from the Earth, so its distance and apparent size vary substantially depending upon its position relative to Earth in their respective orbits. About every two years, Earth “laps” Mars and it is closest (“at opposition”) and most easily observed. But because the orbit of Mars is elliptical, its distance varies from one opposition to the next, and it is only every 15 to 17 years that a near-simultaneous opposition and perihelion render Mars most accessible to Earth-based observation.
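
The roughly two-year rhythm of oppositions follows directly from the synodic period formula; a quick check using approximate orbital periods:

    # Synodic period of Mars as seen from Earth: 1/S = 1/T_earth - 1/T_mars (periods in years)
    t_earth = 1.0
    t_mars = 1.881
    synodic = 1.0 / (1.0 / t_earth - 1.0 / t_mars)
    print(round(synodic, 2))   # ~2.14 years between successive oppositions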

But even at a close opposition, Mars is a challenging telescopic target. At a close encounter, such as the one which will occur in the summer of 2018, Mars has an apparent diameter of only around 25 arc seconds. By comparison, the full Moon is about half a degree, or 1800 arc seconds: 72 times larger than Mars. To visual observers, even at a favourable opposition, Mars is a difficult object. Before the advent of electronic sensors in the 1980s, it was even more trying to photograph. Existing photographic film and plates were sufficiently insensitive that long exposures, measured in seconds, were required, and even from the best observing sites, the turbulence in the Earth's atmosphere smeared out details, leaving only the largest features recognisable. Visual observers were able to glimpse more detail in transient moments of still air, but had to rely upon their memory to sketch them. And the human eye is subject to optical illusions, seeing patterns where none exist. Were the extended linear features called “canals” real? Some observers saw and sketched them in great detail, while others saw nothing. Photography could not resolve the question.

Further, the physical properties of the planet were largely unknown. If you're contemplating a mission to land on Mars, it's essential to know the composition and density of its atmosphere, the temperatures expected at potential landing sites, and the terrain which a lander would encounter. None of these were known much beyond the level of educated guesses, which turned out to be grossly wrong once spacecraft probe data started to come in.

But ignorance of the destination didn't stop people from planning, or at least dreaming. In 1947–48, Wernher von Braun, then working with the U.S. Army at the White Sands Missile Range in New Mexico, wrote a novel called The Mars Project based upon a hypothetical Mars mission. A technical appendix presented detailed designs of the spacecraft and mission. While von Braun's talent as an engineer was legendary, his prowess as a novelist was less formidable, and the book never saw print, but in 1952 the appendix was published by itself.

One thing of which von Braun was never accused was thinking small, and in this first serious attempt to plan a Mars mission, he envisioned something more like an armada than the lightweight spacecraft we design today. At a time when the largest operational rocket, the V-2, had a payload of just one tonne, which it could throw no further than 320 km on a suborbital trajectory, von Braun's Mars fleet would consist of ten ships, each with a mass of 4,000 tons, and a total crew of seventy. The Mars ships would be assembled in orbit from parts launched on 950 flights of reusable three-stage ferry rockets. To launch all of the components of the Mars fleet and the fuel they would require would burn a total of 5.32 million tons of propellant in the ferry ships. Note that when von Braun proposed this, nobody had ever flown even a two stage rocket, and it would be ten years before the first unmanned Earth satellite was launched.

Von Braun later fleshed out his mission plans for an illustrated article in Collier's magazine as part of their series on the future of space flight. Now he envisioned assembling the Mars ships at the toroidal space station in Earth orbit which had figured in earlier installments of the series. In 1956, he published a book co-authored with Willy Ley, The Exploration of Mars, in which he envisioned a lean and mean expedition with just two ships and a crew of twelve, which would require “only” four hundred launches from Earth to assemble, provision, and fuel.

Not only was little understood about the properties of the destination, nothing at all was known about what human crews would experience in space, either in Earth orbit or en route to Mars and back. Could they even function in weightlessness? Would they be zapped by cosmic rays or solar flares? Were meteors a threat to their craft and, if so, how serious a threat? With the dawn of the space age after the launch of Sputnik in October 1957, these data started to trickle in, and they began to inform plans for Mars missions at NASA and elsewhere.

Radiation was much more of a problem than had been anticipated. The discovery of the Van Allen radiation belts around the Earth and measurement of radiation from solar flares and galactic cosmic rays indicated that short voyages were preferable to long ones, and that crews would need shielding from routine radiation and a “storm shelter” during large solar flares. This motivated research into nuclear thermal and ion propulsion systems, which would not only reduce the transit time to and from Mars, but also, being much more fuel efficient than chemical rockets, dramatically reduce the mass of the ships compared to von Braun's flotilla.

Ernst Stuhlinger had been studying electric (ion) propulsion since 1953, and developed a design for constant-thrust, ion-powered ships. These were featured in Walt Disney's 1957 program, “Mars and Beyond”, which aired just two months after the launch of Sputnik. The concept was further developed in a 1962 NASA mission study which envisioned five ships with nuclear-electric propulsion, departing for Mars in the early 1980s with a crew of fifteen and cargo and crew landers permitting a one month stay on the red planet. The ships would rotate to provide artificial gravity for the crew on the trip to and from Mars.

In 1965, the arrival of the Mariner 4 spacecraft seemingly drove a stake through the heart of the romantic view of Mars which had persisted since Percival Lowell. Flying by the southern hemisphere of the planet as close as 9600 km, it returned 21 fuzzy pictures which seemed to show Mars as a dead, cratered world resembling the Moon far more than the Earth. There was no evidence of water, nor of life. The atmosphere was determined to be only 1% as dense as that of Earth, not the 10% estimated previously, and composed mostly of carbon dioxide, not nitrogen. With such a thin and hostile atmosphere, there seemed no prospects for advanced life (anything more complicated than bacteria), and all of the ideas for winged Mars landers went away: the martian atmosphere proved just dense enough to pose a problem when slowing down on arrival, but not dense enough to allow a soft landing with wings or a parachute. The probe had detected more radiation than expected on its way to Mars, indicating crews would need more protection than anticipated, and it showed that robotic probes could do science at Mars without the need to put a crew at risk. I remember staying up and watching these pictures come in (the local television station didn't carry the broadcast, so I watched a distant station whose pictures were even more static-filled than the originals). I can recall thinking, “Well, that's it then. Mars is dead. We'll probably never go there.”

Mars mission planning went on the back burner as the Apollo Moon program went into high gear in the 1960s. Apollo was conceived not as a single-destination project to land on the Moon, but as a programme to create the infrastructure for human expansion from the Earth into the solar system, including development of nuclear propulsion and investigation of planetary missions using Apollo-derived hardware, mostly for flyby missions. In January of 1968, Boeing completed a study of a Mars landing mission, which would have required six launches of an uprated Saturn V, sending a crew of six to Mars in a 140 ton ship for a landing and a brief “flags and footprints” stay on Mars. By then, Apollo funding (even before the first lunar orbit and landing) was winding down, and it was clear there was neither budget nor political support for such grandiose plans.

After the success of Apollo 11, NASA retrenched, reducing its ambition to a Space Shuttle. An ambitious Space Task Group plan for using the Shuttle to launch a Mars mission in the early 1980s was developed but, in an era of shrinking budgets and with additional fly-by missions returning images of a Moon-like Mars, it went nowhere. The Saturn V and the nuclear rocket which could have taken crews to Mars had been cancelled. It appeared the U.S. would remain stuck going around in circles in low Earth orbit. And so it remains today.

While planning for manned Mars missions stagnated, the 1970s dramatically changed the view of Mars. In 1971, Mariner 9 went into orbit around Mars and returned 7329 sharp images which showed the planet to be a complex world, with very different northern and southern hemispheres, a grand canyon almost as long as the United States, and features which suggested the existence, at least in the past, of liquid water. In 1976, two Viking orbiters and landers arrived at Mars, providing detailed imagery of the planet and ground truth. The landers were equipped with instruments intended to detect evidence of life, and they reported positive results, but later analyses attributed this to unusual soil chemistry. This conclusion is still disputed, including by the principal investigator for the experiment, but in any case the Viking results revealed a much more complicated and interesting planet than had been imagined from earlier missions. I had been working as a consultant at the Jet Propulsion Laboratory during the first Viking landing, helping to keep mission critical mainframe computers running, and I had the privilege of watching the first images from the surface of Mars arrive. I revised my view from 1965: now Mars was a place which didn't look much different from the high desert of California, where you could imagine going to explore and live some day. More importantly, detailed information about the atmosphere and surface of Mars was now in hand, so future missions could be planned accordingly.

And then…nothing. It was a time of malaise and retreat. After the second Viking landing in September of 1976, it would be more than twenty years until Mars Global Surveyor would orbit Mars and Mars Pathfinder would land there in 1997. And yet, with detailed information about Mars in hand, the intervening years were a time of great ferment in manned Mars mission planning, when the foundation of what may be the next great expansion of the human presence into the solar system was laid down.

President George H. W. Bush announced the Space Exploration Initiative on July 20th, 1989, the 20th anniversary of the Apollo 11 landing on the Moon. This was, in retrospect, the last gasp of the “Battlestar” concepts of missions to Mars. It became a bucket into which every NASA centre and national laboratory could throw their wish list: new heavy launchers, a Moon base, nuclear propulsion, space habitats, for a total price tag on the order of half a trillion dollars. It died, quietly, in Congress.

But the focus was moving from leviathan bureaucracies of the coercive state to innovators in the private sector. In the 1990s, spurred by the work of members of the “Mars Underground”, including Robert Zubrin and David Baker, the “Mars Direct” mission concept emerged. Earlier Mars mission plans assumed that all resources needed for the mission would have to be launched from Earth. But Zubrin and Baker realised that the martian atmosphere, based upon what we had learned from the Viking missions, contained everything needed to provide breathable air for the stay on Mars and rocket fuel for the return trip (with the addition of lightweight hydrogen brought from Earth). This turned the weight budget of a Mars mission upside-down. Now, an Earth return vehicle could be launched to Mars with empty propellant tanks. Upon arrival, it would produce fuel for the return trip and oxygen for the crew. After it was confirmed to have produced the necessary consumables, the crew of four would be sent in the next launch window (around 26 months later) and land near the return vehicle. They would use its oxygen while on the planet, and its fuel to return to Earth at the end of their stay. There would be no need for a space station in Earth orbit, nor orbital assembly, nor for nuclear propulsion: the whole mission could be done with hardware derived from that already in existence.
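
A minimal sketch of the mass leverage behind this idea, assuming the Sabatier-plus-electrolysis scheme usually cited in the Mars Direct literature; the figures below are simple molar masses, not numbers from this book, and the actual plan also manufactures extra oxygen to reach a workable mixture ratio:

    # Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O; the water is then electrolysed
    # into O2 (kept as oxidiser) and H2 (recycled back into the reactor).  Masses in kg per batch.
    h2_fed = 8.0        # hydrogen fed to the reactor
    ch4_made = 16.0     # methane fuel produced
    o2_made = 32.0      # oxygen recovered by electrolysing the 36 kg of water
    h2_recycled = 4.0   # hydrogen recovered from the same electrolysis
    h2_from_earth = h2_fed - h2_recycled           # net hydrogen actually imported from Earth
    print((ch4_made + o2_made) / h2_from_earth)    # ~12 kg of propellant per kg of imported hydrogen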

This would get humans to Mars, but it ran into institutional barriers at NASA, since many of its pet projects, including the International Space Station and Space Shuttle, proved utterly unnecessary for getting to Mars. NASA responded with the Mars Design Reference Mission, published in various revisions between 1993 and 2014, which was largely based upon Mars Direct, but up-sized to a crew of six and incorporating a new Earth Return Vehicle to bring the crew back to Earth in less austere circumstances than Zubrin and Baker had envisioned.

NASA claim they are on a #JourneyToMars. They must be: there's a Twitter hashtag! But of course to anybody who reads this sad chronicle of government planning for planetary exploration over half a century, it's obvious they're on no such thing. If they were truly on a journey to Mars, they would be studying and building the infrastructure to get there using technologies such as propellant depots and in-orbit assembly which would get the missions done economically using resources already at hand. Instead, it's all about building a huge rocket which will cost so much that, at best, it will fly every other year, supported by a standing army which will not only be costly but so infrequently exercised in launch operations that it won't have the experience to operate the system safely, and whose costs will vacuum out the funds which might have been used to build payloads to extend the human presence into space.

The lesson of this is that when the first humans set foot upon Mars, they will not be civil servants funded by taxes paid by cab drivers and hairdressers, but employees (and/or shareholders) of a private venture that sees Mars as a profit centre which, as its potential is developed, can enrich them beyond the dreams of avarice and provide a backup for human civilisation. I trust that when the history of that great event is written, it will not be as exasperating to read as this chronicle of the dead-end of government space programs making futile efforts to get to Mars.

This is an excellent history of the first half century of manned Mars mission planning. Although many proposed missions are omitted or discussed only briefly, the evolution of mission plans with knowledge of the destination and development of spaceflight hardware is described in detail, culminating with current NASA thinking about how best to accomplish such a mission. This book was published in 2001, but since existing NASA concepts for manned Mars missions are still largely based upon the Design Reference Mission described here, little has changed in the intervening fifteen years. In September of 2016, SpaceX plans to reveal its concepts for manned Mars missions, so we'll have to wait for the details to see how they envision doing it.

As a NASA publication, this book is in the public domain. The book can be downloaded for free as a PDF file from the NASA History Division. There is a paperback republication of this book available at Amazon, but at an outrageous price for such a short public domain work. If you require a paper copy, it's probably cheaper to download the PDF and print your own.

June 2016 Permalink

Pournelle, Jerry. A Step Farther Out. Studio City, CA: Chaos Manor Press, [1979, 1994] 2011. ASIN B004XTKFWW.
This book is a collection of essays originally published in Galaxy magazine between 1974 and 1978. They were originally collected into a book published in 1979, which was republished in 1994 with a new preface and notes from the author. This electronic edition includes all the material from the 1994 book plus a new preface which places the essays in the context of their time and the contemporary world.

I suspect that many readers of these remarks may be inclined to exclaim “Whatever possessed you to read a bunch of thirty-year-old columns from a science fiction magazine which itself disappeared from the scene in 1980?” I reply, “Because the wisdom in these explorations of science, technology, and the human prospect is just as relevant today as it was when I first read them in the original book, and taken together they limn the three lost decades of technological progress which have so blighted our lives.” Pournelle not only envisioned what was possible as humanity expanded its horizons from the Earth to become a spacefaring species drawing upon the resources of the solar system which dwarf those about which the “only one Earth” crowd fret, he also foresaw the constraint which would prevent us from living today in a perfectly achievable world, starting from the 1970s, with fusion, space power satellites, ocean thermal energy conversion, and innovative sources of natural gas providing energy; a robust private space infrastructure with low-cost transport to Earth orbit; settlements on the Moon and Mars; exploration of the asteroids with an aim to exploit their resources; and compounded growth of technology which would not only permit human survival but “survival with style”—not only for those in the developed countries, but for all the ten billion who will inhabit this planet by the middle of the present century.

What could possibly go wrong? Well, Pournelle nails that as well. Recall whilst reading the following paragraph that it was written in 1978.

[…] Merely continue as we are now: innovative technology discouraged by taxes, environmental impact statements, reports, lawsuits, commission hearings, delays, delays, delays; space research not carried out, never officially abandoned but delayed, stretched-out, budgets cut and work confined to the studies without hardware; solving the energy crisis by conservation, with fusion research cut to the bone and beyond, continued at level-of-effort but never to a practical reactor; fission plants never officially banned, but no provision made for waste disposal or storage so that no new plants are built and the operating plants slowly are phased out; riots at nuclear power plant construction sites; legal hearings, lawyers, lawyers, lawyers…

Can you not imagine the dream being lost? Can you not imagine the nation slowly learning to “do without”, making “Smaller is Better” the national slogan, fussing over insulating attics and devoting attention to windmills; production falling, standards of living falling, until one day we discover the investments needed to go to space would be truly costly, would require cuts in essentials like food —

A world slowly settling into satisfaction with less, until there are no resources to invest in That Buck Rogers Stuff?

I can imagine that.

As can we all, as now we are living it. And yet, and yet…. One consequence of the Three Lost Decades is that the technological vision and optimistic roadmap of the future presented in these essays is just as relevant to our predicament today as when they were originally published, simply because with a few exceptions we haven't done a thing to achieve them. Indeed, today we have fewer resources with which to pursue them, having squandered our patrimony on consumption and armies of rent-seekers, and placed generations yet unborn in debt to fund our avarice. But for those who look beyond the noise of the headlines and the platitudes of politicians whose time horizon is limited to the next election, here is a roadmap for a true step farther out, in which the problems we perceive as intractable are not “managed” or “coped with”, but rather solved, just as free people have always done when left free to apply their intellect, passion, and resources toward making their fortunes and, incidentally, creating wealth for all.

This book is available only in electronic form for the Kindle as cited above, under the given ASIN. The ISBN of the original 1979 paperback edition is 978-0-441-78584-1. The formatting in the Kindle edition is imperfect, but entirely readable. As is often the case with Kindle documents, “images and tables hardest hit”: some of the tables take a bit of head-scratching to figure out, as the Kindle (or at least the iPad application which I use) particularly mangles multi-column tables. (I mean, what's with that, anyway? LaTeX got this perfectly right thirty years ago, and in a manner even beginners could use; and this was pure public domain software anybody could adopt. Sigh—three lost decades….) Formatting quibbles aside, I'm as glad I bought and read this book as I was when I first bought it and read it all those years ago. If you want to experience not just what the future could have been, then, but what it can be, now, here is an excellent place to start.

The author's Web site is an essential resource for those interested in these big ideas, grand ambitions, and the destiny of humankind and its descendants.

June 2012 Permalink

Regis, Ed. Monsters. New York: Basic Books, 2015. ISBN 978-0-465-06594-3.
In 1863, as the American Civil War raged, Count Ferdinand von Zeppelin, an ambitious young cavalry officer from the German kingdom of Württemberg, arrived in America to observe the conflict and learn its lessons for modern warfare. He arranged an audience with President Lincoln, who authorised him to travel among the Union armies. Zeppelin spent a month with General Joseph Hooker's Army of the Potomac. Accustomed to German military organisation, he was unimpressed with what he saw and left to see the sights of the new continent. While visiting Minnesota, he ascended in a tethered balloon and saw the landscape laid out below him like a military topographical map. He immediately grasped the advantage of such an eye in the sky for military purposes. He was impressed.

Upon his return to Germany, Zeppelin pursued a military career, distinguishing himself in the 1870 war with France, although he was considered “a hothead”. It was this characteristic which brought his military career to an abrupt end in 1890. Chafing under what he perceived as stifling leadership by the Prussian officer corps, he wrote directly to the Kaiser to complain. This was a bad career move; the Kaiser “promoted” him into retirement. Adrift and looking for a new career, Zeppelin seized upon controlled aerial flight, particularly for its military applications. And he thought big.

By 1890, France was at the forefront of aviation. By 1885 the first dirigible, La France, had demonstrated aerial navigation over complex closed courses and carried passengers. Built for the French army, it was just a technology demonstrator, but to Zeppelin it revealed a capability with such potential that Germany could not afford to be left behind. He threw his energy into the effort, formed a company, raised the money, and embarked upon the construction of Luftschiff Zeppelin 1 (LZ 1).

Count Zeppelin was not a man to make small plans. Eschewing sub-scale demonstrators or technology-proving prototypes, he went directly to a full scale airship intended to be militarily useful. It was fully 128 metres long, almost two and a half times the size of La France, longer than a football field. Its rigid aluminium frame contained 17 gas bags filled with hydrogen, and it was powered by two gasoline engines. LZ 1 flew just three times. An observer from the German War Ministry reported it to be “suitable for neither military nor for non-military purposes.” Zeppelin's company closed its doors and the airship was sold for scrap.

By 1905, Zeppelin was ready to try again. On its first flight, the LZ 2 lost power and control and had to make a forced landing. Tethered to the ground at the landing site, it was caught by the wind and destroyed. It was sold for scrap. Later the LZ 3 flew successfully, and Zeppelin embarked upon construction of the LZ 4, which would be larger still. While attempting a twenty-four hour endurance flight, it suffered motor failure, landed, and while tied down was caught by wind. Its gas bags rubbed against one another and static electricity ignited the hydrogen, which reduced the airship to smoking wreckage.

Many people would have given up at this point, but not the redoubtable Count. The LZ 5, delivered to the military, was lost when carried away by the wind after an emergency landing and dashed against a hill. LZ 6 burned in its hangar after an engine caught fire. LZ 7, the first civilian passenger airship, crashed into a forest on its first flight and was damaged beyond repair. LZ 8, its replacement, was destroyed by a gust of wind while being walked out of its hangar.

With the outbreak of war in 1914, the airship went into combat. Germany operated 117 airships, using them for reconnaissance and even bombing targets in England. Of the 117, fully 81 were destroyed, about half due to enemy action and half by the woes which had wrecked so many airships prior to the conflict.

Based upon this stunning record of success, after the end of the Great War, Britain decided to embark in earnest on its own airship program, building even larger airships than Germany had. Results were no better, culminating in the R100 and R101, built to provide passenger and cargo service on routes throughout the Empire. On its maiden flight to India in 1930, the R101 crashed and burned in a storm while crossing France, killing 48 of the 54 on board. After the catastrophe, the R100 was retired and sold for scrap.

This did not deter the Americans, who, in addition to their technical prowess and “can do” spirit, had access to helium, produced as a by-product of their natural gas fields. Unlike hydrogen, helium is nonflammable, so the risk of fire, which had destroyed so many airships using hydrogen, was entirely eliminated. Helium does not provide as much lift as hydrogen, but this can be compensated for by increasing the size of the ship. Helium is, however, around fifty times more expensive than hydrogen, which complicates managing an airship in flight. While the commander of a hydrogen airship can freely “valve” gas to reduce lift when required, doing this in a helium ship is forbiddingly expensive and restricted to only the most dire of emergencies.
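
To put a number on the lift penalty, compare buoyancy per cubic metre at sea level; the gas densities below are approximate values, not figures from the book:

    # Buoyant lift per cubic metre = density of air minus density of the lifting gas.
    rho_air = 1.225   # kg/m^3 at sea level
    rho_h2 = 0.090    # kg/m^3, approximate
    rho_he = 0.169    # kg/m^3, approximate
    lift_h2 = rho_air - rho_h2   # ~1.14 kg of lift per cubic metre
    lift_he = rho_air - rho_he   # ~1.06 kg of lift per cubic metre
    print(round(lift_he / lift_h2, 2))   # helium yields ~93% of hydrogen's lift per unit volume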

The U.S. Navy believed the airship to be an ideal platform for long-range reconnaissance, anti-submarine patrols, and other missions where its endurance, speed, and ability to operate far offshore provided advantages over ships and heavier-than-air craft. Between 1921 and 1935 the Navy operated five rigid airships, three built domestically and two abroad. Four of the five crashed in storms or due to structural failure, killing dozens of crew.

This sorry chronicle leads up to a detailed recounting of the history of the Hindenburg. Originally designed to use helium, it was redesigned for hydrogen after it became clear the U.S., which had forbidden export of helium in 1927, would not grant a waiver, especially to a Germany by then under Nazi rule. The Hindenburg was enormous: at 245 metres in length, it was longer than the U.S. Capitol building and more than three times the length of a Boeing 747. It carried between 50 and 72 passengers who were served by a crew of 40 to 61, with accommodations (apart from the spartan sleeping quarters) comparable to first class on ocean liners. In 1936, the great ship made 17 transatlantic crossings without incident. On its first flight to the U.S. in 1937, it was destroyed by fire while approaching the mooring mast at Lakehurst, New Jersey. The disaster and its aftermath are described in detail. Remarkably, given the iconic images of the flaming airship falling to the ground and the structure glowing from the intense heat of combustion, of the 97 passengers and crew on board, 62 survived the disaster. (One of the members of the ground crew also died.)

Prior to the destruction of the Hindenburg, a total of twenty-six hydrogen-filled airships had been destroyed by fire, excluding those shot down in wartime, with a total of 250 people killed. The vast majority of all rigid airships built ended in disaster—if not by fire, then by structural failure, weather, or pilot error. Why did people continue to pursue this technology in the face of abundant evidence that it was fundamentally flawed?

The author argues that rigid airships are an example of a “pathological technology”, which he characterises as:

  1. Embracing something huge, either in size or effects.
  2. Inducing a state bordering on enthralment among its proponents…
  3. …who underplay its downsides, risks, unintended consequences, and obvious dangers.
  4. Having costs out of proportion to the benefits it is alleged to provide.

Few people would dispute that the pursuit of large airships for more than three decades in the face of repeated disasters was a pathological technology under these criteria, even setting aside the risks from using hydrogen as a lifting gas (which I believe the author over-emphasises: prior to the Hindenburg accident nobody had ever been injured on a commercial passenger flight of a hydrogen airship, and nobody gives a second thought today about boarding an airplane with 140 tonnes of flammable jet fuel in the tanks and flying across the Pacific with only two engines). Seemingly hazardous technologies can be rendered safe with sufficient experience and precautions. Large lighter-than-air ships were, however, inherently unsafe because they were large and lighter than air: nothing could be done about that. They were at the mercy of the weather, and if they had been designed to be strong enough to withstand whatever weather conditions they might encounter, they would have been too heavy to fly. As the experience of the U.S. Navy with helium airships demonstrated, it didn't matter if you were immune to the risks of hydrogen; the ship would eventually be destroyed in a storm.

The author then moves on from airships to discuss other technologies he deems pathological, and here, in my opinion, goes off the rails. The first of these technologies is Project Plowshare, a U.S. program to explore the use of nuclear explosions for civil engineering projects such as excavation, digging of canals, creating harbours, and fracturing rock to stimulate oil and gas production. With his characteristic snark, Regis mocks the very idea of Plowshare, and yet examination of the history of the program belies this ridicule. For the suggested applications, nuclear explosions were far more economical than chemical detonations and conventional earthmoving equipment. One principal goal of Plowshare was to determine the efficacy of such explosions and whether they would pose risks (for example, release of radiation) which were unacceptable. Over 11 years, 26 nuclear tests were conducted under the program, most at the Nevada Test Site, and after a review of the results it was concluded the radiation risk was unacceptable and the results unpromising. Project Plowshare was shut down in 1977. I don't see what's remotely pathological about this. You have an idea for a new technology; you explore it in theory; conduct experiments; then decide it's not worth pursuing. Now maybe, if you're Ed Regis, you could have determined at the outset, without any of the experimental results, that the whole thing was absurd, but a great many people with in-depth knowledge of the issues involved preferred to run the experiments, take the data, and decide based upon the results. That, to me, seems the antithesis of pathological.

The next example of a pathological technology is the Superconducting Super Collider, a planned particle accelerator to be built in Texas with a ring 87.1 km in circumference, colliding protons at a centre of mass energy of 40 TeV. The project was approved in the late 1980s and construction was underway when, in 1993, Congress voted to cancel it and the work was abandoned. Here, the fit with “pathological technology” is even worse. Sure, the project was large, but it was mostly underground: hardly something to “enthral” anybody except physics nerds. There were no risks at all, apart from those in any civil engineering project of comparable scale. The project was cancelled because it overran its budget estimates but, even if completed, would probably have cost less than a tenth the expenditures to date on the International Space Station, which has produced little or nothing of scientific value. How is it pathological when a project, undertaken for well-defined goals, is cancelled when those funding it, seeing its schedule slip and budget balloon beyond that projected, pull the plug on it? Isn't that how things are supposed to work? Who were the seers who forecast all of this at the project's inception?

The final example of so-called pathological technology is pure spite. Ed Regis has a fine time ridiculing participants in the first 100 Year Starship symposium, a gathering to explore how and why humans might be able, within a century, to launch missions (robotic or crewed) to other star systems. This is not a technology at all, but rather an exploration of what future technologies might be able to do, and the limits imposed by the known laws of physics upon potential technologies. This is precisely the kind of “exploratory engineering” that Konstantin Tsiolkovsky engaged in when he worked out the fundamentals of space flight in the late 19th and early 20th centuries. He didn't know the details of how it would be done, but he was able to calculate, from first principles, the limits of what could be done, and to demonstrate that the laws of physics and properties of materials permitted the missions he envisioned. His work was largely ignored, which I suppose may be better than being mocked, as here.

You want a pathological technology? How about replacing reliable base load energy sources with inefficient sources at the whim of clouds and wind? Banning washing machines and dishwashers that work in favour of ones that don't? Replacing toilets with ones that take two flushes in order to “save water”? And all of this in order to “save the planet” from the consequences predicted by a theoretical model which has failed to predict measured results since its inception, through policies which impoverish developing countries and, even if you accept the discredited models, will have a negligible effect on the global climate. On this scandal of our age, the author is silent. He concludes:

Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped.

Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.

January 2016 Permalink

Shute, Nevil. Trustee from the Toolroom. New York: Vintage Books, [1960] 2010. ISBN 978-0-345-02663-7.
Keith Stewart is an unexceptional man. “[Y]ou may see a little man get in at West Ealing, dressed in a shabby raincoat over a blue suit. He is one of hundreds of thousands like him in industrial England, pale-faced, running to fat a little, rather hard up. His hands show evidence of manual work, his eyes and forehead evidence of intellect.” He earns his living by making mechanical models and writing articles about them which are published, with directions, in the London weekly Miniature Mechanic. His modest income from the magazine has allowed him to give up his toolroom job at an aircraft subcontractor. Along with the income his wife Katie earns from her job in a shop, they make ends meet and are paying down the mortgage on their house, half of which they rent out.

Keith's sister Jo married well. Her husband, John Dermott, is a retired naval officer and nephew of Lord Dungannon, with an independent income from the family fortune. Like many people in postwar Britain, the Dermotts have begun to chafe under the ceaseless austerity, grey collectivism, and shrinking freedom of what was once the vanguard of civilisation, and have decided to emigrate to the west coast of Canada to live the rest of their lives in liberty. They've decided to make their journey an adventure, making the voyage from Britain to Vancouver through the Panama Canal in their modest but oceangoing sailboat Shearwater. Keith and Katie agree to look after the Dermotts' young daughter Janice, whom her parents don't want to take out of school and who might not tolerate a long ocean voyage well.

Tragedy befalls the Dermotts, as they are shipwrecked and drowned in a tropical storm in the Pacific. Keith and Katie have agreed to become Janice's trustees in such an event and, consulting the Dermotts' solicitor, are astonished to learn that their fortune, assumed substantial, has almost entirely vanished. While they can get along and support Janice, she'll not be able to receive the education they assumed her parents intended her to have.

Given the confiscatory capital controls in effect at the time, Keith has an idea what may have happened to the Dermott fortune. “And he was the trustee.” Keith Stewart, who had never set foot outside of England, and can barely afford a modest holiday, suddenly finds himself faced with figuring out how to travel to the other side of the world, to a location that isn't even on his map, and undertake a difficult and risky mission.

Keith discovers that while nobody would recognise him on the street or think him out of the ordinary, his writing for Miniature Mechanic has made him a celebrity in what, more than half a century later, would be called the “maker subculture”, and that these people are resourceful, creative, willing to bend the rules to get things done and help one another, and that some of them command substantial wealth. By a chain of connections which might have seemed implausible at the outset, but which is the kind of thing that happens all the time in the real world, Keith Stewart, modelmaker and scribbler, sets out on an epic adventure.

This is a thoroughly satisfying and utterly charming story. It is charming because the characters are such good people, the kind you'd feel privileged to have as friends. But they are also realistic; the author spent his career immersed in the engineering and entrepreneurial milieu, and he understands these folks in detail. This is a world, devoid of much of what we consider modern, which you'll find yourself admiring; it is a joy to visit. The last two paragraphs will make you shiver.

This novel is currently unavailable in a print edition, so I have linked to the Kindle edition in the head. Used paperback copies are readily available. There is an unabridged audio version of this book.

August 2015 Permalink

Waldman, Jonathan. Rust. New York: Simon & Schuster, 2015. ISBN 978-1-4516-9159-7.
In May of 1980 two activists, protesting the imprisonment of a Black Panther convicted of murder, climbed the Statue of Liberty in New York harbour, planning to unfurl a banner high on the statue. After spending a cold and windy night aloft, they descended and surrendered to the New York Police Department's Emergency Service Unit. Fearing that the climbers might have damaged the fragile copper cladding of the monument, the authorities undertook a comprehensive inspection. What was found was shocking.

The structure of the Statue of Liberty was designed by Alexandre-Gustave Eiffel, and consists of an iron frame weighing 135 tons, which supports the 80 ton copper skin. As marine architects know well, a structure using two dissimilar metals such as iron and copper runs a severe risk of galvanic corrosion, especially in an environment such as the sea air of a harbour. If the iron and copper were to come into contact, a current would flow across the junction, and the iron would be consumed in the process. Eiffel's design prevented the iron and copper from touching one another by separating them with spacers made of asbestos impregnated with shellac.

What Eiffel didn't anticipate is that over the years superintendents of the statue would decide to “protect” its interior by applying various kinds of paint. By 1980 eight coats of paint had accumulated, almost as thick as the copper skin. The paint trapped water between the skin and the iron frame, and this set galvanic corrosion into action. One third of the rivets in the frame were damaged or missing, and some of the frame's iron ribs had lost two thirds of their material. The asbestos insulators had absorbed water and were long gone. The statue was at risk of structural failure.

A private fund-raising campaign raised US$ 277 million to restore the statue, and the restoration ended up replacing most of its internal structure. On July 4th, 1986, the restored statue was inaugurated, marking its 100th anniversary.

Earth, uniquely among known worlds, has an atmosphere with free oxygen, produced by photosynthetic plants. While much appreciated by creatures like ourselves which breathe it, oxygen is a highly reactive gas and combines with many other elements, either violently in fire, or more slowly in reactions such as rusting metals. Further, 71% of the Earth's surface is covered by oceans, whose salty water promotes other forms of corrosion all too familiar to owners of boats. This book describes humanity's “longest war”: the battle against the corruption of our works by the inexorable chemical process of corrosion.

Consider an everyday object much more humble than the Statue of Liberty: the aluminium beverage can. The modern can is one of the most highly optimised products of engineering ever created. Around 180 billion cans are produced and consumed every year around the world: four six-packs for every living human being. Reducing the mass of each can by just one gram will result in an annual saving of 180,000 metric tons of aluminium, worth almost 300 million dollars at present prices, so a long list of clever tricks has been employed to reduce the mass of cans. But it doesn't matter how light or inexpensive the can is if it explodes, leaks, or changes the flavour of its contents. Coca-Cola, with a pH of 2.75 and a witches’ brew of ingredients, under a pressure of 6 atmospheres, is as corrosive to bare aluminium as battery acid. If the inside of the can were not coated with a proprietary epoxy lining (whose composition depends upon the product being canned, and is carefully guarded by can manufacturers), the Coke would corrode through the thin walls of the can in just three days. The process of scoring the pop-top removes the coating around the score, and risks corrosion and leakage if a can is stored on its side; don't do that.
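
The arithmetic behind that saving is simple enough to check; the aluminium price here is my own assumption of a rough mid-2010s figure, not the book's:

    # Saving one gram per can across annual world production.
    cans_per_year = 180e9      # cans produced and consumed per year
    grams_saved = 1.0          # mass saved per can
    price_per_tonne = 1650.0   # assumed aluminium price, US dollars per tonne
    tonnes_saved = cans_per_year * grams_saved / 1e6
    print(tonnes_saved)                     # 180,000 tonnes of aluminium
    print(tonnes_saved * price_per_tonne)   # roughly US$ 300 million per year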

The author takes us on an eclectic tour of the history of corrosion and those who battle it, from the invention of stainless steel, to the inspection of the trans-Alaska oil pipeline by sending a “pig” (essentially a robot submarine equipped with electronic sensors) down its entire length, to the evangelists for galvanising (zinc coating) steel. We meet Dan Dunmire, the Pentagon's rust czar, who estimates that corrosion costs the military on the order of US$ 20 billion a year and describes how even the most humble of mitigation strategies can have huge payoffs. A new kind of gasket intended to prevent corrosion where radio antennas protrude through the fuselage of aircraft returned 175 times its investment in a single year. Overall return on investment in the projects funded by his office is estimated as fifty to one. We're introduced to the world of the corrosion engineer, a specialty which, while not glamorous, pays well and offers superb job security, since rust will always be with us.

Not everybody we encounter battles rust. Photographer Alyssha Eve Csük has turned corrosion into fine art. Working at the abandoned Bethlehem Steel Works in Pennsylvania, perhaps the rustiest part of the rust belt, she clandestinely scrambles around the treacherous industrial landscape in search of the beauty in corrosion.

This book mixes the science of corrosion with the stories of those who fight it, in the past and today. It is an enlightening and entertaining look into the most mundane of phenomena, but one which affects all the technological works of mankind.

January 2016 Permalink