History

Ackroyd, Peter. London Under. New York: Anchor Books, 2011. ISBN 978-0-307-47378-3.
Unlike New York, London grew from a swamp and its structure was moulded by the rivers that fed it. Over millennia, history has accreted in layer after layer as generations built atop the works of their ancestors. Descending into the caverns, buried rivers, sewers, subways, and infrastructure reveals the deep history, architecture and engineering, and legends of a great city.

May 2020 Permalink

Allin, Michael. Zarafa. New York: Walker and Company, 1998. ISBN 0-385-33411-7.

March 2003 Permalink

Allitt, Patrick. I'm the Teacher, You're the Student. Philadelphia: University of Pennsylvania Press, 2005. ISBN 0-8122-1887-6.
This delightfully written and enlightening book provides a look inside a present-day university classroom. The author, a professor of history at Emory University in Atlanta, presents a diary of a one semester course in U.S. history from 1877 to the present. Descriptions of summer programs at Oxford and experiences as a student of Spanish in Salamanca, Spain (the description of the difficulty of learning a foreign language as an adult [pp. 65–69] is as good as any I've read) provide additional insight into the life of a professor. I wish I'd had a teacher explain the craft of expository writing as elegantly as Allitt in his “standard speech” (p. 82). The sorry state of undergraduate prose is sketched in stark detail, with amusing howlers like, “Many did not survive the harsh journey west, but still they trekked on.” Although this was an introductory class, the students were a mix of all four undergraduate years; one doesn't get the sense that the graduating seniors thought or wrote any more clearly than the freshmen. Along the way, Allitt provides a refresher course in the historical period covered by the class. You might enjoy answering the factual questions from the final examination on pp. 215–218 before and after reading the book and comparing your scores (answers are on p. 237—respect the honour code and don't peek!). The darker side of the educational scene is discussed candidly: plagiarism in the age of the Internet; clueless, lazy, and deceitful students; and the endless spiral of grade inflation. What grade would you give to students who, after a semester in an introductory undergraduate course, “have no aptitude for history, no appreciation for the connection between events, no sense of how a historical situation changes over time, [who] don't want to do the necessary hard work, … skimp on the reading, and can't write to save their lives” (p. 219)—certainly an F? Well, actually, no: “Most of them will get B− and a few really hard cases will come in with Cs”. And, refuting the notion that high mean grade point averages at elite schools simply reflect the quality of the student body and their work, about a quarter of Allitt's class are these intellectual bottom feeders he so vividly evokes.

January 2005 Permalink

Amundsen, Roald. The South Pole. New York: Cooper Square Press, [1913] 2001. ISBN 978-0-8154-1127-7.
In modern warfare, it has been observed that “generals win battles, but logisticians win wars.” So it is with planning an exploration mission to a remote destination where no human has ever set foot, and the truths are as valid for polar exploration in the early 20th century as they will be for missions to Mars in the 21st. On December 14th, 1911, Roald Amundsen and his five-man southern party reached the South Pole after a trek from the camp on the Ross Ice Shelf where they had passed the previous southern winter, preparing for an assault on the pole as early as the weather would permit. By over-wintering, they would be able to depart southward well before a ship would be able to land an expedition, since a ship would have to wait until the sea ice dispersed sufficiently to make a landing.

Amundsen's plan was built around what space mission architects call “in-situ resource utilisation” and “depots”, as well as “propulsion staging”. This allowed for a very lightweight push to the pole, both in terms of the amount of supplies which had to be landed by their ship, the Fram, and in the size of the polar party and the loading of their sledges. Upon arriving in Antarctica, Amundsen's party immediately began to hunt the abundant seals near the coast. More than two hundred seals were killed, processed, and stored for later use. (Since the temperature on the Ross Ice Shelf and the Antarctic interior never rises above freezing, the seal meat would keep indefinitely.) Then parties were sent out in the months remaining before the arrival of winter in 1911 to establish depots at every degree of latitude between the base camp and 82° south. These depots contained caches of seal meat for the men and dogs and kerosene for melting snow for water and cooking food. The depot-laying journeys familiarised the explorers with driving teams of dogs and operating in the Antarctic environment.

Amundsen had chosen dogs to pull his sledges. While his rival to be first at the pole, Robert Falcon Scott, experimented with ponies, motorised sledges, and man-hauling, Amundsen relied upon the experience of indigenous Arctic peoples, which showed that dogs were the best solution. Dogs reproduced and matured sufficiently quickly that attrition could be made up by puppies born during the expedition; they could be fed on seal meat, which could be obtained locally; and if a dog team were to fall into a crevasse (as was inevitable when crossing uncharted terrain), the dogs could be hauled out, no worse for wear, by the drivers of other sledges. For ponies and motorised sledges, this was not the case.

Further, Amundsen adopted a strategy which can best be described as “dog eat dog”. On the journey to the pole, he started with 52 dogs. Seven of these had died from exhaustion or other causes before the ascent to the polar plateau. (Dogs who died were butchered and fed to the other dogs. Greenland sled dogs, being only slightly removed from wolves, had no hesitation in devouring their erstwhile comrades.) Once reaching the plateau, 27 dogs were slaughtered, their meat divided between the surviving dogs and the five men. Only 18 dogs would proceed to the pole. Dog carcasses were cached for use on the return journey.

Beyond the depots, the polar party had to carry everything required for the trip, but knowing the depots would be available for the return allowed them to travel lightly. After reaching the pole, they remained for three days to verify their position, send out parties to ensure they had encircled the pole's position, and build a cairn to commemorate their achievement. Amundsen left a letter which he requested Captain Scott deliver to King Haakon VII of Norway should Amundsen's party be lost on its return to base. (Sadly, that was the fate which awaited Scott, who arrived at the pole on January 17th, 1912, only to find the Amundsen expedition's cairn there.)

This book is Roald Amundsen's contemporary memoir of the expedition. Originally published in two volumes, the present work includes both. Appendices describe the ship, the Fram, and scientific investigations in meteorology, geology, astronomy, and oceanography conducted during the expedition. Amundsen's account is as matter-of-fact as the memoirs of some astronauts, but a wry humour comes through when he discusses dealing with sled dogs with wills of their own, and the foibles of humans cooped up in a small cabin in an alien environment during a night which lasts for months. He evinces great respect for his colleagues and competitors in polar exploration, particularly Scott and Shackleton, and worries whether his own approach to reaching the pole would be proved superior to theirs. At the time the book was published, the tragic fate of Scott's expedition was not known.

Today, we might not think of polar exploration as science, but a century ago it was as central to the scientific endeavour as robotic exploration of Mars is today. Here was an entire continent, known only in sketchy detail around its coast, with only a few expeditions into the interior. When Amundsen's party set out on their march to the pole, they had no idea whether they would encounter mountain ranges along the way and, if so, whether they could find a way over or around them. They took careful geographic and meteorological observations along their trek (as well as oceanographical measurements on the trip to Antarctica and back), and these provided some of the first data points toward understanding weather in the southern hemisphere.

In Norway, Amundsen was hailed as a hero. But it is clear from this narrative he never considered himself such. He wrote:

I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

This work is in the public domain, and there are numerous editions of it available, in print and in electronic form, many from independent publishers. The independent publishers, for the most part, did not distinguish themselves in their respect for this work. Many of their editions were produced by running an optical character recognition program over a print copy of the book, then putting it together with minimal copy-editing. Some (including the one I was foolish enough to buy) elide all of the diagrams, maps, and charts from the original book, which renders parts of the text incomprehensible. The paperback edition cited above, while expensive, is a facsimile edition of the original 1913 two volume English translation of Amundsen's original work, including all of the illustrations. I know of no presently-available electronic edition which has comparable quality and includes all of the material in the original book. Be careful—if you follow the link to the paperback edition, you'll see a Kindle edition listed, but this is from a different publisher and is rife with errors and includes none of the illustrations. I made the mistake of buying it, assuming it was the same as the highly-praised paperback. It isn't; don't be fooled.

September 2014 Permalink

Andrew, Christopher and Vasili Mitrokhin. The Sword and the Shield. New York: Basic Books, 1999. ISBN 978-0-465-00312-9.
Vasili Mitrokhin joined the Soviet intelligence service as a foreign intelligence officer in 1948, at a time when the MGB (later to become the KGB) and the GRU were unified into a single service called the Committee of Information. By the time he was sent to his first posting abroad in 1952, the two services had split and Mitrokhin stayed with the MGB. Mitrokhin's career began in the paranoia of the final days of Stalin's regime, when foreign intelligence officers were sent on wild goose chases hunting down imagined Trotskyist and Zionist conspirators plotting against the regime. He later survived the turbulence after the death of Stalin and the execution of MGB head Lavrenti Beria, and the consolidation of power under his successors.

During the Khrushchev years, Mitrokhin became disenchanted with the regime, considering Khrushchev an uncultured barbarian whose banning of avant garde writers betrayed the tradition of Russian literature. He began to entertain dissident thoughts, not hoping for an overthrow of the Soviet regime but rather its reform by a new generation of leaders untainted by the legacy of Stalin. These thoughts were reinforced by the crushing of the reform-minded regime in Czechoslovakia in 1968 and his own observation of how his service, now called the KGB, manipulated the Soviet justice system to suppress dissent within the Soviet Union. He began to covertly listen to Western broadcasts and read samizdat publications by Soviet dissidents.

In 1972, the First Chief Directorate (FCD: foreign intelligence) moved from the cramped KGB headquarters in the Lubyanka in central Moscow to a new building near the ring road. Mitrokhin had sole responsibility for checking and inventorying the entire archives of the FCD, around 300,000 documents, for transfer to the new building. These files documented the operations of the KGB and its predecessors dating back to 1918, and included the most secret records, those of Directorate S, which ran “illegals”: secret agents operating abroad under false identities. Probably no other individual ever read as many of the KGB's most secret archives as Mitrokhin. Appalled by much of the material he reviewed, he covertly began to make his own notes of the details. He started by committing key items to memory and then transcribing them every evening at home, but later made covert notes on scraps of paper which he smuggled out of KGB offices in his shoes. Each week-end he would take the notes to his dacha outside Moscow, type them up, and hide them in a series of locations which became increasingly elaborate as their volume grew.

Mitrokhin would continue to review, make notes, and add them to his hidden archive for the next twelve years until his retirement from the KGB in 1984. After Mikhail Gorbachev became party leader in 1985 and called for more openness (glasnost), Mitrokhin, shaken by what he had seen in the files regarding Soviet actions in Afghanistan, began to think of ways he might spirit his files out of the Soviet Union and publish them in the West.

After the collapse of the Soviet Union, Mitrokhin tested the new freedom of movement by visiting the capital of one of the now-independent Baltic states, carrying a sample of the material from his archive concealed in his luggage. He crossed the border with no problems and walked into the British embassy to make a deal. After several more trips, interviews with British Secret Intelligence Service (SIS) officers, and providing more sample material, the British agreed to arrange the exfiltration of Mitrokhin, his entire family, and the entire archive—six cases of notes. He was debriefed at a series of safe houses in Britain and began several years of work typing handwritten notes, arranging the documents, and answering questions from the SIS, all in complete secrecy. In 1995, he arranged a meeting with Christopher Andrew, co-author of the present book, to prepare a history of KGB foreign intelligence as documented in the archive.

Mitrokhin's exfiltration (I'm not sure one can call it a “defection”, since the country whose information he disclosed ceased to exist before he contacted the British) and delivery of the archive is one of the most stunning intelligence coups of all time, and the material he delivered will be an essential primary source for historians of the twentieth century. This is not just a whistle-blower disclosing operations of limited scope over a short period of time, but an authoritative summary of the entire history of the foreign intelligence and covert operations of the Soviet Union from its inception until the time it began to unravel in the mid-1980s. Mitrokhin's documents name names; identify agents, both Soviet and recruits in other countries, by codename; describe secret operations, including assassinations, subversion, “influence operations” planting propaganda in adversary media and corrupting journalists and politicians, providing weapons to insurgents, hiding caches of weapons and demolition materials in Western countries to support special forces in case of war; and trace the internal politics and conflicts within the KGB and its predecessors and with the Party and rivals, particularly military intelligence (the GRU).

Any doubts about the degree of penetration of Western governments by Soviet intelligence agents are laid to rest by the exhaustive documentation here. During the 1930s and throughout World War II, the Soviet Union had highly-placed agents throughout the British and American governments, military, diplomatic and intelligence communities, and science and technology projects. At the same time, these supposed allies had essentially zero visibility into the Soviet Union: neither the American OSS nor the British SIS had a single agent in Moscow.

And yet, despite success in infiltrating other countries and recruiting agents within them (particularly prior to the end of World War II, when many agents, such as the “Magnificent Five” [Donald Maclean, Kim Philby, John Cairncross, Guy Burgess, and Anthony Blunt] in Britain, were motivated by idealistic admiration for the Soviet project, as opposed to later, when sources tended to be in it for the money), exploitation of this vast trove of purloined secret information was uneven and often ineffective. Although it reached its apogee during the Stalin years, paranoia and intrigue are as Russian as borscht, and compromised the interpretation and use of intelligence throughout the history of the Soviet Union. Despite having loyal spies in high places in governments around the world, whenever an agent provided information which seemed “too good” or conflicted with the preconceived notions of KGB senior officials or Party leaders, it was likely to be dismissed as disinformation, often suspected to have been planted by British counterintelligence, to which the Soviets attributed almost supernatural powers, or that their agents had been turned and were feeding false information to the Centre. This was particularly evident during the period prior to the Nazi attack on the Soviet Union in 1941. KGB archives record more than a hundred warnings of preparations for the attack having been forwarded to Stalin between January and June 1941, all of which were dismissed as disinformation or erroneous due to Stalin's idée fixe that Germany would not attack because it was too dependent on raw materials supplied by the Soviet Union and would not risk a two front war while Britain remained undefeated.

Further, throughout the entire history of the Soviet Union, the KGB was hesitant to report intelligence which contradicted the beliefs of its masters in the Politburo or documented the failures of their policies and initiatives. In 1985, shortly after coming to power, Gorbachev lectured KGB leaders “on the impermissibility of distortions of the factual state of affairs in messages and informational reports sent to the Central Committee of the CPSU and other ruling bodies.”

Another manifestation of paranoia was deep suspicion of those who had spent time in the West. This meant that often the most effective agents who had worked undercover in the West for many years found their reports ignored due to fears that they had “gone native” or been doubled by Western counterintelligence. Spending too much time on assignment in the West was not conducive to advancement within the KGB, which resulted in the service's senior leadership having little direct experience with the West and being prone to fantastic misconceptions about the institutions and personalities of the adversary. This led to delusional schemes such as the idea of recruiting stalwart anticommunist senior figures such as Zbigniew Brzezinski as KGB agents.

This is a massive compilation of data: 736 pages in the paperback edition, including almost 100 pages of detailed end notes and source citations. I would be less than candid if I gave the impression that this reads like a spy thriller: it is nothing of the sort. Although such information would have been of immense value during the Cold War, long lists of the handlers who worked with undercover agents in the West, recitations of codenames for individuals, and exhaustive descriptions of now largely forgotten episodes such as the KGB's campaign against “Eurocommunism” in the 1970s and 1980s, which it was feared would thwart Moscow's control over communist parties in Western Europe, make for heavy going for the reader.

The KGB's operations in the West were far from flawless. For decades, the Communist Party of the United States (CPUSA) received substantial subsidies from the KGB despite consistently promising great breakthroughs and delivering nothing. Between the 1950s and 1975, KGB money was funneled to the CPUSA through two undercover agents, brothers named Morris and Jack Childs, delivering cash often exceeding a million dollars a year. Both brothers were awarded the Order of the Red Banner in 1975 for their work, with Morris receiving his from Leonid Brezhnev in person. Unbeknownst to the KGB, both of the Childs brothers had been working for, and receiving salaries from, the FBI since the early 1950s, and reporting where the money came from and went—well, not the five percent they embezzled before passing it on. In the 1980s, the KGB increased the CPUSA's subsidy to two million dollars a year, despite the party's never having more than 15,000 members (some of whom, no doubt, were FBI agents).

A second doorstop of a book (736 pages) based upon the Mitrokhin archive, The World Was Going Our Way, published in 2005, details the KGB's operations in the Third World during the Cold War. U.S. diplomats who regarded the globe and saw communist subversion almost everywhere were accurately reporting the situation on the ground, as the KGB's own files reveal.

The Kindle edition is free for Kindle Unlimited subscribers.

December 2019 Permalink

Barry, John M. The Great Influenza. New York: Penguin, [2004] 2005. ISBN 978-0-14-303649-4.
In the year 1800, the practice of medicine had changed little from that in antiquity. The rapid progress in other sciences in the 18th century had had little impact on medicine, which one historian called “the withered arm of science”. This began to change as the 19th century progressed. Researchers, mostly in Europe and especially in Germany, began to lay the foundations for a scientific approach to medicine and public health, understanding the causes of disease and searching for means of prevention and cure. The invention of new instruments for medical examination, anesthesia, and antiseptic procedures began to transform the practice of medicine and surgery.

All of these advances were slow to arrive in the United States. As late as 1900 only one medical school in the U.S. required applicants to have a college degree, and only 20% of schools required a high school diploma. More than a hundred U.S. medical schools accepted any applicant who could pay, and many graduated doctors who had never seen a patient or done any laboratory work in science. In the 1870s, only 10% of the professors at Harvard's medical school had a Ph.D.

In 1873, Johns Hopkins died, leaving his estate of US$ 3.5 million to found a university and hospital. The trustees embarked on an ambitious plan to build a medical school to be the peer of those in Germany, and began to aggressively recruit European professors and Americans who had studied in Europe to build a world class institution. By the outbreak of World War I in Europe, American medical research and education, still concentrated in just a few centres of excellence, had reached the standard set by Germany. It was about to face its greatest challenge.

With the entry of the United States into World War I in April of 1917, millions of young men conscripted for service were packed into overcrowded camps for training and preparation for transport to Europe. These camps, thrown together on short notice, often had only rudimentary sanitation and shelter, with many troops living in tent cities. Large numbers of doctors and especially nurses were recruited into the Army, and by the start of 1918 many were already serving in France. Doctors remaining in private practice in the U.S. were often older men, trained before the revolution in medical education and ignorant of modern knowledge of diseases and the means of treating them.

In all American wars before World War I, more men died from disease than combat. In the Civil War, two men died from disease for every death on the battlefield. Army Surgeon General William Gorgas vowed that this would not be the case in the current conflict. He was acutely aware that the overcrowded camps, frequent transfers of soldiers among far-flung bases, crowded and unsanitary troop transport ships, and unspeakable conditions in the trenches were a tinderbox just waiting for the spark of an infectious disease to ignite it. But the demand for new troops for the front in France caused his cautions to be overruled, and still more men were packed into the camps.

Early in 1918, a doctor in rural Haskell County, Kansas began to treat patients with a disease he diagnosed as influenza. But this was nothing like the seasonal influenza with which he was familiar. In typical outbreaks of influenza, the people at greatest risk are the very young (whose immune systems have not been previously exposed to the virus) and the very old, who lack the physical resilience to withstand the assault by the disease. Most deaths are among these groups, leading to a “bathtub curve” of mortality. This outbreak was different: the young and elderly were largely spared, while those in the prime of life were struck down, with many dying quickly of symptoms which resembled pneumonia. Slowly the outbreak receded, and by mid-March things were returning to normal. (The location and mechanism where the disease originated remain controversial to this day and we may never know for sure. After weighing competing theories, the author believes the Kansas origin most likely, but other origins have their proponents.)

That would have been the end of it, had not soldiers from Camp Funston, the second largest Army camp in the U.S., with 56,000 troops, visited their families in Haskell County while on leave. They returned to camp carrying the disease. The spark had landed in the tinderbox. The disease spread outward as troop trains travelled between camps. Often a train would leave carrying healthy troops (infected but not yet symptomatic) and arrive with up to half the company sick and highly infectious to those at the destination. Before long the disease arrived via troop ships at camps and at the front in France.

This was just the first wave. The spring influenza was unusual in the age group it hit most severely, but was not particularly more deadly than typical annual outbreaks. Then in the fall a new form of the disease returned in a much more virulent form. It is theorised that under the chaotic conditions of wartime a mutant form of the virus had emerged and rapidly spread among the troops and then passed into the civilian population. The outbreak rapidly spread around the globe, and few regions escaped. It was particularly devastating to aboriginal populations in remote regions like the Arctic and Pacific islands who had not developed any immunity to influenza.

The pathogen in the second wave could kill directly within a day by destroying the lining of the lung and effectively suffocating the patient. The disease was so virulent and aggressive that some medical researchers doubted it was influenza at all and suspected some new kind of plague. Even those who recovered from the disease had much of their immunity and defences against respiratory infection so impaired that some people who felt well enough to return to work would quickly come down with a secondary infection of bacterial pneumonia which could kill them.

All of the resources of the new scientific medicine were thrown into the battle with the disease, with little or no impact upon its progression. The cause of influenza was not known at the time: some thought it was a bacterial disease while others suspected a virus. Further adding to the confusion was the fact that influenza patients often had a secondary infection of bacterial pneumonia, and the organism which causes that disease was mis-identified as the pathogen responsible for influenza. Heroic efforts were made, but the state of medical science in 1918 was simply not up to the challenge posed by influenza.

A century later, influenza continues to defeat every attempt to prevent or cure it, and another global pandemic remains a distinct possibility. Supportive treatment in the developed world and the availability of antibiotics to prevent secondary infection by pneumonia will reduce the death toll, but a mass outbreak of the virus on the scale of 1918 would quickly swamp all available medical facilities and bring society to the brink as it did then. Even regular influenza kills between a quarter and a half million people a year. The emergence of a killer strain like that of 1918 could increase this number by a factor of ten or twenty.

Influenza is such a formidable opponent due to its structure. It is an RNA virus which, unusually for a virus, has not a single strand of genetic material but seven or eight separate strands of RNA. Some researchers argue that in an organism infected with two or more variants of the virus these strands can mix to form new mutants, allowing the virus to mutate much faster than other viruses with a single strand of genetic material (this is controversial). The virus particle is surrounded by proteins called hemagglutinin (HA) and neuraminidase (NA). HA allows the virus to break into a target cell, while NA allows viruses replicated within the cell to escape to infect others.

What makes creating a vaccine for influenza so difficult is that these HA and NA proteins are what the body's immune system uses to identify the virus as an invader and kill it. But HA and NA come in a number of variants, and a specific strain of influenza may contain one from column H and one from column N, creating a large number of possibilities. For example, H1N2 is endemic in birds, pigs, and humans. H5N1 caused the bird flu outbreak in 2004, and H1N1 was responsible for the 1918 pandemic. It gets worse. As a child, when you are first exposed to influenza, your immune system will produce antibodies which identify and target the variant to which you were first exposed. If you were infected with and recovered from, say, H3N2, you'll be pretty well protected against it. But if, subsequently, you encounter H1N1, your immune system will recognise it sufficiently to crank out antibodies, but they will be coded to attack H3N2, not the H1N1 you're battling, against which they're useless. Influenza is thus a chameleon, constantly changing its colours to hide from the immune system.

Strains of influenza tend to come in waves, with one HxNy variant dominating for some number of years, then shifting to another. Developers of vaccines must play a guessing game about which variant you're likely to encounter in a given year. This explains why the 1918 pandemic particularly hit healthy adults. Over the decades preceding the 1918 outbreak, the primary variant had shifted away from H1N1 to another variant, which dominated for decades; then, after 1900, H1N1 came back to the fore. Consequently, when the deadly strain of H1N1 appeared in the fall of 1918, the immune systems of both young and elderly people were ready for it and protected them, but those in between had immune systems which, when confronted with H1N1, produced antibodies for the other variant, leaving them vulnerable.

With no medical defence against or cure for influenza even today, the only effective response in the case of an outbreak of a killer strain is public health measures such as isolation and quarantine. Influenza is airborne and highly infectious: the gauze face masks you see in pictures from 1918 were almost completely ineffective. The government response to the outbreak in 1918 could hardly have been worse. After creating military camps which were nothing less than a culture medium containing those in the most vulnerable age range packed in close proximity, once the disease broke out and reports began to arrive that this was something new and extremely lethal, the troop trains and ships continued to run due to orders from the top that more and more men had to be fed into the meat grinder that was the Western Front. This inoculated camp after camp. Then, when the disease jumped into the civilian population and began to devastate cities adjacent to military facilities such as Boston and Philadelphia, the press censors of Wilson's proto-fascist war machine decided that honest reporting of the extent and severity of the disease or measures aimed at slowing its spread would impact “morale” and war production, so newspapers were ordered to either ignore it or print useless happy talk which only accelerated the epidemic. The result was that in the hardest-hit cities, residents, confronted with the reality before their eyes giving the lie to the propaganda they were hearing from authorities, retreated into fear and withdrawal, allowing neighbours to starve rather than risk infection by bringing them food.

As was known in antiquity, the only defence against an infectious disease with no known medical intervention is quarantine. In Western Samoa, the disease arrived in September 1918 on a German steamer. By the time the disease ran its course, 22% of the population of the islands was dead. Just a few kilometres across the ocean in American Samoa, authorities imposed a rigid quarantine and not a single person died of influenza.

We will never know the worldwide extent of the 1918 pandemic. Many of the hardest-hit areas, such as China and India, did not have the infrastructure to collect epidemiological data, and what infrastructure they had collapsed under the impact of the crisis. Estimates are that on the order of 500 million people worldwide were infected and that between 50 and 100 million died: three to five percent of the world's population.

Researchers do not know why the 1918 second wave pathogen was so lethal. The genome has been sequenced and nothing jumps out from it as an obvious cause. Understanding its virulence may require recreating the monster and experimenting with it in animal models. Obviously, this is not something which should be undertaken without serious deliberation beforehand and extreme precautions, but it may be the only way to gain the knowledge needed to treat those infected should a similar wild strain emerge in the future. (It is possible this work may have been done but not published because it could provide a roadmap for malefactors bent on creating a synthetic plague. If this be the case, we'll probably never know about it.)

Although medicine has made enormous strides in the last century, influenza, which defeated the world's best minds in 1918, remains a risk, and in a world with global air travel moving millions between dense population centres, an outbreak today would be even harder to contain. Let us hope that in that dire circumstance authorities will have the wisdom and courage to take the kind of dramatic action which can make the difference between a regional tragedy and a global cataclysm.

October 2014 Permalink

Becker, Jasper. Hungry Ghosts: Mao's Secret Famine. New York: Henry Holt, [1996] 1998. ISBN 0-8050-5668-8.

December 2003 Permalink

Berlinski, Claire. Menace in Europe. New York: Crown Forum, 2006. ISBN 1-4000-9768-1.
This is a scary book. The author, who writes with a broad and deep comprehension of European history and its cultural roots, and a vocabulary which reminds one of William F. Buckley, argues that the deep divide which has emerged between the United States and Europe since the end of the cold war, and particularly in the last few years, is not a matter of misunderstanding, lack of sensitivity on the part of the U.S., or the personnel, policies, and style of the Bush administration, but deeply rooted in structural problems in Europe which are getting worse, not better. (That's not to say that there aren't dire problems in the U.S. as well, but that isn't the topic here.)

Surveying the contemporary scene in the Netherlands, Britain, France, Spain, Italy, and Germany, and tracing the roots of nationalism, peasant revolts (of which “anti-globalisation” is the current manifestation), and anti-Semitism back through the centuries, she shows that what is happening in Europe today is simply Europe—the continent of too many kings and too many wars—being Europe, adapted to present-day circumstances. The impression you're left with is that Europe isn't just the “sick man of the world”, but rather a continent afflicted with half a dozen or more separate diseases, all terminal: a large, un-assimilated immigrant population concentrated in ghettos; an unsustainable welfare state; a sclerotic economy weighed down by social charges, high taxes, and ubiquitous and counterproductive regulation; a collapsing birth rate and aging population; a “culture crash” (my term), where the religions and ideologies which have structured the lives of Europeans for millennia have evaporated, leaving nothing in their place; a near-total disconnect between elites and the general population on the disastrous project of European integration, most recently manifested in the controversy over the so-called European constitution; and signs that the rabid nationalism which plunged Europe into two disastrous wars in the last century and dozens, if not hundreds of wars in the centuries before, is seeping back up through the cracks in the foundation of the dystopian, ill-conceived European Union.

In some regards, the author does seem to overstate the case, or generalise from evidence so narrow it lacks persuasiveness. The most egregious example is chapter 8, which infers an emerging nihilist neo-Nazi nationalism in Germany almost entirely based on the popularity of the band Rammstein. Well, yes, but whatever the lyrics, the message of the music, and the subliminal message of the music videos, there is a lot more going on in Germany, a nation of more than 80 million people, than the antics of a single heavy metal band, however atavistic.

U.S. readers inclined to gloat over the woes of the old continent should keep in mind the author's observation, a conclusion I had come to long before I ever opened this book, that the U.S. is heading directly for the same confluence of catastrophes as Europe, and, absent a fundamental change of course, will simply arrive at the scene of the accident somewhat later; and that's only taking into account the problems they have in common; the European economy, unlike the American, is able to function without borrowing on the order of two billion dollars a day from China and Japan.

If you live in Europe, as I have for the last fifteen years (thankfully outside, although now encircled by, the would-be empire that sprouted from Brussels), you'll probably find little here that's new, but you may get a better sense of how the problems interact with one another to make a real crisis somewhere in the future a genuine possibility. The target audience in the U.S., which is so often lectured by their elite that Europe is so much more sophisticated, nuanced, socially and environmentally aware, and rational, may find this book an eye opener; 344,955 American soldiers perished in European wars in the last century, and while it may be satisfying to say, “To Hell with Europe!”, the lesson of history is that saying so is most unwise.

An Instapundit podcast interview with the author is freely available on-line.

July 2006 Permalink

Bernstein, Jeremy. Plutonium. Washington: Joseph Henry Press, 2007. ISBN 0-309-10296-0.
When the Manhattan Project undertook to produce a nuclear bomb using plutonium-239, the world's inventory of the isotope was on the order of a microgram, all produced by bombarding uranium with neutrons produced in cyclotrons. It wasn't until August of 1943 that enough had been produced to be visible under a microscope. When, in that month, the go-ahead was given to build the massive production reactors and separation plants at the Hanford site on the Columbia River, virtually nothing was known of the physical properties, chemistry, and metallurgy of the substance they were undertaking to produce. In fact, it was only in 1944 that it was realised that the elements starting with thorium formed a second group of “rare earth” elements: the periodic table before World War II had uranium in the column below tungsten and predicted that the chemistry of element 94 would resemble that of osmium. When the large-scale industrial production of plutonium was undertaken, neither the difficulty of separating the element from the natural uranium matrix in which it was produced nor the contamination with Pu-240 which would necessitate an implosion design for the plutonium bomb were known. Notwithstanding, by the end of 1947 a total of 500 kilograms of the stuff had been produced, and today there are almost 2000 metric tons of it, counting both military inventories and that produced in civil power reactors, which crank out about 70 more metric tons a year.

These are among the fascinating details gleaned and presented in this history and portrait of the most notorious of artificial elements by physicist and writer Jeremy Bernstein. He avoids getting embroiled in the building of the bomb, which has been well-told by others, and concentrates on how scientists around the world stumbled onto nuclear fission and transuranic elements, puzzled out what they were seeing, and figured out the bizarre properties of what they had made. Bizarre is not too strong a word for the chemistry and metallurgy of plutonium, which remains an active area of research today with much still unknown. When you get that far down on the periodic table, both quantum mechanics and special relativity get into the act (as they start to do even with gold), and you end up with six allotropic phases of the metal (in two of which volume decreases with increasing temperature), a melting point of just 640° C and an anomalous atomic radius which indicates its 5f electrons are neither localised nor itinerant, but somewhere in between.

As the story unfolds, we meet some fascinating characters, including Fritz Houtermans, whose biography is such that, as the author notes (p. 86), “if one put it in a novel, no one would find it plausible.” We also meet stalwarts of the elite 26-member UPPU Club: wartime workers at Los Alamos whose exposure to plutonium was sufficient that it continues to be detectable in their urine. (An epidemiological study of these people which continues to this day has found no elevated rates of mortality, which is not to say that plutonium is not a hideously hazardous substance.)

The text is thoroughly documented in the end notes, and there is an excellent index; the entire book is just 194 pages. I have two quibbles. On p. 110, the author states of the Little Boy gun-assembly uranium bomb dropped on Hiroshima, “This is the only weapon of this design that was ever detonated.” Well, I suppose you could argue that it was the only such weapon of that precise design detonated, but the implication is that it was the first and last gun-type bomb to be detonated, and this is not the case. The U.S. W9 and W33 weapons, among others, were gun-assembly uranium bombs, which between them were tested three times at the Nevada Test Site. The price for plutonium-239 quoted on p. 155, US$5.24 per milligram, seems to imply that the plutonium for a critical mass of about 6 kg costs about 31 million dollars. But this is because the price quoted is for 99–99.99% isotopically pure Pu-239, which has been electromagnetically separated from the isotopic mix you get from the production reactor. Weapons-grade plutonium can have up to 7% Pu-240 contamination, which doesn't require the fantastically expensive isotope separation phase, just chemical extraction of plutonium from reactor fuel. In fact, you can build a bomb from so-called “reactor-grade” plutonium—the U.S. tested one in 1962.

November 2007 Permalink

Biggs, Barton. Wealth, War, and Wisdom. Hoboken, NJ: John Wiley & Sons, 2008. ISBN 978-0-470-22307-9.
Many people, myself included, who have followed financial markets for an extended period of time, have come to believe what may seem, to those who have not, a very curious and even mystical thing: that markets, aggregating the individual knowledge and expectations of their multitude of participants, have an uncanny way of “knowing” what the future holds. In retrospect, one can often look at a chart of broad market indices and see that the market “called” grand turning points by putting in a long-term bottom or top, even when those turning points were perceived by few if any people at the time. One of the noisiest buzzwords of the “Web 2.0” hype machine is “crowdsourcing”, yet financial markets have been doing precisely that for centuries, and in an environment in which the individual participants are not just contributing to some ratty, ephemeral Web site, but rather putting their own net worth on the line.

In this book the author, who has spent his long career as a securities analyst and hedge fund manager, and was a pioneer of investing in emerging global markets, looks at the greatest global cataclysm of the twentieth century—World War II—and explores how well financial markets in the countries involved identified the key trends and turning points in the conflict. The results persuasively support the “wisdom of the market” viewpoint and are a convincing argument that “the market knows”, even when its individual participants, media and opinion leaders, and politicians do not. Consider: the British stock market put in an all-time historic bottom in June 1940, just as Hitler toured occupied Paris and, in retrospect, Nazi expansionism in the West reached its peak. Many Britons expected a German invasion in the near future, the Battle of Britain and the Blitz were still to come, and yet the market rallied throughout these dark days. Somehow the market seems to have known that with the successful evacuation of the British Expeditionary Force from Dunkerque and the fall of France, the situation, however dire, was as bad as it was going to get.

In the United States, the Dow Jones Industrial Average declined throughout 1941 as war clouds darkened, fell further after Pearl Harbor and the fall of the Philippines, but put in an all-time bottom in 1942 coincident with the battles of the Coral Sea and Midway which, in retrospect, but not at the time, were seen as the key inflection point of the Pacific war. Note that at this time the U.S. was also at war with Germany and Italy but had not engaged either in a land battle, and yet somehow the market “knew” that, whatever the sacrifices to come, the darkest days were behind.

The wisdom of the markets was also apparent in the ultimate losers of the conflict, although government price-fixing and disruption of markets as things got worse obscured the message. The German CDAX index peaked precisely when the Barbarossa invasion of the Soviet Union was turned back within sight of the spires of the Kremlin. At this point the German army was intact, the Soviet breadbasket was occupied, and the Red Army was in disarray, yet somehow the market knew that this was the high point. The great defeat at Stalingrad and the roll-back of the Nazi invaders were all in the future, but despite propaganda, censorship of letters from soldiers at the front, and all the control of information a totalitarian society can employ, once again the market called the turning point. In Italy, where rampant inflation obscured nominal price indices, the inflation-adjusted BCI index put in its high at precisely the moment Mussolini made his alliance with Hitler, and it was all downhill from there, both for Italy and its stock market, despite rampant euphoria at the time. In Japan, the market was heavily manipulated by the Ministry of Finance and tight control of war news denied investors information to independently assess the war situation, but by 1943 the market had peaked in real terms and declined into a collapse thereafter.

In occupied countries, where markets were allowed to function, they provided insight into the sympathies of their participants. The French market is particularly enlightening. Clearly, the investor class was completely on-board with the German occupation and Vichy. In real terms, the market soared after the capitulation of France and peaked with the defeat at Stalingrad, then declined consistently thereafter, with only a little blip with the liberation of Paris. But then the French stock market wouldn't be French if it weren't perverse, would it?

Throughout, the author discusses how individuals living in both the winners and losers of the war could have best preserved their wealth and selves, and this is instructive for folks interested in saving their asses and assets the next time the Four Horsemen sortie from Hell's own stable. Interestingly, according to Biggs's analysis, so-called “defensive” investments such as government and top-rated corporate bonds and short-term government paper (“Treasury Bills”) performed poorly as stores of wealth in the victor countries and disastrously in the vanquished. In those societies where equity markets survived the war (obviously, this excludes those countries in Eastern Europe occupied by the Soviet Union), stocks were the best financial instrument in preserving value, although in many cases they did decline precipitously over the period of the war. How do you ride out a cataclysm like World War II? There are three key ways: diversification, diversification, and diversification. You need to diversify across financial and real assets, including (diversified) portfolios of stocks, bonds, and bills, as well as real assets such as farmland, real estate, and hard assets (gold, jewelry, etc.) for really hard times. You further need to diversify internationally: not just in the assets you own, but where you keep them. Exchange controls can come into existence with the stroke of a pen, and that offshore bank account you keep “just in case” may be all you have if the worst comes to pass. Thinking about it in that way, do you have enough there? Finally, you need to diversify your own options in the world and think about what you'd do if things really start to go South, and you need to think about it now, not then. As the author notes in the penultimate paragraph:

…the rich are almost always too complacent, because they cherish the illusion that when things start to go bad, they will have time to extricate themselves and their wealth. It never works that way. Events move much faster than anyone expects, and the barbarians are on top of you before you can escape. … It is expensive to move early, but it is far better to be early than to be late.

This is a quirky book, and not free of flaws. Biggs is a connoisseur of amusing historical anecdotes and sprinkles them throughout the text. I found them a welcome leavening of a narrative filled with human tragedy, folly, and destruction of wealth, but some may consider them a distraction and out of place. There are far more copy-editing errors in this book (including dismayingly many difficulties with the humble apostrophe) than I would expect in a Wiley main catalogue title. But that said, if you haven't discovered the wisdom of the markets for yourself, and are worried about riding out the uncertainties of what appears to be a bumpy patch ahead, this is an excellent place to start.

June 2008 Permalink

Bolton, Andrew. Bravehearts: Men in Skirts. London: V&A Publications, 2003. ISBN 0-8109-6558-5.

January 2004 Permalink

Bragg, Melvyn. The Adventure of English. London: Sceptre, 2003. ISBN 0-340-82993-1.
How did a language spoken by 150,000 or so Germanic warriors who invaded the British Isles in the fifth century A.D. become the closest thing so far to a global language, dominating the worlds of science and commerce which so define the modern age? Melvyn Bragg, who earlier produced a television series (which I haven't seen) with the same name for the British ITV network, follows the same outline in this history of English. The tremendous contingency in the evolution of a language is much to be seen here: had Shakespeare, Dr. Johnson, or William Tyndale (who first translated the Bible into English and paid with his life for having done so) died in infancy, how would we speak today, and in what culture would we live? The assembly of the enormous vocabulary of English by devouring words from dozens of other languages is well documented, as well as the differentiation of British English into distinct American, Caribbean, Australian, South African, Indian, and other variants which enrich the mother tongue with both vocabulary and grammar. Fair dinkum, innit man?

As English has grown by accretion, it has also cast out a multitude of words into the “Obs.” bin of the OED, many in the “Inkhorn Controversy” in the 16th century. What a loss! The more words, the richer the language, and I hereby urge we reinstate “abstergify”, last cited in the OED in 1612, defined as the verb “To cleanse”. I propose this word to mean “to clean up, æsthetically, without any change in function”. For example, “I spent all day abstergifying the configuration files for the Web server”.

The mystery of why such an ill-structured language with an almost anti-phonetic spelling should have become so widespread is discussed here only on the margin, often in apologetic terms invoking the guilt of slavery and colonialism. (But speakers of other languages pioneered these institutions, so why didn't they triumph?) Bragg suggests, almost in passing, what I think is very significant. The very irregularity of English permits it to assimilate the vocabulary of every language it encounters. In Greek, Latin, Spanish, or French, there are rules about the form of verbs and the endings of nouns and agreement of adjectives which cannot accommodate words from fundamentally different languages. But in English, there are no rules whatsoever—bring your own vocabulary—there's room for everybody and every word. Come on in, it's great—the more the better!

A U.S. edition is now available, but as of this date only in hardcover.

February 2005 Permalink

Brookhiser, Richard. Founding Father. New York: Free Press, 1996. ISBN 0-684-83142-2.
This thin (less than 200 pages of main text) volume is an enlightening biography of George Washington. It is very much a moral biography in the tradition of Plutarch's Lives; the focus is on Washington's life in the public arena and the events in his life which formed his extraordinary character. Reading Washington's prose, one might assume that he, like many other framers of the U.S. Constitution, had an extensive education in the classics, but in fact his formal education ended at age 15, when he became an apprentice surveyor—among U.S. presidents, only Andrew Johnson had less formal schooling. Washington's intelligence and voracious reading—his library numbered more than 900 books at his death—made him the intellectual peer of his just sprouting Ivy League contemporaries. One historical footnote I'd never before encountered is the tremendous luck the young U.S. republic had in escaping the risk of dynasty—among the first five U.S. presidents, only John Adams had a son who survived to adulthood (and his eldest son, John Quincy Adams, became the sixth president).

May 2005 Permalink

[Audiobook] Bryson, Bill. A Short History of Nearly Everything (Audiobook, Unabridged). Westminster, MD: Books on Tape, 2003. ISBN 0-7366-9320-3.
What an astonishing achievement! Toward the end of the 1990s, Bill Bryson, a successful humorist and travel writer, found himself on a flight across the Pacific and, looking down on the ocean, suddenly realised that he didn't know how it came to be, how it affected the clouds above it, what lived in its depths, or hardly anything else about the world and universe he inhabited, despite having lived in an epoch in which science made unprecedented progress in understanding these and many other things. Shortly thereafter, he embarked upon a three year quest of reading popular science books and histories of science, meeting with their authors and with scientists in numerous fields all around the globe, and trying to sort it all out into a coherent whole.

The result is this stunning book, which neatly packages the essentials of human knowledge about the workings of the universe, along with how we came to know all of these things and the stories of the often fascinating characters who figured it all out, into one lucid, engaging, and frequently funny package. Unlike many popular works, Bryson takes pains to identify what we don't know, of which there is a great deal, not just in glamourous fields like particle physics but in stuffy endeavours such as plant taxonomy. People who find themselves in Bryson's position at the outset—entirely ignorant of science—can, by reading this single work, end up knowing more about more things than even most working scientists who specialise in one narrow field. The scope is encyclopedic: from quantum mechanics and particles to galaxies and cosmology, with chemistry, the origin of life, molecular biology, evolution, genetics, cell biology, paleontology and paleoanthropology, geology, meteorology, and much, much more, all delightfully told, with only rare errors, and with each put into historical context. I like to think of myself as reasonably well informed about science, but as I listened to this audiobook over a period of several weeks on my daily walks, I found that every day, in the 45 to 60 minutes I listened, there was at least one and often several fascinating things of which I was completely unaware.

This audiobook is distributed in three parts, totalling 17 hours and 48 minutes. The book is read by British narrator Richard Matthews, who imparts an animated and light tone appropriate to the text. He does, however, mispronounce the names of several scientists, for example physicists Robert Dicke (whose last name he pronounces “Dick”, as opposed to the correct “Dickey”) and Richard Feynman (“Fane-man” instead of “Fine-man”), and when he attempts to pronounce French names or phrases, his accent is fully as affreux as my own, but these are minor quibbles which hardly detract from an overall magnificent job. If you'd prefer to read the book, it's available in paperback now, and there's an illustrated edition, which I haven't seen. I would probably never have considered this book, figuring I already knew it all, had I not read Hugh Hewitt's encomium to it and excerpts therefrom he included (parts 1, 2, 3).

November 2007 Permalink

Buckley, Christopher. The Relic Master. New York: Simon & Schuster, 2015. ISBN 978-1-5011-2575-1.
The year is 1517. The Holy Roman Empire sprawls across central Europe, from the Mediterranean in the south to the North Sea and Baltic in the north, from the Kingdom of France in the west to the Kingdoms of Poland and Hungary in the east. In reality the structure of the empire is so loose and complicated it defies easy description: independent kings, nobility, and prelates all have their domains of authority, and occasionally go to war against one another. Although the Reformation is about to burst upon the scene, the Roman Catholic Church is supreme, and religion is big business. In particular, the business of relics and indulgences.

Commit a particularly heinous sin? If you're sufficiently well-heeled, you can obtain an indulgence through prayer, good works, or making a pilgrimage to a holy site. Over time, “good works” increasingly meant, for the prosperous, making a contribution to the treasury of the local prince or prelate, a percentage of which was kicked up to higher-ranking clergy, all the way to Rome. Or, an enterprising noble or churchman could collect relics such as the toe bone of a saint, a splinter from the True Cross, or a lock of hair from one of the camels the Magi rode to Bethlehem. Pilgrims would pay a fee to see, touch, have their sins erased, and be healed by these holy trophies. In short, the indulgence and relic business was selling “get out of purgatory for a price”. The very best businesses are those in which the product is delivered only after death—you have no problems with dissatisfied customers.

To flourish in this trade, you'll need a collection of relics, all traceable to trustworthy sources. Relics were in great demand, and demand summons supply into being. All the relics of the True Cross, taken together, would have required the wood from a medium-sized forest, and even the most sacred and unique of relics, the burial shroud of Christ, was on display in several different locations. It's the “trustworthy” part that's difficult, and that's where Dismas comes in. A former Swiss mercenary, his resourcefulness in obtaining relics had led to his appointment as Relic Master to His Grace Albrecht, Archbishop of Brandenburg and Mainz, and also to Frederick the Wise, Elector of Saxony. These two customers were rivals in the relic business, allowing Dismas to play one against the other to his advantage. After visiting the Basel Relic Fair and obtaining some choice merchandise, he visits his patrons to exchange them for gold. While visiting Frederick, he hears that a monk has nailed ninety-five denunciations of the Church, including the sale of indulgences, to the door of the castle church. This is interesting, but potentially bad for business.

Dismas meets his friend, Albrecht Dürer, whom he calls “Nars” on account of Dürer's narcissism: among other things, the painter includes his own visage in most of his paintings. After months in the south hunting relics, he returns to visit Dürer and learns that the Swiss banker with whom he's deposited his fortune has turned out to be a 16th century Bernie Madoff, leaving him with only the money on his person.

Destitute, Dismas and Dürer devise a scheme to get back into the game. This launches them into a romp across central Europe visiting the castles, cities, taverns, dark forbidding forests, dungeons, and courts of nobility. We encounter historical figures including Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus), who lends his scientific insight to the effort. All of this is recounted with the mix of wry and broad humour which Christopher Buckley uses so effectively in all of his novels. There is a tableau of the Last Supper, identity theft, and bombs. An appendix gives background on the historical figures who appear in the novel.

This is a pure delight and illustrates how versatile is the talent of the author. Prepare yourself for a treat; this novel delivers. Here is an interview with the author.

May 2016 Permalink

Burrough, Bryan. Days of Rage. New York: Penguin Press, 2015. ISBN 978-0-14-310797-2.
In the year 1972, there were more than 1900 domestic bombings in the United States. Think about that—that's more than five bombings a day. In an era when the occasional terrorist act by a “lone wolf” nutcase gets round the clock coverage on cable news channels, it's hard to imagine that not so long ago, most of these bombings and other mayhem, committed by “revolutionary” groups such as Weatherman, the Black Liberation Army, FALN, and The Family, often made only local newspapers on page B37, below the fold.

The civil rights struggle and opposition to the Vietnam war had turned out large crowds and radicalised the campuses, but in the opinion of many activists, yielded few concrete results. Indeed, in the 1968 presidential election, pro-war Democrat Humphrey had been defeated by pro-war Republican Nixon, with anti-war Democrats McCarthy marginalised and Robert Kennedy assassinated.

In this bleak environment, a group of leaders of one of the most radical campus organisations, the Students for a Democratic Society (SDS), gathered in Chicago to draft what became a sixteen thousand word manifesto bristling with Marxist jargon that linked the student movement in the U.S. to Third World guerrilla insurgencies around the globe. They advocated a Che Guevara-like guerrilla movement in America led, naturally, by themselves. They named the manifesto after the Bob Dylan lyric, “You don't need a weatherman to know which way the wind blows.” Other SDS members who thought the idea of armed rebellion in the U.S. absurd and insane quipped, “You don't need a rectal thermometer to know who the assholes are.”

The Weatherman faction managed to blow up (figuratively) the SDS convention in June 1969, splitting the organisation but effectively taking control of it. They called a massive protest in Chicago for October. Dubbed the “National Action”, it would soon become known as the “Days of Rage”.

Almost immediately the Weatherman plans began to go awry. Their efforts to rally the working class (whom the Ivy League Weatherman élite mocked as “greasers”) got no traction, with some of their outrageous “actions” accomplishing little other than landing the perpetrators in the slammer. Come October, the Days of Rage ended up in farce. Thousands had been expected, ready to take the fight to the cops and “oppressors”, but come the day, no more than two hundred showed up, most of them SDS stalwarts who already knew one another. They charged the police and were quickly routed, with six shot (none seriously), many beaten, and more than 120 arrested. Bail bonds alone added up to US$ 2.3 million. It was a humiliating defeat. The leadership decided it was time to change course.

So what did this intellectual vanguard of the masses decide to do? Well, obviously, destroy the SDS (their source of funding and pipeline of recruitment), go underground, and start blowing stuff up. This posed a problem, because these middle-class college kids had no idea where to obtain explosives (they didn't know that at the time you could buy as much dynamite as you could afford over the counter in many rural areas by, at most, showing a driver's license), what to do with them, or how to build an underground identity. This led to, not Keystone Kops, but Klueless Kriminal misadventures, culminating in March 1970 when they managed to blow up an entire New York townhouse: a bomb being prepared for an attack on a dance at Fort Dix, New Jersey, detonated prematurely, leaving three of the Weather collective dead in the rubble. In the aftermath, many Weather hangers-on melted away.

This did not deter the hard core, who resolved to learn more about their craft. They issued a communiqué declaring their solidarity with the oppressed black masses (not one of whom, oppressed or otherwise, was a member of Weatherman), and vowed to attack symbols of “Amerikan injustice”. Privately, they decided to avoid killing people, confining their attacks to property. And one of their members hit the books to become a journeyman bombmaker.

The bungling Bolsheviks of Weatherman may have had Marxist theory down pat, but they were lacking in authenticity, and acutely aware of it. It was hard for those whose addresses before going underground were élite universities to present themselves as oppressed. The best they could do was to identify themselves with the cause of those they considered victims of “the system” but who, to date, seemed little inclined to do anything about it themselves. Those who cheered on Weatherman, then, considered it significant when, in the spring of 1971, a new group calling itself the “Black Liberation Army” (BLA) burst onto the scene with two assassination-style murders of New York City policemen on routine duty. Messages delivered after each attack to Harlem radio station WLIB claimed responsibility. One declared,

Every policeman, lackey or running dog of the ruling class must make his or her choice now. Either side with the people: poor and oppressed, or die for the oppressor. Trying to stop what is going down is like trying to stop history, for as long as there are those who will dare to live for freedom there are men and women who dare to unhorse the emperor.

All power to the people.

Politicians, press, and police weren't sure what to make of this. The politicians, worried about the opinion of their black constituents, shied away from anything which sounded like accusing black militants of targeting police. The press, although they'd never write such a thing or speak it in polite company, didn't think it plausible that street blacks could organise a sustained revolutionary campaign: certainly that required college-educated intellectuals. The police, while threatened by these random attacks, weren't sure there was actually any organised group behind the BLA attacks: they were inclined to believe it was a matter of random cop killers attributing their attacks to the BLA after the fact. Further, the BLA had no visible spokesperson and issued no manifestos other than the brief statements after some attacks. This contributed to the mystery, which largely persists to this day because so many participants were killed and the survivors have never spoken out.

In fact, the BLA was almost entirely composed of former members of the New York chapter of the Black Panthers, which had collapsed in the split between factions following Huey Newton and those (including New York) loyal to Eldridge Cleaver, who had fled to exile in Algeria and advocated violent confrontation with the power structure in the U.S. The BLA would perpetrate more than seventy violent attacks between 1970 and 1976 and is said to be responsible for the deaths of thirteen police officers. In 1982, they hijacked a domestic airline flight and pocketed a ransom of US$ 1 million.

Weatherman (later renamed the “Weather Underground” because the original name was deemed sexist) and the BLA represented the two poles of the violent radicals: the first, intellectual, college-educated, and mostly white, concentrated chiefly on symbolic bombings against property, usually with warnings in advance to avoid human casualties. As pressure from the FBI increased upon them, they became increasingly inactive; a member of the New York police squad assigned to them quipped, “Weatherman, Weatherman, what do you do? Blow up a toilet every year or two.” They managed the escape of Timothy Leary from a minimum-security prison in California. Leary basically just walked away, with a group of Weatherman members paid by Leary supporters picking him up and arranging for him and his wife Rosemary to obtain passports under assumed names and flee the U.S. for exile in Algeria with former Black Panther leader Eldridge Cleaver.

The Black Liberation Army, being composed largely of ex-prisoners with records of violent crime, was not known for either the intelligence or impulse control of its members. On several occasions, what should have been merely tense encounters with the law turned into deadly firefights because a BLA militant opened fire for no apparent reason. Had they not been so deadly to those they attacked and innocent bystanders, the exploits of the BLA would have made a fine slapstick farce.

As the dour decade of the 1970s progressed, other violent underground groups would appear, tending to follow the model of either Weatherman or the BLA. One of the most visible, if not the most successful, was the “Symbionese Liberation Army” (SLA), founded by escaped convict and grandiose self-styled revolutionary Donald DeFreeze. Calling himself “General Field Marshal Cinque”, which he pronounced “sin-kay”, and ending his fevered communications with “DEATH TO THE FASCIST INSECT THAT PREYS UPON THE LIFE OF THE PEOPLE”, this band of murderous bozos struck their first blow for black liberation by assassinating Marcus Foster, the first black superintendent of the Oakland, California, school system, for his “crimes against the people” of suggesting that police be called in to deal with violence in the city's schools and that identification cards be issued to students. Sought by the police for the murder, they struck again by kidnapping heiress, college student, and D-list celebrity Patty Hearst, whose abduction became front page news nationwide. If that wasn't sufficiently bizarre, the abductee eventually issued a statement saying she had chosen to “stay and fight”, adopting the name “Tania”, after the nom de guerre of a Cuban revolutionary and companion of Che Guevara. She was later photographed by a surveillance camera carrying a rifle during a San Francisco bank robbery perpetrated by the SLA. Hearst then went underground and evaded capture until September 1975; when being booked into jail, she gave her occupation as “Urban Guerrilla”. Hearst later claimed she had agreed to join the SLA and participate in its crimes only to protect her own life. She was convicted and sentenced to 35 years in prison, later reduced to 7 years. The sentence was commuted to 22 months by U.S. President Jimmy Carter; she was released in 1979 and received one of Bill Clinton's last-day-in-office pardons in January 2001. Six members of the SLA, including DeFreeze, died in a house fire during a shootout with the Los Angeles Police Department in May, 1974.

Violence committed in the name of independence for Puerto Rico was nothing new. In 1950, two radicals tried to assassinate President Harry Truman, and in 1954, four revolutionaries shot up the U.S. House of Representatives from the visitors' gallery, wounding five congressmen on the floor, none fatally. The Puerto Rican terrorists had the same problem as their Weatherman, BLA, or SLA bomber brethren: they lacked the support of the people. Most of the residents of Puerto Rico were perfectly happy being U.S. citizens, especially as this allowed them to migrate to the mainland to escape the endemic corruption and the poverty it engendered in the island. As the 1960s progressed, the Puerto Rico radicals increasingly identified with Castro's Cuba (which supported them ideologically, if not financially), and promised to make a revolutionary Puerto Rico a beacon of prosperity and liberty like Cuba had become.

Starting in 1974, a new Puerto Rican terrorist group, the Fuerzas Armadas de Liberación Nacional (FALN) launched a series of attacks in the U.S., most in the New York and Chicago areas. One bombing, that of the Fraunces Tavern in New York in January 1975, killed four people and injured more than fifty. Between 1974 and 1983, a total of more than 130 bomb attacks were attributed to the FALN, most against corporate targets. In 1975 alone, twenty-five bombs went off, around one every two weeks.

Other groups, such as the “New World Liberation Front” (NWLF) in northern California and “The Family” in the East continued the chaos. The NWLF, formed originally from remains of the SLA, detonated twice as many bombs as the Weather Underground. The Family carried out a series of robberies, including the deadly Brink's holdup of October 1981, and jailbreaks of imprisoned radicals.

In the first half of the 1980s, the radical violence sputtered out. Most of the principals were in prison, dead, or living underground and keeping a low profile. A growing prosperity had replaced the malaise and stagflation of the 1970s and there were abundant jobs for those seeking them. The Vietnam War and draft were receding into history, leaving the campuses with little to protest, and the remaining radicals had mostly turned from violent confrontation to burrowing their way into the culture, media, administrative state, and academia as part of Gramsci's “long march through the institutions”.

All of these groups were plagued with the “step two problem”. The agenda of Weatherman was essentially:

  1. Blow stuff up, kill cops, and rob banks.
  2. ?
  3. Proletarian revolution.

Other groups may have had different step threes: “Black liberation” for the BLA, “¡Puerto Rico libre!” for FALN, but none of them seemed to make much progress puzzling out step two. The best attempt came from SLA deep thinker Bill Harris, who advocated killing policemen at random on the theory that “If they killed enough, … the police would crack down on the oppressed minorities of the Bay Area, who would then rise up and begin the revolution.”—sure thing.

In sum, all of this violence and the suffering that resulted from it accomplished precisely none of the goals of those who perpetrated it (which is a good thing: they mostly advocated for one flavour or another of communist enslavement of the United States). All it managed to do was contribute to the constriction of personal liberty in the name of “security”, with metal detectors, bomb-sniffing dogs, X-ray machines, rent-a-cops, surveillance cameras, and the first round of airport security theatre springing up like mushrooms everywhere. The amount of societal disruption which can be caused by what amounted to around one hundred homicidal nutcases is something to behold. There were huge economic losses not just from the bombings themselves, but from evacuations prompted by bomb threats, many doubtless made by copycats motivated by nothing more political than the desire for a day off from work. Violations of civil liberties by the FBI and other law enforcement agencies, which carried out unauthorised wiretaps, burglaries, and other invasions of privacy and property rights, not only discredited them but resulted in many of the perpetrators of the mayhem walking away scot-free. Weatherman founders Bill Ayers and Bernardine Dohrn would, in 1995, launch the political career of Barack Obama at a meeting in their Chicago home; Ayers went on to become a Distinguished Professor at the University of Illinois at Chicago. Ayers, who bombed the U.S. Capitol in 1971 and the Pentagon in 1972, remarked in the 1980s that he was “Guilty as hell, free as a bird—America is a great country.”

This book is an excellent account of a largely-forgotten era in recent history. In a time when slaver radicals (a few of them the same people who set the bombs in their youth) declaim from the cultural heights of legacy media, academia, and their new strongholds in the technology firms which increasingly mediate our communications and access to information, advocate “active resistance”, “taking to the streets”, or “occupying” this or that, it's a useful reminder of where such action leads, and that it's wise to work out step two before embarking on step one.

December 2018 Permalink

Butler, Smedley D. War Is a Racket. San Diego, CA: Dauphin Publications, [1935] 2018. ISBN 978-1-939438-58-4.
Smedley Butler knew a thing or two about war. In 1898, a little over a month before his seventeenth birthday, he lied about his age and enlisted in the U.S. Marine Corps, which directly commissioned him a second lieutenant. After completing training, he was sent to Cuba, arriving shortly after the end of the Spanish-American War. Upon returning home, he was promoted to first lieutenant and sent to the Philippines as part of the American garrison. There, he led Marines in combat against Filipino rebels. In 1900 he was deployed to China during the Boxer Rebellion and was wounded in the Gaselee Expedition, being promoted to captain for his bravery.

He then served in the “Banana Wars” in Central America and the Caribbean. In 1914, during a conflict in Mexico, he carried out an undercover mission in support of a planned U.S. intervention. For his command in the battle of Veracruz, he was awarded the Medal of Honor. Next, he was sent to Haiti, where he commanded Marines and Navy troops in an attack on Fort Rivière in November 1915. For this action, he won a second Medal of Honor. To this day, he is one of only nineteen people to have been awarded the Medal of Honor twice.

In World War I he did not receive a combat command, but for his work in commanding the debarkation camp in France for American troops, he was awarded both the Army and Navy Distinguished Service Medals. Returning to the U.S. after the armistice, he became commanding general of the Marine training base at Quantico, Virginia. Between 1927 and 1929 he commanded the Marine Expeditionary Force in China, and returning to Quantico in 1929, he was promoted to Major General, then the highest rank available in the Marine Corps (which was subordinate to the Navy), becoming the youngest person in the Corps to attain that rank. He retired from the Marine Corps in 1931.

In this slim pamphlet (just 21 pages in the Kindle edition I read), Butler demolishes the argument that the U.S. military actions in which he took part in his 33 years as a Marine had anything whatsoever to do with the defence of the United States. Instead, he saw lives and fortune squandered on foreign adventures largely in the interest of U.S. business interests, with those funding and supplying the military banking large profits from the operation. With the introduction of conscription in World War I, the cynical exploitation of young men reached a zenith with draftees paid US$30 a month, with half taken out to support dependants, and another bite for mandatory insurance, leaving less than US$9 per month for putting their lives on the line. And then, in a final insult, there was powerful coercion to “invest” this paltry sum in “Liberty Bonds” which, after the war, were repaid well below the price of purchase and/or in dollars which had lost half their purchasing power.

Want to put an end to endless, futile, and tragic wars? Forget disarmament conferences and idealistic initiatives, Butler says,

The only way to smash this racket is to conscript capital and industry and labor before the nations [sic] manhood can be conscripted. One month before the Government can conscript the young men of the nation—it must conscript capital and industry. Let the officers and the directors and the high-powered executives of our armament factories and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted—to get $30 a month, the same wage as the lads in the trenches get.

Let the workers in these plants get the same wages—all the workers, all presidents, all directors, all managers, all bankers—yes, and all generals and all admirals and all officers and all politicians and all government office holders—everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches!

Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and majors [I think “mayors” was intended —JW] pay half their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds.

Why shouldn't they?

Butler goes on to recommend that any declaration of war require approval by a national plebiscite in which voting would be restricted to those subject to conscription in a military conflict. (Writing in 1935, he never foresaw that young men and women would be sent into combat without so much as a declaration of war being voted by Congress.) Further, he would restrict all use of military force to genuine defence of the nation, in particular, limiting the Navy to operating no more than 200 miles (320 km) from the coastline.

This is an impassioned plea against the folly of foreign wars from a man who spent his career as a warrior. One can argue that there is a legitimate interest in, say, assuring freedom of navigation in international waters, but looking back on the results of U.S. foreign wars in the 21st century, it is difficult to argue they can be justified any more than the “Banana Wars” Butler fought in his time.

August 2019 Permalink

Byers, Bruce K. Destination Moon. Washington: National Aeronautics and Space Administration, 1977. NASA TM X-3487.
In the mid-1960s, the U.S. Apollo lunar landing program was at the peak of its budget commitment and technical development. The mission mode had already been chosen and development of the flight hardware was well underway, along with the ground infrastructure required to test and launch it and the global network required to track missions in flight. One nettlesome problem remained. The design of the lunar module made assumptions about the properties of the lunar surface upon which it would alight. If the landing zone had boulders too large, craters so deep and numerous that the landing legs could not avoid them, or slopes steep enough to cause an upset on landing or a tip-over afterward, lunar landing missions would all be aborted by the crew when they reached decision height, judging there was no place they could set down safely. Even if all the crews returned safely without having landed, this would be an ignominious end to the ambitions of Project Apollo.

What was needed in order to identify safe landing zones was high-resolution imagery of the Moon. The most capable Earth-based telescopes, operating through Earth's turbulent and often murky atmosphere, produced images which resolved objects at best a hundred times larger than those which could upset a lunar landing mission. What was required was large-area, high-resolution mapping of the Moon and a survey of potential landing zones, which could only be done, given the technology of the 1960s, by going there, taking pictures, and returning them to Earth. So was born the Lunar Orbiter program, which in 1966 and 1967 sent lightweight photographic reconnaissance satellites into lunar orbit, providing not only the close-up imagery needed to select landing sites for the Apollo missions, but also mapping imagery which covered 99% of the near side of the Moon and 85% of the far side. In fact, Lunar Orbiter provided global imagery of the Moon more complete than that which would be available for the Earth itself until many years later.

Accomplishing this goal with the technology of the 1960s was no small feat. Electronic imaging amounted to analogue television, which, at the altitude of a lunar orbit, wouldn't produce images any better than telescopes on Earth. The first spy satellites were struggling to return film from Earth orbit, and returning film from the Moon was completely impossible given the mass budget of the launchers available. After a fierce competition, NASA contracted with Boeing to build the Lunar Orbiter, designed to fit on NASA's workhorse Atlas-Agena launcher, which seriously constrained its mass. Boeing subcontracted with Kodak to build the imaging system and RCA for the communications hardware which would relay the images back to Earth and allow the spacecraft to be controlled from the ground.

The images were acquired by a process which may seem absurd to those accustomed to present-day digital technologies but which seemed miraculous in its day. In lunar orbit, the spacecraft would aim its cameras (it had two: a mapping camera which produced overlapping wide-angle views and a high-resolution camera that photographed clips of each frame with a resolution of about one metre) at the Moon and take a series of photos. Because the film used had a very low light sensitivity (ASA [now ISO] 1.6), on low-altitude imaging passes the film would have to be moved to compensate for the motion of the spacecraft to avoid blurring. (The low light sensitivity of the film was due to its very high spatial resolution, but also reduced its likelihood of being fogged by exposure to cosmic rays or energetic particles from solar flares.)

After being exposed, the film would subsequently be processed on-board by putting it in contact with a band containing developer and fixer, and then the resulting negative would be read back for transmission to Earth by scanning it with a moving point of light, measuring the transmission through the negative, and sending the measured intensity back as an analogue signal. At the receiving station, that signal would be used to modulate the intensity of a spot of light scanned across film which, when developed and assembled into images from strips, revealed the details of the Moon. The incoming analogue signal was recorded on tape to provide a backup for the film recording process, but nothing was done with the tapes at the time. More about this later….

Five Lunar Orbiter missions were launched, and although some experienced problems, all achieved their primary mission objectives. The first three missions provided all of the data required by Apollo, so the final two could be dedicated to mapping the Moon from near-polar orbits. After the completion of their primary imaging missions, Lunar Orbiters continued to measure the radiation and micrometeoroid environment near the Moon, and contributed to understanding the Moon's gravitational field, which would be important in planning later Apollo missions that would fly in very low orbits around the Moon. On August 23rd, 1966, the first Lunar Orbiter took one of the most iconic pictures of the 20th century: Earthrise from the Moon. The problems experienced by Lunar Orbiter missions and the improvisation by ground controllers to work around them set the pattern for subsequent NASA robotic missions, with their versatile, reconfigurable flight hardware and fine-grained control from the ground.

You might think the story of Lunar Orbiter a footnote to space exploration history which has scrolled off the screen with subsequent Apollo lunar landings and high-resolution lunar mapping by missions such as Clementine and the Lunar Reconnaissance Orbiter, but that fails to take into account the exploits of 21st century space data archaeologists. Recall that I said that all of the image data from Lunar Orbiter missions was recorded on analogue tapes. These tapes contained about 10 bits of dynamic range, as opposed to the 8 bits which were preserved by the optical recording process used in receiving the images during the missions. This, combined with contemporary image processing techniques, makes for breathtaking images recorded almost half a century ago, but never seen before. Here are a document and video which record the exploits of the Lunar Orbiter Image Recovery Project (LOIRP). Please visit the LOIRP Web site for more restored images and details of the process of restoration.
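
To get a feel for what those two extra bits buy, here is a tiny Python sketch (my own illustration, not from the LOIRP documentation): 10-bit quantisation of the scanned signal preserves 1024 grey levels against 256 for 8 bits, so shadow and highlight detail that the original optical reconstruction posterised can be pulled back out of the tapes.

    # Compare 8-bit and 10-bit quantisation of the same idealised scanner signal.
    import numpy as np

    signal = np.linspace(0.0, 1.0, 10_001)        # idealised scanner output, 0..1

    as_8bit  = np.round(signal * 255) / 255       # 2**8  = 256 grey levels
    as_10bit = np.round(signal * 1023) / 1023     # 2**10 = 1024 grey levels

    print("distinct 8-bit grey levels: ", np.unique(as_8bit).size)    # 256
    print("distinct 10-bit grey levels:", np.unique(as_10bit).size)   # 1024
    print("worst-case error, 8-bit: ", np.abs(signal - as_8bit).max())
    print("worst-case error, 10-bit:", np.abs(signal - as_10bit).max())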

September 2014 Permalink

Byrd, Richard E. Alone. Washington: Island Press [1938, 1966] 2003. ISBN 978-1-55963-463-2.
To generations of Americans, Richard Byrd was the quintessential explorer of unknown terrain. First to fly over the North Pole (although this feat has been disputed from shortly after he claimed it to the present day), recipient of the Medal of Honor for this claimed exploit, pioneer in trans-Atlantic flight (although beaten by Lindbergh after a crash on a practice takeoff, he successfully flew from New York to France in June 1927), Antarctic explorer and first to fly over the South Pole, and leader of four more expeditions to the Antarctic, including commanding the operation which established the permanent base at the South Pole which remains there to this day.

In 1934, on his second Antarctic expedition, Byrd set up and manned a meteorological station on the Ross Ice Shelf south of 80°, in which he would pass the Antarctic winter—alone. He originally intended the station to be emplaced much further south and manned by three people (he goes into extensive detail about why “cabin fever” makes a two-man crew a prescription for disaster), and then, almost on a lark it seems from the narrative, decided, when forced by constraints of weather and the delivery of supplies for the winter, to go it alone. In anticipation, he welcomed the isolation from the distractions of daily events and the chance to catch up on reading, thinking, and listening to music.

His hut was well designed and buried in the ice to render it immune to the high winds and drifting snow of the Antarctic winter. It was well provisioned to survive the winter: food and fuel tunnels cached abundant supplies. Less well thought out were the stove and its ventilation. As winter set in, Byrd succumbed to carbon monoxide poisoning, made more severe by fumes from the gasoline generator he used to power the radio set which was his only link to those wintering at the Little America base on the coast.

Byrd comes across in this narrative as an extraordinarily complex character. One moment he's describing how his lamp failed when, at −52° C, its kerosene froze; the next he's recounting how easily the smallest mistake (losing sight of the flags leading back to shelter, or a jammed hatch back into the hut) can condemn one to despair and death by creeping cold; and then he goes all philosophical:

The dark side of a man's mind seems to be a sort of antenna tuned to catch gloomy thoughts from all directions. I found it so with mine. That was an evil night. It was as if all the world's vindictiveness were concentrated upon me as upon a personal enemy. I sank to depths of disillusionment which I had not believed possible. It would be tedious to discuss them. Misery, after all, is the tritest of emotions.

Here we have a U.S. Navy Rear Admiral, Medal of Honor winner, as gonzo journalist in the Antarctic winter—extraordinary. Have any other great explorers written so directly from the deepest recesses of their souls?

Byrd's complexity deepens further as he confesses to fabricating accounts of his well-being in radio reports to Little America, intended, he says, to prevent them from launching a rescue mission which he feared would end in failure and the deaths of those who undertook it. And yet Byrd's increasingly bizarre communications eventually caused such a mission to be launched, and once it was, his diary shows he pinned his entire hope upon its success.

If you've ever imagined yourself first somewhere, totally alone and living off the supplies you've brought with you: in orbit, on the Moon, on Mars, or beyond, here is a narrative of what it's really like to do that, told with brutal honesty by somebody who did. Admiral Byrd's recounting of his experience is humbling to any who aspire to the noble cause of exploration.

January 2013 Permalink

Cadbury, Deborah. Space Race. London: Harper Perennial, 2005. ISBN 0-00-720994-0.
This is an utterly compelling history of the early years of the space race, told largely through the parallel lives of mirror-image principals Sergei Korolev (anonymous Chief Designer of the Soviet space program and, before that, a slave labourer in Stalin's Gulag) and Wernher von Braun, celebrity driving force behind the U.S. push into space, previously a Nazi party member, SS officer, and user of slave labour to construct his A-4/V-2 weapons. Drawing upon material not declassified by the United States until the 1980s and revealed after the collapse of the Soviet Union, the book illuminates the early years of these prime movers of space exploration, along with how they were both exploited by, and deftly manipulated, their respective governments. I have never seen the story of the end-game between the British, Americans, and Soviets to spirit the V-2 hardware, technology, and team from Germany in the immediate post-surrender chaos told so well in a popular book. The extraordinary difficulties of trying to get things done in the Soviet command economy are also described superbly, and underline how inspired and indefatigable Korolev must have been to accomplish what he did.

Although the book covers the 1930s through the 1969 Moon landing, the main focus is on the competition between the U.S. and the Soviet Union between the end of World War II and the mid-1960s. Out of 345 pages of main text, the first 254 are devoted to the period ending with the flights of Yuri Gagarin and Alan Shepard in 1961. But then, that makes sense, given what we now know about the space race (and you'll know, if you don't already, after reading this book). Although nobody in the West knew at the time, the space race was really over when the U.S. made the massive financial commitment to Project Apollo and the Soviets failed to match it. Not only was Korolev compelled to work within budgets cut to half or less of his estimated requirements, the modest Soviet spending on space was divided among competing design bureaux whose chief designers engaged in divisive and counterproductive feuds. Korolev's N-1 Moon rocket used 30 first-stage engines designed by a jet engine designer with modest experience with rockets because Korolev and supreme Soviet propulsion designer Valentin Glushko were not on speaking terms, and he was forced to test the whole grotesque lash-up for the first time in flight, as there wasn't the money for a ground test stand for the complete first stage. Unlike the “all-up” testing of the Apollo-Saturn program, in which each individual component was exhaustively ground tested in isolation before being committed to flight, this approach didn't work. It wasn't just the Soviets who took risks in those wild and woolly days, however. When an apparent fuel leak threatened to delay the launch of Explorer-I, the U.S. reply to Sputnik, brass in the bunker asked for a volunteer “without any dependants” to go out and scope out the situation beneath the fully-fuelled rocket, possibly leaking toxic hydrazine (p. 175).

There are a number of factual goofs. I'm not sure the author fully understands orbital mechanics, which is, granted, a pretty geeky topic, but one which matters when you're writing about space exploration. She writes that the Jupiter C re-entry experiment reached a velocity (p. 154) of 1600 mph (actually 16,000 mph), that Yuri Gagarin's Vostok capsule orbited (p. 242) at 28,000 mph (actually 28,000 km/h), and that if Apollo 8's service module engine had failed to fire after arriving at the Moon (p. 325), the astronauts “would sail on forever, lost in space” (actually, they were on a “free return” trajectory, which would have taken them back to Earth even if the engine failed—the critical moment was actually when they fired the same engine to leave lunar orbit on Christmas Day 1968, whose success prompted James Lovell to radio, after emerging from behind the Moon, “Please be informed, there is a Santa Claus”). Orbital attitude (the orientation of the craft) is confused with altitude (p. 267), and retro-rockets are described as “breaking rockets” (p. 183)—let's hope not! While these and other quibbles will irk space buffs, they shouldn't deter you from enjoying this excellent narrative.

A U.S. edition is now available. The author earlier worked on the production of a BBC docu-drama also titled Space Race, which is now available on DVD. Note, however, that this is a PAL DVD with a region code of 2, and will not play unless you have a compatible DVD player and television; I have not seen this programme.

October 2007 Permalink

[Audiobook] Caesar, Gaius Julius and Aulus Hirtius. The Commentaries. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [ca. 52–51 B.C., ca. 45 B.C.] 2004. ISBN 1-929718-44-6.
This audiobook is an unabridged reading of English translations of Caesar's commentaries on the Gallic (Commentarii de Bello Gallico) and Civil (Commentarii de Bello Civili) wars between 58 and 48 B.C. (The eighth book of the Gallic wars commentary, covering the minor campaigns of 51 B.C., was written by his friend Aulus Hirtius after Caesar's assassination.) The recording is based upon the rather eccentric Rex Warner translation, which is now out of print. In the original Latin text, Caesar always referred to himself in the third person, as “Caesar”. Warner rephrased the text (with the exception of the book written by Hirtius) as a first person narrative. For example, the first sentence of paragraph I.25 of The Gallic Wars:
Caesar primum suo, deinde omnium ex conspectu remotis equis, ut aequato omnium periculo spem fugae tolleret, cohortatus suos proelium commisit.
in Latin, is conventionally translated into English as something like this (from the rather stilted 1869 translation by W. A. McDevitte and W. S. Bohn):
Caesar, having removed out of sight first his own horse, then those of all, that he might make the danger of all equal, and do away with the hope of flight, after encouraging his men, joined battle.
but the Warner translation used here renders this as:
I first of all had my own horse taken out of the way and then the horses of other officers. I wanted the danger to be the same for everyone, and for no one to have any hope of escape by flight. Then I spoke a few words of encouragement to the men before joining battle.   [1:24:17–30]
Now, whatever violence this colloquial translation does to the authenticity of Caesar's spare and eloquent Latin, from a dramatic standpoint it works wonderfully with the animated reading of award-winning narrator Charlton Griffin; the listener has the sense of being across the table in a tavern from GJC as he regales all present with his exploits.

This is “just the facts” war reporting. Caesar viewed this work not as history, but rather as raw material for historians of the future. There is little discussion of grand strategy or, even in the commentaries on the civil war, of the political conflict which provoked the military confrontation between Caesar and Pompey. While these despatches doubtless served as propaganda on Caesar's part, he writes candidly of his own errors and the cost of the defeats they occasioned. (Of course, since these are the only extant accounts of most of these events, there's no way to be sure there isn't some Caesarian spin in his presentation, but since these commentaries were published in Rome, which received independent reports from officers and literate legionaries in Caesar's armies, it's unlikely he would have risked embellishing too much.)

Two passages of unknown length in the final book of the Civil war commentaries have been lost—these are handled by the reader stopping in mid-sentence, with another narrator explaining the gap and the historical consensus of the events in the lost text.

This audiobook is distributed in three parts, totalling 16 hours and 40 minutes. That's a big investment of time in the details of battles which took place more than two thousand years ago, but I'll confess I found it fascinating, especially since some of the events described took place within sight of where I take the walks on which I listened to this recording over several weeks. An Audio CD edition is available.

August 2007 Permalink

Cahill, Thomas. Sailing the Wine-Dark Sea: Why the Greeks Matter. New York: Doubleday, 2003. ISBN 0-385-49553-6.

November 2003 Permalink

Carlson, W. Bernard. Tesla: Inventor of the Electrical Age. Princeton: Princeton University Press, 2013. ISBN 978-0-691-16561-5.
Nikola Tesla was born in 1856 in a village in what is now Croatia, then part of the Austrian Empire. His father and grandfather were both priests in the Orthodox church. The family was of Serbian descent, but had lived in Croatia since the 1690s among a community of other Serbs. His parents wanted him to enter the priesthood and enrolled him in school to that end. He excelled in mathematics and, building on a boyhood fascination with machines and tinkering, wanted to pursue a career in engineering. After completing high school, Tesla returned to his village, where he contracted cholera and was near death. His father promised him that if he survived, he would “go to the best technical institution in the world.” After nine months of illness, Tesla recovered and, in 1875, entered the Joanneum Polytechnic School in Graz, Austria.

Tesla's university career started out brilliantly, but he came into conflict with one of his physics professors over the feasibility of designing a motor which would operate without the troublesome and unreliable commutator and brushes of existing motors. He became addicted to gambling, lost his scholarship, and dropped out in his third year. He worked as a draftsman, taught in his old high school, and eventually ended up in Prague, intending to continue his study of engineering at the Karl-Ferdinand University. He took a variety of courses, but eventually his uncles withdrew their financial support.

Tesla then moved to Budapest, where he found employment as chief electrician at the Budapest Telephone Exchange. He quickly distinguished himself as a problem solver and innovator and, before long, came to the attention of the Continental Edison Company of France, which had designed the equipment used in Budapest. He was offered and accepted a job at their headquarters in Ivry, France. Most of Edison's employees had practical, hands-on experience with electrical equipment, but lacked Tesla's formal education in mathematics and physics. Before long, Tesla was designing dynamos for lighting plants and earning a handsome salary. With his language skills (by that time, Tesla was fluent in Serbian, German, and French, and was improving his English), the Edison company sent him into the field as a trouble-shooter. This further increased his reputation and, in 1884 he was offered a job at Edison headquarters in New York. He arrived and, years later, described the formalities of entering the U.S. as an immigrant: a clerk saying “Kiss the Bible. Twenty cents!”.

Tesla had never abandoned the idea of a brushless motor. Almost all electric lighting systems in the 1880s used direct current (DC): electrons flowed in only one direction through the distribution wires. This is the kind of current produced by batteries, and the first electrical generators (dynamos) produced direct current by means of a device called a commutator. As the generator is turned by its power source (for example, a steam engine or water wheel), power is extracted from the rotating commutator by fixed brushes which press against it. The contacts on the commutator are wired to the coils in the generator in such a way that a constant direct current is maintained. When direct current is used to drive a motor, the motor must also contain a commutator which converts the direct current into a reversing flow to maintain the motor in rotary motion.

Commutators, with brushes rubbing against them, are inefficient and unreliable. Brushes wear and must eventually be replaced, and as the commutator rotates and the brushes make and break contact, sparks may be produced which waste energy and degrade the contacts. Further, direct current has a major disadvantage for long-distance power transmission. There was, at the time, no way to efficiently change the voltage of direct current. This meant that the circuit from the generator to the user of the power had to run at the same voltage the user received, say 120 volts. But at such a voltage, resistance losses in copper wires are such that over long runs most of the energy would be lost in the wires, not delivered to customers. You can increase the size of the distribution wires to reduce losses, but before long this becomes impractical due to the cost of copper it would require. As a consequence, Edison electric lighting systems installed in the 19th century had many small powerhouses, each supplying a local set of customers.

Alternating current (AC) solves the problem of power distribution. In 1881 the electrical transformer had been invented, and by 1884 high-efficiency transformers were being manufactured in Europe. Powered by alternating current (they don't work with DC), a transformer efficiently converts power from one combination of voltage and current to another. For example, power might be transmitted from the generating station to the customer at 12000 volts and 1 ampere, then stepped down to 120 volts and 100 amperes by a transformer at the customer location. Resistive loss in a wire depends on the square of the current, not on the voltage, so for a given level of transmission loss, the cables distributing power at 12000 volts need only a small fraction of the copper required at 120 volts. For electric lighting, alternating current works just as well as direct current (as long as the frequency of the alternating current is sufficiently high that lamps do not flicker). But electricity was increasingly used to power motors, replacing steam power in factories. All existing practical motors ran on DC, so this was seen as an advantage to Edison's system.
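
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python (my own illustration, not from the book; the 10 kW load and half-ohm line resistance are arbitrary assumed values):

    # Resistive line loss when delivering the same power at two different voltages.
    # P = V * I, so I = P / V; the loss in the wires is I**2 * R.
    def line_loss(power_w, volts, line_resistance_ohm):
        current = power_w / volts
        return current ** 2 * line_resistance_ohm

    load_w = 10_000   # 10 kW delivered to customers (assumed)
    r_line = 0.5      # round-trip resistance of the distribution wires, ohms (assumed)

    for volts in (120, 12_000):
        print(f"{volts:>6} V: current {load_w / volts:8.2f} A, "
              f"wire loss {line_loss(load_w, volts, r_line):8.2f} W")
    # 120 V loses about 3472 W in the wires; 12000 V loses about 0.35 W,
    # a factor of (12000/120)**2 = 10,000: the square-of-current effect.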

Tesla worked only six months for Edison. After developing an arc lighting system only to have Edison put it on the shelf after acquiring the rights to a system developed by another company, he quit in disgust. He then continued to work on an arc light system in New Jersey, but the company to which he had licensed his patents failed, leaving him only with a worthless stock certificate. To support himself, Tesla worked repairing electrical equipment and even digging ditches, where one of his foremen introduced him to Alfred S. Brown, who had made his career in telegraphy. Tesla showed Brown one of his patents, for a “thermomagnetic motor”, and Brown contacted Charles F. Peck, a lawyer who had made his fortune in telegraphy. Together, Peck and Brown saw the potential for the motor and other Tesla inventions and in April 1887 founded the Tesla Electric Company, with its laboratory in Manhattan's financial district.

Tesla immediately set out to make his dream of a brushless AC motor a practical reality and, by using multiple AC currents, out of phase with one another (the polyphase system), he was able to create a magnetic field which itself rotated. The rotating magnetic field induced a current in the rotating part of the motor, which would start and turn without any need for a commutator or brushes. Tesla had invented what we now call the induction motor. He began to file patent applications for the motor and the polyphase AC transmission system in the fall of 1887, and by May of the following year had been granted a total of seven patents on various aspects of the motor and polyphase current.
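
For readers who want to see the rotating-field trick in numbers, here is a minimal numerical sketch in Python (my own illustration of the principle, not code or figures from the book): three coils whose axes are 120° apart, fed with currents 120° apart in electrical phase, sum to a field of constant magnitude that rotates once per cycle of the supply.

    # Sum the field contributions of three coils, spatially and electrically 120 degrees apart.
    import numpy as np

    angles = np.radians([0.0, 120.0, 240.0])   # orientation of each coil's axis
    phases = np.radians([0.0, 120.0, 240.0])   # electrical phase of the current in each coil

    t = np.linspace(0.0, 1.0, 1000)            # one electrical cycle, arbitrary time units
    omega = 2.0 * np.pi

    bx = np.zeros_like(t)
    by = np.zeros_like(t)
    for theta, phi in zip(angles, phases):
        current = np.cos(omega * t - phi)      # sinusoidal current in this coil
        bx += current * np.cos(theta)          # field contribution along x
        by += current * np.sin(theta)          # field contribution along y

    magnitude = np.hypot(bx, by)
    rotation = np.degrees(np.unwrap(np.arctan2(by, bx)))

    print("field magnitude, min and max:", magnitude.min(), magnitude.max())   # both about 1.5
    print("rotation over one cycle:", rotation[-1] - rotation[0], "degrees")   # about 360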

One disadvantage of the polyphase system and motor was that it required multiple pairs of wires to transmit power from the generator to the motor, which increased cost and complexity. Also, existing AC lighting systems, which were beginning to come into use, primarily in Europe, used a single phase and two wires. Tesla invented the split-phase motor, which would run on a two wire, single phase circuit, and this was quickly patented.

Unlike Edison, who had built an industrial empire based upon his inventions, Tesla, Peck, and Brown had no interest in founding a company to manufacture Tesla's motors. Instead, they intended to shop around and license the patents to an existing enterprise with the resources required to exploit them. George Westinghouse had developed his inventions of air brakes and signalling systems for railways into a successful and growing company, and was beginning to compete with Edison in the electric light industry, installing AC systems. Westinghouse was a prime prospect to license the patents, and in July 1888 a deal was concluded for cash, notes, and a royalty for each horsepower of motors sold. Tesla moved to Pittsburgh, where he spent a year working in the Westinghouse research lab improving the motor designs. While there, he filed an additional fifteen patent applications.

After leaving Westinghouse, Tesla took a trip to Europe where he became fascinated with Heinrich Hertz's discovery of electromagnetic waves. Produced by alternating current at frequencies much higher than those used in electrical power systems (Hertz used a spark gap to produce them), here was a demonstration of transmission of electricity through thin air—with no wires at all. This idea was to inspire much of Tesla's work for the rest of his life. By 1891, he had invented a resonant high frequency transformer which we now call a Tesla coil, and before long was performing spectacular demonstrations of artificial lightning, illuminating lamps at a distance without wires, and demonstrating new kinds of electric lights far more efficient than Edison's incandescent bulbs. Tesla's reputation as an inventor was equalled by his talent as a showman in presentations before scientific societies and the public in both the U.S. and Europe.

Oddly, for someone with Tesla's academic and practical background, there is no evidence that he mastered Maxwell's theory of electromagnetism. He believed that the phenomena he observed with the Tesla coil and other apparatus were not due to the Hertzian waves predicted by Maxwell's equations, but rather something he called “electrostatic thrusts”. He was later to build a great edifice of mistaken theory on this crackpot idea.

By 1892, plans were progressing to harness the hydroelectric power of Niagara Falls. Transmission of this power to customers was central to the project: around one fifth of the American population lived within 400 miles of the falls. Westinghouse bid Tesla's polyphase system and, with Tesla's help in persuading the committee charged with evaluating proposals, was awarded the contract in 1893. By November of 1896, power from Niagara reached Buffalo, twenty miles away, and over the next decade extended throughout New York. The success of the project made polyphase power transmission the technology of choice for most electrical distribution systems, and it remains so to this day. In 1895, the New York Times wrote:

Even now, the world is more apt to think of him as a producer of weird experimental effects than as a practical and useful inventor. Not so the scientific public or the business men. By the latter classes Tesla is properly appreciated, honored, perhaps even envied. For he has given to the world a complete solution of the problem which has taxed the brains and occupied the time of the greatest electro-scientists for the last two decades—namely, the successful adaptation of electrical power transmitted over long distances.

After the Niagara project, Tesla continued to invent, demonstrate his work, and obtain patents. With the support of patrons such as John Jacob Astor and J. P. Morgan, he pursued his work on wireless transmission of power at laboratories in Colorado Springs and Wardenclyffe on Long Island. He continued to be featured in the popular press, amplifying his public image as an eccentric genius and mad scientist. Tesla lived until 1943, dying at the age of 86 of a heart attack. Over his life, he obtained around 300 patents for devices as varied as a new form of turbine, a radio controlled boat, and a vertical takeoff and landing airplane. He speculated about wireless worldwide distribution of news to personal mobile devices and directed energy weapons to defeat the threat of bombers. While in Colorado, he believed he had detected signals from extraterrestrial beings. In his experiments with high voltage, he accidentally detected X-rays before Röntgen announced their discovery, but he didn't understand what he had observed.

None of these inventions had any practical consequences. The centrepiece of Tesla's post-Niagara work, the wireless transmission of power, was based upon a flawed theory of how electricity interacts with the Earth. Tesla believed that the Earth was filled with electricity and that if he pumped electricity into it at one point, a resonant receiver anywhere else on the Earth could extract it, just as if you pump air into a soccer ball, it can be drained out by a tap elsewhere on the ball. This is, of course, complete nonsense, as his contemporaries working in the field knew, and said, at the time. While Tesla continued to garner popular press coverage for his increasingly bizarre theories, he was ignored by those who understood they could never work. Undeterred, Tesla proceeded to build an enormous prototype of his transmitter at Wardenclyffe, intended to span the Atlantic, without ever, for example, constructing a smaller-scale facility to verify his theories over a distance of, say, ten miles.

Tesla's inventions of polyphase current distribution and the induction motor were central to the electrification of nations and continue to be used today. His subsequent work was increasingly unmoored from the growing theoretical understanding of electromagnetism, and many of his ideas could not have worked. The turbine worked, but was uncompetitive with the fabrication and materials of the time. The radio controlled boat was clever, but was far from the magic bullet to defeat the threat of the battleship he claimed it to be. The particle beam weapon (death ray) was a fantasy.

In recent decades, Tesla has become a magnet for Internet-connected crackpots, who have woven elaborate fantasies around his work. Finally, in this book, written by a historian of engineering and based upon original sources, we have an authoritative and unbiased look at Tesla's life, his inventions, and their impact upon society. You will understand not only what Tesla invented, but why, and how the inventions worked. The flaky aspects of his life are here as well, but never mocked; inventors have to think ahead of accepted knowledge, and sometimes they will inevitably get things wrong.

February 2016 Permalink

Chambers, Whittaker. Witness. Washington: Regnery Publishing, [1952] 2002. ISBN 0-89526-789-6.

September 2003 Permalink

Chancellor, Henry. Colditz. New York: HarperCollins, 2001. ISBN 0-06-001252-8.

March 2003 Permalink

Chertok, Boris E. Rockets and People. Vol. 1. Washington: National Aeronautics and Space Administration, [1999] 2005. ISBN 978-1-4700-1463-6 NASA SP-2005-4110.
This is the first book of the author's monumental four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Chertok saw it all: the earliest Soviet experiments with rocketry in the 1930s, the uncovering of the secrets of the German V-2 amid the rubble of postwar Germany (he was the director of the Institute RABE, where German and Soviet specialists worked side by side laying the foundations of postwar Soviet rocketry), the glory days of Sputnik and Gagarin, the anguish of losing the Moon race, and the emergence of Soviet preeminence in long-duration space station operations.

The first volume covers Chertok's career up to the conclusion of his work in Germany in 1947. Unlike Challenge to Apollo, which is a scholarly institutional and technical history (and consequently rather dry reading), Chertok's memoir gives you a visceral sense of what it was like to be there: sometimes chilling, as in his account of the 1930s, where he matter-of-factly describes his supervisors and colleagues as having been shot or sent to Siberia just as an employee in the West would speak of somebody being transferred to another office, and occasionally funny, as when he recounts the story of the imperious Valentin Glushko showing up at his door in a car belching copious smoke. It turns out that Glushko had driven all the way with the handbrake on, and his subordinate hadn't dared mention it because Glushko didn't like to be distracted when at the wheel.

When the Soviets began to roll out their space spectaculars in the late 1950s and early '60s, some in the West attributed their success to the Soviets having gotten the “good German” rocket scientists while the West ended up with the second team. Chertok's memoir puts an end to such speculation. By the time the Americans and British vacated the V-2 production areas, they had packed up and shipped out hundreds of rail cars of V-2 missiles and components and captured von Braun and all of his senior staff, who delivered extensive technical documentation as part of their surrender. This left the Soviets with pretty slim pickings, and Chertok and his staff struggled to find components, documents, and specialists left behind. This put them at a substantial disadvantage compared to the U.S., but forced them to reverse-engineer German technology and train their own people in the disciplines of guided missilery rather than rely upon a German rocket team.

History owes a great debt to Boris Chertok not only for the achievements in his six decade career (for which he was awarded Hero of Socialist Labour, the Lenin Prize, the Order of Lenin [twice], and the USSR State Prize), but for living so long and undertaking to document the momentous events he experienced at the first epoch at which such a candid account was possible. Only after the fall of the Soviet Union could the events chronicled here be freely discussed, and the merits and shortcomings of the Soviet system in accomplishing large technological projects be weighed.

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. Words are occasionally run together and capitalisation is missing on some proper nouns. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

May 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 2. Washington: National Aeronautics and Space Administration, [1999] 2006. ISBN 978-1-4700-1508-4 NASA SP-2006-4110.
This is the second book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of Chertok's chronicle begins with his return from Germany to the Soviet Union, where he discovers, to his dismay, that day-to-day life in the victorious workers' state is much harder than in the land of the defeated fascist enemy. He becomes part of the project, mandated by Stalin, to first launch captured German V-2 missiles and then produce an exact Soviet copy, designated the R-1. Chertok and his colleagues discover that making a copy of foreign technology may be more difficult than developing it from scratch—the V-2 used a multitude of steel and non-ferrous metal alloys, as well as numerous non-metallic components (seals, gaskets, insulation, etc.) which were not produced by Soviet industry. But without the experience of the German rocket team (which, by this time, was in the United States), there was no way to know whether the choice of a particular material was because its properties were essential to its function or simply because it was readily available in Germany. Thus, making an “exact copy” involved numerous difficult judgement calls where the designers had to weigh the risk of deviation from the German design against the cost of standing up a Soviet manufacturing capacity which might prove unnecessary.

After the difficult start which is the rule for missile projects, the Soviets managed to turn the R-1 into a reliable missile and, through patience and painstaking analysis of telemetry, solved a mystery which had baffled the Germans: why between 10% and 20% of V-2 warheads had detonated in a useless airburst high above the intended target. Chertok's instrumentation proved that the cause was aerodynamic heating during re-entry which caused the high explosive warhead to outgas, deform, and trigger the detonator.

As the Soviet missile program progresses, Chertok is a key player, participating in the follow-on R-2 project (essentially a Soviet Redstone—a V-2 derivative, but entirely of domestic design), the R-5 (an intermediate range ballistic missile eventually armed with nuclear warheads), and the R-7, the world's first intercontinental ballistic missile, which launched Sputnik, Gagarin, and whose derivatives remain in service today, providing the only crewed access to the International Space Station as of this writing.

Not only did the Soviet engineers have to build ever larger and more complicated hardware, they essentially had to invent the discipline of systems engineering all by themselves. While even in aviation it is often possible to test components in isolation and then integrate them into a vehicle, working out interface problems as they manifest themselves, in rocketry everything interacts, and when something goes wrong, you have only the telemetry and wreckage upon which to base your diagnosis. Consider: a rocket ascending may have natural frequencies in its tankage structure excited by vibration due to combustion instabilities in the engine. This can, in turn, cause propellant delivery to the engine to oscillate, which will cause pulses in thrust, which can cause further structural stress. These excursions may cause control actuators to be over-stressed and possibly fail. When all you have to go on is a ragged cloud in the sky, bits of metal raining down on the launch site, and some telemetry squiggles for a second or two before everything went pear shaped, it can be extraordinarily difficult to figure out what went wrong. And none of this can be tested on the ground. Only a complete systems approach can begin to cope with problems like this, and building that kind of organisation required a profound change in Soviet institutions, which had previously been built around imperial chief designers with highly specialised missions. When everything interacts, you need a different structure, and it was part of the genius of Sergei Korolev to create it. (Korolev, who was the author's boss for most of the years described here, is rightly celebrated as a great engineer and champion of missile and space projects, but in Chertok's view at least equally important was his talent in quickly evaluating the potential of individuals and filling jobs with the people [often improbable candidates] best able to do them.)
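
To make the flavour of that feedback loop concrete, here is a toy simulation of my own devising; nothing like it appears in the book, and every number in it is an invented, illustrative assumption. It models a single lightly damped structural mode and feeds a thrust perturbation back in proportion to the structure's vibration. Once the feedback gain exceeds the structural damping, a tiny initial disturbance grows without bound, which is the sort of behaviour that can never be provoked on a component test stand and only shows up when the whole system flies together.

    import math

    def simulate(feedback_gain, steps=20000, dt=0.0005):
        """Semi-implicit Euler integration of a single structural mode,
        x'' + 2*zeta*omega*x' + omega**2*x = feedback_gain*x',
        returning the peak displacement reached over the run."""
        omega = 2 * math.pi * 20.0   # assumed 20 Hz structural mode
        zeta = 0.01                  # assumed 1% structural damping
        x, v = 1e-3, 0.0             # small initial disturbance
        peak = abs(x)
        for _ in range(steps):
            # thrust perturbation crudely modelled as proportional to velocity
            a = -2 * zeta * omega * v - omega**2 * x + feedback_gain * v
            v += a * dt              # update velocity first (semi-implicit Euler)
            x += v * dt              # then position, using the new velocity
            peak = max(peak, abs(x))
        return peak

    # The stability boundary in this toy model is feedback_gain = 2*zeta*omega,
    # roughly 2.5 here: below it the disturbance dies away, above it it grows.
    for gain in (0.0, 2.0, 4.0, 6.0):
        print(f"gain={gain:4.1f}  peak displacement={simulate(gain):.3g}")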

In this book we see the transformation of the Soviet missile program from slavishly copying German technology to world-class innovation, producing, in short order, the first ICBM, earth satellite, lunar impact, images of the lunar far side, and interplanetary probes. The missile men found themselves vaulted from an obscure adjunct of Red Army artillery to the vanguard of Soviet prestige in the world, with the Soviet leadership urging them on to ever greater exploits.

There is a tremendous amount of detail here—so much that some readers have deemed it tedious: I found it enlightening. The author dissects the Nedelin disaster in forensic detail, as well as the much less known 1980 catastrophe at Plesetsk where 48 died because a component of the rocket used the wrong kind of solder. Rocketry is an exacting business, and it is a gift to generations about to embark upon it to imbibe the wisdom of one who was present at its creation and learned, by decades of experience, just how careful one must be to succeed at it. I could go on regaling you with anecdotes from this book but, hey, if you've made it this far, you're probably going to read it yourself, so what's the point? (But if you do, I'd suggest you read Volume 1 [May 2012] first.)

As with all NASA publications, the work is in the public domain, and an online PDF edition is available.

A Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a workable PDF application on your reading device, I'd go with the NASA PDF, which is not only better formatted but free.

The original Russian edition is available online.

August 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 3. Washington: National Aeronautics and Space Administration, [1999] 2009. ISBN 978-1-4700-1437-7 NASA SP-2009-4110.
This is the third book of the author's four-volume autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

Volume 2 of this memoir chronicled the achievements which thrust the Soviet Union's missile and space program into the consciousness of people world-wide and sparked the space race with the United States: the development of the R-7 ICBM, Sputnik and its successors, and the first flights which photographed the far side of the Moon and impacted on its surface. In this volume, the author describes the projects and accomplishments which built upon this base and persuaded many observers of the supremacy of Soviet space technology. Since the author's speciality was control systems and radio technology, he had an almost unique perspective upon these events: unlike other designers who focussed upon one or a few projects, he was involved in almost all of the principal efforts, from intermediate range, intercontinental, and submarine-launched ballistic missiles; air and anti-missile defence; piloted spaceflight; reconnaissance, weather, and navigation satellites; communication satellites; deep space missions and the ground support for them; soft landing on the Moon; and automatic rendezvous and docking. He was present when it looked like the rudimentary R-7 ICBM might be launched in anger during the Cuban missile crisis, at the table as chief designers battled over whether combat missiles should use cryogenic or storable liquid propellants or solid fuel, and sat on endless boards of inquiry after mission failures—the first eleven attempts to soft-land on the Moon failed, and Chertok was there for each launch, subsequent tracking, and sorting through what went wrong.

This was a time of triumph for the Soviet space program: the first manned flight, endurance record after endurance record, dual flights, the first woman in space, the first flight with a crew of more than one, and the first spacewalk. But from Chertok's perspective inside the programs, and with the freedom he had to write candidly in the 1990s about his experiences, it is clear that the seeds of tragedy were being sown. With the quest for one spectacular after another, each surpassing the last, the Soviets became infected with what NASA came to call “go fever”—a willingness to brush anomalies under the rug and normalise the abnormal because you'd gotten away with it before.

One of the most stunning examples of this is Gagarin's flight. The Vostok spacecraft consisted of a spherical descent module (basically a cannonball covered with ablative thermal protection material) and an instrument compartment containing the retro-rocket, attitude control system, and antennas. After firing the retro-rocket, the instrument compartment was supposed to separate, allowing the descent module's heat shield to protect it through atmospheric re-entry. (The Vostok performed a purely ballistic re-entry, and had no attitude control thrusters in the descent module; stability was maintained exclusively by an offset centre of gravity.) In the two unmanned test flights which preceded Gagarin's mission, the instrument module had failed to cleanly separate from the descent module, but the connection burned through during re-entry and the descent module survived. Gagarin was launched in a spacecraft with the same design, and the same thing happened: there were wild oscillations, but after the link burned through his spacecraft stabilised. Astonishingly, Vostok 2 was launched with Gherman Titov on board with precisely the same flaw, and suffered the same failure during re-entry. Once again, the cosmonaut won this orbital game of Russian roulette. One wonders what lessons were learned from this. In this narrative, Chertok is simply aghast at the decision making here, but one gets the sense that you had to be there, then, to appreciate what was going through people's heads.

The author was extensively involved in the development of the first Soviet communications satellite, Molniya, and provides extensive insights into its design, testing, and early operations. It is often said that the Molniya orbit was chosen because it made the satellite visible from the Soviet far North where geostationary satellites would be too close to the horizon for reliable communication. It is certainly true that today this orbit continues to be used for communications with Russian arctic territories, but its adoption for the first Soviet communications satellite had an entirely different motivation. Due to the high latitude of the Soviet launch site in Kazakhstan, Korolev's R-7 derived booster could place only about 100 kilograms into a geostationary orbit, which was far too little for a communication satellite with the technology of the time, but it could loft 1,600 kilograms into a high-inclination Molniya orbit. The only alternative would have been for Korolev to have approached Chelomey to launch a geostationary satellite on his UR-500 (Proton) booster, which was unthinkable because at the time the two were bitter rivals. So much for the frictionless efficiency of central planning!
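
For a sense of what a Molniya orbit actually is, here is a back-of-the-envelope calculation of my own (it is not in the book), using only Kepler's third law and standard constants; the perigee altitude is an assumed round number. A semi-synchronous period pushes the apogee out to nearly 40,000 kilometres, and at the critical inclination of about 63.4 degrees the apogee stays parked over the northern hemisphere, where the satellite loiters for most of each twelve-hour revolution.

    import math

    MU_EARTH = 398600.4418        # km^3/s^2, Earth's gravitational parameter
    R_EARTH = 6378.137            # km, equatorial radius
    PERIOD = 0.5 * 86164.1        # s, half a sidereal day (semi-synchronous)

    # Kepler's third law: a = (mu * T^2 / (4 * pi^2))^(1/3)
    a = (MU_EARTH * PERIOD**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

    perigee_alt = 600.0           # km, assumed; actual Molniya perigees varied
    r_perigee = R_EARTH + perigee_alt
    r_apogee = 2 * a - r_perigee  # for an ellipse, r_apogee + r_perigee = 2a
    ecc = (r_apogee - r_perigee) / (r_apogee + r_perigee)

    print(f"semi-major axis : {a:8.0f} km")
    print(f"apogee altitude : {r_apogee - R_EARTH:8.0f} km")
    print(f"eccentricity    : {ecc:.3f}")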

In engineering, one learns that every corner cut will eventually come back to cut you. Korolev died at just the time he was most needed by the Soviet space program due to a botched operation for a routine condition performed by a surgeon who had spent most of his time as a Minister of the Soviet Union and not in the operating room. Gagarin died in a jet fighter training accident which has been the subject of such an extensive and multi-layered cover-up and spin that the author simply cites various accounts and leaves it to the reader to judge. Komarov died in Soyuz 1 due to a parachute problem which would have been discovered had an unmanned flight preceded his. He was a victim of “go fever”.

There is so much insight and wisdom here I cannot possibly summarise it all; you'll have to read this book to fully appreciate it, ideally after having first read Volume 1 (May 2012) and Volume 2 (August 2012). Apart from the unique insider's perspective on the Soviet missile and space program, as a person elected a corresponding member of the Soviet Academy of Sciences in 1968 and a full member (academician) of the Russian Academy of Sciences in 2000, he provides a candid view of the politics of selection of members of the Academy and how they influence policy and projects at the national level. Chertok believes that, even as one who survived Stalin's purges, there were merits to the Soviet system which have been lost in the “new Russia”. His observations are worth pondering by those who instinctively believe the market will always converge upon the optimal solution.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is perfectly readable but rather cheaply produced. Footnotes simply appear in the text in-line somewhere after the reference, set in small red type. The index references page numbers from the print edition which are not included in the Kindle version, and hence are completely useless. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They are not only better formatted but free.

The original Russian edition is available online.

December 2012 Permalink

Chertok, Boris E. Rockets and People. Vol. 4. Washington: National Aeronautics and Space Administration, [1999] 2011. ISBN 978-1-4700-1437-7 NASA SP-2011-4110.
This is the fourth and final book of the author's autobiographical history of the Soviet missile and space program. Boris Chertok was a survivor, living through the Bolshevik revolution, the Russian civil war, Stalin's purges of the 1930s, World War II, all of the postwar conflict between chief designers and their bureaux and rival politicians, and the collapse of the Soviet Union. Born in Poland in 1912, he died in 2011 in Moscow. As he says in this volume, “I was born in the Russian Empire, grew up in Soviet Russia, achieved a great deal in the Soviet Union, and continue to work in Russia.” After retiring from the RKK Energia organisation in 1992 at the age of 80, he wrote this work between 1994 and 1999. Originally published in Russian in 1999, this annotated English translation was prepared by the NASA History Office under the direction of Asif A. Siddiqi, author of Challenge to Apollo (April 2008), the definitive Western history of the Soviet space program.

This work covers the Soviet manned lunar program and the development of long-duration space stations and orbital rendezvous, docking, and assembly. As always, Chertok was there, and participated in design and testing, was present for launches and in the control centre during flights, and all too often participated in accident investigations.

In retrospect, the Soviet manned lunar program seems almost bizarre. It did not begin in earnest until two years after NASA's Apollo program was underway, and while the Gemini and Apollo programs were a step-by-step process of developing and proving the technologies and operational experience for lunar missions, the Soviet program was a chaotic bag of elements seemingly driven more by the rivalries of the various chief designers than by a coherent plan for getting to the Moon. First of all, there were two manned lunar programs, each using entirely different hardware and mission profiles. The Zond program used a modified Soyuz spacecraft launched on a Proton booster, intended to send two cosmonauts on a circumlunar mission. They would simply loop around the Moon and return to Earth without going into lunar orbit. A total of eight of these missions were launched unmanned, and only one completed a flight which would have been safe for cosmonauts on board. After Apollo 8 accomplished a much more ambitious lunar orbital mission in December 1968, a Zond flight would simply demonstrate how far behind the Soviets were, and the program was cancelled in 1970.

The N1-L3 manned lunar landing program was even more curious. In the Apollo program, the choice of mission mode and determination of mass required for the lunar craft came first, and the specifications of the booster rocket followed from that. Work on Korolev's N1 heavy lifter did not get underway until 1965, four years after the Saturn V, and it was envisioned as a general purpose booster for a variety of military and civil space missions. Korolev wanted to use very high thrust kerosene engines on the first stage and hydrogen engines on the upper stages as did the Saturn V, but he was involved in a feud with Valentin Glushko, who championed the use of hypergolic, high boiling point, toxic propellants and refused to work on the engines Korolev requested. Hydrogen propellant technology in the Soviet Union was in its infancy at the time, and Korolev realised that waiting for it to mature would add years to the schedule.

In need of engines, Korolev approached Nikolai Kuznetsov, a celebrated designer of jet turbine engines who had no previous experience at all with rocket engines. Kuznetsov's engines were much smaller than Korolev desired, and obtaining the required thrust took thirty engines on the first stage alone, each with its own turbomachinery and plumbing. Instead of gimballing the engines to change the thrust vector, pairs of engines on opposite sides of the stage were throttled up and down. The gargantuan scale of the lower stages of the N-1 meant they were too large to transport on the Soviet rail network, so fabrication of the rocket was done in a huge assembly hall adjacent to the launch site. A small city had to be built to accommodate the work force.

All Soviet rockets since the R-2 in 1949 had used “integral tanks”: the walls of the propellant tanks were load-bearing and formed the skin of the rocket. The scale of the N1 was such that load-bearing tanks would have required a wall thickness which exceeded the capability of Soviet welding technology at the time, forcing a design with an external load-bearing shell and separate propellant tanks within it. This increased the complexity of the rocket and added dead weight to the design. (NASA's contractors had great difficulty welding the integral tanks of the Saturn V, but NASA simply kept throwing money at the problem until they figured out how to do it.)

The result was a rocket which was simultaneously huge, crude, and bewilderingly complicated. There was neither money in the budget nor time in the schedule to build a test stand to permit ground firings of the first stage. The first time those thirty engines fired up would be on the launch pad. Further, Kuznetsov's engines were not reusable. After every firing, they had to be torn down and overhauled, and hence each engine that fired was essentially new and untested. The Saturn V engines, by contrast, while expended in each flight, could be and were individually test fired, then ground tested together installed on the flight stage before being stacked into a launch vehicle.

The weight and less efficient fuel of the N-1 made its performance anæmic. While it had almost 50% more thrust at liftoff than the Saturn V, its payload to low Earth orbit was 25% less. This meant that performing a manned lunar landing mission in a single launch was just barely possible. The architecture would have launched two cosmonauts in a lunar orbital ship. After entering orbit around the Moon, one would spacewalk to the separate lunar landing craft (an internal docking tunnel as used in Apollo would have been too heavy) and descend to the Moon. Fuel constraints meant the cosmonaut only had ten to fifteen seconds to choose a landing spot. After leaving footprints, planting the flag, and grabbing a few rocks, it was back to the lander to take off and rejoin the orbiter. Then it took another spacewalk to get back inside. Everybody involved at the time was acutely aware of how marginal and risky this was, but given that the N-1 design was already frozen and changing it or re-architecting the mission to two or three launches would push out the landing date by four or five years, it was the only option that would not forfeit the Moon race to the Americans.

They didn't even get close. In each of its test flights, the N-1 did not even get to the point of second stage ignition (although in its last flight it got within seven seconds of that milestone). On the second test flight the engines cut off shortly after liftoff and the vehicle fell back onto the launch pad, completely obliterating it in the largest artificial non-nuclear explosion known to this date: the equivalent of 7 kilotons of TNT. After four consecutive launch failures, having lost the Moon race, with no other mission requiring its capabilities, and the military opposing an expensive program for which they had no use, work on the N-1 was suspended in 1974 and the program officially cancelled in 1976.

When I read Challenge to Apollo, what struck me was the irony that the Apollo program was the very model of a centrally-planned state-directed effort along Soviet lines, while the Soviet Moon program was full of the kind of squabbling, turf wars, and duplicative competitive efforts which Marxists decry as flaws of the free market. What astounded me in reading this book is that the Soviets were acutely aware of this in 1968. In chapter 9, Chertok recounts a Central Committee meeting in which Minister of Defence Dmitriy Ustinov remarked:

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

In addition to the Moon program, there is extensive coverage of the development of automated rendezvous and docking and the long duration orbital station programs (Almaz, Salyut, and Mir). There is also an enlightening discussion, building on Chertok's career focus on control systems, of the challenges in integrating humans and automated systems into the decision loop and coping with off-nominal situations in real time.

I could go on and on, but there is so much to learn from this narrative, I'll just urge you to read it. Even if you are not particularly interested in space, there is much experience and wisdom to be gained from it which are applicable to all kinds of large complex systems, as well as insight into how things were done in the Soviet Union. It's best to read Volume 1 (May 2012), Volume 2 (August 2012), and Volume 3 (December 2012) first, as they will introduce you to the cast of characters and the events which set the stage for those chronicled here.

As with all NASA publications, the work is in the public domain, and an online edition in PDF, EPUB, and MOBI formats is available.

A commercial Kindle edition is available which is much better produced than the Kindle editions of the first three volumes. If you have a suitable application on your reading device for one of the electronic book formats provided by NASA, I'd opt for it. They're free.

The original Russian edition is available online.

March 2013 Permalink

Chivers, C. J. The Gun. New York: Simon & Schuster, 2010. ISBN 978-0-7432-7173-8.
Ever since the introduction of firearms into infantry combat, technology and military doctrine have co-evolved to optimise the effectiveness of the weapons carried by the individual soldier. This process requires choosing a compromise among a long list of desiderata including accuracy, range, rate of fire, stopping power, size, weight (of both the weapon and its ammunition, which determines how many rounds an infantryman can carry), reliability, and the degree of training required to operate the weapon in both normal and abnormal circumstances. The “sweet spot” depends upon the technology available at the time (for example, smokeless powder allowed replacing heavy, low muzzle velocity, large calibre rounds with lighter supersonic ammunition), and the environment in which the weapon will be used (long range and high accuracy over great distances are largely wasted in jungle and urban combat, where most engagements are close-up and personal).

Still, ever since the advent of infantry firearms, the rate of fire an individual soldier can sustain has been considered a key force multiplier. All things being equal, a soldier who can fire sixteen rounds per minute can do the work of four soldiers equipped with muzzle loading arms which can fire only four rounds a minute. As infantry arms progressed from muzzle loaders to breech loaders to magazine-fed lever and bolt actions, the sustained rate of fire steadily increased. The logical endpoint of this evolution was a fully automatic infantry weapon: a rifle which, as long as the trigger was held down and ammunition remained, would continue to send rounds downrange at a high cyclic rate. Such a rifle could also be fired in semiautomatic mode, firing one round every time the trigger was pulled without any other intervention by the rifleman other than to change magazines as they were emptied.

This book traces the history of automatic weapons from primitive volley guns; through the Gatling gun, the first successful high rate of fire weapon (although with the size and weight of a field artillery piece and requiring a crew to hand crank it and feed ammunition, it was hardly an infantry weapon); the Maxim gun, the first true machine gun which was responsible for much of the carnage in World War I; to the Thompson submachine gun, which could be carried and fired by a single person but, using pistol ammunition, lacked the range and stopping power of an infantry rifle. At the end of World War II, the vast majority of soldiers carried bolt action or semiautomatic weapons: fully automatic fire was restricted to crew served support weapons operated by specially trained gunners.

As military analysts reviewed combat as it happened on the ground in the battles of World War II, they discovered that long range aimed fire played only a small part in infantry actions. Instead, infantry weapons had been used mostly at relatively short ranges to lay down suppressive fire. In this application, rate of fire and the amount of ammunition a soldier can carry into combat come to the top of the priority list. Based upon this analysis, even before the end of the war Soviet armourers launched a design competition for a next generation rifle which would put automatic fire into the hands of the ordinary infantryman. After grueling tests under all kinds of extreme conditions such a weapon might encounter in the field, the AK-47, initially designed by Mikhail Kalashnikov, a sergeant tank commander injured in battle, was selected. In 1956 the AK-47 became the standard issue rifle of the Soviet Army; it was followed by the AKM (an improved design which was also lighter and less expensive to manufacture—most of the weapons one sees today which are called “AK-47s” are actually based on the AKM design) and the smaller calibre AK-74. These weapons and the multitude of clones and variants produced around the world have become the archetypal small arms of the latter half of the twentieth century and are likely to remain so for the foreseeable future in the twenty-first. Nobody knows how many were produced, but almost certainly the number exceeds 100 million, and given the ruggedness and reliability of the design, most remain operational today.

This weapon, designed to outfit forces charged with maintaining order in the Soviet Empire and expanding it to new territories, quickly slipped the leash and began to circulate among insurgent forces around the globe. Initially infiltrated by the Soviet Union and Eastern bloc countries to equip communist revolutionaries, the weapons soon fed an “after-market” which allowed almost any force wishing to challenge an established power to obtain arms and ammunition which made its irregular fighters the peer of professional troops. The worldwide dissemination of AK weapons and their availability at low cost have been a powerful force destabilising regimes which before could keep their people down with a relatively small professional army. The author recounts the legacy of the AK in incidents over the decades and around the world, and the tragic consequences for those who have found themselves on the wrong end of this formidable weapon.

United States forces encountered the AK first hand in Vietnam, and quickly realised that their M14 rifles, an attempt to field a fully automatic infantry weapon which used the cartridge of a main battle rifle, were too large, too heavy, and too limited in the amount of ammunition a soldier could carry to stand up to the AK. The M14's only advantages, long range and accuracy, were irrelevant in the Vietnam jungle. While the Soviet procurement and development of the AK-47 was deliberate and protracted, Pentagon whiz kids in the U.S. rushed the radically new M16 into production and into the hands of U.S. troops in Vietnam. The new rifle, inadequately tested in the field conditions it would encounter, and deployed with ammunition different from that used in the test phase, failed frequently and disastrously in the hands of combat troops, with results which were often tragic. What was supposed to be the most advanced infantry weapon on the planet often ended up being used as a bayonet mount or club by troops in their last moments of life. The Pentagon responded to this disaster in the making by covering up the entire matter and destroying the careers of those who attempted to speak out. Eventually reports from soldiers in the field made their way to newspapers and congressmen, and the truth began to come out. It took years for the problems of the M16 to be resolved, and to this day the M16 is considered less reliable (although more accurate) than the AK. As an example, compare what it takes to field strip an M16 with what it takes to strip an AK-47. The entire ugly saga of the M16 is documented in detail here.

This is a fascinating account of the origins, history, and impact of the small arms which dominate the world today. The author does an excellent job of sorting through the many legends (especially from the Soviet era) surrounding these weapons, and sketching the singular individuals behind their creation.

In the Kindle edition, the table of contents, end notes, and index are all properly linked to the text. All of the photographic illustrations are collected at the very end, after the index.

December 2011 Permalink

[Audiobook] Churchill, Winston S. The Birth of Britain. (Audiobook, Unabridged). London: BBC Audiobooks, [1956] 2006. ISBN 978-0-304-36389-6.
This is the first book in Churchill's sprawling four-volume A History of the English-Speaking Peoples. Churchill began work on the history in the 1930s, and by the time he set it aside to go to the Admiralty in 1939, about half a million words had been delivered to his publisher. His wartime service as Prime Minister, postwar writing of the six-volume history The Second World War, and second term as Prime Minister from 1951 to 1955 caused the project to be postponed repeatedly, and it wasn't until 1956–1958, when Churchill was in his 80s, that the work was published. Even sections which existed as print proofs from the 1930s were substantially revised based upon scholarship in the intervening years.

The Birth of Britain covers the period from Julius Caesar's invasion of Britain in 55 B.C. through Richard III's defeat and death at the hands of Henry Tudor's forces at the Battle of Bosworth in 1485, bringing to an end both the Wars of the Roses and the Plantagenet dynasty. This is very much history in the “kings, battles, and dates” mould; there is little about cultural, intellectual, and technological matters—the influence of the monastic movement, the establishment and growth of universities, and the emergence of guilds barely figure at all in the narrative. But what a grand narrative it is, the work of one of the greatest masters of the language spoken by those whose history he chronicles. In accounts of early periods where original sources are scanty and it isn't necessarily easy to distinguish historical accounts from epics and legends, Churchill takes pains to note this and distinguish his own conclusions from alternative interpretations.

This audiobook is distributed in seven parts, totalling 17 hours. A print edition is available in the UK.

January 2008 Permalink

Churchill, Winston S. The World Crisis. London: Penguin, [1923–1931, 2005] 2007. ISBN 978-0-14-144205-1.
Churchill's history of the Great War (what we now call World War I) was published in five volumes between 1923 and 1931. The present volume is an abridgement of the first four volumes; it appeared simultaneously with the fifth volume of the complete work. This abridged edition was prepared by Churchill himself; it is not a cut and paste job by an editor. Volume Four and this abridgement end with the collapse of Germany and the armistice—the aftermath of the war and the peace negotiations, covered in Volume Five of the full history, are not included here.

When this work began to appear in 1923, the smart set in London quipped, “Winston's written a book about himself and called it The World Crisis”. There's a lot of truth in that: this is something somewhere between a history and memoir of a politician in wartime. Description of the disastrous attempts to break the stalemate of trench warfare in 1915 barely occupies a chapter, while the Dardanelles Campaign, of which Churchill was seen as the most vehement advocate, and for which he was blamed after its tragic failure, makes up almost a quarter of the 850 page book.

If you're looking for a dispassionate history of World War I, this is not the book to read: it was written too close to the events of the war, before the dire consequences of the peace came to pass, and by a figure motivated as much to defend his own actions as to provide a historical narrative. That said, it does provide an insight into how Churchill's experiences in the war forged the character which would cause Britain to turn to him when war came again. It also goes a long way to explaining precisely why Churchill's warnings were ignored in the 1930s. This book is, in large part, a recital of disaster after disaster in which Churchill played a part, coupled with an explanation of why, in each successive case, it wasn't his fault. Whether or not you accept his excuses and justifications for his actions, it's pretty easy to understand how politicians and the public in the interwar period could look upon Churchill as somebody who, when given authority, produced calamity. It was not just that others were blind to the threat, but rather that Churchill's record made him a seriously flawed messenger on an occasion when his message was absolutely correct.

At this epoch, Churchill was already an excellent writer and delivers some soaring prose on occasion, but he had not yet become the past master of the English language on display in The Second World War (which won the Nobel Prize for Literature when it really meant something). There are numerous tables, charts, and maps which illustrate the circumstances of the war.

Americans who hold to the common view that “The Yanks came to France and won the war for the Allies” may be offended by Churchill's speaking of them only in passing. He considers their effect on the actual campaigns of 1918 to have been mostly psychological: reinforcing French and British morale and confronting Germany with an adversary with unlimited resources.

Perhaps the greatest lesson to be drawn from this work is that of the initial part, which covers the darkening situation between 1911 and the outbreak of war in 1914. What is stunning, as sketched by a person involved in the events of that period, is just how trivial the proximate causes of the war were compared to the apocalyptic bloodbath which ensued. It is as if the crowned heads, diplomats, and politicians had no idea of the stakes involved, and indeed they did not—all expected the war to be short and decisive, none anticipating the consequences of the superiority conferred on the defence by the machine gun, entrenchments, and barbed wire. After the outbreak of war and its freezing into a trench war stalemate in the winter of 1914, for three years the Allies believed their “offensives”, which squandered millions of lives for transitory and insignificant gains of territory, were conducting a war of attrition against Germany. In fact, due to the supremacy of the defender, Allied losses always exceeded those of the Germans, often by a factor of two to one (and even more for officers). Further, German losses were never greater than the number of new conscripts in each year of the war up to 1918, so in fact this “war of attrition” weakened the Allies every year it continued. You'd expect intelligence services to figure out such a fundamental point, but it appears the “by the book” military mentality dismissed such evidence and continued to hurl a generation of their countrymen into the storm of steel.

This is a period piece: read it not as a history of the war but rather to experience the events of the time as Churchill saw them, and to appreciate how they made him the wartime leader he was to be when, once again, the lights went out all over Europe.

A U.S. edition is available.

February 2010 Permalink

Churchill, Winston S. and Dwight D. Eisenhower. The Churchill-Eisenhower Correspondence, 1953–1955. Edited by Peter G. Boyle. Chapel Hill, NC: University of North Carolina Press, 1990. ISBN 0-8078-4951-0.

October 2001 Permalink

Conquest, Robert. The Great Terror: A Reassessment. New York: Oxford University Press, 1990. ISBN 0-19-507132-8.

January 2002 Permalink

Copeland, B. Jack, ed. Colossus. Oxford: Oxford University Press, 2006. ISBN 978-0-19-953680-1.
During World War II the British codebreakers at Bletchley Park provided intelligence to senior political officials and military commanders which was vital in winning the Battle of the Atlantic and discerning German strategic intentions in the build-up to the invasion of France and the subsequent campaign in Europe. Breaking the German codes was just barely on the edge of possibility with the technology of the time, and required recruiting a cadre of exceptionally talented and often highly eccentric individuals and creating tools which laid the foundations for modern computer technology.

At the end of the war, all of the work of the codebreakers remained under the seal of secrecy: in Winston Churchill's history of the war it was never mentioned. Part of this was due to the state's reluctance to relinquish its control over information, but part was because the Soviets, emerging as the new adversary, might adopt some of the same cryptographic techniques used by the Germans, and concealing that those techniques had been compromised might yield valuable information from intercepts of Soviet communications.

As early as the 1960s, publications in the United States began to describe the exploits of the codebreakers, and gave the mistaken impression that U.S. codebreakers were in the vanguard simply because they were the only ones allowed to talk about their wartime work. The heavy hand of the Official Secrets Act suppressed free discussion of the work at Bletchley Park until June 2000, when the key report, written in 1945, was allowed to be published.

Now it can be told. Fortunately, many of the participants in the work at Bletchley were young and still around when finally permitted to discuss their exploits. This volume is largely a collection of their recollections, many in great technical detail. You will finally understand precisely which vulnerabilities of the German cryptosystems permitted them to be broken (as is often the case, it was all-too-clever innovations by the designers intended to make the encryption “unbreakable” which provided the door into it for the codebreakers) and how sloppy key discipline among users facilitated decryption. For example, it was common to discover two or more messages encrypted with the same key. Since encryption was done by a binary exclusive or (XOR) of the bits of the Baudot teleprinter code with those of the key (generated mechanically from a specified starting position of the code machine's wheels), if you have two messages encrypted with the same key, you can XOR them together, taking out the key and leaving you with the XOR of the plaintext of the two messages. This, of course, will be gibberish, but you can then take common words and phrases which occur in messages and “slide” them along the text, XORing as you go, to see if the result makes sense. If it does, you've recovered part of the other message, and by XORing with either message, that part of the key. This is something one could do in microseconds today with the simplest of computer programs, but in the day it was done in kiloseconds by clerks looking up the XOR of Baudot codes in tables one by one (at least until they memorised them, which the better ones did).
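
To make the trick concrete, here is a toy demonstration of my own; it follows the recipe just described, but substitutes 8-bit ASCII for 5-bit Baudot code, and the messages, keystream, and crib are all invented for the occasion. A real attack would also need a far better test of whether a fragment “makes sense” than the crude printability check used here.

    import itertools

    def xor_bytes(a, b):
        """XOR two byte strings together, truncating to the shorter one."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Stand-in keystream and two messages encrypted "in depth" with the same key.
    key = bytes(itertools.islice(itertools.cycle(range(1, 32)), 64))
    m1 = b"ATTACK POSTPONED UNTIL DAWN DUE TO WEATHER OVER THE CHANNEL"
    m2 = b"SUPPLY CONVOY DEPARTS KIEL AT MIDNIGHT WITH FUEL AND SHELLS"
    c1 = xor_bytes(m1, key)
    c2 = xor_bytes(m2, key)

    # XORing the two ciphertexts cancels the key, leaving m1 XOR m2.
    depth = xor_bytes(c1, c2)

    # Slide a guessed word (a "crib") along the combined stream; wherever the
    # guess is right, the XOR reveals a fragment of the *other* message.
    crib = b"CONVOY"
    for offset in range(len(depth) - len(crib) + 1):
        fragment = xor_bytes(depth[offset:offset + len(crib)], crib)
        if all(32 <= ch < 127 for ch in fragment):   # crude "looks like text" test
            print(f"offset {offset:2d}: {fragment.decode('ascii')!r}")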

The chapters are written by people with expertise in the topic discussed, many of whom were there. The people at Bletchley had to make up the terminology for the unprecedented things they were doing as they did it. Due to the veil of secrecy dropped over their work, many of their terms were orphaned. What we call “bits” they called “pulses”, what we call XOR they called “binary addition”, and the ones and zeroes of binary notation were crosses and dots. It is all very quaint and delightful, and this vocabulary is used in most of these documents.

After reading this book you will understand precisely how the German codes were broken, what Colossus did, how it was built and what challenges were overcome in constructing it, and how it was integrated into a system incorporating large numbers of intuitive humans able to deliver near-real-time intelligence to decision makers. The level of detail may be intimidating to some, but for the first time it's all there. I have never before read any description of the key flaw in the Lorenz cipher which Colossus exploited and how it processed messages punched on loops of paper tape to break into them and recover the key.

The aftermath of Bletchley was interesting. All of the participants were sworn to secrecy and all of their publications kept under high security. But the know-how they had developed in electronic computation was their own, and many of them went to Manchester to work on the pioneering digital computers built there. The developers of much of this technology could not speak of whence it came, and until recent years the history of computing has been disconnected from its roots.

As a collection of essays, this book is uneven and occasionally repetitive. But it is authentic, and an essential document for anybody interested in how codebreaking was done in World War II and how electronic computation came to be.

March 2013 Permalink

Courland, Robert. Concrete Planet. Amherst, NY: Prometheus Books, 2011. ISBN 978-1-61614-481-4.
Visitors to Rome are often stunned when they see the Pantheon and learn it was built almost 19 centuries ago, during the reign of the emperor Hadrian. From the front, the building has a classical style echoed in neo-classical government buildings around the world, but as visitors walk inside, it is the amazing dome which causes them to gasp. At 43.3 metres in diameter, it was the largest dome in the world at the time it was built, and no larger dome has, in all the centuries since, ever been built in the same way. The dome of the Pantheon is a monolithic structure of concrete, whose beauty and antiquity attest to the versatility and durability of this building material which has become a ubiquitous part of the modern world.

To the ancients, who built from mud, stone, and later brick, it must have seemed like a miracle to discover a material which, mixed with water, could be moulded into any form and would harden into stone. Nobody knows how or where it was discovered that by heating natural limestone to a high temperature it could be transformed into quicklime (calcium oxide), a corrosive substance which reacts exothermically with water, solidifying into a hard material. The author speculates that the transformation of limestone into quicklime due to lightning strikes may have been discovered in Turkey and applied to production of quicklime by a kilning process, but the evidence for this is sketchy. But from the neolithic period, humans discovered how to make floors from quicklime and a binder, and this technology remained in use until the 19th century.

None of these early lime-based mortars could set underwater, and all were vulnerable to attack by caustic chemicals. It was the Romans who discovered that by mixing lime with volcanic ash (pozzolan), which was available to them in abundance from the vicinity of Mt. Vesuvius, it was possible to create a “hydraulic cement” which could set underwater and was resistant to attack from the elements. In addition to structures like the Pantheon, the Colosseum, roads, and viaducts, Roman concrete was used to build the artificial harbour at Caesarea in Judea, the largest application of hydraulic concrete before the 20th century.

Jane Jacobs has written that the central aspect of a dark age is not that specific things have been forgotten, but that a society has forgotten what it has forgotten. It is indicative of the dark age which followed the fall of the Roman empire that even with the works of the Roman engineers remaining for all to see, the technology of Roman concrete used to build them, hardly a secret, was largely forgotten until the 18th century, when a few buildings were constructed from similar formulations.

It wasn't until the middle of the 19th century that the precursors of modern cement and concrete construction emerged. The adoption of this technology might have been much more straightforward had it not been the case that a central player in it was William Aspdin, a world-class scoundrel whose own crookedness repeatedly torpedoed the ventures in which he was involved, ventures which, had he simply been honest and straightforward in his dealings, would have made him a fortune beyond the dreams of avarice.

Even with the rediscovery of waterproof concrete, its adoption was slow in the 19th century. The building of the Thames Tunnel by the great engineers Marc Brunel and his son Isambard Kingdom Brunel was a milestone in the use of concrete, albeit one achieved only after a long series of setbacks and mishaps over a period of 18 years.

Ever since antiquity, and despite numerous formulations, concrete had one common structural property: it was very strong in compression (it resisted forces which tried to crush it), but had relatively little tensile strength (if you tried to pull it apart, it would easily fracture). This meant that concrete structures had to be carefully designed so that the concrete was always kept in compression, which made it difficult to build cantilevered structures or others requiring tensile strength, such as many bridge designs employing iron or steel. In the latter half of the 19th century, a number of engineers and builders around the world realised that by embedding iron or steel reinforcement within concrete, its tensile strength could be greatly increased. The advent of reinforced concrete allowed structures impossible to build with pure concrete. In 1903, the 16-story Ingalls Building in Cincinnati became the first reinforced concrete skyscraper, and the tallest building today, the Burj Khalifa in Dubai, is built from reinforced concrete.

The ability to create structures with the solidity of stone, the strength of steel, in almost any shape a designer can imagine, and at low cost inspired many in the 20th century and beyond, with varying degrees of success. Thomas Edison saw in concrete a way to provide affordable houses to the masses, complete with concrete furniture. It was one of his less successful ventures. Frank Lloyd Wright quickly grasped the potential of reinforced concrete, and used it in many of his iconic buildings. The Panama Canal made extensive use of reinforced concrete, and the Hoover Dam demonstrated that there was essentially no limit to the size of a structure which could be built of it (the concrete of the dam is still curing to this day). The Sydney Opera House illustrated (albeit after large schedule slips, cost overruns, and acrimony between the architect and customer) that just about anything an architect can imagine could be built of reinforced concrete.

To see the Pantheon or Colosseum is to think “concrete is eternal” (although the Colosseum is not in its original condition, that is mostly due to its having been mined for building materials over the centuries). But those structures were built with unreinforced Roman concrete. Just how long can we expect our current structures, built from a different kind of concrete and steel reinforcing bars, to last? Well, that's…interesting. Steel is mostly composed of iron, and iron is highly reactive in the presence of water and oxygen: it rusts. You'll observe that water and oxygen are abundant on Earth, so unprotected steel can be expected to eventually crumble into rust, losing its structural strength. This is why steel bridges, for example, must be regularly stripped and repainted to provide a barrier which protects the steel against the elements. In reinforced concrete, it is the concrete itself which protects the steel reinforcement, initially by providing an alkali environment which inhibits rust and then, after the concrete cures, by physically excluding water and the atmosphere from the reinforcement. But, as builders say, “If it ain't cracked, it ain't concrete.” Inevitably, cracks will allow air and water to reach the reinforcement, which will begin to rust. As it rusts, it loses its structural strength and, in addition, expands, which further cracks the concrete and allows more air and moisture to enter. Eventually you'll see the kind of crumbling used to illustrate deteriorating bridges and other infrastructure.

How long will reinforced concrete last? That depends upon the details. Port and harbour facilities in contact with salt water have failed in less than fifty years. Structures in less hostile environments are estimated to have a life of between 100 and 200 years. Now, this may seem like a long time compared to the budget cycle of the construction industry, but eternity it ain't, and when you consider the cost of demolition and replacement of structures such as dams and skyscrapers, it's something to think about. But obviously, if the Romans could build concrete structures which have lasted millennia, so can we. The author discusses alternative formulations of concrete and different kinds of reinforcing which may dramatically increase the life of reinforced concrete construction.

This is an interesting and informative book, but I found the author's style a bit off-putting. In the absence of fact, which is usually the case when discussing antiquity, the author simply speculates. Speculation is always clearly identified, but rather than telling a story about a shaman discovering where lightning struck limestone and spinning it into a legend about the discovery of the manufacture of quicklime, it might be better simply to say, “nobody really knows how it happened”. Eleven pages are spent discussing the thoroughly discredited theory that the Egyptian pyramids were made of concrete, coming to the conclusion that the theory is bogus. So why mention it? There are a number of typographical errors and a few factual errors (no, the Mesoamericans did not build pyramids “a few of which would equal those in Egypt”).

Still, if you're interested in the origin of the material which surrounds us in the modern world, how it was developed by the ancients, largely forgotten, and then recently rediscovered and used to revolutionise construction, this is a worthwhile read.

October 2015 Permalink

Crocker, George N. Roosevelt's Road To Russia. Whitefish, MT: Kessinger Publishing, [1959] 2010. ISBN 978-1-163-82408-5.
Before Barack Obama, there was Franklin D. Roosevelt. Unless you lived through the era, imbibed its history from parents or grandparents, or have read dissenting works which have survived rounds of deaccessions by libraries, it is hard to grasp just how visceral the animus was against Roosevelt by traditional, constitutional, and free-market conservatives. Roosevelt seized control of the economy, extended the tentacles of the state into all kinds of relations between individuals, subdued the judiciary and bent it to his will, manipulated a largely supine media which, with a few exceptions, became his cheering section, and created programs which made large sectors of the population directly dependent upon the federal government and thus a reliable constituency for expanding its power. He had the audacity to stand for re-election an unprecedented three times, and each time the American people gave him the nod.

But, as many old-timers, even those who were opponents of Roosevelt at the time and appalled by what the centralised super-state he set into motion has become, grudgingly say, “He won the war.” Well, yes, by the time he died in office on April 12, 1945, Germany was close to defeat; Japan was encircled, cut off from the resources needed to continue the war, and being devastated by attacks from the air; the war was sure to be won by the Allies. But how did the U.S. find itself in the war in the first place, how did Roosevelt's policies during the war affect its conduct, and what consequences did they have for the post-war world?

These are the questions explored in this book, which I suppose contemporary readers would term a “paleoconservative” revisionist account of the epoch, published just 14 years after the end of the war. The work is mainly an account of Roosevelt's personal diplomacy during meetings with Churchill or in the Big Three conferences with Churchill and Stalin. The picture of Roosevelt which emerges is remarkably consistent with what Churchill expressed in deepest confidence to those closest to him, which I summarised in my review of The Last Lion, Vol. 3 (January 2013) as “a lightweight, ill-informed and not particularly engaged in military affairs and blind to the geopolitical consequences of the Red Army's occupying eastern and central Europe at war's end.” The events chronicled here and Roosevelt's part in them are also very much the same as described in Freedom Betrayed (June 2012), which former president Herbert Hoover worked on from shortly after Pearl Harbor until his death in 1964, but which was not published until 2011.

While Churchill was constrained in what he could say by the necessity of maintaining Britain's alliance with the U.S., and Hoover adopts a more scholarly tone, the present volume voices the outrage over Roosevelt's strutting on the international stage, thinking “personal diplomacy” could “bring around ‘Uncle Joe’ ”, condemning huge numbers of military personnel and civilians on both the Allied and Axis sides to death by blurting out “unconditional surrender” without any consultation with his staff or Allies, approving the genocidal Morgenthau Plan to de-industrialise defeated Germany, and, discarding the high principles of his own Atlantic Charter, delivering millions of Europeans into communist tyranny and condoning one of the largest episodes of ethnic cleansing in human history.

What is remarkable is how difficult it is to come across an account of this period which evokes the author's passion, shared by many of his time, at how the bumblings of a naïve, incompetent, and narcissistic chief executive led directly to so much avoidable tragedy on a global scale. Apart from Hoover's book, finally published more than half a century after this account, there are few works accessible to the general reader which present the view that the tragic outcome of World War II was in large part preventable, and that Roosevelt and his advisers bear much of the responsibility for what happened.

Perhaps there are parallels in this account of wickedness triumphing through cluelessness for our present era.

This edition is a facsimile reprint of the original edition published by Henry Regnery Company in 1959.

January 2014 Permalink

Cross, L. D. Code Name Habbakuk. Toronto: Heritage House, 2012. ISBN 978-1-927051-47-4.
World War II saw the exploration, development, and in some cases deployment, of ideas which, without the pressure of war, would be considered downright wacky. Among the most outlandish was the concept of building an enormous aircraft carrier (or floating airbase) out of reinforced ice. This book recounts the story of the top secret British/Canadian/U.S. project to develop and test this technology. (The title is not misspelled: the World War II project was spelled “Habbakuk”, as opposed to the Old Testament prophet, whose name was “Habakkuk”. The reason for the difference in spelling has been lost in the mists of time.)

January 2021 Permalink

Dana, Richard Henry. Two Years Before the Mast. New York: Signet, [1840, 1869] 2000. ISBN 0-451-52759-3.

March 2001 Permalink

De-la-Noy, Michael. Scott of the Antarctic. Stroud, Gloucestershire, England: Sutton Publishing, 1997. ISBN 0-7509-1512-9.

December 2001 Permalink

Dean, Josh. The Taking of K-129. New York: Dutton, 2017. ISBN 978-1-101-98443-7.
On February 24, 1968, Soviet Golf class submarine K-129 sailed from its base in Petropavlovsk for a routine patrol in the Pacific Ocean. These ballistic missile submarines were, at the time, a key part of the Soviet nuclear deterrent. Each carried three SS-N-5 missiles armed with one 800 kiloton nuclear warhead per missile. This was an intermediate range missile which could hit targets inside an enemy country if the submarine approached sufficiently close to the coast. For defence and attacking other ships, Golf class submarines carried two torpedoes with nuclear warheads as well as conventional high explosive warhead torpedoes.

Unlike the U.S. nuclear powered Polaris submarines, the Golf class had conventional diesel-electric propulsion. When submerged, the submarine was powered by batteries which provided limited speed and range and required surfacing or running at shallow snorkel depth for regular recharging by the diesel engines. They would be the last generation of Soviet diesel-electric ballistic missile submarines: the Hotel class and subsequent boats would be nuclear powered.

K-129's mission was to proceed stealthily to a region of open ocean north of Midway Atoll and patrol there, ready to launch its missiles at U.S. assets in the Pacific in case of war. Submarines on patrol would send coded burst transmissions on a prearranged schedule to indicate that their mission was proceeding as planned.

On March 8, a scheduled transmission from K-129 failed to arrive. This wasn't immediately cause for concern, since equipment failure was not uncommon, and a submarine commander might choose not to transmit if worried that surfacing and sending the message might disclose his position to U.S. surveillance vessels and aircraft. But when K-129 remained silent for a second day, the level of worry escalated rapidly. Losing a submarine armed with nuclear weapons was a worst-case scenario, and one which had never happened in Soviet naval operations.

A large-scale search and rescue fleet of 24 vessels, including four submarines, set sail from the base in Kamchatka, all communicating in the open on radio and pinging away with active sonar. They were heard to repeatedly call a ship named Red Star with no reply. The search widened, and eventually included thirty-six vessels and fifty-three aircraft, continuing over a period of seventy-three days. Nothing was found, and six months after the disappearance, the Soviet Navy issued a statement that K-129 had been lost while on duty in the Pacific with all on board presumed dead. This was not only a wrenching emotional blow to the families of the crew, but also a financial gut-shot, depriving them of the pension due families of men lost in the line of duty and paying only the one-time accidental death payment and partial pension for industrial accidents.

But if the Soviets had no idea where their submarine was, this was not the case for the U.S. Navy. Sound travels huge distances through the oceans, and starting in the 1950s, the U.S. began to install arrays of hydrophones (undersea sound detectors) on the floors of the oceans around the world. By the 1960s, these arrays, called SOSUS (SOund SUrveillance System), were deployed and operational in both the Atlantic and Pacific and used to track the movements of Soviet submarines. When K-129 went missing, SOSUS analysts went back over their archived data and found a sharp pulse just a few seconds after midnight local time on March 11 around 180° West and 40° North: 2500 km northeast of Hawaii. Not only did the pulse appear nothing like the natural sounds often picked up by SOSUS, but events like undersea earthquakes don't tend to happen at socially constructed round-number times and locations like this one. The pulse was picked up by multiple sensors, allowing its position to be determined accurately. The U.S. knew where the K-129 lay on the ocean floor. But what to do with that knowledge?

One thing was immediately clear. If the submarine was in reasonably intact condition, it would be an intelligence treasure unparalleled in the postwar era. Although it did not represent the latest Soviet technology, it would provide analysts their first hands-on examination of Soviet ballistic missile, nuclear weapon, and submarine construction technologies. Further, the boat would certainly be equipped with cryptographic and secure radio communications gear which might provide an insight into penetrating the secret communications to and from submarines on patrol. (Recall that British breaking of the codes used to communicate with German submarines in World War II played a major part in winning the Battle of the Atlantic.) But a glance at a marine chart showed how daunting it would be to reach the site of the wreck. The ocean in the vicinity of the co-ordinates identified by SOSUS was around 5000 metres deep. Only a very few special-purpose research vessels can operate at such a depth, where the water pressure is around 490 times that of the atmosphere at sea level.
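(A quick back-of-the-envelope check of that pressure figure, my arithmetic rather than the author's: hydrostatic pressure grows roughly as \( p \approx \rho g h \), so with seawater density of about 1025 kg/m³,

\[ p \approx 1025\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2} \times 5000\ \mathrm{m} \approx 5\times 10^{7}\ \mathrm{Pa} \approx 500\ \mathrm{atm}, \]

consistent with the roughly 490 atmospheres quoted above.)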

The U.S. intelligence community wanted that sub. The first step was to make sure they'd found it. The USS Halibut, a nuclear-powered Regulus cruise missile launching submarine converted for special operations missions, was dispatched to the area where the K-129 was thought to lie. Halibut could not dive anywhere near as deep as the ocean floor, but was equipped with a remote-controlled, wire-tethered “fish”, which could be lowered near the bottom and then directed around the search area, observing with side-looking sonar and taking pictures. After seven weeks searching in vain, with fresh food long exhausted and crew patience wearing thin, the search was abandoned and course set back to Pearl Harbor.

But the prize was too great to pass up. So Halibut set out again, and after another month of operating the fish, developing thousands of pictures, and fraying tempers, there it was! Broken into two parts, but with both apparently largely intact, lying on the ocean bottom. Now what?

While there were deep sea research vessels able to descend to such depths, they were completely inadequate to exploit the intelligence haul that K-129 promised. That would require going inside the structure, dismantling the missiles and warheads, examining and testing the materials, and searching for communications and cryptographic gear. The only way to do this was to raise the submarine. To say that this was a challenge is to understate its difficulty—adjectives fail. The greatest mass which had ever been raised from such a depth was around 50 tonnes and K-129 had a mass of 1,500 tonnes—thirty times greater. But hey, why not? We're Americans! We've landed on the Moon! (By then it was November, 1969, four months after that “one small step”.) And so, Project Azorian was born.

When it comes to doing industrial-scale things in the deep ocean, all roads (or sea lanes) lead to Global Marine. A publicly-traded company little known to those outside the offshore oil exploration industry, this company and its genius naval architect John Graham had pioneered deep-sea oil drilling. While most offshore oil rigs, like those on terra firma, were firmly anchored to the land around the drill hole, Global Marine had pioneered the technology which allowed a ship, with a derrick mounted amidships, to precisely station-keep above the bore-hole on the ocean floor far beneath the ship. This required dropping sonar markers on the ocean floor, relative to which the ship precisely maintained its position. This was just one part of the puzzle.

To recover the submarine, the ship would need to lower what amounted to a giant claw (“That's claw, not craw!”, you “Get Smart” fans) to the abyssal plain, grab the sub, and lift its 1500 tonne mass to the surface. During the lift, the pipe string which connected the ship to the claw would be under such stress that, should it break, it would release energy comparable to an eight kiloton nuclear explosion, which would be bad.

This would have been absurdly ambitious if conducted in the open, like the Apollo Project, but in this case it also had to be done covertly, since the slightest hint that the U.S. was attempting to raise K-129 would almost certainly provoke a Soviet response ranging from diplomatic protests to a naval patrol around the site of the sinking aimed at harassing the recovery ships. The project needed a cover story and a cut-out to hide the funding to Global Marine which, as a public company, had to disclose its financials quarterly and, unlike minions of the federal government funded by taxes collected from hairdressers and cab drivers through implicit threat of violence, could not hide its activities in a “black budget”.

This was seriously weird and, as a contemporary philosopher said, “When the going gets weird, the weird turn pro.” At the time, nobody was more professionally weird than Howard Hughes. He had taken reclusion to a new level, utterly withdrawing from contact with the public after revulsion from dealing with the Washington swamp and the media. His company still received royalties from every oil well drilled using his drill bits, and his aerospace and technology companies were plugged into the most secret ventures of the U.S. government. Simply saying, “It's a Hughes project” was sufficient to squelch most questions. This meant it had unlimited funds, the sanction of the U.S. government (including three-letter agencies whose names must not be spoken [brrrr!]), and told pesky journalists they'd encounter a stone wall from the centre of the Earth to the edge of the universe if they tried to dig into details.

But covert as the project might be, aspects of its construction and operation would unavoidably be in the public eye. You can't build a 189 metre long, 51,000 tonne ship, the Hughes Glomar Explorer, with an 80 metre tall derrick sticking up amidships, at a shipyard on the east coast of the U.S., and then send it around Cape Horn to its base on the west coast (the ship was too wide to pass through the Panama Canal), without people noticing. A cover story was needed, and the CIA and their contractors cooked up a doozy.

Large areas of the deep sea floor are covered by manganese nodules, concretions which form around a seed and grow extremely slowly, but eventually reach the size of potatoes or larger. Nodules are composed of around 30% manganese, plus other valuable metals such as nickel, copper, and cobalt. There are estimated to be more than 21 billion tonnes of manganese nodules on the deep ocean floor (depths of 4000 to 6000 metres), and their composition is richer than many of the ores from which the metals they contain are usually extracted. Further, they're just lying on the seabed. If you could figure out how to go down there and scoop them up, you wouldn't have to dig mines and process huge amounts of rock. Finally, they were in international waters, and despite attempts by kleptocratic dictators (some in landlocked countries) and the international institutions who support them to enact a “Law of the Sea” treaty to pick the pockets of those who created the means to use this resource, at the time the nodules were just there for the taking—you didn't have to pay kleptocratic dictators for mining rights or have your profits skimmed by ever-so-enlightened democratic politicians in developed countries.

So, the story was put out that Howard Hughes was setting out to mine the nodules on the Pacific Ocean floor, and that Glomar Explorer, built by Global Marine under contract for Hughes (operating, of course, as a cut-out for the CIA), would deploy a robotic mining barge called the Hughes Mining Barge 1 (HMB-1) which, lowered to the ocean floor, would collect nodules, crush them, and send the slurry to the surface for processing on the mother ship.

This solved a great number of potential problems. Global Marine, as a public company, could simply (and truthfully) report that it was building Glomar Explorer under contract to Hughes, and had no participation in the speculative and risky mining venture, which would have invited scrutiny by Wall Street analysts and investors. Hughes, operating as a proprietorship, was not required to disclose the source of the funds it was paying Global Marine. Everybody assumed the money was coming from Howard Hughes' personal fortune, which he had invested, over his career, in numerous risky ventures, when in fact, he was simply passing through money from a CIA black budget account. The HMB-1 was built by Lockheed Missiles and Space Company under contract from Hughes. Lockheed was involved in numerous classified U.S. government programs, so operating in the same manner for the famously secretive Hughes raised few eyebrows.

The barge, 99 metres in length, was built in a giant enclosed hangar in the port of Redwood City, California, which shielded it from the eyes of curious onlookers and Soviet reconnaissance satellites passing overhead. This was essential, because a glance at what was being built would have revealed that it looked nothing like a mining barge but rather a giant craw—sorry—claw! To install the claw on the ship, it was towed, enclosed in its covered barge, to a location near Catalina Island in southern California, where deeper water allowed it to be sunk beneath the surface, and then lifted into the well (“moon pool”) of Glomar Explorer, all out of sight to onlookers.

So far, the project had located the target on the ocean floor, designed and built a special ship and retrieval claw to seize it, fabricated a cover story of a mining venture so persuasive other mining companies were beginning to explore launching their own seabed mining projects, and evaded scrutiny by the press, Congress, and Soviet intelligence assets. But these are pussycats compared to the California Tax Nazis! After the first test of mating the claw to the ship, Glomar Explorer took to the ocean to, it was said, test the stabilisation system which would keep the derrick vertical as the ship pitched and rolled in the sea. Actually, the purpose of the voyage was to get the ship out of U.S. territorial waters on March 1st, the day California assessed a special inventory tax on all commercial vessels in state waters. This would not only cost a lot of money, it would force disclosure of the value of the ship, which could be difficult to reconcile with its cover mission. Similar fast footwork was required when Hughes took official ownership of the vessel from Global Marine after acceptance. A trip outside U.S. territorial waters was also required to get off the hook for the 7% sales tax California would otherwise charge on the transfer of ownership.

Finally, in June 1974, all was ready, and Glomar Explorer with HMB-1 attached set sail from Long Beach, California to the site of K-129's wreck, arriving on site on the Fourth of July, only to encounter foul weather. Opening the sea doors in the well in the centre of the ship and undocking the claw required calm seas, and it wasn't until July 18th that they were ready to begin the main mission. Just at that moment, what should show up but a Soviet missile tracking ship. After sending its helicopter to inspect Explorer, it eventually departed. This wasn't the last of the troubles with pesky Soviets.

On July 21, the recovery operation began, slowly lowering the claw on its string of pipes. Just at this moment, another Soviet ship arrived, a 47 metre ocean-going tug called SB-10. This tug would continue to harass the recovery operation for days, approaching on an apparent collision course and then veering off. (Glomar Explorer could not move during the retrieval operation, being required to use its thrusters to maintain its position directly above the wrecked submarine on the bottom.)

On August 3, the claw reached the bottom and its television cameras revealed it was precisely on target—there was the submarine, just as it had been photographed by the Halibut six years earlier. The claw gripped the larger part of the wreck, its tines closed under it, and a combination of pistons driving against the ocean bottom and the lift system pulling on the pipe from the ship freed the submarine from the bottom. Now the long lift could begin.

Everything had worked. The claw had been lowered, found its target on the first try, successfully seized it despite the ocean bottom's being much harder than expected, freed it from the bottom, and the ship had then successfully begun to lift the 6.4 million kg of pipe, claw, and submarine back toward the surface. Within the first day of the lift, more than a third of the way to the surface, with the load on the heavy lift equipment diminishing by 15 tonnes as each segment of lift pipe was removed from the string, a shudder went through the ship and the heavy lift equipment lurched violently. Something had gone wrong, seriously wrong. Examination of television images from the claw revealed that several of the tines gripping the hull of the submarine had failed and part of the sub, maybe more than half, had broken off and fallen back toward the abyss. (It was later decided that the cause of the failure was that the tines had been fabricated from maraging steel, which is very strong but brittle, rather than a more ductile alloy which would bend under stress but not break.)

After consultation with CIA headquarters, it was decided to continue the lift and recover whatever was left in the claw. (With some of the tines broken and the mechanism used to break the load free of the ocean floor left on the bottom, it would have been impossible to return and recover the lost part of the sub on this mission.) On August 6th, the claw and its precious payload reached the ship and entered the moon pool in its centre. Coincidentally, the Soviet tug departed the scene the same day. Now it was possible to assess what had been recovered, and the news was not good: two thirds of the sub had been lost, including the ballistic missile tubes and the code room. Only the front third was in the claw. Further, radiation five times greater than background was detected even outside the hull—those exploring it would have to proceed carefully.

An “exploitation team” composed of CIA specialists and volunteers from the ship's crew began to explore the wreckage, photographing and documenting every part recovered. They found the bodies of six Soviet sailors and assorted human remains which could not be identified; all went to the ship's morgue. Given that the bow portion of the submarine had been recovered, it is likely that one or more of its torpedoes equipped with nuclear warheads were recovered, but to this day the details of what was found in the wreck remain secret. By early September, the exploitation was complete and the bulk of the recovered hull, less what had been removed and sent for analysis, was dumped in the deep ocean 160 km south of Hawaii.

One somber task remained. On September 4, 1974, the remains of the six recovered crewmen and the unidentified human remains were buried at sea in accordance with Soviet Navy tradition. A video tape of this ceremony was made and, in 1992, a copy was presented to Russian President Boris Yeltsin by then CIA director Robert Gates.

The partial success encouraged some in the CIA to mount a follow-up mission to recover the rest of the sub, including the missiles and code room. After all, they knew precisely where it was, had a ship in hand, fully paid for, which had successfully lowered the claw to the bottom and returned to the surface with part of the sub, and they knew what had gone wrong with the claw and how to fix it. The effort was even given a name, Project Matador. But it was not to be.

Over the five years of the project there had been leaks to the press and reporters sniffing on the trail of the story but the CIA had been able to avert disclosure by contacting the reporters directly, explaining the importance of the mission and need for secrecy, and offering them an exclusive of full disclosure and permission to publish it before the project was officially declassified for the general public. This had kept a lid on the secret throughout the entire development process and the retrieval and analysis, but this all came to an end in March 1975 when Jack Anderson got wind of the story. There was no love lost between Anderson and what we now call the Deep State. Anderson believed the First Amendment was divinely inspired and absolute, while J. Edgar Hoover had called Anderson “lower than the regurgitated filth of vultures”. Further, this was a quintessential Jack Anderson story—based upon his sources, he presented Project Azorian as a US$ 350 million failure which had produced no useful intelligence information and was being kept secret only to cover up the squandering of taxpayers' money.

CIA Director William Colby offered Anderson the same deal other journalists had accepted, but was flatly turned down. Five minutes before Anderson went on the radio to break the story, Colby was still pleading with him to remain silent. On March 18, 1975, Anderson broke the story on his Mutual Radio Network show and, the next day, published additional details in his nationally syndicated newspaper column. Realising the cover had been blown, Colby called all of the reporters who had agreed to hold the story to give them the green light to publish. Seymour Hersh of the New York Times had his story ready to go, and it ran on the front page of the next day's paper, providing far more detail (albeit along with a few errors) than Anderson's disclosure. Hersh revealed that he had been aware of the project since 1973 but had agreed to withhold publication in the interest of national security.

The story led newspaper and broadcast news around the country and effectively drove a stake through any plans to mount a follow-up retrieval mission. On June 16, 1975, Secretary of State Henry Kissinger made a formal recommendation to president Gerald Ford to terminate the project and that was the end of it. The Soviets had communicated through a back channel that they had no intention of permitting a second retrieval attempt and they had maintained an ocean-going tug on site to monitor any activity since shortly after the story broke in the U.S.

The CIA's official reaction to all the publicity was what has come to be called the “Glomar Response”: “We can neither confirm nor can we deny.” And that is where things stand more than four decades after the retrieval attempt. Although many of those involved in the project have spoken informally about aspects of it, there has never been an official report on precisely what was recovered or what was learned from it. Some CIA veterans have said, off the record, that much more was learned from the recovered material than has been suggested in press reports, with a few arguing that the entire large portion of the sub was recovered and the story about losing much of it was a cover story. (But if this was the case, the whole plan to mount a second retrieval mission and the substantial expense of repairing and upgrading the claw for the attempt, which is well documented, would also have to have been a costly cover story.)

What is certain is that Project Azorian was one of the most daring intelligence exploits in history, carried out in total secrecy under the eyes of the Soviets, and kept secret from an inquiring press for five years by a cover story so persuasive other mining companies bought it hook, line, and sinker. We may never know all the details of the project, but from what we do know it is a real-world thriller which equals or exceeds those imagined by masters of the fictional genre.

September 2018 Permalink

Deary, Terry. The Cut-Throat Celts. London: Hippo, 1997. ISBN 0-590-13972-X.

July 2002 Permalink

Demick, Barbara. Nothing to Envy. New York: Spiegel & Grau, [2009] 2010. ISBN 978-0-385-52391-2.
The last decade or so I lived in California, I spent a good deal of my time being angry—so much so that I didn't really perceive the extent to which anger had become part of who I was and how I lived my life. It was only after I'd gotten out of California and the U.S. in 1991 and lived a couple of years in Switzerland that I discovered that the absence of driving on crumbling roads overcrowded with aggressive and incompetent drivers, a government bent on destroying productive enterprise, and a culture collapsing into vulgarity and decadence had changed who I was: in short, only after leaving Marin County, California, had I become that thing which its denizens delude themselves into believing they are—mellow.

What, you might be asking yourself, does this have to do with a book about the lives of ordinary people in North Korea? Well, after a couple of decades in Switzerland, it takes quite a bit of provocation to bring back the old hair-on-fire white flash, like passing through a U.S. airport or…reading this book. I do not mean that this book angered me; it is a superb work of reportage on a society so hermetically closed that obtaining even the slightest details on what is really going on there is near-impossible, as tourists and journalists are rarely permitted to travel outside North Korea's capital of Pyongyang, a Stalinist Potemkin village built to deceive them as to the situation in other cities and the countryside. What angered me is the horrible, pointless, and needless waste of the lives of tens of millions of people, generation after generation, at the hands of a tyranny so abject it seems to have read Orwell's 1984 not as a dystopian warning, but an instruction manual. The victims of this tragedy are not just the millions who have died in the famines, ended their lives in the sprawling complex of prisons and forced labour camps, or were executed for “crimes” such as trying to communicate with relatives outside the country; but the tens of millions forced to live in a society which seems to have been engineered to extinguish every single pleasure which makes human life worth living. Stunted due to lack of food, indoctrinated with the fantasy that the horror which is their lives is the best for which they can hope, and deprived of any contact with the human intellectual heritage which does not serve the interests of their rulers, they live in an environment which a medieval serf would view as a huge step down from their lot in life, all while the rulers at the top of the pyramid live in grand style and are treated as legitimate actors on the international stage by diplomatic crapweasels from countries that should be shamed by their behaviour.

In this book the author tackles the formidable task of penetrating the barrier of secrecy and lies which hides the reality of life in North Korea from the rest of the world by recounting the lives of six defectors all of whom originated in Chongjin, the third largest city in North Korea, off limits to almost all foreign visitors. The names of the witnesses to this horror have been changed to protect relatives still within the slave state, but their testimony is quoted at length and provides a chilling view of what faces the 24 million who have so far been unable to escape. Now, clearly, if you're relying exclusively on the testimony of those who have managed to escape an oppressive regime, you're going to get a different picture than if you'd interviewed those who remain—just as you'd get a different view of California and the U.S. from somebody who got out of there twenty years ago compared to a current resident—but the author takes pains to corroborate the accounts of defectors against one another and the sparse information available from international aid workers who have been infrequently allowed to visit Chongjin. The accounts of the culture shock escapees from North Korea experience not just in 21st century South Korea but even in rural China are heartrending: Kim Ji-eun, a medical doctor who escaped to China after seeing the children in her care succumb to starvation without anything she could do, describes her first memory of China as discovering a dog's bowl filled with white rice and bits of meat and realising that dogs in China ate better than doctors in North Korea.

As Lenin asked, “What is to be done?” Taking on board the information in this narrative may cause you to question many of what appear to be sound approaches to bringing an end to this horror. For, according to the accounts of the defectors, tyranny of the North Korean style actually works quite well: escapees are minuscule compared to the population which remains behind, many of whom actually appear to believe the lies of the regime that they are a superior race and have it better than the balance of humanity, even as they see members of their family starve to death or disappear into the gulag. For some years I have been thinking about “freedom flights”. This is where a bunch of liberty-loving philanthropists hire a fleet of cargo aircraft to scatter several million single-shot pistols, each with its own individual parachute and accompanied by a translation of Major von Dach's book, across the territory of tyrannical Hell-holes and “let the people rule”. After reading this book, I'm not sure that would suffice. So effectively has the population been brainwashed that it seems a substantial fraction believe the lies of the regime and accept their sorry lot as the normal state of human existence. Perhaps we'll also need to drop solar-powered or hand-cranked satellite radio receivers to provide a window into the outside world—along with the guns, of course, to take care of snitches who try to turn in those who choose to widen their perspective and the minions of the state who come to arrest them.

By almost any measure, North Korea is an extreme outlier. By comparison, Iran is almost a paradise. Even Zimbabwe, while Hell on earth for those unfortunate enough to live there, is relatively transparent to outsiders who document what is going on and much easier to escape. But studying the end point of trends which seem to be relatively benign when they get going can be enlightening, and this book provides a chilling view of what awaits at the final off-ramp of the road to serfdom.

September 2011 Permalink

DiLorenzo, Thomas J. The Real Lincoln. Roseville, CA: Prima Publishing, 2002. ISBN 0-7615-3641-8.

August 2002 Permalink

Drury, Bob and Tom Clavin. Halsey's Typhoon. New York: Grove Press, 2007. ISBN 978-0-8021-4337-2.
As Douglas MacArthur's forces struggled to expand the beachhead of their landing on the Philippine island of Mindoro on December 15, 1944, Admiral William “Bull” Halsey's Third Fleet was charged with providing round-the-clock air cover over Japanese airfields throughout the Philippines, both to protect against strikes being launched against MacArthur's troops and against the kamikaze attacks on his own fleet which had been so devastating in the battle for Leyte Gulf three weeks earlier. After supporting the initial landings and providing cover thereafter, Halsey's ships, especially the destroyers, were low on fuel, and the admiral requested and received permission to withdraw for a rendezvous with an oiler task group to refuel.

Unbeknownst to anybody in the chain of command, this decision set the Third Fleet on a direct intercept course with the most violent part of an emerging Pacific (not so much, in this case) typhoon which was appropriately named, in retrospect, Typhoon Cobra. Typhoons in the Pacific are as violent as Atlantic hurricanes, but due to the circumstances of the ocean and atmosphere where they form and grow, are much more compact, which means that in an age prior to weather satellites, there was little warning of the onset of a storm before one found oneself overrun by it.

Halsey's orders sent the Third Fleet directly into the bull's eye of the disaster: one ship measured sustained winds of 124 knots (143 miles per hour) and seas in excess of 90 feet. Some ships' logs recorded the barometric pressure as “U”—the barometer had gone off-scale low and the needle was above the “U” in “U. S. Navy”.

There are some conditions at sea which ships simply cannot withstand. This was especially the case for Farragut class destroyers, which had been retrofitted with radar and communication antennæ on their masts and a panoply of antisubmarine and gun directing equipment on deck, all of which made them top-heavy, vulnerable to heeling in high winds, and prone to capsize.

As the typhoon overtook the fleet, even the “heavies” approached their limits of endurance. On the aircraft carrier USS Monterey, Lt. (j.g.) Jerry Ford was saved from being washed off the deck to a certain death only by luck and his athletic ability. He survived, later to become President of the United States. On the destroyers, the situation was indescribably more dire. The watch on the bridge saw the inclinometer veer back and forth on each roll between 60 and 70 degrees, knowing that a roll beyond 71° might not be recoverable. They surfed up the giant waves and plunged down, with screws turning in mid-air as they crested the giant combers. Shipping water, many lost electrical power due to shorted-out panels, and most lost their radar and communications antennæ, rendering them deaf, dumb, and blind to the rest of the fleet and vulnerable to collisions.

The sea took its toll: in all, three destroyers were sunk, a dozen other ships were hors de combat pending repairs, and 146 aircraft were destroyed, all due to weather and sea conditions. A total of 793 U.S. sailors lost their lives, more than twice those killed in the Battle of Midway.

This book tells, based largely upon interviews with people who were there, the story of what happens when an invincible fleet encounters impossible seas. There are tales of heroism every few pages, which are especially poignant since so many of the heroes had not yet celebrated their twentieth birthdays, hailed from landlocked states, and had first seen the ocean only months before at the start of this, their first sea duty. After the disaster, the heroism continued, as the crew of the destroyer escort Tabberer, under its reservist commander Henry L. Plage, who disregarded his orders and, after his ship was dismasted and severely damaged, persisted in the search and rescue of survivors from the foundered ships, eventually saving 55 from the ocean. Plage expected to face a court martial, but instead was awarded the Legion of Merit by Halsey, whose orders he ignored.

This is an epic story of seamanship, heroism, endurance, and the nigh impossible decisions commanders in wartime have to make based upon the incomplete information they have at the time. You gain an appreciation for how the master of a ship has to balance doing things by the book and improvising in exigent circumstances. One finding of the Court of Inquiry convened to investigate the disaster was that the commanders of the destroyers which were lost may have given too much priority to following pre-existing orders to hold their stations as opposed to the overriding imperative to save the ship. Given how little experience these officers had at sea, this is not surprising. CEOs should always keep in mind this utmost priority: save the ship.

Here we have a thoroughly documented historical narrative which is every bit as much a page-turner as the latest ginned-up thriller. As it happens, one of my high school teachers was a survivor of this storm (on one of the ships which did not go down), and I remember to this day how harrowing it was when he spoke of destroyers “turning turtle”. If accounts like this make you lose sleep, this is not the book for you, but if you want to experience how ordinary people did extraordinary things in impossible circumstances, it's an inspiring narrative.

August 2009 Permalink

Edmonds, David and John Eidinow. Wittgenstein's Poker. London: Faber and Faber, 2001. ISBN 0-571-20909-2.
A U.S. edition of this book, ISBN 0-06-093664-9, was published in September 2002.

December 2002 Permalink

Einstein, Albert, Hanoch Gutfreund, and Jürgen Renn. The Road to Relativity. Princeton: Princeton University Press, 2015. ISBN 978-0-691-16253-9.
One hundred years ago, in 1915, Albert Einstein published the final version of his general theory of relativity, which extended his 1905 special theory to encompass accelerated motion and gravitation. It replaced the Newtonian concept of a “gravitational force” acting instantaneously at a distance through an unspecified mechanism with the most elegant of concepts: particles not under the influence of an external force move along spacetime geodesics, the generalisation of straight lines, but the presence of mass-energy curves spacetime, which causes those geodesics to depart from straight lines when observed at a large scale.

For example, in Newton's conception of gravity, the Earth orbits the Sun because the Sun exerts a gravitational force upon the Earth which pulls it inward and causes its motion to depart from a straight line. (The Earth also exerts a gravitational force upon the Sun, but because the Sun is so much more massive, this can be neglected to a first approximation.) In general relativity there is no gravitational force. The Earth is moving in a straight line in spacetime, but because the Sun curves spacetime in its vicinity this geodesic traces out a helix in spacetime which we perceive as the Earth's orbit.

Now, if this were a purely qualitative description, one could dismiss it as philosophical babble, but Einstein's theory provided a precise description of the gravitational field and the motion of objects within it and, when the field strength is strong or objects are moving very rapidly, makes different predictions than Newton's theory. In particular, Einstein's theory predicted that the perihelion of the orbit of Mercury would rotate around the Sun more rapidly than Newton's theory could account for, that light propagating near the limb of the Sun or other massive bodies would be bent through twice the angle Newton's theory predicted, and that light from the Sun or other massive stars would be red-shifted when observed from a distance. In due course all of these tests have been found to agree with the predictions of general relativity. The theory has since been put to many more precise tests and no discrepancy with experiment has been found. For a theory which is, once you get past the cumbersome mathematical notation in which it is expressed, simple and elegant, its implications are profound and still being explored a century later. Black holes, gravitational lensing, cosmology and the large-scale structure of the universe, gravitomagnetism, and gravitational radiation are all implicit in Einstein's equations, and exploring them are among the frontiers of science a century hence.
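For readers who want to see numbers attached to these classical tests, the standard weak-field results (textbook formulas, not quoted from the manuscript itself) for light grazing a mass M at impact parameter b, and for the perihelion advance per orbit of a planet with semi-major axis a and eccentricity e, are

\[ \delta\phi_{\text{light}} = \frac{4GM}{c^{2}b}, \qquad \Delta\phi_{\text{perihelion}} = \frac{6\pi GM}{a(1-e^{2})c^{2}}. \]

The first is twice the value a naïve Newtonian calculation gives and works out to about 1.75 arc seconds for light grazing the limb of the Sun; the second yields the famous 43 arc seconds per century of otherwise unexplained precession for Mercury.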

Unlike Einstein's original 1905 paper on special relativity, the 1915 paper, titled “Die Grundlage der allgemeinen Relativitätstheorie” (“The Foundation of General Relativity”) is famously difficult to comprehend and baffled many contemporary physicists when it was published. Almost half is a tutorial for physicists in Riemann's generalised multidimensional geometry and the tensor language in which it is expressed. The balance of the paper is written in this notation, which can be forbidding until one becomes comfortable with it.

That said, general relativity can be understood intuitively the same way Einstein began to think about it: through thought experiments. First, imagine a person in a stationary elevator in the Earth's gravitational field. If the elevator cable were cut, while the elevator was in free fall (and before the sudden stop), no experiment done within the elevator could distinguish between the state of free fall within Earth's gravity and being in deep space free of gravitational fields. (Conversely, no experiment done in a sufficiently small closed laboratory can distinguish it being in Earth's gravitational field from being in deep space accelerating under the influence of a rocket with the same acceleration as Earth's gravity.) (The “sufficiently small” qualifier is to eliminate the effects of tides, which we can neglect at this level.)

The second thought experiment is a bit more subtle. Imagine an observer at the centre of a stationary circular disc. If the observer uses rigid rods to measure the radius and circumference of the disc, he will find the circumference divided by the radius to be 2π, as expected from the Euclidean geometry of a plane. Now set the disc rotating and repeat the experiment. When the observer measures the radius, it will be as before, but at the circumference the measuring rod will be contracted due to its motion according to special relativity, and the circumference, measured by the rigid rod, will be seen to be larger. Now, when the circumference is divided by the radius, a ratio greater than 2π will be found, indicating that the space being measured is no longer Euclidean: it is curved. But the only difference between a stationary disc and one which is rotating is that the latter is in acceleration, and from the reasoning of the first thought experiment there is no difference between acceleration and gravity. Hence, gravity must bend spacetime and affect the paths of objects (geodesics) within it.
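To put the disc argument in symbols (a minimal sketch, not Einstein's own notation): a rod laid along the rim of a disc of radius r rotating with angular velocity ω moves along its own length at speed v = ωr and is contracted by the Lorentz factor, while radial rods, moving perpendicular to their length, are not. The co-rotating observer therefore measures

\[ \frac{C}{r} = \frac{2\pi}{\sqrt{1-\omega^{2}r^{2}/c^{2}}} > 2\pi, \]

which is exactly the departure from Euclidean geometry described above.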

Now, it's one thing to have these kinds of insights, and quite another to puzzle out the details and make all of the mathematics work, and this process occupied Einstein for the decade between 1905 and 1915, with many blind alleys. He eventually came to understand that it was necessary to entirely discard the notion of any fixed space and time, and express the equations of physics in a way which was completely independent of any co-ordinate system. Only this permitted the metric structure of spacetime to be completely determined by the mass and energy within it.

This book contains a facsimile reproduction of Einstein's original manuscript, now in the collection of the Hebrew University of Jerusalem. The manuscript is in Einstein's handwriting which, if you read German, you'll have no difficulty reading. Einstein made many edits to the manuscript before submitting it for publication, and you can see them all here. Some of the hand-drawn figures in the manuscript have been cut out by the publisher to be sent to an illustrator for preparation of figures for the journal publication. Parallel to the manuscript, the editors describe the content and the historical evolution of the concepts discussed therein. There is a 36 page introduction which describes the background of the theory and Einstein's quest to discover it and the history of the manuscript. An afterword provides an overview of general relativity after Einstein and brief biographies of principal figures involved in the development and elaboration of the theory. The book concludes with a complete English translation of Einstein's two papers given in the manuscript.

This is not the book to read if you're interested in learning general relativity; over the last century there have been great advances in mathematical notation and pedagogy, and a modern text is the best resource. But, in this centennial year, this book allows you to go back to the source and understand the theory as Einstein presented it, after struggling for so many years to comprehend it. The supplemental material explains the structure of the paper, the essentials of the theory, and how Einstein came to develop it.

October 2015 Permalink

Emison, John Avery. Lincoln über Alles. Gretna, LA: Pelican Publishing, 2009. ISBN 978-1-58980-692-4.
Recent books, such as Liberal Fascism (January 2008), have explored the roots and deep interconnections between the Progressive movement in the United States and the philosophy and policies of its leaders such as Theodore Roosevelt and Woodrow Wilson, and collectivist movements in twentieth century Europe, including Soviet communism, Italian fascism, and Nazism in Germany. The resurgence of collectivism in the United States, often now once again calling itself “progressive”, has made this examination not just a historical footnote but rather an important clue in understanding the intellectual foundations of the current governing philosophy in Washington.

A candid look at progressivism and its consequences for liberty and prosperity has led, among those willing to set aside accounts of history written by collectivists, whether they style themselves progressives or “liberals”, and look instead at contemporary sources and analyses by genuine classical liberals, to a dramatic reassessment of the place in history of Wilson and the two Roosevelts. While, in an academy and educational establishment still overwhelmingly dominated by collectivists, this is still a minority view, at least serious research into this dissenting view of history is available to anybody interested in searching it out.

Far more difficult to find is a critical examination of the U.S. president who was, according to this account, the first and most consequential of all American progressives, Abraham Lincoln. Some years ago, L. Neil Smith, in his essay “The American Lenin”, said that if you wanted to distinguish a libertarian from a conservative, just ask them about Abraham Lincoln. This observation has been amply demonstrated by the recent critics of progressivism, almost all conservatives of one stripe or another, who have either remained silent on the topic of Lincoln or jumped on the bandwagon and praised him.

This book is a frontal assault on the hagiography of Sainted Abe. Present day accounts of Lincoln's career and the Civil War contain so many omissions and gross misrepresentations of what actually happened that it takes a book of 300 pages like this one, based in large part on contemporary sources, to provide the context for a contrary argument. Topics many readers well-versed in the conventional wisdom view of American history may encounter for the first time here include:

  • No constitutional provision prohibited states from seceding, and the common law doctrine prohibiting legislative entrenchment (one legislature binding the freedom of a successor to act) granted sovereignty conventions the same authority to secede as to join the union in the first place.
  • None of the five living former presidents at the time Lincoln took office (only one a Southerner) supported military action against the South.
  • Lincoln's Emancipation Proclamation freed only slaves in states of the Confederacy; slaves in slave states which did not secede, including Delaware, Maryland, Kentucky, and Missouri, remained in bondage. In fact, in 1861, Lincoln had written to the governors of all the states urging them to ratify the Corwin Amendment, already passed by the House and Senate, which would have written protection for slavery and indentured servitude into the Constitution. Further, Lincoln supported the secession of West Virginia from Virginia, and its admittance to the Union as a slave state. Slavery was not abolished throughout the United States until the adoption of the Thirteenth Amendment in December 1865, after Lincoln's death.
  • Despite subsequent arguments that secession was illegal, Lincoln mounted no legal challenge to the declarations of secession prior to calling for troops and initiating hostilities. Congress voted no declaration of war authorising Lincoln to employ federal troops.
  • The prosecution of total war against noncombatants in the South by Sherman and others, with the approval of Grant and Lincoln, not only constituted war crimes by modern standards, but were prohibited by the Lieber Code governing the conduct of the Union armies, signed by President Lincoln in April 1863.
  • Like the progressives of the early 20th century who looked to Bismarck's Germany as the model, and present-day U.S. progressives who want to remodel their country along the lines of the European social democracies, the philosophical underpinnings of Lincoln's Republicans and a number of its political and military figures as well as the voters who put it over the top in the states of the “old northwest” were Made in Germany. The “Forty-Eighters”, supporters of the failed 1848 revolutions in Europe, emigrated in subsequent years to the U.S. and, as members of the European élite, established themselves as leaders in their new communities. They were supporters of a strong national government, progressive income taxation, direct election of Senators, nationalisation of railroads and other national infrastructure, an imperialistic foreign policy, and secularisation of the society—all part of the subsequent progressive agenda, and all achieved or almost so today. An estimation of the impact of Forty-Eighters on the 1860 election (at the time, in many states immigrants who were not yet citizens could vote if they simply declared their intention to become naturalised) shows that they provided Lincoln's margin of victory in Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, and Wisconsin (although some of these were close and may have gone the other way).

Many of these points will be fiercely disputed by Lincoln scholars and defenders; see the arguments here, follow up their source citations, and make up your own mind. What is not in dispute is that the Civil War, and the policies advocated by Lincoln and implemented by his administration and its Republican successors, fundamentally changed the relationship between the Federal government and the states. Where before the Federal government was the creation of the states, to which they voluntarily delegated limited and enumerated powers that they retained the right to reclaim by leaving the union, afterward Washington became not a federal government but a national government in the 19th century European sense, with the states increasingly becoming administrative districts charged with carrying out its policies and with no recourse when their original sovereignty was violated. A “national greatness” policy was aggressively pursued by the central government, including subsidies and land grants for building infrastructure, expansion into the Western territories (with repeatedly broken treaties and genocidal wars against their native populations), and high tariffs to protect industrial supporters in the North. It was Lincoln who first brought European-style governance to America, and in so doing became the first progressive president.

Now, anybody who says anything against Lincoln will immediately be accused of being a racist who wishes to perpetuate slavery. Chapter 2, a full 40 pages of this book, is devoted to race in America, before, during, and after the Civil War. Once again, you will learn that the situation is far more complicated than you believed it to be. There is plenty of blame to go around on all sides; after reviewing the four-page list of Jim Crow laws passed by Northern states between 1777 and 1868, it is hard to regard them as champions of racial tolerance on a crusade to liberate blacks in the South.

The greatest issue regarding the Civil War, discussed only rarely now, is why it happened at all. If the war was about slavery (as most people believe today), then why, among all the many countries and colonies around the world which abolished slavery in the nineteenth century, was it only in the United States that abolition required a war? If, however, the war is regarded not as a civil war (which it wasn't, since the southern states did not wish to conquer Washington and impose their will upon the union), nor as a “war between the states” (because it wasn't the states of the North fighting against the states of the South, but rather the federal government seeking to impose its will upon states which no longer wished to belong to the union), but rather as an imperial conquest, waged as a war of annihilation if necessary, by a central government over a recalcitrant territory which refused to cede its sovereignty, then the war makes perfect sense, and is entirely consistent with the subsequent wars waged by Republican administrations to assert sovereignty over Indian nations.

Powerful central government, elimination of state and limitation of individual autonomy, imposition of uniform policies at a national level, endowing the state with a monopoly on the use of force and the tools to impose its will, grandiose public works projects funded by taxation of the productive sector, and sanguinary conflicts embarked upon in the interest of moralistic purity or national glory: these are all hallmarks of progressives, and this book makes a persuasive case that Lincoln was the first of their kind to gain power in the United States. Should liberty blossom again there, and the consequences of progressivism be candidly reassessed, there will be two faces to come down from Mount Rushmore, not just one.

March 2010 Permalink

Engels, Friedrich. The Condition of the Working Class in England. Translated by Florence Wischnewetzky; edited with a foreword by Victor Kiernan. London: Penguin Books, [1845, 1886, 1892] 1987. ISBN 0-14-044486-6.
A Web edition of this title is available online.

January 2003 Permalink

Evans, M. Stanton. Blacklisted by History. New York: Three Rivers Press, 2007. ISBN 978-1-4000-8106-6.
In this book, the author, one of the lions of conservatism in the second half of the twentieth century, undertakes one of the most daunting tasks a historian can attempt: a dispassionate re-examination of one of the most reviled figures in modern American history, Senator Joseph McCarthy. So universal is the disdain for McCarthy by figures across the political spectrum, and so uniform is his presentation as an ogre in historical accounts, the media, and popular culture, that he has grown into a kind of legend used to scare people and intimidate those who shudder at being accused of “McCarthyism”. If you ask people about McCarthy, you'll often hear that he used the House Un-American Activities Committee to conduct witch hunts, smearing the reputations of innocent people with accusations of communism, that he destroyed the careers of people in Hollywood and caused the notorious blacklist of screen writers, and so on. None of this is so: McCarthy was in the Senate, and hence had nothing to do with activities of the House committee, which was entirely responsible for the investigation of Hollywood, in which McCarthy played no part whatsoever. The focus of his committee, the Permanent Subcommittee on Investigations of the Government Operations Committee of the U.S. Senate, was on security policy and enforcement within first the State Department and later the Signal Corps of the U.S. Army. McCarthy's hearings were not focussed on smoking out covert communists in the government, but rather on investigating why communists and other security risks who had already been identified by investigations by the FBI and their employers' own internal security apparatus remained on the payroll, in sensitive policy-making positions, for years after evidence of their dubious connections and activities was brought to the attention of their employers and in direct contravention of the published security policies of both the Truman and Eisenhower administrations.

Any book about McCarthy published in the present environment must first start out by cutting through a great deal of misinformation and propaganda which is simply false on its face, but which is accepted as conventional wisdom by a great many people. The author starts by telling the actual story of McCarthy, which is little known and pretty interesting. McCarthy was born on a Wisconsin farm in 1908 and dropped out of junior high school at the age of 14 to help his parents with the farm. At age 20, he entered high school and managed to complete the full four-year curriculum in nine months, earning his diploma. Between 1930 and 1935 he worked his way through college and law school, receiving his law degree and being admitted to the Wisconsin bar in 1935. In 1939 he ran for the elective post of circuit judge and defeated a well-known incumbent, becoming, at age 30, the youngest judge in the state of Wisconsin. In 1942, after the U.S. entered World War II following Pearl Harbor, McCarthy, although exempt from the draft due to his position as a sitting judge, resigned from the bench and enlisted in the Marine Corps, being commissioned as a second lieutenant (based upon his education) upon completion of boot camp. He served in the South Pacific as an intelligence officer with a dive bomber squadron, and flew a dozen missions as a tailgunner/photographer, earning the sobriquet “Tail-Gunner Joe”.

While still in the Marine Corps, McCarthy sought the Wisconsin Republican Senate nomination in 1944 and lost, but then in 1946 mounted a primary challenge to three-term incumbent senator Robert M. La Follette, Jr., scion of Wisconsin's first family of Republican politics, narrowly defeating him in the primary, and then won the general election in a landslide, with more than 61% of the vote. Arriving in Washington, McCarthy was perceived to be a rather undistinguished moderate Republican back-bencher, and garnered little attention from the press.

All of this changed on February 9th, 1950, when he gave a speech in Wheeling, West Virginia in which he accused the State Department of being infested with communists, and claimed to have a list in his hand of known communists who continued to work at State after their identities had been made known to the Secretary of State. Just what McCarthy actually said in Wheeling remains a matter of controversy to this day, and is covered in gruelling detail in this book. This speech, and encore performances a few days later in Salt Lake City and Reno, catapulted McCarthy onto the public stage, with intense scrutiny in the press and an uproar in Congress, leading to duelling committee investigations: those exploring the charges he made, and those looking into McCarthy himself, precisely what he said where and when, and how he obtained his information on security risks within the government. Oddly, from the outset, the focus within the Senate and executive branch seemed to be more on the latter than the former, with one inquiry digging into McCarthy's checkbook and his income tax returns and those of members of his family dating back to 1935—more than a decade before he was elected to the Senate.

The content of the hearings chaired by McCarthy is also often misreported and misunderstood. McCarthy was not primarily interested in uncovering Reds and their sympathisers within the government: that had already been done by investigations by the FBI and agency security organisations and duly reported to the executive departments involved. The focus of McCarthy's investigation was why, once these risks were identified, often with extensive documentation covering a period of many years, nothing was done, with those identified as security risks remaining on the job or, in some cases, allowed to resign without any note in their employment file, often to immediately find another post in a different government agency or one of the international institutions which were burgeoning in the postwar years. Such an inquiry was a fundamental exercise of the power of congressional oversight over executive branch agencies, but McCarthy (and other committees looking into such matters) ran into an impenetrable stonewall of assertions of executive privilege by both the Truman and Eisenhower administrations. In 1954, the Washington Post editorialised, “The President's authority under the Constitution to withhold from Congress confidences, presidential information, the disclosure of which would be incompatible with the public interest, is altogether beyond question”. The situational ethics of the legacy press is well illustrated by comparing this Post editorial to those two decades later when Nixon asserted the same privilege against a congressional investigation.

Indeed, the entire McCarthy episode reveals how well established, already at the mid-century point, the ruling class government/media/academia axis was. Faced with an assault largely directed at “their kind” (East Coast, Ivy League, old money, creatures of the capital) by an uncouth self-made upstart from the windswept plains, they closed ranks, launched serial investigations and media campaigns, covered up, destroyed evidence, stonewalled, and otherwise aimed to obstruct and finally destroy McCarthy. This came to fruition when McCarthy was condemned by a Senate resolution on December 2nd, 1954. (Oddly, the usual word “censure” was not used in the resolution.) Although McCarthy remained in the Senate until his death at age 48 in 1957, he was shunned in the Senate and largely ignored by the press.

The perspective of half a century later allows a retrospective on the rise and fall of McCarthy which wasn't possible in earlier accounts. Many documents relevant to McCarthy's charges, including the VENONA decrypts of Soviet cable traffic, FBI security files, and agency loyalty board investigations have been declassified in recent years (albeit, in some cases, with lengthy “redactions”—blacked out passages), and the author makes extensive use of these primary sources in the present work. In essence, what they demonstrate is that McCarthy was right: that the documents he sought in vain, blocked by claims of executive privilege, gag orders, cover-ups, and destruction of evidence were, in fact, persuasive evidence that the individuals he identified were genuine security risks who, under existing policy, should not have been employed in the sensitive positions they held. Because the entire “McCarthy era”, from his initial speech to condemnation and downfall, was less than five years in length, and involved numerous investigations, counter-investigations, and re-investigations of many of the same individuals, regarding which abundant source documents have become available, the detailed accounts in this massive book (672 pages in the trade paperback edition) can become tedious on occasion. Still, if you want to understand what really happened at this crucial episode of the early Cold War, and the background behind the defining moment of the era: the conquest of China by Mao's communists, this is an essential source.

In the Kindle edition, the footnotes, which appear at the bottom of the page in the print edition, are linked to reference numbers in the text with a numbering scheme distinct from that used for source references. Each note contains a link to return to the text at the location of the note. Source citations appear at the end of the book and are not linked in the main text. The Kindle edition includes no index.

November 2010 Permalink

Eyles, Don. Sunburst and Luminary. Boston: Fort Point Press, 2018. ISBN 978-0-9863859-3-3.
In 1966, the author graduated from Boston University with a bachelor's degree in mathematics. He had no immediate job prospects or career plans. He thought he might be interested in computer programming due to a love of solving puzzles, but he had never programmed a computer. When asked, in one of numerous job interviews, how he would go about writing a program to alphabetise a list of names, he admitted he had no idea. One day, walking home from yet another interview, he passed an unimpressive brick building with a sign identifying it as the “MIT Instrumentation Laboratory”. He'd heard a little about the place and, on a lark, walked in and asked if they were hiring. The receptionist handed him a long application form, which he filled out, and was then immediately sent to interview with a personnel officer. Eyles was amazed when the personnel man seemed bent on persuading him to come to work at the Lab. After reference checking, he was offered a choice of two jobs: one in the “analysis group” (whatever that was), and another on the team developing computer software for landing the Apollo Lunar Module (LM) on the Moon. That sounded interesting, and the job had another benefit attractive to a 21 year old just graduating from university: it came with deferment from the military draft, which was going into high gear as U.S. involvement in Vietnam deepened.

Near the start of the Apollo project, MIT's Instrumentation Laboratory, led by the legendary “Doc” Charles Stark Draper, won a sole source contract to design and program the guidance system for the Apollo spacecraft, which came to be known as the “Apollo Primary Guidance, Navigation, and Control System” (PGNCS, pronounced “pings”). Draper and his laboratory had pioneered inertial guidance systems for aircraft, guided missiles, and submarines, and had in-depth expertise in all aspects of the challenging problem of enabling the Apollo spacecraft to navigate from the Earth to the Moon, land on the Moon, and return to the Earth without any assistance from ground-based assets. In a normal mission, it was expected that ground-based tracking and computers would assist those on board the spacecraft, but in the interest of reliability and redundancy the on-board system was required to be able to accomplish the mission with completely autonomous navigation.

The Instrumentation Laboratory developed an integrated system composed of an inertial measurement unit consisting of gyroscopes and accelerometers that provided a stable reference from which the spacecraft's orientation and velocity could be determined, an optical telescope which allowed aligning the inertial platform by taking sightings on fixed stars, and an Apollo Guidance Computer (AGC), a general purpose digital computer which interfaced to the guidance system, thrusters and engines on the spacecraft, the astronauts' flight controls, and mission control, and was able to perform the complex calculations for en route maneuvers and the unforgiving lunar landing process in real time.

Every Apollo lunar landing mission carried two AGCs: one in the Command Module and another in the Lunar Module. The computer hardware, basic operating system, and navigation support software were identical, but the mission software was customised due to the different hardware and flight profiles of the Command and Lunar Modules. (The commonality of the two computers proved essential in getting the crew of Apollo 13 safely back to Earth after an explosion in the Service Module cut power to the Command Module and disabled its computer. The Lunar Module's AGC was able to perform the critical navigation and guidance operations to put the spacecraft back on course for an Earth landing.)

By the time Don Eyles was hired in 1966, the hardware design of the AGC was largely complete (although a revision, called Block II, was underway which would increase memory capacity and add some instructions which had been found desirable during the initial software development process), and the low-level operating system and support libraries (implementing such functionality as fixed point arithmetic and vector and matrix computations) and a substantial part of the software for the Command Module had been written. But the software for actually landing on the Moon, which would run in the Lunar Module's AGC, was largely just a concept in the minds of its designers. Turning this into hard code would be the job of Don Eyles, who had never written a line of code in his life, and his colleagues. They seemed undaunted by the challenge: after all, nobody knew how to land on the Moon, so whoever attempted the task would have to make it up as they went along, and they had access, in the Instrumentation Laboratory, to the world's most experienced team in the area of inertial guidance.

Today's programmers may be amazed it was possible to get anything at all done on a machine with the capabilities of the Apollo Guidance Computer, much less fly to the Moon and land there. The AGC had a total of 36,864 15-bit words of read-only core rope memory, in which every bit was hand-woven to the specifications of the programmers. As read-only memory, the contents were completely fixed: if a change was required, the memory module in question (which was “potted” in a plastic compound) had to be discarded and a new one woven from scratch. There was no way to make “software patches”. Read-write storage was limited to 2048 15-bit words of magnetic core memory. The read-write memory was non-volatile: its contents were preserved across power loss and restoration. (Each memory word was actually 16 bits in length, but one bit was used for parity checking to detect errors and not accessible to the programmer.) Memory cycle time was 11.72 microseconds. There was no external bulk storage of any kind (disc, tape, etc.): everything had to be done with the read-only and read-write memory built into the computer.

The AGC software was an example of “real-time programming”, a discipline with which few contemporary programmers are acquainted. As opposed to an “app” which interacts with a user and whose only constraint on how long it takes to respond to requests is the user's patience, a real-time program has to meet inflexible constraints in the real world set by the laws of physics, with failure often resulting in disaster just as surely as hardware malfunctions. For example, when the Lunar Module is descending toward the lunar surface, burning its descent engine to brake toward a smooth touchdown, the LM is perched atop the thrust vector of the engine just like a pencil balanced on the tip of your finger: it is inherently unstable, and only constant corrections will keep it from tumbling over and crashing into the surface, which would be bad. To prevent this, the Lunar Module's AGC runs a piece of software called the digital autopilot (DAP) which, every tenth of a second, issues commands to steer the descent engine's nozzle to keep the Lunar Module pointed flamy side down and adjusts the thrust to maintain the desired descent velocity (the thrust must be constantly adjusted because as propellant is burned, the mass of the LM decreases, and less thrust is needed to maintain the same rate of descent). The AGC/DAP absolutely must compute these steering and throttle commands and send them to the engine every tenth of a second. If it doesn't, the Lunar Module will crash. That's what real-time computing is all about: the computer has to deliver those results in real time, as the clock ticks, and if it doesn't (for example, it decides to give up and flash a Blue Screen of Death instead), then the consequences are not an irritated or enraged user, but actual death in the real world. Similarly, every two seconds the computer must read the spacecraft's position from the inertial measurement unit. If it fails to do so, it will hopelessly lose track of which way it's pointed and how fast it is going. Real-time programmers live under these demanding constraints and, especially given the limitations of a computer such as the AGC, must deploy all of their cleverness to meet them without fail, whatever happens, including transient power failures, flaky readings from instruments, user errors, and completely unanticipated “unknown unknowns”.
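By way of illustration, here is a minimal sketch in Python of the kind of fixed-rate loop such a system must sustain, using the two periods mentioned above (a tenth of a second for the autopilot, two seconds for the platform read). It is purely illustrative: the AGC was programmed in assembly language and interpretive code, and every name below is invented.

    import time

    DAP_PERIOD = 0.1    # seconds between autopilot steering/throttle commands
    IMU_PERIOD = 2.0    # seconds between reads of the inertial platform

    def dap_cycle():
        pass            # stand-in: compute and issue engine gimbal and throttle commands

    def read_imu():
        pass            # stand-in: sample the inertial measurement unit

    def control_loop(duration=10.0):
        start = time.monotonic()
        next_dap, next_imu = start + DAP_PERIOD, start + IMU_PERIOD
        while time.monotonic() - start < duration:
            now = time.monotonic()
            if now >= next_dap:
                dap_cycle()
                next_dap += DAP_PERIOD
                if time.monotonic() > next_dap:
                    # In a hard real-time system a missed deadline is a failure,
                    # not an inconvenience.
                    raise RuntimeError("DAP deadline missed")
            if now >= next_imu:
                read_imu()
                next_imu += IMU_PERIOD
            # Sleep only until the nearest upcoming deadline.
            time.sleep(max(0.0, min(next_dap, next_imu) - time.monotonic()))

    control_loop(duration=1.0)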

The software which ran in the Lunar Module AGCs for Apollo lunar landing missions was called LUMINARY, and in its final form (version 210) used on Apollo 15, 16, and 17, consisted of around 36,000 lines of code (a mix of assembly language and interpretive code which implemented high-level operations), of which Don Eyles wrote in excess of 2,200 lines, responsible for the lunar landing from the start of braking from lunar orbit through touchdown on the Moon. This was by far the most dynamic phase of an Apollo mission, and the most demanding on the limited resources of the AGC, which was pushed to around 90% of its capacity during the final landing phase where the astronauts were selecting the landing spot and guiding the Lunar Module toward a touchdown. The margin was razor-thin, and that's assuming everything went as planned. But this was not always the case.

It was when the unexpected happened that the genius of the AGC software and its ability to make the most of the severely limited resources at its disposal became apparent. As Apollo 11 approached the lunar surface, a series of five program alarms, codes 1201 and 1202, interrupted the display of altitude and vertical velocity being monitored by Buzz Aldrin and read off to guide Neil Armstrong in flying to the landing spot. These codes both indicated out-of-memory conditions in the AGC's scarce read-write memory. The 1201 alarm was issued when all five of the 44-word vector accumulator (VAC) areas were already in use and another program requested one, and 1202 signalled exhaustion of the eight 12-word core sets required by each running job. The computer had a single processor and could execute only one task at a time, but its operating system allowed lower priority tasks to be interrupted in order to service higher priority ones, such as the time-critical autopilot function and reading the inertial platform every two seconds. Each suspended lower-priority job used up a core set and, if it employed the interpretive mathematics library, a VAC, so exhaustion of these resources usually meant the computer was trying to do too many things at once. Task priorities were assigned so the most critical functions would be completed on time, but computer overload signalled something seriously wrong—a condition in which it was impossible to guarantee all essential work was getting done.
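The resource pools involved are small enough to model in a few lines. The toy Python sketch below takes only the pool sizes (eight core sets, five VAC areas) and alarm codes from the description above; everything else is invented for illustration, not a rendering of the actual AGC executive.

    class ProgramAlarm(Exception):
        def __init__(self, code):
            super().__init__(f"PROGRAM ALARM {code:04d}")
            self.code = code

    class ExecutiveResources:
        """Toy model of the AGC's fixed job-resource pools."""
        def __init__(self):
            self.free_core_sets = 8    # 12-word core sets, one per active job
            self.free_vac_areas = 5    # 44-word vector accumulator areas

        def start_job(self, needs_vac=False):
            if needs_vac and self.free_vac_areas == 0:
                raise ProgramAlarm(1201)    # no VAC areas left
            if self.free_core_sets == 0:
                raise ProgramAlarm(1202)    # no core sets left
            self.free_core_sets -= 1
            if needs_vac:
                self.free_vac_areas -= 1

        def end_job(self, had_vac=False):
            self.free_core_sets += 1
            if had_vac:
                self.free_vac_areas += 1

    # Piling on one more job than there are core sets reproduces the 1202:
    agc = ExecutiveResources()
    try:
        for _ in range(9):
            agc.start_job()
    except ProgramAlarm as alarm:
        print(alarm)    # PROGRAM ALARM 1202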

In this case, the computer would throw up its hands, issue a program alarm, and restart. But this couldn't be a lengthy reboot like customers of personal computers with millions of times the AGC's capacity tolerate half a century later. The critical tasks in the AGC's software incorporated restart protection, in which they would frequently checkpoint their current state, permitting them to resume almost instantaneously after a restart. Programmers estimated around 4% of the AGC's program memory was devoted to restart protection, and some questioned its worth. On Apollo 11, it would save the landing mission.
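The checkpointing idea is simple to illustrate, though the Python sketch below, with invented names, vastly simplifies what the AGC actually did: a protected task records its progress in memory that survives a restart, so when it is re-dispatched it resumes where it left off.

    class RestartProtectedTask:
        """Toy illustration of restart protection via frequent checkpoints."""
        def __init__(self, store):
            self.store = store                  # stands in for the AGC's erasable memory,
                                                # whose contents survive a software restart
        def run(self, total_steps=1000):
            step = self.store.get("step", 0)    # resume from the last checkpoint
            while step < total_steps:
                self.do_work(step)              # one small increment of guidance work
                step += 1
                self.store["step"] = step       # checkpoint progress frequently

        def do_work(self, step):
            pass

    erasable = {}                               # persists across simulated restarts
    RestartProtectedTask(erasable).run(total_steps=500)   # a program alarm restarts the computer here
    RestartProtectedTask(erasable).run(total_steps=1000)  # the re-dispatched task resumes at step 500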

Shortly after the Lunar Module's landing radar locked onto the lunar surface, Aldrin keyed in the code to monitor its readings and immediately received a 1202 alarm: no core sets to run a task; the AGC restarted. On the communications link Armstrong called out “It's a 1202.” and Aldrin confirmed “1202.”. This was followed by fifteen seconds of silence on the “air to ground” loop, after which Armstrong broke in with “Give us a reading on the 1202 Program alarm.” At this point, neither the astronauts nor the support team in Houston had any idea what a 1202 alarm was or what it might mean for the mission. But the nefarious simulation supervisors had cranked in such “impossible” alarms in earlier training sessions, and controllers had developed a rule that if an alarm was infrequent and the Lunar Module appeared to be flying normally, it was not a reason to abort the descent.

At the Instrumentation Laboratory in Cambridge, Massachusetts, Don Eyles and his colleagues knew precisely what a 1202 was and found it deeply disturbing. The AGC software had been carefully designed to maintain a 10% safety margin under the worst case conditions of a lunar landing, and 1202 alarms had never occurred in any of their thousands of simulator runs using the same AGC hardware, software, and sensors as Apollo 11's Lunar Module. Don Eyles' analysis, in real time, just after a second 1202 alarm occurred thirty seconds later, was:

Again our computations have been flushed and the LM is still flying. In Cambridge someone says, “Something is stealing time.” … Some dreadful thing is active in our computer and we do not know what it is or what it will do next. Unlike Garman [AGC support engineer for Mission Control] in Houston I know too much. If it were in my hands, I would call an abort.

As the Lunar Module passed 3000 feet, another alarm, this time a 1201—VAC areas exhausted—flashed. This was another indication of overload, but of a different kind. Mission control immediately called up “We're go. Same type. We're go.” Well, it wasn't the same type, but they decided to press on. Descending through 2000 feet, the DSKY (computer display and keyboard) went blank and stayed blank for ten agonising seconds. Seventeen seconds later came another 1202 alarm, and a blank display for two seconds—Armstrong's heart rate reached 150. A total of five program alarms and resets had occurred in the final minutes of landing. But why? And could the computer be trusted to fly the return from the Moon's surface to rendezvous with the Command Module?

While the Lunar Module was still on the lunar surface Instrumentation Laboratory engineer George Silver figured out what happened. During the landing, the Lunar Module's rendezvous radar (used only during return to the Command Module) was powered on and set to a position where its reference timing signal came from an internal clock rather than the AGC's master timing reference. If these clocks were in a worst-case, out-of-phase condition, the rendezvous radar would flood the AGC with what we used to call “nonsense interrupts” back in the day, at a rate of 800 per second, each consuming one 11.72 microsecond memory cycle. This imposed an additional load of more than 13% on the AGC, which pushed it over the edge and caused tasks deemed non-critical (such as updating the DSKY) not to be completed on time, resulting in the program alarms and restarts. The fix was simple: don't enable the rendezvous radar until you need it, and when you do, put the switch in the position that synchronises it with the AGC's clock. But the AGC had proved its excellence as a real-time system: in the face of unexpected and unknown external perturbations it had completed the mission flawlessly, while alerting its developers to a problem which required their attention.

The creativity of the AGC software developers and the merit of computer systems sufficiently simple that the small number of people who designed them completely understood every aspect of their operation were demonstrated on Apollo 14. As the Lunar Module was checked out prior to the landing, the astronauts in the spacecraft and Mission Control saw the abort signal come on, which was supposed to indicate the big Abort button on the control panel had been pushed. This button, if pressed during descent to the lunar surface, immediately aborted the landing attempt and initiated a return to lunar orbit. This was a “one and done” operation: no Microsoft-style “Do you really mean it?” tea ceremony before ending the mission. Tapping the switch made the signal come and go, and it was concluded the most likely cause was a piece of metal contamination floating around inside the switch and occasionally shorting the contacts. The abort signal caused no problems during lunar orbit, but if it should happen during descent, perhaps jostled by vibration from the descent engine, it would be disastrous: wrecking a mission costing hundreds of millions of dollars and, coming on the heels of Apollo 13's mission failure and narrow escape from disaster, possibly bringing an end to the Apollo lunar landing programme.

The Lunar Module AGC team, with Don Eyles as the lead, was faced with an immediate challenge: was there a way to patch the software to ignore the abort switch, protecting the landing, while still allowing an abort to be commanded, if necessary, from the computer keyboard (DSKY)? The answer to this was obvious and immediately apparent: no. The landing software, like all AGC programs, ran from read-only rope memory which had been woven on the ground months before the mission and could not be changed in flight. But perhaps there was another way. Eyles and his colleagues dug into the program listing, traced the path through the logic, and cobbled together a procedure, then tested it in the simulator at the Instrumentation Laboratory. While the AGC's programming was fixed, the AGC operating system provided low-level commands which allowed the crew to examine and change bits in locations in the read-write memory. Eyles discovered that by setting the bit which indicated that an abort was already in progress, the abort switch would be ignored at the critical moments during the descent. As with all software hacks, this had other consequences requiring their own work-arounds, but by the time Apollo 14's Lunar Module emerged from behind the Moon on course for its landing, a complete procedure had been developed which was radioed up from Houston and worked perfectly, resulting in a flawless landing.
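In outline, the trick was to convince the landing program that what it would do upon seeing the abort signal had already been done. The Python sketch below illustrates that idea only; the real procedure was a sequence of DSKY entries setting specific bits in erasable memory, and the flag and routine names here are invented.

    class DescentSoftware:
        """Toy model of the logic the Apollo 14 workaround exploited."""
        def __init__(self):
            self.abort_in_progress = False    # flag normally set once an abort has begun
            self.abort_switch = False         # hardware signal (flaky on Apollo 14)

        def monitor_abort_switch(self):
            # The switch is only acted upon if an abort is not already under way,
            # so pre-setting the flag renders the flaky switch harmless.
            if self.abort_switch and not self.abort_in_progress:
                self.start_abort()

        def start_abort(self):
            self.abort_in_progress = True
            print("ABORT: return to lunar orbit")

    lm = DescentSoftware()
    lm.abort_in_progress = True      # keyed in from the DSKY before powered descent
    lm.abort_switch = True           # contamination shorts the switch during the landing...
    lm.monitor_abort_switch()        # ...and nothing happens: the descent continues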

These and many other stories of the development and flight experience of the AGC lunar landing software are related here by the person who wrote most of it and supported every lunar landing mission as it happened. Where technical detail is required to understand what is happening, no punches are pulled, even to the level of bit-twiddling and hideously clever programming tricks such as using an overflow condition to skip over an EXTEND instruction, converting the following instruction from double precision to single precision, all in order to save around forty words of precious non-bank-switched memory. In addition, this is a personal story, set in the context of the turbulent 1960s and early ’70s, of the author and other young people accomplishing things no humans had ever before attempted.

It was a time when everybody was making it up as they went along, learning from experience, and improvising on the fly; a time when a person who had never written a line of computer code would write, as his first program, the code that would land men on the Moon, and when the creativity and hard work of individuals made all the difference. Already, by the end of the Apollo project, the curtain was ringing down on this era. Even though a number of improvements had been developed for the LM AGC software which improved precision landing capability, reduced the workload on the astronauts, and increased robustness, none of these were incorporated in the software for the final three Apollo missions, LUMINARY 210, which was deemed “good enough” and the benefit of the changes not worth the risk and effort to test and incorporate them. Programmers seeking this kind of adventure today will not find it at NASA or its contractors, but instead in the innovative “New Space” and smallsat industries.

November 2019 Permalink

Faverjon, Philippe. Les mensonges de la Seconde Guerre mondiale. Paris: Perrin, 2004. ISBN 2-262-01949-5.
“In wartime,” said Winston Churchill, “truth is so precious that she should always be attended by a bodyguard of lies.” This book examines lies, big and small, variously motivated, made by the principal combatants in World War II, from the fabricated attack on a German radio station used as a pretext to launch the invasion of Poland which ignited the conflict, to conspiracy theories about the Yalta conference which sketched the map of postwar Europe as the war drew to a close. The nature of the lies discussed in the various chapters differs greatly—some are propaganda addressed to other countries, others intended to deceive domestic populations; some are strategic disinformation, while still others are delusions readily accepted by audiences who preferred them to the facts. Although most chapters end with a paragraph which sets the stage for the next, each is essentially a stand-alone essay which can be read on its own, and the book can be browsed in any order. The author is either (take your pick) scrupulous in his attention to historical accuracy or (if you prefer) almost entirely in agreement with my own viewpoint on these matters. There is no “big message”, philosophical or otherwise, here, nor any partisan agenda—this is simply a catalogue of deception in wartime based on well-documented historical examples which, translated into the context of current events, can aid in critical analysis of conventional wisdom and mass stampede media coverage of present-day conflicts.

July 2005 Permalink

Fergusson, Adam. When Money Dies. New York: PublicAffairs, [1975] 2010. ISBN 978-1-58648-994-6.

This classic work, originally published in 1975, is the definitive history of the great inflation in Weimar Germany, culminating in the archetypal paroxysm of hyperinflation in the Fall of 1923, when Reichsbank printing presses were cranking out 100 trillion (10¹²) mark banknotes as fast as paper could be fed to them, and government expenditures were 6 quintillion (10¹⁸) marks while, in perhaps the greatest achievement in deficit spending of all time, revenues in all forms accounted for only 6 quadrillion (10¹⁵) marks. The book has long been out of print and much in demand by students of monetary madness, driving the price of used copies into the hundreds of dollars (although, to date, not trillions and quadrillions—patience). Fortunately for readers interested in the content and not collectibility, the book has been re-issued in a new paperback and electronic edition, just as inflation has come back onto the radar in the over-leveraged economies of the developed world. The main text is unchanged, and continues to use mid-1970s British nomenclature for large numbers (“milliard” for 10⁹, “billion” for 10¹², and so on) and pre-decimalisation pounds, shillings, and pence for Sterling values. A new note to this edition explains how to convert the 1975 values used in the text to their approximate present-day equivalents.

The Weimar hyperinflation is an oft-cited turning point in the twentieth century, but like many events of that century, much of the popular perception and portrayal of it in the legacy media is incorrect. This work is an in-depth antidote to such nonsense, concentrating almost entirely upon the inflation itself, and discussing other historical events and personalities only when relevant to the main topic. To the extent people are aware of the German hyperinflation at all, they'll usually describe it as a deliberate and cynical ploy by the Weimar Republic to escape the reparations for World War I exacted under the Treaty of Versailles by inflating away the debt owed to the Allies by debasing the German mark. This led to a cataclysmic episode of hyperinflation where people had to take a wheelbarrow of banknotes to the bakery to buy a loaf of bread and burning money would heat a house better than the firewood or coal it would buy. The great inflation and the social disruption it engendered led directly to the rise of Hitler.

What's wrong with this picture? Well, just about everything…. Inflation of the German mark actually began with the outbreak of World War I in 1914 when the German Imperial government, expecting a short war, decided to finance the war effort by deficit spending and printing money rather than raising taxes. As the war dragged on, this policy continued and was reinforced, since it was decided that adding heavy taxes on top of the horrific human cost and economic privations of the war would be disastrous to morale. As a result, over the war years of 1914–1918 the value of the mark against other currencies fell by a factor of two and was halved again in the first year of peace, 1919. While Germany was committed to making heavy reparation payments, these payments were denominated in gold, not marks, so inflating the mark did nothing to reduce the reparation obligations to the Allies, and thus provided no means of escaping them. What inflation and the resulting cheap mark did, however, was to make German exports cheap on the world market. Since export earnings were the only way Germany could fund reparations, promoting exports through inflation was both a way to accomplish this and to promote social peace through full employment, which was in fact achieved through most of the early period of inflation. By early 1920 (well before the hyperinflationary phase is considered to have kicked in), the mark had fallen to one fortieth of its prewar value against the British pound and U.S. dollar, but the cost of living in Germany had risen only by a factor of nine. This meant that German industrialists and their workers were receiving a flood of marks for the products they exported which could be spent advantageously on the domestic market. Since most of Germany's exports at the time relied little on imported raw materials and products, this put Germany at a substantial advantage in the world market, which was much remarked upon by British and French industrialists at the time, who were prone to ask, “Who won the war, anyway?”.
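The arithmetic behind that complaint is worth making explicit. A back-of-the-envelope calculation using only the two figures just cited (illustrative, not a computation from the book):

    # Early 1920, per the figures above:
    exchange_fall = 40      # the mark was worth 1/40 of its prewar value abroad
    price_rise = 9          # German cost of living was 9 times its prewar level

    # A German good whose domestic price had risen 9-fold cost a buyer paying in
    # pounds or dollars only 9/40 of its prewar price:
    relative_price_abroad = price_rise / exchange_fall
    print(f"about {relative_price_abroad:.0%} of the prewar price")    # roughly 22%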

While initially beneficial to large industry and its organised labour force which was in a position to negotiate wages that kept up with the cost of living, and a boon to those with mortgaged property, who saw their principal and payments shrink in real terms as the currency in which they were denominated declined in value, the inflation was disastrous to pensioners and others on fixed incomes denominated in marks, as their standard of living inexorably eroded.

The response of the nominally independent Reichsbank under its President since 1908, Dr. Rudolf Havenstein, and the German government to these events was almost surreally clueless. As the originally mild inflation accelerated into dire inflation and then headed vertically on the exponential curve into hyperinflation they universally diagnosed the problem as “depreciation of the mark on the foreign exchange market” occurring for some inexplicable reason, which resulted in a “shortage of currency in the domestic market”, which could only be ameliorated by the central bank's revving up its printing presses to an ever-faster pace and issuing notes of larger and larger denomination. The concept that this tsunami of paper money might be the cause of the “depreciation of the mark” both at home and abroad, never seemed to enter the minds of the masters of the printing presses.

It's not like this hadn't happened before. All of the sequelæ of monetary inflation have been well documented over forty centuries of human history, from coin clipping and debasement in antiquity through the demise of every single unbacked paper currency ever created. Lord D'Abernon, the British ambassador in Berlin and British consular staff in cities across Germany precisely diagnosed the cause of the inflation and reported upon it in detail in their dispatches to the Foreign Office, but their attempts to explain these fundamentals to German officials were in vain. The Germans did not even need to look back in history at episodes such as the assignat hyperinflation in revolutionary France: just across the border in Austria, a near-identical hyperinflation had erupted just a few years earlier, and had eventually been stabilised in a manner similar to that eventually employed in Germany.

The final stages of inflation induce a state resembling delirium, where people seek to exchange paper money for anything at all which might keep its value even momentarily, farmers with abundant harvests withhold them from the market rather than exchange them for worthless paper, foreigners bearing sound currency descend upon the country and buy up everything for sale at absurdly low prices, employers and towns, unable to obtain currency to pay their workers, print their own scrip, further accelerating the inflation, and the professional and middle classes are reduced to penury or liquidated entirely, while the wealthy, industrialists, and unionised workers do reasonably well by comparison.

One of the principal problems in coping with inflation, whether as a policy maker or a citizen or business owner attempting to survive it, is inherent in its exponential growth. At any moment along the path, the situation is perceived as a “crisis” and the current circumstances “unsustainable”. But an exponential curve is self-similar: when you're living through one, however absurd the present situation may appear to be based on recent experience, it can continue to get exponentially more bizarre in the future by the inexorable continuation of the dynamic driving the curve. Since human beings have evolved to cope with mostly linear processes, we are ill-adapted to deal with exponential growth in anything. For example, we run out of adjectives: after you've used up “crisis”, “disaster”, “calamity”, “catastrophe”, “collapse”, “crash”, “debacle”, “ruin”, “cataclysm”, “fiasco”, and a few more, what do you call it the next time they tack on three more digits to all the money?
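A few lines of arithmetic illustrate the self-similarity being described here; the tenfold-per-month rate is an arbitrary assumption chosen for the example, not a figure from the book.

    price_level = 1.0
    for month in range(1, 13):
        price_level *= 10       # assume prices multiply tenfold each month
        print(f"month {month:2d}: price level {price_level:.0e}")
    # Whatever month you stand in, the next few months dwarf it by exactly the
    # same factor as it dwarfs the months before: the curve never stops looking
    # like a fresh crisis.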

This very phenomenon makes it difficult to bring inflation to an end before it completely undoes the social fabric. The longer inflation persists, the more painful wringing it out of an economy will be, and consequently the greater the temptation to simply continue to endure the ruinous exponential. Throughout the period of hyperinflation in Germany, the fragile government was painfully aware that any attempt to stabilise the currency would result in severe unemployment, which radical parties of both the Left and Right were poised to exploit. In fact, the hyperinflation was ended only by the elected government essentially ceding its powers to an authoritarian dictatorship empowered to put down social unrest as the costs of its policies were felt. At the time the stabilisation policies were put into effect in November 1923, the mark was quoted at six trillion to the British pound, and the paper marks printed and awaiting distribution to banks filled 300 ten-ton railway boxcars.

What lessons does this remote historical episode have for us today? A great many, it seems to me. First and foremost, when you hear pundits holding forth about the Weimar inflation, it's valuable to know that much of what they're talking about is folklore and conventional wisdom which has little to do with events as they actually happened. Second, this chronicle serves to remind the reader of the one simple fact about inflation that politicians, bankers, collectivist media, organised labour, and rent-seeking crony capitalists deploy an entire demagogic vocabulary to conceal: that inflation is caused by an increase in the money supply, not by “greed”, “shortages”, “speculation”, or any of the other scapegoats trotted out to divert attention from where blame really lies: governments and their subservient central banks printing money (or, in current euphemism, “quantitative easing”) to stealthily default upon their obligations to creditors. Third, wherever and whenever inflation occurs, its ultimate effect is the destruction of the middle class, which has neither the political power of organised labour nor the connections and financial resources of the wealthy. Since liberal democracy is, in essence, rule by the middle class, its destruction is the precursor to establishment of authoritarian rule, which will be welcomed after the once-prosperous and self-reliant bourgeoisie has been expropriated by inflation and reduced to dependence upon the state.

The Weimar inflation did not bring Hitler to power—for one thing the dates just don't work. The inflation came to an end in 1923, the year Hitler's beer hall putsch in Munich failed ignominiously and resulted in his imprisonment. The stabilisation of the economy in the following years was widely considered the death knell for radical parties on both the Left and Right, including Hitler's. It was not until the onset of the Great Depression following the 1929 crash that rising unemployment, falling wages, and a collapsing industrial economy as world trade contracted provided an opening for Hitler, and he did not become chancellor until 1933, almost a decade after the inflation ended. And yet, while there was no direct causal connection between the inflation and Hitler's coming to power, the erosion of civil society and the rule of law, the destruction of the middle class, and the lingering effects of the blame for these events being placed on “speculators” all set the stage for the eventual Nazi takeover.

The technology and complexity of financial markets have come a long way from “Railway Rudy” Havenstein and his 300 boxcars of banknotes to “Helicopter Ben” Bernanke. While it used to take years of incompetence and mismanagement, levelling of vast forests, and acres of steam powered printing presses to destroy an industrial and commercial republic and impoverish those who sustain its polity, today a mere fat-finger on a keyboard will suffice. And yet the dynamic of inflation, once unleashed, proceeds on its own timetable, often taking longer than expected to corrode the institutions of an economy, and with ups and downs which tempt investors back into the market right before the next sickening slide. The endpoint is always the same: destruction of the middle class and pensioners who have provided for themselves and the creation of a dependent class of serfs at the mercy of an authoritarian regime. In past inflations, including the one documented in this book, this was an unintended consequence of ill-advised monetary policy. I suspect the crowd presently running things views this as a feature, not a bug.

A Kindle edition is available, in which the table of contents and notes are properly linked to the text, but the index is simply a list of terms, not linked to their occurrences in the text.

May 2011 Permalink

Ferro, Marc. Suez — 1956. Bruxelles: Éditions Complexe, 1982. ISBN 2-87027-101-8.

October 2001 Permalink

Ferro, Marc. Les tabous de l'histoire. Paris: NiL, 2002. ISBN 2-84111-147-4.

April 2003 Permalink

Fleming, Thomas. The New Dealers' War. New York: Basic Books, 2001. ISBN 0-465-02464-5.

September 2003 Permalink

Florence, Ronald. The Perfect Machine. New York: Harper Perennial, 1994. ISBN 978-0-06-092670-0.
George Ellery Hale was the son of a wealthy architect and engineer who made his fortune installing passenger elevators in the skyscrapers which began to define the skyline of Chicago as it rebuilt from the great fire of 1871. From early in his life, the young Hale was fascinated by astronomy, building his own telescope at age 14. Later he would study astronomy at MIT, the Harvard College Observatory, and in Berlin. Solar astronomy was his first interest, and he invented new instruments for observing the Sun and discovered the magnetic fields associated with sunspots.

His work led him into an academic career, culminating in his appointment as a full professor at the University of Chicago in 1897. He was co-founder and first editor of the Astrophysical Journal, published continuously since 1895. Hale's greatest goal was to move astronomy from its largely dry concentration on cataloguing stars and measuring planetary positions into the new science of astrophysics: using observational techniques such as spectroscopy to study the composition of stars and nebulæ and, by comparing them, begin to deduce their origin, evolution, and the mechanisms that made them shine. His own work on solar astronomy pointed the way to this, but the Sun was just one star. Imagine how much more could be learned when the Sun was compared in detail to the myriad stars visible through a telescope.

But observing the spectra of stars was a light-hungry process, especially with the insensitive photographic material available around the turn of the 20th century. Obtaining the spectrum of all but a few of the brightest stars would require exposure times so long that, even spread over multiple nights, they would exceed the endurance of observers operating the small telescopes which then predominated. Thus, Hale became interested in larger telescopes, and the quest for ever more light from the distant universe would occupy him for the rest of his life.

First, he promoted the construction of a 40 inch (102 cm) refractor telescope, accessible from Chicago at a dark sky site in Wisconsin. At the time, universities, government, and private foundations did not fund such instruments. Hale persuaded Chicago streetcar baron Charles T. Yerkes to pick up the tab, and Yerkes Observatory was born. Its 40 inch refractor remains the largest telescope of that kind used for astronomy to this day.

There are two principal types of astronomical telescopes. A refracting telescope has a convex lens at one end of a tube, which focuses incoming light to an eyepiece or photographic plate at the other end. A reflecting telescope has a concave mirror at the bottom of the tube, the top end of which is open. Light enters the tube and falls upon the mirror, which reflects and focuses it upward, where it can be picked off by another mirror, directly focused on a sensor, or bounced back down through a hole in the main mirror. There are a multitude of variations in the design of both types of telescopes, but the fundamental principles of refraction and reflection remain the same.

Refractors have the advantages of simplicity and a sealed tube assembly which keeps out dust and moisture and excludes air currents that might distort the image, but, because light passes through the lens, they must use clear glass free of bubbles, strain lines, or other irregularities that might interfere with forming a perfect focus. Further, refractors tend to focus different colours of light at different distances. This makes them less suitable for use in spectroscopy. Colour performance can be improved by making lenses of two or more different kinds of glass (an achromatic or apochromatic design), but this further increases the complexity, difficulty, and cost of manufacturing the lens. At the time of the construction of the Yerkes refractor, it was believed the limit had been reached for the refractor design and, indeed, no larger astronomical refractor has been built since.

In a reflector, the mirror (usually made of glass or some glass-like substance) serves only to support an extremely thin (on the order of a thousand atoms) layer of reflective material (originally silver, but now usually aluminium). The light never passes through the glass at all, so as long as it is sufficiently uniform to take on and hold the desired shape, and free of imperfections (such as cracks or bubbles) that would make the reflecting surface rough, the optical qualities of the glass don't matter at all. Best of all, a mirror reflects all colours of light in precisely the same way, so it is ideal for spectrometry (and, later, colour photography).

With the Yerkes refractor in operation, it was natural that Hale would turn to a reflector in his quest for ever more light. He persuaded his father to put up the money to order a 60 inch (1.5 metre) glass disc from France, and, when it arrived months later, set one of his co-workers at Yerkes, George W. Ritchey, to begin grinding the disc into a mirror. All of this was on speculation: there were no funds to build a telescope, an observatory to house it, nor to acquire a site for the observatory. The persistent and persuasive Hale approached the recently-founded Carnegie Institution, and eventually secured grants to build the telescope and observatory on Mount Wilson in California, along with an optical laboratory in nearby Pasadena. Components for the telescope had to be carried up the crude trail to the top of the mountain on the backs of mules, donkeys, or men until a new road allowing the use of tractors was built. In 1908 the sixty inch telescope began operation, and its optics and mechanics performed superbly. Astronomers could see much deeper into the heavens. But still, Hale was not satisfied.

Even before the sixty inch entered service, he approached John D. Hooker, a Los Angeles hardware merchant, for seed money to fund the casting of a mirror blank for an 84 inch telescope, requesting US$ 25,000 (around US$ 600,000 today). Discussing the project, Hooker and Hale agreed not to settle for 84, but rather to go for 100 inches (2.5 metres). Hooker pledged US$ 45,000 to the project, with Hale promising the telescope would be the largest in the world and bear Hooker's name. Once again, an order for the disc was placed with the Saint-Gobain glassworks in France, the only one with experience in such large glass castings. Problems began almost immediately. Saint-Gobain did not have the capacity to melt the quantity of glass required (four and a half tons) all at once: they would have to fill the mould in three successive pours. A massive piece of cast glass (101 inches in diameter and 13 inches thick) cannot simply be allowed to cool naturally after being poured. If that were to occur, shrinkage of the outer parts of the disc as it cooled while the inside still remained hot would almost certainly cause the disc to fracture and, even if it didn't, would create strains within the disc that would render it incapable of holding the precise figure (curvature) required by the mirror. Instead, the disc must be placed in an annealing oven, where the temperature is reduced slowly over a period of time, allowing the internal stresses to be released. So massive was the 100 inch disc that it took a full year to anneal.

When the disc finally arrived in Pasadena, Hale and Ritchey were dismayed by what they saw. There were sheets of bubbles between the three layers of poured glass, indicating they had not fused. There was evidence the process of annealing had caused the internal structure of the glass to begin to break down. It seemed unlikely a suitable mirror could be made from the disc. After extended negotiations, Saint-Gobain decided to try again, casting a replacement disc at no additional cost. Months later, they reported the second disc had broken during annealing, and it was likely no better disc could be produced. Hale decided to proceed with the original disc. Patiently, he made the case to the Carnegie Institution to fund the telescope and observatory on Mount Wilson. It would not be until November 1917, eleven years after the order was placed for the first disc, that the mirror was completed, installed in the massive new telescope, and ready for astronomers to gaze through the eyepiece for the first time. The telescope was aimed at brilliant Jupiter.

Observers were horrified. Rather than a sharp image, Jupiter was smeared out over multiple overlapping images, as if multiple mirrors had been poorly aimed into the eyepiece. Although the mirror had tested to specification in the optical shop, when placed in the telescope and aimed at the sky, it appeared to be useless for astronomical work. Recalling that the temperature had fallen rapidly from day to night, the observers adjourned until three in the morning in the hope that as the mirror continued to cool down to the nighttime temperature, it would perform better. Indeed, in the early morning hours, the images were superb. The mirror, made of ordinary plate glass, was subject to thermal expansion as its temperature changed. It was later determined that the massive disc took twenty-four hours to cool ten degrees Celsius. Rapid changes in temperature on the mountain could cause the mirror to misbehave until its temperature stabilised. Observers would have to cope with its temperamental nature throughout the decades it served astronomical research.

As the 1920s progressed, driven in large part by work done on the 100 inch Hooker telescope on Mount Wilson, astronomical research became increasingly focused on the “nebulæ”, many of which the great telescope had revealed were “island universes”, equal in size to our own Milky Way and immensely distant. Many were so far away and faint that they appeared as only the barest smudges of light even in long exposures through the 100 inch. Clearly, a larger telescope was in order. As always, Hale was interested in the challenge. As early as 1921, he had requested a preliminary design for a three hundred inch (7.6 metre) instrument. Even based on early sketches, it was clear the magnitude of the project would surpass any scientific instrument previously contemplated: estimates came to around US$ 12 million (US$ 165 million today). This was before the era of “big science”. In the mid 1920s, when Hale produced this estimate, one of the most prestigious scientific institutions in the world, the Cavendish Laboratory at Cambridge, had an annual research budget of less than £ 1000 (around US$ 66,500 today). Sums in the millions and academic science simply didn't fit into the same mind, unless it happened to be that of George Ellery Hale. Using his connections, he approached people involved with foundations endowed by the Rockefeller fortune. Rockefeller and Carnegie were competitors in philanthropy: perhaps a Rockefeller institution might be interested in outdoing the renown Carnegie had obtained by funding the largest telescope in the world. Slowly, and with an informality which seems unimaginable today, Hale negotiated with the Rockefeller foundation, with the brash new university in Pasadena which now called itself Caltech, and with a prickly Carnegie foundation who saw the new telescope as trying to poach its painfully-assembled technical and scientific staff on Mount Wilson. By mid-1928 a deal was in hand: a Rockefeller grant for US$ 6 million (US$ 85 million today) to design and build a 200 inch (5 metre) telescope. Caltech was to raise the funds for an endowment to maintain and operate the instrument once it was completed. Big science had arrived.

In discussions with the Rockefeller foundation, Hale had agreed on a 200 inch aperture, deciding the leap to an instrument three times the size of the largest existing telescope and the budget that would require was too great. Even so, there were tremendous technical challenges to be overcome. The 100 inch demonstrated that plate glass had reached or exceeded its limits. The problems of distortion due to temperature changes only increase with the size of a mirror, and while the 100 inch was difficult to cope with, a 200 inch would be unusable, even if it could be somehow cast and annealed (with the latter process probably taking several years). Two promising alternatives were fused quartz and Pyrex borosilicate glass. Fused quartz has hardly any thermal expansion at all. Pyrex has about three times greater expansion than quartz, but still far less than plate glass.
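
To get a sense of the scale of the problem, a rough back-of-the-envelope estimate helps (the expansion coefficient below is a typical handbook value for ordinary plate glass, not a figure from the book). The change in length of a disc is the product of its thermal expansion coefficient, its length, and the temperature change:

\[
\Delta L = \alpha \, L \, \Delta T \approx \left(9\times10^{-6}\ \mathrm{K^{-1}}\right)\left(5\ \mathrm{m}\right)\left(10\ \mathrm{K}\right) \approx 0.45\ \mathrm{mm}
\]

That is nearly half a millimetre of expansion across a 200 inch plate glass disc for a modest ten degree swing, while the figure of the mirror must be held to a small fraction of a wavelength of light, well under a micrometre. Uniform expansion alone would not ruin the figure, but uneven expansion through the thick disc, even a tiny fraction of that half millimetre, is enough to spoil the image until the glass reaches equilibrium, which is why a material with far lower expansion was essential.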

Hale contracted with General Electric Company to produce a series of mirror blanks from fused quartz. GE's legendary inventor Elihu Thomson, second only in reputation to Thomas Edison, agreed to undertake the project. Troubles began almost immediately. Every attempt to get rid of bubbles in quartz, which was still very viscous even at extreme temperatures, failed. A new process was developed, which involved spraying the surface of cast discs with silica passed through an oxy-hydrogen torch. It required machinery which, in operation, seemed to surpass visions of hellfire. To build up the coating on a 200 inch disc would require enough hydrogen to fill two Graf Zeppelins. And still, not a single suitable smaller disc had been produced from fused quartz.

In October 1929, just a year after the public announcement of the 200 inch telescope project, the U.S. stock market crashed and the economy began to slow into the great depression. Fortunately, the Rockefeller foundation invested very conservatively, and lost little in the market chaos, so the grant for the telescope project remained secure. The deepening depression and the accompanying deflation was a benefit to the effort because raw material and manufactured goods prices fell in terms of the grant's dollars, and industrial companies which might not have been interested in a one-off job like the telescope were hungry for any work that would help them meet their payroll and keep their workforce employed.

In 1931, after three years of failures, expenditures billed at manufacturing cost by GE which had consumed more than one tenth the entire budget of the project, and estimates far beyond that for the final mirror, Hale and the project directors decided to pull the plug on GE and fused quartz. Turning to the alternative of Pyrex, Corning glassworks bid between US$ 150,000 and 300,000 for the main disc and five smaller auxiliary discs. Pyrex was already in production at industrial scale and used to make household goods and laboratory glassware in the millions, so Corning foresaw few problems casting the telescope discs. Scaling things up is never a simple process, however, and Corning encountered problems with failures in the moulds, glass contamination, and even a flood during the annealing process before the big disc was ready for delivery.

Getting it from the factory in New York to the optical shop in California was an epic event and media circus. Schools let out so students could go down to the railroad tracks and watch the “giant eye” on its special train make its way across the country. On April 10, 1936, the disc arrived at the optical shop and work began to turn it into a mirror.

With the disc in hand, work on the telescope structure and observatory could begin in earnest. After an extended period of investigation, Palomar Mountain had been selected as the site for the great telescope. A rustic construction camp was built to begin preliminary work. Meanwhile, Westinghouse began to fabricate components of the telescope mounting, which would include the largest bearing ever manufactured.

But everything depended on the mirror. Without it, there would be no telescope, and things were not going well in the optical shop. As the disc was ground flat preliminary to being shaped into the mirror profile, flaws continued to appear on its surface. None of the earlier smaller discs had contained such defects. Could it be possible that, eight years into the project, the disc would be found defective and everything would have to start over? The analysis concluded that the glass had become contaminated as it was poured, and that the deeper the mirror was ground down the fewer flaws would be discovered. There was nothing to do but hope for the best and begin.

Few jobs demand the patience of the optical craftsman. The great disc was not ready for its first optical test until September 1938. Then began a process of polishing and figuring, with weekly tests of the mirror. In August 1941, the mirror was judged to have the proper focal length and spherical profile. But the mirror needed to be a parabola, not a sphere, so this was just the start of an even more exacting process of deepening the curve. In January 1942, the mirror reached the desired parabola to within one wavelength of light. But it needed to be much better than that. The U.S. was now at war. The uncompleted mirror was packed away “for the duration”. The optical shop turned to war work.

In December 1945, work resumed on the mirror. In October 1947, it was pronounced finished and ready to install in the telescope. Eleven and a half years had elapsed since the grinding machine started to work on the disc. Shipping the mirror from Pasadena to the mountain was another epic journey, this time by highway. Finally, all the pieces were in place. Now the hard part began.

The glass disc was the correct shape, but it wouldn't be a mirror until coated with a thin layer of aluminium. This was a process which had been done many times before with smaller mirrors, but as always size matters, and a host of problems had to be solved before a suitable coating was obtained. Now the mirror could be installed in the telescope and tested further. Problem after problem with the mounting system, suspension, and telescope drive had to be found and fixed. Testing a mirror in its telescope against a star is much more demanding than any optical shop test, and from the start of 1949, an iterative process of testing, tweaking, and re-testing began. A problem with astigmatism in the mirror was fixed by attaching four fisherman's scales from a hardware store to its back (they are still there). In October 1949, the telescope was declared finished and ready for use by astronomers. Twenty-one years had elapsed since the project began. George Ellery Hale died in 1938, less than ten years into the great work. But it was recognised as his monument, and at its dedication was named the “Hale Telescope.”

The inauguration of the Hale Telescope marked the end of the rapid increase in the aperture of observatory telescopes which had characterised the first half of the twentieth century, largely through the efforts of Hale. It would remain the largest telescope in operation until 1975, when the Soviet six metre BTA-6 went into operation. That instrument, however, was essentially an exercise in Cold War one-upmanship, and never achieved its scientific objectives. The Hale would not truly be surpassed before the ten metre Keck I telescope began observations in 1993, 44 years after the Hale. The Hale Telescope remains in active use today, performing observations impossible when it was inaugurated thanks to electronics undreamt of in 1949.

This is an epic recounting of a grand project, the dawn of “big science”, and the construction of instruments which revolutionised how we see our place in the cosmos. There is far more detail than I have recounted even in this long essay, and much insight into how a large, complicated project, undertaken with little grasp of the technical challenges to be overcome, can be achieved through patient toil sustained by belief in the objective.

A PBS documentary, The Journey to Palomar, is based upon this book. It is available on DVD or a variety of streaming services.

In the Kindle edition, footnotes which appear in the text are just asterisks, which are almost impossible to select on touch screen devices without missing and accidentally turning the page. In addition, the index is just a useless list of terms and page numbers which have nothing to do with the Kindle document, which lacks real page numbers. Disastrously, the illustrations which appear in the print edition are omitted: for a project which was extensively documented in photographs, drawings, and motion pictures, this is inexcusable.

October 2016 Permalink

Foden, Giles. Mimi and Toutou Go Forth. London: Penguin, 2004. ISBN 0-14-100984-5.
Only a perfect idiot would undertake to transport two forty foot mahogany motorboats from London to Cape Town and then onward to Lake Tanganyika by ship, rail, steam tractor, and teams of oxen, there to challenge German dominance of the lake during World War I by attempting to sink a ship three times the length and seven times the displacement of the fragile craft. Fortunately, the Admiralty found just the man in Geoffrey Basil Spicer-Simson, in 1915 the oldest Lieutenant Commander in the Royal Navy, his ascent through the ranks having been retarded due to his proclivity for sinking British ships. Spicer-Simson was an inveterate raconteur of tall tales and an insufferable know-it-all (on the ship bound for South Africa he was heard lecturing the Astronomer Royal of Cape Town on the southern constellations), and was eccentric in about as many ways as can be packed into a single human frame. Still, he and his motley team, despite innumerable misadventures (many self-inflicted), got the job done, sinking the ship they had been sent to destroy and capturing another German vessel, the first German warship ever captured by the Royal Navy. Afterward, Spicer-Simson rather blotted his copybook by declining to engage first a German fort and then a warship, both later found to have been “armed” only with wooden dummy guns. His exploits caused him to be worshipped as a god by the Holo-holo tribe, who fashioned clay effigies of him, but rather less impressed the Admiralty, which, despite awarding him the DSO, re-assigned him upon his return to the routine desk job he had before the adventure. HMS Mimi and Toutou were the boats under Spicer-Simson's command, soon joined by the captured German ship which was rechristened HMS Fifi. The events described herein (very loosely) inspired C. S. Forester's 1935 novel The African Queen and the 1951 Bogart/Hepburn film.

A U.S. edition is now available, titled Mimi and Toutou's Big Adventure, but at present only in hardcover. A U.S. paperback is scheduled for March, 2006.

October 2005 Permalink

Fort, Adrian. Prof: The Life of Frederick Lindemann. London: Jonathan Cape, 2003. ISBN 0-224-06317-0.
Frederick Lindemann is best known as Winston Churchill's scientific advisor in the years prior to and during World War II. He was the central figure in what Churchill called the “Wizard War”, including the development and deployment of radar, antisubmarine warfare technologies, the proximity fuze, area bombing techniques, and nuclear weapons research (which was well underway in Britain before the Manhattan Project began in the U.S.). Lindemann's talents were so great and his range of interests so broad that if he had settled into the cloistered life of an Oxford don after his appointment as Professor of Experimental Philosophy and chief of the Clarendon Laboratory in 1919, he would still be remembered for his scientific work in quantum mechanics, X-ray spectra, cryogenics, photoelectric photometry in astronomy, and isotope separation, as well as for restoring Oxford's reputation in the natural sciences, which over the previous half century “had sunk almost to zero” in Lindemann's words.

Educated in Germany, he spoke German and French like a native. He helped organise the first historic Solvay Conference in 1911, which brought together the pioneers of the relativity and quantum revolutions in physics. There he met Einstein, beginning a life-long friendship. Lindemann was a world class tennis champion and expert golfer and squash player, as well as a virtuoso on the piano. Although a lifetime bachelor, he was known as a ladies' man and never lacked female companionship.

In World War I Lindemann tackled the problem of spin recovery in aircraft, then thought to be impossible (this in an era when pilots were not issued parachutes!). To collect data and test his theories, he learned to fly and deliberately induced spins in some of the most notoriously dangerous aircraft types and confirmed his recovery procedure by putting his own life on the line. The procedure he developed is still taught to pilots today.

With his close contacts in Germany, Lindemann was instrumental in arranging and funding the emigration of Jewish and other endangered scientists after Hitler took power in 1933. The scientists he enabled to escape not only helped bring Oxford into the first rank of research universities; many also ended up contributing to the British and U.S. atomic projects and other war research. About the only thing he ever failed at was his run for Parliament in 1937, yet his influence as confidant and advisor to Churchill vastly exceeded that of a Tory back bencher. With the outbreak of war in 1939, he joined Churchill at the Admiralty, where he organised and ran the Statistical Branch, which applied what is now called Operations Research to the conduct of the war, a rôle he expanded as chief of “S Department” after Churchill became Prime Minister in May 1940. Many of the wartime “minutes” quoted in Churchill's The Second World War were drafted by Lindemann and sent out verbatim over Churchill's signature, sometimes with the addition “Action this day”. Lindemann finally sat in Parliament, in the House of Lords, after being made Lord Cherwell in 1941; he joined the Cabinet in 1942 and became a Privy Counsellor in 1943.

After the war, Lindemann returned to Oxford, continuing to champion scientific research and taking leave to serve in Churchill's cabinet from 1951 to 1953, where he almost single-handedly and successfully fought the floating of the pound and advocated the establishment of an Atomic Energy Authority, on which he served for the rest of his life.

There's an atavistic tendency when writing history to focus exclusively on the person at the top, as if we still lived in the age of warrior kings, neglecting those who obtain and filter the information and develop the policies upon which the exalted leader must ultimately decide. (This is as common, or more so, in the business press where the cult of the CEO is well entrenched.) This biography, of somebody many people have never heard of, shows that the one essential skill a leader must have is choosing the right people to listen to and paying attention to what they say.

A paperback edition is now available.

March 2005 Permalink

Fregosi, Paul. Jihad in the West. Amherst, NY: Prometheus Books, 1998. ISBN 1-57392-247-1.

July 2002 Permalink

[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 1. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1776, 1781] 1998. ISBN 962-634-071-1.
This is the first audiobook to appear in this list, for the excellent reason that it's the first one to which I've ever listened. I've been planning to “get around” to reading Gibbon's Decline and Fall for about twenty-five years, and finally concluded that the likelihood I was going to dive into that million-word-plus opus any time soon was negligible, so why not raise the intellectual content of my regular walks around the village with one of the masterpieces of English prose instead of ratty old podcasts?

The “Volume 1” in the title of this work refers to the two volumes of this audio edition, which is an abridgement of the first three volumes of Gibbon's history, covering the reign of Augustus through the accession of the first barbarian king, Odoacer. Volume 2 abridges the latter three volumes, primarily covering the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453. Both audio programs are almost eight hours in length, and magnificently read by Philip Madoc, whose voice is strongly reminiscent of Richard Burton's. The abridgements are handled well, with a second narrator, Neville Jason, summarising the material which is being skipped over. Brief orchestral music passages separate major divisions in the text. The whole work is artfully done and a joy to listen to, worthy of the majesty of Gibbon's prose, which is everything I've always heard it to be, from solemn praise for courage and wisdom and thundering condemnation of treason and tyranny to occasionally laugh-out-loud funny descriptions of foibles and folly.

I don't usually read abridged texts—I figure that if the author thought it was worth writing, it's worth my time to read. But given the length of this work (and the fact that most print editions are abridged), it's understandable that the publisher opted for an abridged edition; after all, sixteen hours is a substantial investment of listening time. An Audio CD edition is available. And yes, I'm currently listening to Volume 2.

May 2007 Permalink

[Audiobook] Gibbon, Edward. The Decline and Fall of the Roman Empire. Vol. 2. (Audiobook, Abridged). Hong Kong: Naxos Audiobooks, [1788, 1789] 1998. ISBN 962-634-122-X.
The “Volume 2” in the title of this work refers to the two volumes of this audiobook edition. This is an abridgement of the final three volumes of Gibbon's history, primarily devoted to the eastern empire from the time of Justinian through the fall of Constantinople to the Turks in 1453, although the fractious kingdoms of the west, the Crusades, the conquests of Genghis Khan and Tamerlane, and the origins of the great schism between the Roman and Eastern Orthodox churches all figure in this history. I understand why many people read only the first three volumes of Gibbon's masterpiece—the doings of the Byzantine court are, well, byzantine, and the steady litany of centuries of backstabbing, betrayal, intrigue, sedition, self-indulgence, and dissipation can become both tedious and depressing. Although there are some sharply-worded passages which may have raised eyebrows in the eighteenth century, I did not find Gibbon anywhere near as negative on the influence of Christianity on the Roman Empire as I expected from descriptions of his work by others. The facile claim that “Gibbon blamed the fall of Rome on the Christians” is simply not borne out by his own words.

Please see my comments on Volume 1 for details of the (superb) production values of this seven hour recording. An Audio CD edition is available.

June 2007 Permalink

Gingerich, Owen. The Book Nobody Read. New York: Penguin Books, 2004. ISBN 0-14-303476-6.
There is something about astronomy which seems to invite obsession. Otherwise, why would intelligent and seemingly rational people expend vast amounts of time and effort to compile catalogues of hundreds of thousands of stars, precisely measure the positions of planets over periods of decades, travel to the ends of the Earth to observe solar eclipses, get up before the crack of noon to see a rare transit of Mercury or Venus, or burn up months of computer time finding every planetary transit in a quarter million year interval around the present? Obsession it may be, but it's also fascinating and fun, and astronomy has profited enormously from the labours of those so obsessed, whether on a mountain top in the dead of winter, or carrying out lengthy calculations when tables of logarithms were the only computational tool available.

This book chronicles one man's magnificent thirty-year obsession. Spurred by Arthur Koestler's The Sleepwalkers, which portrayed Copernicus as a villain and his magnum opus De revolutionibus “the book that nobody read”—“an all time worst seller”, followed by the discovery of an obviously carefully read and heavily annotated first edition in the library of the Royal Observatory in Edinburgh, Scotland, the author, an astrophysicist and Harvard professor of the history of science, found himself inexorably drawn into a quest to track down and examine every extant copy of the first (Nuremberg, 1543) and second (Basel, 1566) editions of De revolutionibus to see whether and where readers had annotated them and so determine how widely the book, of which about a thousand copies were printed in these editions—typical for scientific works at the time—was read. Unlike today, when we've been educated that writing in a book is desecration, readers in the 16th and 17th centuries often made extensive annotations to their books, even assigning students and apprentices the task of copying annotations by other learned readers into their copies.

Along the way Gingerich found himself driving down an abandoned autobahn in the no man's land between East and West Germany, testifying in the criminal trial of a book rustler, discovering the theft of copies which librarians were unaware were missing, tracking down the provenance of pages in “sophisticated” (in the original sense of the word) copies assembled from two or more incomplete originals, attending the auction at Sotheby's of a first edition with a dubious last leaf which sold for US$750,000 (the author, no impecunious naïf in the rare book game, owns two copies of the second edition himself), and discovering the fate of many less celebrated books from that era (toilet paper). De revolutionibus has survived the vicissitudes of the centuries quite well—out of about 1000 original copies of the first and second editions, approximately six hundred exemplars remain.

Aside from the adventures of the Great Copernicus Chase, there is a great deal of information about Copernicus and the revolution he discovered and sparked which dispels many widely-believed bogus notions such as:

  • Copernicus was a hero of secular science against religious fundamentalism. Wrong!   Copernicus was a deeply religious doctor of church law, canon of the Roman Catholic Varmian Cathedral in Poland. He dedicated the book to Pope Paul III.
  • Prior to Copernicus, astronomers relying on Ptolemy's geocentric system kept adding epicycles on epicycles to try to approximate the orbits of the planets. Wrong!   This makes for a great story, but there is no evidence whatsoever for “epicycles on epicycles”. The authoritative planetary ephemerides in use in the age of Copernicus were calculated using the original Ptolemaic system without additional refinements, and there are no known examples of systems with additional epicycles.
  • Copernicus banished epicycles from astronomy. Wrong!   The Copernican system, in fact, included thirty-four epicycles! Because Copernicus believed that all planetary motion was based on circles, just like Ptolemy he required epicycles to approximate motion which wasn't known to be actually elliptical prior to Kepler. In fact, the Copernican system was no more accurate in predicting planetary positions than that of Ptolemy, and ephemerides computed from it were no better.
  • The Roman Catholic Church was appalled by Copernicus's suggestion that the Earth was not the centre of the cosmos and immediately banned his book. Wrong!   The first edition of De revolutionibus was published in 1543. It wasn't until 1616, more than seventy years later, that the book was placed on the Index Librorum Prohibitorum, and in 1620 it was permitted as long as ten specific modifications were made. Outside Italy, few copies even in Catholic countries were censored according to these instructions. In Spain, usually thought of as a hotbed of the Inquisition, the book was never placed on the Index at all. Galileo's personal copy has the forbidden passages marked in boxes and lined through, permitting the original text to be read. There is no evidence of any copy having been destroyed on the orders of the Church, and the Vatican library has three copies of both the first and second editions.

Obviously, if you're as interested as I in eccentric topics like positional astronomy, rare books, the evolution of modern science, and the surprisingly rapid and efficient diffusion of knowledge more than five centuries before the Internet, this is a book you're probably going to read if you haven't already. The only flaw is that the colour plates (at least in the UK paperback edition I read) are terribly reproduced—they all look like nobody bothered to focus the copy camera when the separations were made; plates 4b, 6, and 7a through 7f, which show annotations in various copies, are completely useless because they're so fuzzy the annotations can barely be read, if at all.

November 2005 Permalink

Goetz, Peter. A Technical History of America's Nuclear Weapons. Unspecified: Independently published, 2020. ISBN Vol. 1 979-8-6646-8488-9, Vol. 2 978-1-7181-2136-2.

This is an encyclopedic history and technical description of United States nuclear weapons, delivery systems, manufacturing, storage, maintenance, command and control, security, strategic and tactical doctrine, and interaction with domestic politics and international arms control agreements, covering the period from the inception of these weapons in World War II through 2020. This encompasses a huge amount of subject matter, and covering it in the depth the author undertakes is a large project, with the two volume print edition totalling 1244 20×25 centimetre pages. The level of detail and scope is breathtaking, especially considering that not so long ago much of the information documented here was among the most carefully-guarded secrets of the U.S. military. You will learn the minutiæ of neutron initiators, which fission primaries were used in what thermonuclear weapons, how the goal of “one-point safety” was achieved, the introduction of permissive action links to protect against unauthorised use of weapons and which weapons used what kind of security device, and much, much more.

If the production quality of this work matched its content, it would be an invaluable reference for anybody interested in these weapons: military historians, students of large-scale government research and development projects, researchers of the Cold War and the nuclear balance of power, and authors setting fiction in that era who wish to get the details right. Sadly, when it comes to attention to detail, this work, as published in this edition, falls far short—it is both slipshod and shoddy. I was reading it for information, not with the fine-grained attention I devote when proofreading my work or that of others, but in the process I marked 196 errors of fact, spelling, formatting, and grammar, or about one every six printed pages. Now, some of these are just sloppy things (including, of course, misuse of the humble apostrophe) which grate upon the reader but aren't likely to confuse, but others are glaring errors.

Here are some of the obvious errors. Names misspelled or misstated include Jay Forrester, John von Neumann, Air Force Secretary Hans Mark, and Ronald Reagan. In chapter 11, an entire paragraph is duplicated twice in a row. In chapter 9, it is stated that the Little Feller nuclear test in 1962 was witnessed by president John F. Kennedy; in fact, it was his brother, Attorney General Robert F. Kennedy, who observed the test. There is a long duplicated passage at the start of chapter 20, but this may be a formatting error in the Kindle edition. In chapter 29, it is stated that nitrogen tetroxide was the fuel of the Titan II missile—in fact, it was the oxidiser. In chapter 41, the Northrop B-2 stealth bomber is incorrectly attributed to Lockheed in four places. In chapter 42, the Trident submarine-launched missile is referred to as “Titan” on two occasions.

The problem with such a plethora of errors is that when reading information with which you aren't acquainted or have the ability to check, there's no way to know whether they're correct or nonsense. Before using anything from this book as a source in your own work, I'd advise keeping in mind the Russian proverb, Доверяй, но проверяй—“Trust, but verify”. In this case, I'd go light on the trust and double up on the verification.

In the citation above, I link to the Kindle edition, which is free for Kindle Unlimited subscribers. The print edition is published in two paperbacks, Volume 1 and Volume 2.

January 2021 Permalink

Griffin, G. Edward. The Creature from Jekyll Island. Westlake Village, CA: American Media, [1994, 1995, 1998, 2002] 2010. ISBN 978-0-912986-45-6.
Almost every time I review a book about or discuss the U.S. Federal Reserve System in a conversation or Internet post, somebody recommends this book. I'd never gotten around to reading it until recently, when a couple more mentions of it pushed me over the edge. And what an edge that turned out to be. I cannot recommend this book to anybody; there are far more coherent, focussed, and persuasive analyses of the Federal Reserve in print, for example Ron Paul's excellent book End the Fed (October 2009). The present book goes well beyond a discussion of the Federal Reserve and rambles over millennia of history in a chaotic manner prone to induce temporal vertigo in the reader, discussing the history of money, banking, political manipulation of currency, inflation, fractional reserve banking, fiat money, central banking, cartels, war profiteering, monetary panics and bailouts, nonperforming loans to “developing” nations, the Rothschilds and Rockefellers, booms and busts, and more.

The author is inordinately fond of conspiracy theories. As we pursue our random walk through history and around the world, we encounter:

  • The sinking of the Lusitania
  • The assassination of Abraham Lincoln
  • The Order of the Knights of the Golden Circle, the Masons, and the Ku Klux Klan
  • The Bavarian Illuminati
  • Russian Navy intervention in the American Civil War
  • Cecil Rhodes and the Round Table Groups
  • The Council on Foreign Relations
  • The Fabian Society
  • The assassination of John F. Kennedy
  • Theodore Roosevelt's “Bull Moose” run for the U.S. presidency in 1912
  • The Report from Iron Mountain
  • The attempted assassination of Andrew Jackson in 1835
  • The Bolshevik Revolution in Russia

I've jumped around in history to give a sense of the chaotic, achronological narrative here. “What does this have to do with the Federal Reserve?”, you might ask. Well, not very much, except as part of a worldview in which almost everything is explained by the machinations of bankers assisted by the crooked politicians they manipulate.

Now, I agree with the author, on those occasions he actually gets around to discussing the Federal Reserve, that it was fraudulently sold to Congress and the U.S. population and has acted, from the very start, as a self-serving cartel of big New York banks enriching themselves at the expense of anybody who holds assets denominated in the paper currency they have been inflating away ever since 1913. But you don't need to invoke conspiracies stretching across the centuries and around the globe to explain this. The Federal Reserve is (despite how it was deceptively structured and promoted) a central bank, just like the Bank of England and the central banks of other European countries upon which it was modelled, and creating funny money out of thin air and looting the population by the hidden tax of inflation is what central banks do, always have done, and always will, as long as they are permitted to exist. Twice in the history of the U.S. prior to the establishment of the Federal Reserve, central banks were created, the first in 1791 by Alexander Hamilton, and the second in 1816. Each time, after the abuses of such an institution became apparent, the bank was abolished, the first in 1811, and the second in 1836. Perhaps, after the inevitable crack-up which always results from towering debt and depreciating funny money, the Federal Reserve will follow the first two central banks into oblivion, but so deeply is it embedded in the status quo it is difficult to see how that might happen today.

In addition to the rambling narrative, the production values of the book are shoddy. For a book which has gone through five editions and 33 printings, nobody appears to have spent the time giving the text even the most cursory proofreading. Without examining it with the critical eye I apply when proofing my own work or that of others, I noted 137 errors of spelling, punctuation, and formatting in the text. Paragraph breaks are inserted seemingly at random, right in the middle of sentences, while elsewhere words are run together. Words which are misspelled include “from”, “great”, “fourth”, and “is”. This is not a freebie or dollar special, but a paperback which sells for US$20 at Amazon, or US$18 for the Kindle edition. And as I always note, if the author and publisher cannot be bothered to get simple things like these correct, how likely is it that facts and arguments in the text can be trusted?

Don't waste your money or your time. Ron Paul's End the Fed is much better, only a third the length, and concentrates on the subject without all of the whack-a-doodle digressions. For a broader perspective on the history of money, banking, and political manipulation of currency, see Murray Rothbard's classic What Has Government Done to Our Money? (July 2019).

August 2019 Permalink

Guedj, Denis. Le mètre du monde. Paris: Seuil, 2000. ISBN 2-02-049989-4.
When thinking about management lessons one can learn from the French Revolution, I sometimes wonder if Louis XVI, sometime in the interval between when the Revolution lost its mind and he lost his head, ever thought, “Memo to file: when running a country seething with discontent, it's a really poor idea to invite people to compile lists of things they detest about the current regime.” Yet, that's exactly what he did in 1788, soliciting cahiers de doléances (literally, “notebooks of complaints”) to be presented to the Estates-General when it met in May of 1789. There were many, many things about which to complain in the latter years of the Ancien Régime, but one which appeared on almost every one of the lists was the lack of uniformity in weights and measures. Not only was there a bewildering multitude of different measures in use (around 2000 in France alone), but the value of measures with the same name differed from one region to another, a legacy of feudal days when one of the rights of the lord was to define the weights and measures in his fiefdom. How far is “three leagues down the road?” Well, that depends on what you mean by “league”, which was almost 40% longer in Provence than in Paris. The most common unit of weight, the “livre”, had more than two hundred different definitions across the country. And if that weren't bad enough, unscrupulous merchants and tax collectors would exploit the differences and lack of standards to cheat those bewildered by the complexity.

Revolutions, and the French Revolution in particular, have a way of going far beyond the intentions of those who launch them. The multitudes who pleaded for uniformity in weights and measures almost unanimously intended, and would have been entirely satisfied with, a standardisation of the values of the commonly used measures of length, weight, volume, and area. But perpetuating these relics of tyranny was an affront to the revolutionary spirit of remaking the world, and faced with a series of successive decisions, the revolutionary assembly chose the most ambitious and least grounded in the past on each occasion: to entirely replace all measures in use with entirely new ones, to use identical measures for every purpose (traditional measures used different units depending upon what was being measured), to abandon historical subdivisions of units in favour of a purely decimal system, and to ground all of the units in quantities based in nature and capable of being determined by anybody at any time, given only the definition.

Thus was the metric system born, and seldom have so many eminent figures been involved in what many might consider an arcane sideshow to revolution: Condorcet, Coulomb, Lavoisier, Laplace, Talleyrand, Bailly, Delambre, Cassini, Legendre, Lagrange, and more. The fundamental unit, the metre, was defined in terms of the Earth's meridian, and since earlier measures failed to meet the standard of revolutionary perfection, a project was launched to measure the meridian through the Paris Observatory from Dunkirk to Barcelona. Imagine trying to make a precision measurement over such a distance as revolution, terror, hyper-inflation, counter-revolution, and war between France and Spain raged all around the savants and their surveying instruments. So long and fraught with misadventures was the process of creating the metric system that while the original decree ordering its development was signed by Louis XVI, it was officially adopted only a few months before Napoleon took power in 1799. Yet despite all of these difficulties and misadventures, the final measure of the meridian accepted in 1799 differed from the best modern measurements by only about ten metres over a baseline of more than 1000 kilometres.

This book tells the story of the metric system and the measurement of the meridian upon which it was based, against the background of revolutionary France. The author pulls no punches in discussing technical detail—again and again, just when you expect he's going to gloss over something, you turn the page or read a footnote and there it is. Writing for a largely French audience, the author may assume the reader better acquainted with the chronology, people, and events of the Revolution than readers hailing from other lands are likely to be; the chronology at the end of the book is an excellent resource when you forget what happened when. There is no index. This seems to be one of those odd cultural things; I've found French books whose counterparts published in English would almost certainly be indexed to frequently lack this valuable attribute—I have no idea why this is the case.

One of the many fascinating factoids I gleaned from this book is that the country with the longest continuous use of the metric system is not France! Napoleon replaced the metric system with the mesures usuelles in 1812, redefining the traditional measures in terms of metric base units. The metric system was not reestablished in France until 1840, by which time Belgium, Holland, and Luxembourg had already adopted it.

April 2007 Permalink

Haffner, Sebastian [Raimund Pretzel]. Defying Hitler. New York: Picador, [2000] 2003. ISBN 978-0-312-42113-7.
In 1933, the author was pursuing his ambition to follow his father into a career in the Prussian civil service. While completing his law degree, he had obtained a post as a Referendar, the lowest rank in the civil service, performing what amounted to paralegal work for higher ranking clerks and judges. He enjoyed the work, especially doing research in the law library and drafting opinions, and was proud to be a part of the Prussian tradition of an independent judiciary. He had no strong political views nor much interest in politics. But, as he says, “I have a fairly well developed figurative sense of smell, or to put it differently, a sense of the worth (or worthlessness!) of human, moral, political views and attitudes. Most Germans unfortunately lack this sense almost completely.”

When Hitler came to power in January 1933, “As for the Nazis, my nose left me with no doubts. … How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset. What was not at all clear to me was what terrible enemies they would turn out to be.” Initially, little changed: it was a “matter for the press”. The new chancellor might rant to enthralled masses about the Jews, but in the court where Haffner clerked, a Jewish judge continued to sit on the bench and work continued as before. He hoped that the political storm on the surface would leave the depths of the civil service unperturbed. This was not to be the case.

Haffner was a boy during the First World War, and, like many of his schoolmates, saw the war as a great adventure which unified the country. Coming of age in the Weimar Republic, he experienced the great inflation of 1921–1924 as up-ending the society: “Amid all the misery, despair, and poverty there was an air of light-headed youthfulness, licentiousness, and carnival. Now, for once, the young had money and the old did not. Its value lasted only a few hours. It was spent as never before or since; and not on the things old people spend their money on.” A whole generation whose ancestors had grown up in a highly structured society where most decisions were made for them now were faced with the freedom to make whatever they wished of their private lives. But they had never learned to cope with such freedom.

After the Reichstag fire and the Nazi-organised boycott of Jewish businesses (enforced by SA street brawlers standing in doors and intimidating anybody who tried to enter), the fundamental transformation of the society accelerated. Working in the library at the court building, Haffner is shocked to see this sanctum of jurisprudence defiled by the SA, who had come to eject all Jews from the building. A Jewish colleague is expelled from university, fired from the civil service, and opts to emigrate.

The chaos of the early days of the Nazi ascendency gives way to Gleichschaltung, the systematic takeover of all institutions by placing Nazis in key decision-making positions within them. Haffner sees the Prussian courts, which famously stood up to Frederick the Great a century and a half before, meekly toe the line.

Haffner begins to consider emigrating from Germany, but his father urges him to complete his law degree before leaving. His close friends among the Referendars run the gamut from Communist sympathisers to ardent Nazis. As he is preparing for the Assessor examination (the next rank in the civil service, and the final step for a law student), he is called up for mandatory political and military indoctrination now required for the rank. The barrier between the personal, professional, and political had completely fallen. “Four weeks later I was wearing jackboots and a uniform with a swastika armband, and spent many hours each day marching in a column in the vicinity of Jüterbog.”

He discovers that, despite his viewing the Nazis as essentially absurd, there is something about order, regimentation, discipline, and forced camaraderie that resonates in his German soul.

Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue.

That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain and questions were raised in Parliament why the author of such an anti-Nazi work was interned: he was released in August, 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout. After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”.

This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that.

This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available.

Thanks to Glenn Beck for recommending this book.

June 2017 Permalink

Hall, R. Cargill. Lunar Impact. Washington: National Aeronautics and Space Administration, 1977. ISBN 978-0-486-47757-2. NASA SP-4210.
One of the wonderful things about the emergence of electronic books is that long out-of-print works from publishers' back-lists are becoming available once again since the cost of keeping them in print, after the initial conversion to an electronic format, is essentially zero. The U.S. civilian space agency NASA is to be commended for their efforts to make publications in their NASA history series available electronically at a bargain price. Many of these documents, chronicling the early days of space exploration from a perspective only a few years after the events, have been out of print for decades and some command forbidding prices on used book markets. Those interested in reading them, as opposed to collectors, now have an option as inexpensive as it is convenient to put these works in their hands.

The present volume, originally published in 1977, chronicles Project Ranger, NASA's first attempt to obtain “ground truth” about the surface of the Moon by sending probes to crash on its surface, radioing back high-resolution pictures, measuring its composition, and hard-landing scientific instruments on the surface to study the Moon's geology. When the project was begun in 1959, it was breathtakingly ambitious—so much so that one gets the sense those who set its goals did not fully appreciate the difficulty of accomplishing them. Ranger was to be not just a purpose-built lunar probe, but rather a general-purpose “bus” for lunar and planetary missions which could be equipped with different scientific instruments depending upon the destination and goals of the flight. It would incorporate, for the first time in a deep space mission, three-axis stabilisation, a steerable high-gain antenna, midcourse and terminal trajectory correction, an onboard (albeit extremely primitive) computer, real-time transmission of television imagery, support by a global Deep Space Network of tracking stations which did not exist before Ranger, sterilisation of the spacecraft to protect against contamination of celestial bodies by terrestrial organisms, and a retro-rocket and landing capsule which would allow rudimentary scientific instruments to survive thumping down on the Moon and transmit their results back to Earth.

This was a great deal to bite off, and as those charged with delivering upon these lofty goals discovered, extremely difficult to chew, especially in a period when NASA was still in the process of organising itself and lines of authority among NASA Headquarters, the Jet Propulsion Laboratory (charged with developing the spacecraft and conducting the missions) and the Air Force (which provided the Atlas-Agena launch vehicle that propelled Ranger to the Moon) were ill-defined and shifting frequently. This, along with the inherent difficulty of what was being attempted, contributed to results which can scarcely be imagined in an era of super-conservative mission design: six consecutive failures between 1961 and 1964, with a wide variety of causes. Even in the early days of spaceflight, this was enough to get the attention of the press, politicians, and public, and it was highly probable that had Ranger 7 also failed, it would have been the end of the program. But it didn't—de-scoped to just a camera platform, it performed flawlessly and provided the first close-up glimpse of the Moon's surface. Rangers 8 and 9 followed, both complete successes, with the latter relaying pictures “live from the Moon” to televisions of viewers around the world. To this day I recall seeing them and experiencing a sense of wonder which is difficult to appreciate in our jaded age.

Project Ranger provided both the technology and experience base used in the Mariner missions to Venus, Mars, and Mercury. While the scientific results of Ranger were soon eclipsed by those of the Surveyor soft landers, it is unlikely that program would have succeeded without learning the painful lessons from Ranger.

The electronic edition of this book appears to have been created by scanning a print copy and running it through an optical character recognition program, then performing a spelling check and fixing errors it noted. However, no close proofreading appears to have been done, so that scanning errors which resulted in words in the spelling dictionary were not corrected. This results in a number of goofs in the text, some of which are humorous. My favourite is the phrase “midcourse correction bum [burn]” which occurs on several occasions. I imagine a dissipated wino with his trembling finger quivering above a big red “FIRE” button at a console at JPL. British readers may…no, I'm not going there. Illustrations from the original book are scanned and included as tiny thumbnails which cannot be enlarged. This is adequate for head shots of people, but for diagrams, charts, and photographs of hardware and the lunar surface, next to useless. References to endnotes in the text look like links but (at least reading the Kindle edition on an iPad) do nothing. These minor flaws do not seriously detract from the glimpse this work provides of unmanned planetary exploration at its moment of creation or the joy that this account is once again readily available.

Unlike many of the NASA history series, a paperback reprint edition is available, published by Dover. It is, however, much more expensive than the electronic edition.

Update: Reader J. Peterson writes that a free on-line edition of this book is available on NASA's Web site, in which the illustrations may be clicked to view full-resolution images.

February 2012 Permalink

Hamilton-Paterson, James. Empire of the Clouds. London: Faber and Faber, 2010. ISBN 978-0-571-24795-0.
At the end of World War II, Great Britain seemed poised to dominate or at the least be a major player in postwar aviation. The aviation industries of Germany, France, and, to a large extent, the Soviet Union lay in ruins, and while the industrial might of the United States greatly out-produced Britain in aircraft in the latter years of the war, America's P-51 Mustang was powered by a Rolls-Royce engine built under license in the U.S., and the first U.S. turbojet and turboshaft engines were based on British designs. When the war ended, Britain not only had a robust aircraft industry, composed of numerous fiercely independent and innovative companies, it had in hand projects for game-changing military aircraft and a plan, drawn up while the war still raged, to seize dominance of civil aviation from American manufacturers with a series of airliners which would redefine air travel.

In the first decade after the war, Britons, especially aviation-mad “plane-spotters” like the author, found it easy to believe this bright future beckoned to them. They thronged to airshows where innovative designs performed manoeuvres thought impossible only a few years before, and they saw Britain launch the first pure-jet and the first medium- and long-range turboprop airliners into commercial service. This was a very different Britain than that of today. Only a few years removed from the war, even postwar austerity seemed a relief from the privations of wartime, and many people vividly recalled losing those close to them in combat or to bombing attacks by the enemy. They were a hard people, and not inclined to discouragement even by tragedy. In 1952, at an airshow at Farnborough, an aircraft disintegrated in flight and fell into the crowd, killing 31 people and injuring more than 60 others. While ambulances were still carrying away the dead and injured, the show went on, and the next day Winston Churchill sent the pilot who went up after the disaster his congratulations for carrying on. While losses of aircraft and aircrew in the postwar era were small compared to those in wartime combat, they were still horrific by present day standards.

A quick glance at the rest of this particular AIB [Accidents Investigation Branch] file reveals many similar casualties. It deals with accidents that took place between 3 May 1956 and 3 January 1957. In those mere eight months there was a total of thirty-four accidents in which forty-two aircrew were killed (roughly one fatality every six days). Pilot error and mechanical failure shared approximately equal billing in the official list of causes. The aircraft types included ten de Havilland Venoms, six de Havilland Vampires, six Hawker Hunters, four English Electric Canberras, two Gloster Meteors, and one each of the following: Gloster Javelin, Folland Gnat, Avro Vulcan, Avro Shackleton, Short Seamew and Westland Whirlwind helicopter. (pp. 128–129)

There is much to admire in the spirit of mourn the dead, fix the problem, and get on with the job, but that stoic approach, essential in wartime, can blind one to asking, “Are these losses acceptable? Do they indicate we're doing something wrong? Do we need to revisit our design assumptions, practices, and procedures?” These are the questions which came into the mind of legendary test pilot Bill Waterton, whose career is the basso continuo of this narrative. First as an RAF officer, then as a company test pilot, and finally as aviation correspondent for the Daily Express, he perceived and documented how Britain's aviation industry, fragmented into tradition-bound companies, subject to incessant changes of government priorities, and unable to match the aggressive product development schedules of the Americans and even the French (themselves still rebuilding from wartime ruins), was doomed to bring inferior products to market too late to win the foreign sales essential, for an industry with a home market as small as Britain's, to maintain world-class leadership.

Although the structural problems within the industry had long been apparent to observers such as Waterton, any hope of British leadership was extinguished by the Duncan Sandys 1957 Defence White Paper which, while calling for long-overdue consolidation of the fragmented U.K. aircraft industry, concluded that most military missions in the future could be accomplished more effectively and less expensively by unmanned missiles. With a few exceptions, it cancelled all British military aviation development projects, condemning Britain, once the fallacy in the “missiles only” approach became apparent, to junior partner status in international projects or outright purchases of aircraft from suppliers overseas. On the commercial aviation side, only the Vickers Viscount was a success: the fatigue-induced crashes of the de Havilland Comet and the protracted development process of the Bristol Britannia caused their entry into service to be so late as to face direct competition from the Boeing 707 and Douglas DC-8, which were superior aircraft in every regard.

This book recounts a curious epoch in the history of British aviation. To observers outside the industry, including the hundreds of thousands who flocked to airshows, it seemed like a golden age, with one Made in Britain innovation following another in rapid succession. But in fact, it was the last burst of energy as the capital of a mismanaged and misdirected industry was squandered at the direction of fickle politicians whose priorities were elsewhere, leading to a sorry list of cancelled projects, prototypes which never flew, and aircraft which never met their specifications or were rushed into service before they were ready. In 1945, Britain was positioned to be a world leader in aviation and proceeded, over the next two decades, to blow it, not due to lack of talent, infrastructure, or financial resources, but entirely through mismanagement, shortsightedness, and disastrous public policy. The following long quote from the concluding chapter expresses this powerfully.

One way of viewing the period might be as a grand swansong or coda to the process we Britons had ourselves started with the Industrial Revolution. The long, frequently brilliant chapter of mechanical inventiveness and manufacture that began with steam finally ran out of steam. This was not through any waning of either ingenuity or enthusiasm on the part of individuals, or even of the nation's aviation industry as a whole. It happened because, however unconsciously and blunderingly it was done, it became the policy of successive British governments to eradicate that industry as though it were an unruly wasps' nest by employing the slow cyanide of contradictory policies, the withholding of support and funds, and the progressive poisoning of morale. In fact, although not even the politicians themselves quite realised it – and certainly not at the time of the upbeat Festival of Britain in 1951 – this turned out to be merely part of a historic policy change to do away with all Britain's capacity as a serious industrial nation, abolishing not just a century of making its own cars but a thousand years of building its own ships. I suspect this policy was more unconscious than deliberately willed, and it is one whose consequences for the nation are still not fully apparent. It sounds improbable; yet there is surely no other interpretation to be made of the steady, decades-long demolition of the country's manufacturing capacity – including its most charismatic industry – other than that at some level it was absolutely intentional, no matter what lengths politicians went to in order to conceal this fact from both the electorate and themselves. (p. 329)

Not only is this book rich in aviation anecdotes of the period, it has many lessons for those living in countries which have come to believe they can prosper by de-industrialising, sending all of their manufacturing offshore, importing their science and engineering talent from other nations, and concentrating on selling “financial services” to one another. Good luck with that.

May 2011 Permalink

Hanson, Victor Davis. Carnage and Culture. New York: Doubleday, 2001. ISBN 0-385-50052-1.

February 2002 Permalink

Hanson, Victor Davis. The Second World Wars. New York: Basic Books, 2017. ISBN 978-0-465-06698-8.
This may be the best single-volume history of World War II ever written. While it does not get into the low-level details of the war or its individual battles (don't expect to see maps with boxes, front lines, and arrows), it provides an encyclopedic view of the first truly global conflict with a novel and stunning insight every few pages.

Nothing like World War II had ever happened before and, thankfully, has not happened since. While earlier wars may have seemed to those involved in them as involving all of the powers known to them, they were at most regional conflicts. By contrast, in 1945, there were only eleven countries in the entire world which were neutral—not engaged on one side or the other. (There were, of course, far fewer countries then than now—most of Africa and South Asia were involved as colonies of belligerent powers in Europe.) And while war had traditionally been a matter for kings, generals, and soldiers, in this total war the casualties were overwhelmingly (70–80%) civilian. Far from being confined to battlefields, many of the world's great cities, from Amsterdam to Yokohama, were bombed, shelled, or besieged, often with disastrous consequences for their inhabitants.

“Wars” in the title refers to Hanson's observation that what we call World War II was, in reality, a collection of often unrelated conflicts which happened to occur at the same time. The settling of ethnic and territorial scores across borders in Europe had nothing to do with Japan's imperial ambitions in China, or Italy's in Africa and Greece. It was sometimes difficult even to draw a line dividing the two sides in the war. Japan occupied colonies in Indochina under the administration of Vichy France, notwithstanding Japan and Vichy both being nominal allies of Germany. The Soviet Union, while making a massive effort to defeat Nazi Germany on the land, maintained a non-aggression pact with Axis power Japan until days before its surrender and denied use of air bases in Siberia to Allied air forces for bombing campaigns against the home islands.

Combatants in different theatres might well have been fighting in entirely different wars, and sometimes in different centuries. Air crews on long-range bombing missions above Germany and Japan had nothing in common with Japanese and British forces slugging it out in the jungles of Burma, nor with attackers and defenders fighting building to building in the streets of Stalingrad, or armoured combat in North Africa, or the duel of submarines and convoys to keep the Atlantic lifeline between the U.S. and Britain open, or naval battles in the Pacific, or the amphibious landings on islands they supported.

World War II did not start as a global war, and did not become one until the German invasion of the Soviet Union and the Japanese attack on U.S., British, and Dutch territories in the Pacific. Prior to those events, it was a collection of border wars, launched by surprise by Axis powers against weaker neighbours which were, for the most part, successful. Once what Churchill called the Grand Alliance (Britain, the Soviet Union, and the United States) was forged, the outcome was inevitable, yet the road to victory was long and costly, and its length impossible to foresee at the outset.

The entire war was unnecessary, and its horrific cost can be attributed to a failure of deterrence. From the outset, there was no way the Axis could have won. If, as seemed inevitable, the U.S. were to become involved, none of the Axis powers possessed the naval or air resources to strike the U.S. mainland, much less contemplate invading and occupying it. While all of Germany and Japan's industrial base and population were, as the war progressed, open to bombardment day and night by long-range, four-engine heavy bombers escorted by long-range fighters, the Axis possessed no aircraft which could reach the cities of the U.S. east coast, the oil fields of Texas and Oklahoma, or the industrial base of the midwest. While the U.S. and Britain fielded aircraft carriers which allowed them to project power worldwide, Germany and Italy had no effective carrier forces and Japan's were reduced by constant attacks by U.S. aviation.

This correlation of forces was known before the outbreak of the war. Why did Japan and then Germany launch wars which were almost certain to result in forces ranged against them which they could not possibly defeat? Hanson attributes it to a mistaken belief that, to use Hitler's terminology, the will would prevail. The West had shown itself unwilling to effectively respond to aggression by Japan in China, Italy in Ethiopia, and Germany in Czechoslovakia, and Axis leaders concluded from this, catastrophically for their populations, that despite their industrial, demographic, and strategic military weakness, there would be no serious military response to further aggression (the “bore war” which followed the German invasion of Poland and the declarations of war on Germany by France and Britain had to reinforce this conclusion). Hanson observes, writing of Hitler, “Not even Napoleon had declared war in succession on so many great powers without any idea how to destroy their ability to make war, or, worse yet, in delusion that tactical victories would depress stronger enemies into submission.” Of the Japanese, who attacked the U.S. with no credible capability or plan for invading and occupying the U.S. homeland, he writes, “Tojo was apparently unaware or did not care that there was no historical record of any American administration either losing or quitting a war—not the War of 1812, the Mexican War, the Civil War, the Spanish American War, or World War I—much less one that Americans had not started.” (Maybe they should have waited a few decades….)

Compounding the problems of the Axis was that it was essentially an alliance in name only. There was little or no co-ordination among its parties. Hitler provided Mussolini no advance notice of the attack on the Soviet Union. Mussolini did not warn Hitler of his attacks on Albania and Greece. The Japanese attack on Pearl Harbor was as much a surprise to Germany as to the United States. Japanese naval and air assets played no part in the conflict in Europe, nor did German technology and manpower contribute to Japan's war in the Pacific. By contrast, the Allies rapidly settled on a division of labour: the Soviet Union would concentrate on infantry and armoured warfare (indeed, four out of five German soldiers who died in the war were killed by the Red Army), while Britain and the U.S. would deploy their naval assets to blockade the Axis, keep the supply lines open, and deliver supplies to the far-flung theatres of the war. U.S. and British bomber fleets attacked strategic targets and cities in Germany day and night. The U.S. became the untouchable armoury of the alliance, delivering weapons, ammunition, vehicles, ships, aircraft, and fuel in quantities which eventually surpassed those of all other combatants on both sides combined. Britain and the U.S. shared technology and cooperated in its development in areas such as radar, antisubmarine warfare, aircraft engines (including jet propulsion), and nuclear weapons, and shared intelligence gleaned from British codebreaking efforts.

As a classicist, Hanson examines the war in its incarnations in each of the elements of antiquity: Earth (infantry), Air (strategic and tactical air power), Water (naval and amphibious warfare), and Fire (artillery and armour), and adds People (supreme commanders, generals, workers, and the dead). He concludes by analysing why the Allies won and what they ended up winning—and losing. Britain lost its empire and position as a great power (although due to internal and external trends, that might have happened anyway). The Soviet Union ended up keeping almost everything it had hoped to obtain through its initial partnership with Hitler. The United States emerged as the supreme economic, industrial, technological, and military power in the world and promptly entangled itself in a web of alliances which would cause it to underwrite the defence of countries around the world and involve it in foreign conflicts far from its shores.

Hanson concludes,

The tragedy of World War II—a preventable conflict—was that sixty million people had perished to confirm that the United States, the Soviet Union, and Great Britain were far stronger than the fascist powers of Germany, Japan, and Italy after all—a fact that should have been self-evident and in no need of such a bloody laboratory, if not for prior British appeasement, American isolationism, and Russian collaboration.

At 720 pages, this is not a short book (the main text is 590 pages; the rest are sources and end notes), but those pages contain so much wisdom and so many startling insights that you will be amply rewarded for the time you spend reading them.

May 2018 Permalink

Harris, Robert. Imperium. New York: Simon & Schuster, 2006. ISBN 0-7432-6603-X.
Marcus Tullius Tiro was a household slave who served as the personal secretary to the Roman orator, lawyer, and politician Cicero. Tiro is credited with the invention of shorthand, and is responsible for the extensive verbatim records of Cicero's court appearances and political speeches. He was freed by Cicero in 53 B.C. and later purchased a farm where he lived to around the age of 100 years. According to contemporary accounts, Tiro published a biography of Cicero of at least four volumes; this work has been lost.

In this case, history's loss is a novelist's opportunity, which alternative-history wizard Robert Harris (Fatherland [June 2002], Archangel [February 2003], Enigma, Pompeii) seizes, bringing the history of Cicero's rise from ambitious lawyer to Consul of Rome to life, while remaining true to the documented events of Cicero's career. The narrator is Tiro, who discovers both the often-sordid details of how the Roman republic actually functioned and the complexity of Cicero's character as the story progresses.

The sense one gets of Rome is perhaps a little too modern, and terminology creeps in from time to time (for example, “electoral college” [p. 91]) which seems out of place. On pp. 226–227 there is an extended passage which made me fear we were about to veer off into commentary on current events:

‘I do not believe we should negotiate with such people, as it will only encourage them in their criminal acts.’ … Where would be struck next? What Rome was facing was a threat very different from that posed by a conventional enemy. These pirates were a new type of ruthless foe, with no government to represent them and no treaties to bind them. Their bases were not confined to a single state. They had no unified system of command. They were a worldwide pestilence, a parasite which needed to be stamped out, otherwise Rome—despite her overwhelming military superiority—would never again know security or peace. … Any ruler who refuses to cooperate will be regarded as Rome's enemy. Those who are not with us are against us.
Harris resists the temptation of turning Rome into a soapbox for present-day political advocacy on any side, and quickly gets back to the political intrigue in the capital. (Not that the latter days of the Roman republic are devoid of relevance to the present situation; study of them may provide more insight into the news than all the pundits and political blogs on the Web. But the parallels are not exact, and the circumstances are different in many fundamental ways. Harris wisely sticks to the story and leaves the reader to discern the historical lessons.)

The novel comes to a rather abrupt close with Cicero's election to the consulate in 63 B.C. I suspect that what we have here is the first volume of a trilogy. If that be the case, I look forward to future installments.

April 2007 Permalink

Haynes, John Earl and Harvey Klehr. Venona: Decoding Soviet Espionage in America. New Haven, CT: Yale University Press, 1999. ISBN 0-300-08462-5.
Messages encrypted with a one-time pad are absolutely secure unless the adversary obtains a copy of the pad or discovers some non-randomness in the means used to prepare it. Soviet diplomatic and intelligence traffic used one-time pads extensively, avoiding the vulnerabilities of machine ciphers which permitted World War II codebreakers to read German and Japanese traffic. The disadvantage of one-time pads is key distribution: since every message consumes as many groups from the one-time pad as its own length and pads are never reused (hence the name), embassies and agents in the field require a steady supply of new one-time pads, which can be a logistical nightmare in wartime and a risk to covert operations. The German invasion of the Soviet Union in 1941 caused Soviet diplomatic and intelligence traffic to explode in volume, surpassing the ability of Soviet cryptographers to produce and distribute new one-time pads. Apparently believing the risk to be minimal, they reacted by re-using one-time pad pages, shuffling them into a different order and sending them to other posts around the world. Bad idea! In fact, reusing one-time pad pages opened up a crack in security sufficiently wide to permit U.S. cryptanalysts, working from 1943 through 1980, to decode more than five thousand pages (some only partially) of Soviet cables from the wartime era. The existence of this effort, later codenamed Project VENONA, and all the decoded material remained secret until 1995 when it was declassified. The most-requested VENONA decrypts may be viewed on-line at the NSA Web site. (A few months ago, there was a great deal of additional historical information on VENONA at the NSA site, but at this writing the links appear to be broken.) This book has relatively little to say about the cryptanalysis of the VENONA traffic. It is essentially a history of Soviet espionage in the U.S. in the 1930s and 40s as documented by the VENONA decrypts. Some readers may be surprised at how little new information is presented here. In essence, VENONA messages completely confirmed what Whittaker Chambers (Witness, September 2003) and Elizabeth Bentley testified to in the late 1940s, and FBI counter-intelligence uncovered. The apparent mystery of why so many who spied for the Soviets escaped prosecution and/or conviction is now explained by the unwillingness of the U.S. government to disclose the existence of VENONA by using material from it in espionage cases. The decades-long controversy over the guilt of the Rosenbergs (The Rosenberg File, August 2002) has been definitively resolved by disclosure of VENONA—incontrovertible evidence of their guilt remained secret, out of reach to historians, for fifty years after their crimes. This is a meticulously-documented work of scholarly history, not a page-turning espionage thriller; it is probably best absorbed in small doses rather than one cover-to-cover gulp.
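
To make the mechanism concrete, here is a minimal illustrative sketch in Python (my own, not drawn from the book, and using XOR where the actual Soviet system used additive groups of digits; the principle is identical): when two messages are enciphered with the same pad material, combining the two ciphertexts cancels the pad entirely, leaving the combination of the two plaintexts exposed to guessed-word (“crib”) attacks of just the sort the VENONA analysts exploited.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        # Combine two equal-length byte strings with exclusive-or.
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(32)                        # one pad page; meant to be used exactly once
    p1  = b"MEET AGENT AT SAFE HOUSE TONIGHT"   # hypothetical plaintexts, 32 bytes each
    p2  = b"FUNDS TRANSFERRED VIA COURIER NY"

    c1 = xor(p1, pad)                           # correct, one-time use
    c2 = xor(p2, pad)                           # fatal re-use of the same page

    # An eavesdropper holding only c1 and c2 can cancel the pad completely:
    leak = xor(c1, c2)
    assert leak == xor(p1, p2)

    # Guessing a probable word ("crib") in one message reveals the matching
    # fragment of the other -- the crack through which VENONA was read.
    crib = b"MEET AGENT"
    print(xor(leak[:len(crib)], crib))          # prints b'FUNDS TRAN'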

February 2004 Permalink

Hendrickx, Bart and Bert Vis. Energiya-Buran. Chichester, UK: Springer Praxis, 2007. ISBN 978-0-387-69848-9.
This authoritative history chronicles one of the most bizarre episodes of the Cold War. When the U.S. Space Shuttle program was launched in 1972, the Soviets, unlike the majority of journalists and space advocates in the West who were bamboozled by NASA's propaganda, couldn't make any sense of the economic justification for the program. They worked the numbers, and they just didn't work—the flight rates, cost per mission, and most of the other numbers were obviously not achievable. So, did the Soviets chuckle at this latest folly of the capitalist, imperialist aggressors and continue on their own time-proven path of mass-produced low-technology expendable boosters? Well, of course not! They figured that even if their wisest double-domed analysts were unable to discern the justification for the massive expenditures NASA had budgeted for the Shuttle, there must be some covert military reason for its existence to which they hadn't yet twigged, and hence they couldn't tolerate a shuttle gap and consequently had to build their own, however pointless it looked on the surface.

And that's precisely what they did, as this book so thoroughly documents, with a detailed history, hundreds of pictures, and technical information which has only recently become available. Reasonable people can argue about the extent to which the Soviet shuttle was a copy of the American (and since the U.S. program started years before and placed much of its design data into the public domain, any wise designer would be foolish not to profit by using it), but what is not disputed is that (unlike the U.S. Shuttle) Energiya was a general purpose heavy-lift launcher which had the orbiter Buran as only one of its possible payloads and was one of the most magnificent engineering projects of the space programs of any nation, involving massive research and development, manufacturing, testing, integrated mission simulation, crew training, and flight testing programs.

Indeed, Energiya-Buran was in many ways a better-conceived program for space access than the U.S. Shuttle program: it integrated a heavy payload cargo launcher with the shuttle program, never envisioned replacing less costly expendable boosters with the shuttle, and forecast a development program which would encompass full reusability of boosters and core stages and both unmanned cargo and manned crew changeout missions to Soviet space stations.

The program came to a simultaneously triumphant and tragic end: the Energiya booster and the Energiya-Buran shuttle system performed flawless missions (the first Energiya launch failed to put its payload into orbit, but this was due to a software error in the payload: the launcher performed nominally from ignition through payload separation).

In the one and only flight of Buran (launch and landing video, other launch views) the orbiter was placed into its intended orbit and landed on the cosmodrome runway at precisely the expected time.

And then, in the best tradition not only of the Communist Party of the Soviet Union but of the British Labour Party of the 1970s, this singular success was rewarded by cancellation of the entire program. As an engineer, I have almost unlimited admiration for my ex-Soviet and Russian colleagues who did such masterful work and who will doubtless advance technology in the future to the benefit of us all. We should celebrate the achievement of those who created this magnificent space transportation system, while encouraging those inspired by it to open the high frontier to all of those who exulted in its success.

January 2009 Permalink

Herken, Gregg. Brotherhood of the Bomb. New York: Henry Holt, 2002. ISBN 0-8050-6589-X.
What more's to be said about the tangled threads of science, politics, ego, power, and history that bound together the lives of Ernest O. Lawrence, J. Robert Oppenheimer, and Edward Teller from the origin of the Manhattan Project through the postwar controversies over nuclear policy and the development of thermonuclear weapons? In fact, a great deal, as declassification of FBI files, including wiretap transcripts, release of decrypted Venona intercepts of Soviet espionage cable traffic, and documents from Moscow archives opened to researchers since the collapse of the Soviet Union have provided a wealth of original source material illuminating previously dark corners of the epoch.

Gregg Herken, a senior historian and curator at the National Air and Space Museum, draws upon these resources to explore the accomplishments, conflicts, and controversies surrounding Lawrence, Oppenheimer, and Teller, and the cold war era they played such a large part in defining. The focus is almost entirely on the period in which the three were active in weapons development and policy—there is little discussion of their prior scientific work, nor of Teller's subsequent decades on the public stage. This is a serious academic history, with almost 100 pages of source citations and bibliography, but the story is presented in an engaging manner which leaves the reader with a sense of the personalities involved, not just their views and actions. The author writes with no discernible ideological bias, and I noted only one insignificant technical goof.

May 2005 Permalink

Hodges, Michael. AK47: The Story of the People's Gun. London: Sceptre, 2007. ISBN 978-0-340-92106-7.
The AK-47 (the author uses “AK47” in this book, except for a few places in the last chapter; I will use the more common hyphenated designation here) has become an iconic symbol of rebellion in the six decades since Mikhail Kalashnikov designed this simple (just 8 moving parts), rugged, inexpensive to manufacture, and reliable assault rifle. Iconic? Yes, indeed—for example the flag and coat of arms of Mozambique feature this weapon which played such a large and tragic rôle in its recent history. Wherever violence erupts around the world, you'll probably see young men brandishing AK-47s or one of its derivatives. The AK-47 has become a global brand as powerful as Coca-Cola, but symbolising insurgency and rebellion, and this book is an attempt to recount how that came to be.

Toward that end it is a total, abject, and utter failure. In a total of 225 pages, only about 35 are devoted to Mikhail Kalashnikov, the history of the weapon he invented, its subsequent diffusion and manufacture around the world, and its derivatives. Instead, what we have is a collection of war stories from Vietnam, Palestine, the Sudan, Pakistan, Iraq, and New Orleans (!), all told from a relentlessly left-wing, anti-American, and anti-Israel perspective, in which the AK-47 figures only peripherally. The author, as a hard leftist, believes, inter alia, in the bizarre notion that an inanimate object made of metal and wood can compel human beings to behave in irrational and ultimately self-destructive ways. You think I exaggerate? Well, here's an extended quote from p. 131.

The AK47 moved from being a tool of the conflict to the cause of the conflict, and by the mid-1990s it had become the progenitor of indiscriminate terror across huge swaths of the continent. How could it be otherwise? AKs were everywhere, and their ubiquity made stability a rare commodity as even the smallest groups could bring to bear a military pressure out of proportion to their actual size.
That's right—the existence of weapons compels human beings, who would presumably otherwise live together in harmony, to murder one another and rend their societies into chaotic, blood-soaked Hell-holes. Yup, and why do the birds always nest in the white areas? The concept that one should look at the absence of civil society as the progenitor of violence never enters the picture here. It is the evil weapon which is at fault, not the failed doctrines to which the author clings, which have wrought such suffering across the globe. Homo sapiens is a violent species, and our history has been one of constant battles. Notwithstanding the horrific bloodletting of the twentieth century, on a per-capita basis, death from violent conflict has fallen to an all-time low in the nation-state era, despite the advent of weapons such as General Kalashnikov's. When bad ideas turn murderous, machetes will do.

A U.S. edition is now available, but as of this date only in hardcover.

August 2008 Permalink

Hofschröer, Peter. Wellington's Smallest Victory. London: Faber and Faber, 2004. ISBN 0-571-21768-0.
Wellington's victory over Napoléon at Waterloo in 1815 not only inspired Beethoven's worst musical composition, but a veritable industry of histories, exhibitions, and re-enactments in Britain. The most spectacular of these was the model of the battlefield which William Siborne, career officer and author of two books on military surveying, was commissioned to build in 1830. Siborne was an assiduous researcher; after surveying the battlefield in person, he wrote to most of the surviving officers in the battle: British, Prussian, and French, to determine the precise position of their troops at the “crisis of the battle” he had chosen to depict: 19:00 on June 18th, 1815. The responses he received indicated that Wellington's Waterloo Despatch, the after-action report penned the day after the battle was, shall we say, at substantial variance with the facts, particularly as regards the extent to which Prussian troops contributed to the victory and the time at which Wellington was notified of Napoléon's attack. Siborne stuck with the facts, and his model, first exhibited in London in 1838, showed the Prussian troops fully engaged with the French at the moment the tide of battle turned. Wellington was not amused and, being not only a national hero but former Prime Minister, was a poor choice as enemy. For the rest of Siborne's life, Wellington waged a war of attrition against Siborne's (accurate) version of the events at Waterloo, with such success that most contemporary histories take Wellington's side, even if it requires believing in spyglasses capable of seeing on the other side of hills. But truth will out. Siborne's companion History of the Waterloo Campaign remains in print 150 years after its publication, and his model of the battlefield (albeit with 40,000 figures of Prussian soldiers removed) may be seen at the National Army Museum in London.

June 2004 Permalink

Holland, Tom. Rubicon. London: Abacus, 2003. ISBN 0-349-11563-X.
Such is the historical focus on the final years of the Roman Republic and the emergence of the Empire that it's easy to forget that the Republic survived for more than four and a half centuries prior to the chaotic events beginning with Caesar's crossing the Rubicon which precipitated its transformation into a despotism, preserving the form but not the substance of the republican institutions. When pondering analogies between Rome and present-day events, it's worth keeping in mind that representative self-government in Rome endured about twice as long as the history of the United States to date. This superb history recounts the story of the end of the Republic, placing the events in historical context and, to an extent I have never encountered in any other work, allowing the reader to perceive the personalities involved and their actions through the eyes and cultural assumptions of contemporary Romans, which were often very different from those of people today.

The author demonstrates how far-flung territorial conquests and the obligations they imposed, along with the corrupting influence of looted wealth flowing into the capital, undermined the institutions of the Republic which had, after all, evolved to govern just a city-state and limited surrounding territory. Whether a republican form of government could work on a large scale was a central concern of the framers of the U.S. Constitution, and this narrative graphically illustrates why their worries were well-justified and raises the question of whether a modern-day superpower can resist the same drift toward authoritarian centralism which doomed consensual government in Rome.

The author leaves such inference and speculation to the reader. Apart from a few comments in the preface, he simply recounts the story of Rome as it happened and doesn't draw lessons from it for the present. And the story he tells is gripping; it may be difficult to imagine, but this work of popular history reads like a thriller (I mean that entirely as a compliment—historical integrity is never sacrificed in the interest of storytelling), and he makes the complex and often contradictory characters of figures such as Sulla, Cato, Cicero, Mark Antony, Pompey, and Marcus Brutus come alive and the shifting alliances among them comprehensible. Source citations are almost entirely to classical sources although, as the author observes, ancient sources, though often referred to as primary, are not necessarily so: for example, Plutarch was born 90 years after the assassination of Caesar. A detailed timeline lists events from the foundation of Rome in 753 B.C. through the death of Augustus in A.D. 14.

A U.S. edition is now available.

October 2007 Permalink

Holmes, W. J. Double-Edged Secrets. Annapolis: U.S. Naval Institute, [1979] 1998. ISBN 1-55750-324-9.
This is the story of U.S. Naval Intelligence in the Pacific theatre during World War II, told by somebody who was there—Holmes served in the inner sanctum of Naval Intelligence at Pearl Harbor from before the Japanese attack in 1941 through the end of the war in 1945. Most accounts of naval intelligence in the war with Japan focus on cryptanalysis and use of the “Ultra” information it yielded from Japanese radio intercepts. Holmes regularly worked with this material, and with the dedicated and sometimes eccentric individuals who produced it, but his focus is broader—on intelligence as a whole, of which cryptanalysis was only a part. The “product” delivered by his shop to warfighters in the fleet was painstakingly gleaned not only from communications intercepts, but also traffic analysis, direction finding, interpretation of aerial and submarine reconnaissance photos, interrogation of prisoners, translations of captured documents, and a multitude of other sources. In preparing for the invasion of Okinawa, naval intelligence tracked down an eighty-year-old seashell expert who provided information on landing beaches from his pre-war collecting expedition there. The total material delivered by intelligence for the Okinawa operation amounted to 127 tons of paper. This book provides an excellent feel for the fog of war, and how difficult it is to discern enemy intentions from the limited and conflicting information at hand. In addition, the difficult judgement calls which must be made in weighing the risk of disclosing sources of information against the need to get useful information into the hands of combat forces on a timely basis are a theme throughout the narrative. If you're looking for more of a focus on cryptanalysis and a discussion of the little-known British contribution to codebreaking in the Pacific war, see Michael Smith's The Emperor's Codes (August 2001).

December 2004 Permalink

Hoover, Herbert. Freedom Betrayed. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2011. ISBN 978-0-8179-1234-5.
This book, begun in the days after the attack on Pearl Harbor, became the primary occupation of former U.S. president Herbert Hoover until his death in 1964. He originally referred to it as the “War Book” and titled subsequent draft manuscripts Lost Statesmanship, The Ordeal of the American People, and Freedom Betrayed, which was adopted for this edition. Over the two decades Hoover worked on the book, he and his staff came to refer to it as the “Magnum Opus”, and it is magnum indeed—more than 950 pages in this massive brick of a hardcover edition.

The work began as an attempt to document how, in Hoover's view, a series of diplomatic and strategic blunders committed during the Franklin Roosevelt administration had needlessly prompted Hitler's attack upon the Western democracies, forged a disastrous alliance with Stalin, and deliberately provoked Japan into attacking the U.S. and Britain in the Pacific. This was summarised by Hoover as “12 theses” in a 1946 memorandum to his research assistant (p. 830):

  1. War between Russia and Germany was inevitable.
  2. Hitler's attack on Western Democracies was only to brush them out of his way.
  3. There would have been no involvement of Western Democracies had they not gotten in his (Hitler's) way by guaranteeing Poland (March, 1939).
  4. Without prior agreement with Stalin this constituted the greatest blunder of British diplomatic history.
  5. There was no sincerity on either side of the Stalin-Hitler alliance of August, 1939.
  6. The United States or the Western Hemisphere were never in danger by Hitler.
  7. [This entry is missing in Hoover's typescript—ed.]
  8. This was even less so when Hitler determined to attack Stalin.
  9. Roosevelt, knowing this about November, 1940, had no remote warranty for putting the United States in war to “save Britain” and/or saving the United States from invasion.
  10. The use of the Navy for undeclared war on Germany was unconstitutional.
  11. There were secret military agreements with Britain probably as early of January, 1940.
  12. The Japanese war was deliberately provoked. …

…all right—eleven theses. As the years passed, Hoover expanded the scope of the project to include what he saw as the cynical selling-out of hundreds of millions of people in nations liberated from Axis occupation into Communist slavery, making a mockery of the principles espoused in the Atlantic Charter and reaffirmed on numerous occasions and endorsed by other members of the Allies, including the Soviet Union. Hoover puts the blame for this betrayal squarely at the feet of Roosevelt and Churchill, and documents how Soviet penetration of the senior levels of the Roosevelt administration promoted Stalin's agenda and led directly to the loss of China to Mao's forces and the Korean War.

As such, this is a massive work of historical revisionism which flies in the face of the mainstream narrative of the origins of World War II and the postwar period. But, far from the rantings of a crank, this is the work of a former President of the United States, who, in his career as an engineer and in his humanitarian work after World War I, lived in or travelled extensively through all of the countries involved in the subsequent conflict and had high-level meetings with their governments. (Hoover was the only U.S. president to meet with Hitler; the contemporary notes from his 1938 meeting appear here starting on p. 837.) Further, it is a scholarly examination of the period, with extensive citations and excerpts of original sources. Hoover's work in food relief in the aftermath of World War II provided additional entrée to governments in that period and an on-the-ground view of the situation as communism tightened its grip on Eastern Europe and sought to expand into Asia.

The amount of folly chronicled here is astonishing, and the extent of the human suffering it engendered is difficult to comprehend. Indeed, Hoover's “just the facts” academic style may leave you wishing he expressed more visceral anger at all the horrible things that happened which did not have to. But then Hoover was an engineer, and we engineers don't do visceral all that well. Now, Hoover was far from immune from blunders: his predecessor in the Oval Office called him “wonder boy” for his enthusiasm for grand progressive schemes, and Hoover's mis-handling of the aftermath of the 1929 stock market crash turned what might have been a short and deep recession into the First Great Depression and set the stage for the New Deal. Yet here, I think Hoover the historian pretty much gets it right, and when reading these words, last revised in 1963, one gets the sense that the verdict of history has reinforced the evidence Hoover presents here, even though his view remains anathema in an academy almost entirely in the thrall of slavers.

In the last months of his life, Hoover worked furiously to ready the manuscript for publication; he viewed it as a large part of his life's work and his final contribution to the history of the epoch. After his death, the Hoover Foundation did not proceed to publish the document for reasons which are now impossible to determine, since none of the people involved are now alive. One can speculate that they did not wish to embroil the just-deceased founder of their institution in what was sure to be a firestorm of controversy as he contradicted the smug consensus view of progressive historians of the time, but nobody really knows (and the editor, recruited by the successor of that foundation to prepare the work for publication, either did not have access to that aspect of the story or opted not to pursue it). In any case, the editor's work was massive: sorting through thousands of documents and dozens of drafts of the work, trying to discern the author's intent from pencilled-in marginal notes, tracking down citations and verifying quoted material, and writing an introduction of more than a hundred pages explaining the origins of the work, its historical context, and the methodology used to prepare this edition; the editing is a serious work of scholarship in its own right.

If you're acquainted with the period, you're unlikely to learn any new facts here, although Hoover's first-hand impressions of countries and leaders are often insightful. In the decades after Hoover's death, many documents which were under seal of official secrecy have become available, and very few of them contradict the picture presented here. (As a former president with many military and diplomatic contacts, Hoover doubtless had access to some of this material on a private basis, but he never violated these confidences in this work.) What you will learn from reading this book is that a set of facts can be interpreted in more than one way, and that if one looks at the events from 1932 through 1962 through the eyes of an observer who was, like Hoover, fundamentally a pacifist, humanitarian, and champion of liberty, you may end up with a very different impression than that in the mainstream history books. What the conventional wisdom deems a noble battle against evil can, from a different perspective, be seen as a preventable tragedy which not only consigned entire nations to slavery for decades, but sowed the seeds of tyranny in the U.S. as the welfare/warfare state consolidated itself upon the ashes of limited government and individual liberty.

June 2012 Permalink

Hoover, Herbert. The Crusade Years. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2013. ISBN 978-0-8179-1674-9.
In the modern era, most former U.S. presidents have largely retired from the public arena, lending their names to charitable endeavours and acting as elder statesmen rather than active partisans. One striking counter-example to this rule was Herbert Hoover who, from the time of his defeat by Franklin Roosevelt in the 1932 presidential election until shortly before his death in 1964, remained in the arena, giving hundreds of speeches, many broadcast nationwide on radio, writing multiple volumes of memoirs and analyses of policy, collecting and archiving a multitude of documents regarding World War I and its aftermath which became the core of what is now the Hoover Institution collection at Stanford University, working in famine relief during and after World War II, and raising funds and promoting benevolent organisations such as the Boys' Clubs. His strenuous work to keep the U.S. out of World War II is chronicled in his “magnum opus”, Freedom Betrayed (June 2012), which presents his revisionist view of U.S. entry into and conduct of the war, and the tragedy which ensued after victory had been won. Freedom Betrayed was largely completed at the time of Hoover's death, but for reasons difficult to determine at this remove, was not published until 2011.

The present volume was intended by Hoover to be a companion to Freedom Betrayed, focussing on domestic policy in his post-presidential career. Over the years, he envisioned publishing the work in various forms, but by the early 1950s he had given the book its present title and accumulated 564 pages of typeset page proofs. Due to other duties, and Hoover's decision to concentrate his efforts on Freedom Betrayed, little was done on the manuscript after he set it aside in 1955. It is only through the scholarship of the editor, drawing upon Hoover's draft, but also documents from the Hoover Institution and the Hoover Presidential Library, that this work has been assembled in its present form. The editor has also collected a variety of relevant documents, some of which Hoover cited or incorporated in earlier versions of the work, into a comprehensive appendix. There are extensive source citations and notes about discrepancies between Hoover's quotation of documents and speeches and other published versions of them.

Of all the crusades chronicled here, the bulk of the work is devoted to “The Crusade Against Collectivism in American Life”, and Hoover's words on the topic are so pithy and relevant to the present state of affairs in the United States that one suspects that a brave, ambitious, but less than original politician who simply cut and pasted Hoover's words into his own speeches would rapidly become the darling of liberty-minded members of the Republican party. I cannot think of any present-day Republican, even darlings of the Tea Party, who draws the contrast between the American tradition of individual liberty and enterprise and the grey uniformity of collectivism as Hoover does here. And Hoover does it with a firm intellectual grounding in the history of America and the world, personal knowledge from having lived and worked in countries around the world, and an engineer's pragmatism about doing what works, not what sounds good in a speech or makes people feel good about themselves.

This is somewhat of a surprise. Hoover was, in many ways, a progressive—Calvin Coolidge called him “wonder boy”. He was an enthusiastic believer in trust-busting and regulation as a counterpoise to concentration of economic power. He was a protectionist who supported the tariff to protect farmers and industry from foreign competition. He supported income and inheritance taxes “to regulate over-accumulations of wealth.” He was no libertarian, nor even a “light hand on the tiller” executive like Coolidge.

And yet he totally grasped the threat to liberty which the intrusive regulatory and administrative state represented. It's difficult to start quoting Hoover without retyping the entire book, as there is line after line, paragraph after paragraph, and page after page which are not only completely applicable to the current predicament of the U.S., but guaranteed applause lines were they uttered before a crowd of freedom loving citizens of that country. Please indulge me in a few (comments in italics are my own).

(On his electoral defeat)   Democracy is not a polite employer.

We cannot extend the mastery of government over the daily life of a people without somewhere making it master of people's souls and thoughts.

(On JournoList, vintage 1934)   I soon learned that the reviewers of the New York Times, the New York Herald Tribune, the Saturday Review and of other journals of review in New York kept in touch to determine in what manner they should destroy books which were not to their liking.

Who then pays? It is the same economic middle class and the poor. That would still be true if the rich were taxed to the whole amount of their fortunes….

Blessed are the young, for they shall inherit the national debt….

Regulation should be by specific law, that all who run may read.

It would be far better that the party go down to defeat with the banner of principle flying than to win by pussyfooting.

The seizure by the government of the communications of persons not charged with wrong-doing justifies the immoral conduct of every snooper.

I could quote dozens more. Should Hoover re-appear and give a composite of what he writes here as a keynote speech at the 2016 Republican convention, and if it hasn't been packed with establishment cronies, I expect he would be interrupted every few lines with chants of “Hoo-ver, Hoo-ver” and nominated by acclamation.

It is sad that in the U.S. in the age of Obama there is no statesman with the stature, knowledge, and eloquence of Hoover who is making the case for liberty and warning of the inevitable tyranny which awaits at the end of the road to serfdom. There are voices articulating the message which Hoover expresses so pellucidly here, but in today's media environment they don't have access to the kind of platform Hoover did when his post-presidential policy speeches were routinely broadcast nationwide. After his being reviled ever since his presidency, not just by Democrats but by many in his own party, it's odd to feel nostalgia for Hoover, but Obama will do that to you.

In the Kindle edition the index cites page numbers in the hardcover edition which, since the Kindle edition does not include real page numbers, are completely useless.

April 2014 Permalink

Houston, Keith. Shady Characters. New York: W. W. Norton, 2013. ISBN 978-0-393-06442-1.
The earliest written languages seem mostly to have been mnemonic tools for recording and reciting spoken text. As such, they had little need for punctuation and many managed to get along withoutevenspacesbetweenwords. If you read it out loud, it's pretty easy to sound out (although words written without spaces can be used to create deliciously ambiguous text). As the written language evolved to encompass scholarly and sacred texts, commentaries upon other texts, fiction, drama, and law, the structural complexity of the text grew apace, and it became increasingly difficult to express this in words alone. Punctuation was born.

In the third century B.C. Aristophanes of Byzantium (not to be confused with the other fellow), librarian at Alexandria, invented a system of dots to denote logical breaks in Greek texts of classical rhetoric, which were placed after units called the komma, kolon, and periodos. In a different graphical form, they are with us still.

Until the introduction of movable type printing in Europe in the 15th century, books were hand-copied by scribes, each of whom was free, within the constraints of their institutions, to innovate in the presentation of the texts they copied. In the interest of conserving rare and expensive writing materials such as papyrus and parchment, abbreviations came into common use. The humble ampersand (the derivation of whose English name is delightfully presented here) dates to the shorthand invented by Cicero's personal secretary/slave Tiro, who invented a mark to quickly write “et” as his master spoke.

Other punctuation marks co-evolved with textual criticism: quotation marks allowed writers to distinguish text from other sources included within their works, and asterisks, daggers, and other symbols were introduced to denote commentary upon text. Once bound books (codices) printed with wide margins became common, readers would annotate them as they read, often pointing out key passages. Even a symbol as with-it as the now-ubiquitous “@” (which I recall around 1997 being called “the Internet logo”) is documented as having been used in 1536 as an abbreviation for amphorae of wine. And the ever-more-trending symbol prefixing #hashtags? Isaac Newton used it in the 17th century, and the story of how it came to be called an “octothorpe” is worthy of modern myth.

This is much more than a history of obscure punctuation. It traces how we communicate in writing over the millennia, and how technologies such as movable type printing, mechanical type composition, typewriting, phototypesetting, and computer text composition have both enriched and impoverished our written language. Impoverished? Indeed—I compose this on a computer able to display in excess of 64,000 characters from the written languages used by most people since the dawn of civilisation. And yet, thanks to the poisonous legacy of the typewriter, only a few people seem to be aware of the distinction, known to everybody setting type in the 19th century, among the em-dash—used to set off a phrase; the en-dash, denoting “to” in constructions like “1914–1918”; the hyphen, separating compound words such as “anarcho-libertarian” or words split at the end of a line; the minus sign, as in −4.221; and the figure dash, with the same width as numbers in a font where all numbers have the same width, which permits setting tables of numbers separated by dashes in even columns. People who appreciate typography and use TeX are acutely aware of this and grind their teeth when reading documents produced by demotic software tools such as Microsoft Word or reading postings on the Web which, although they could be so much better, would have made Mencken storm the Linotype floor of the Sunpapers had any of his writing been so poorly set.
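
For readers who want to observe the distinction for themselves, all five marks exist as separate Unicode characters; the following small Python snippet (illustrative only, not from the book) prints each one with its code point and official name.

    import unicodedata

    # The five marks distinguished above, as Unicode code points.
    marks = [
        ("\u2014", "sets off a phrase"),                       # EM DASH
        ("\u2013", "ranges such as 1914 to 1918"),             # EN DASH
        ("\u2010", "compound words (keyboards send U+002D)"),  # HYPHEN
        ("\u2212", "arithmetic, as in negative 4.221"),        # MINUS SIGN
        ("\u2012", "digit-width dash for tables of numbers"),  # FIGURE DASH
    ]

    for ch, use in marks:
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch):11s}  {ch}  {use}")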

Pilcrows, octothorpes, interrobangs, manicules, and the centuries-long quest for a typographical mark for irony (Like, we really need that¡)—this is a pure typographical delight: enjoy!

In the Kindle edition, end-of-chapter notes are bidirectionally linked (albeit with inconsistent and duplicate reference marks), but end notes are not linked to their references in the text—you must manually flip to the notes and find the number. The end notes contain many references to Web URLs, but these are not active links, just text: to follow them you must copy and paste them into a browser address bar. The index is just a list of terms, not linked to references in the text. There is no way to distinguish examples of typographic symbols which are set in red type from chapter note reference links set in an identical red font.

October 2013 Permalink

Jacobsen, Annie. Phenomena. New York: Little, Brown, 2017. ISBN 978-0-316-34936-9.
At the end of World War II, it was clear that science and technology would be central to competition among nations in the postwar era. The development of nuclear weapons, German deployment of the first operational ballistic missile, and the introduction of jet propelled aircraft pointed the way to a technology-driven arms race, and both the U.S. and the Soviet Union scrambled to lay hands on the secret super-weapon programs of the defeated Nazi regime. On the U.S. side, the Alsos Mission not only sought information on German nuclear and missile programs, but also came across even more bizarre projects, such as those undertaken by Berlin's Ahnenerbe Institute, founded in 1935 by SS leader Heinrich Himmler. Investigating the institute's headquarters in a Berlin suburb, Samuel Goudsmit, chief scientist of Alsos, found what he described as “Remnants of weird Teutonic symbols and rites … a corner with a pit of ashes in which I found the skull of an infant.” What was going on? Had the Nazis attempted to weaponise black magic? And, to the ever-practical military mind, did it work?

In the years after the war, the intelligence community and military services in both the U.S. and Soviet Union would become involved in the realm of the paranormal, funding research and operational programs based upon purported psychic powers for which mainstream science had no explanation. Both superpowers were not only seeking super powers for their spies and soldiers, but also looking over their shoulders afraid the other would steal a jump on them in exploiting these supposed powers of mind. “We can't risk a ‘woo-woo gap’ with the adversary!”

Set aside for a moment (as did most of the agencies funding this research) the question of just how these mental powers were supposed to work. If they did, in fact, exist and if they could be harnessed and reliably employed, they would confer a tremendous strategic advantage on their possessor. Consider: psychic spies could project their consciousness out of body and penetrate the most secure military installations; telepaths could read the minds of diplomats during negotiations or perhaps even plant thoughts and influence their judgement; telekinesis might be able to disrupt the guidance systems of intercontinental missiles or space launchers; and psychic assassins could undetectably kill by stopping the hearts of their victims remotely by projecting malign mental energy in their direction.

All of this may seem absurd on its face, but work on all of these phenomena and more was funded, between 1952 and 1995, by agencies of the U.S. government including the U.S. Army, Air Force, Navy, the CIA, NSA, DIA, and ARPA/DARPA, expending tens of millions of dollars. Between 1978 and 1995 the Defense Department maintained an operational psychic espionage program under various names, using “remote viewing” to provide information on intelligence targets for clients including the Secret Service, Customs Service, Drug Enforcement Administration, and the Coast Guard.

What is remote viewing? Experiments in parapsychology laboratories usually employ a protocol called “outbounder-beacon”, where a researcher travels to a location selected randomly from a set of targets and observes the locale while a subject in the laboratory, usually isolated from sensory input which might provide clues, attempts to describe, either in words or by a drawing, what the outbounder is observing. At the conclusion of the experiment, the subject's description is compared with pictures of the targets by an independent judge (unaware of which was the outbounder's destination), who selects the one which is the closest match to the subject's description. If each experiment picked the outbounder's destination from a set of five targets, you'd expect from chance alone that in an ensemble of experiments the remote viewer's perception would match the actual target around 20% of the time. Experiments conducted in the 1970s at the Stanford Research Institute (and subsequently the target of intense criticism by skeptics) claimed in excess of 65% accuracy by talented remote viewers.
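
To make that chance baseline concrete, here is a minimal simulation sketch (my own illustration, not from the book), assuming a hypothetical ensemble of 100 sessions with five candidate targets each; the session count is an assumption chosen only to show how far a sustained 65% hit rate lies from what blind guessing produces.

    import random

    TARGETS_PER_SESSION = 5    # judge picks among five candidate targets
    SESSIONS = 100             # hypothetical ensemble size (assumption, not from the book)
    ENSEMBLES = 100_000        # number of simulated ensembles of blind guessing

    # Expected hit rate from chance alone: one in five.
    print(f"Chance hit rate: {1 / TARGETS_PER_SESSION:.0%}")

    # Count how many pure-guesswork ensembles reach the claimed 65% hit rate.
    hits_needed = int(0.65 * SESSIONS)
    lucky = 0
    for _ in range(ENSEMBLES):
        hits = sum(random.randrange(TARGETS_PER_SESSION) == 0 for _ in range(SESSIONS))
        if hits >= hits_needed:
            lucky += 1
    print(f"Ensembles reaching 65% hits by luck alone: {lucky} of {ENSEMBLES}")

Run as written, the first line prints 20% and the luck count comes out essentially zero, which is why the SRI claims, if the protocol was sound, were so startling.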

While outbounder-beacon experiments were used to train and test candidate remote viewers, operational military remote viewing as conducted by the Stargate Project (and under assorted other code names over the years) was quite different. Usually the procedure involved “coordinate remote viewing”. The viewer would simply be handed a slip of paper containing the latitude and longitude of the target and then, relaxing and clearing his or her mind, would attempt to describe what was there. In other sessions, the viewer might be handed a sealed envelope containing a satellite reconnaissance photograph. The results were sometimes stunning. In 1979, a KH-9 spy satellite photographed a huge building which had been constructed at Severodvinsk Naval Base in the Soviet arctic. Analysts thought the Soviets might be building their first aircraft carrier inside the secret facility. Joe McMoneagle, an Army warrant officer and Vietnam veteran who was assigned to the Stargate Project as its first remote viewer, was given the target in the form of an envelope with the satellite photo sealed inside. Concentrating on the target, he noted, “There's some kind of a ship. Some kind of a vessel. I'm getting a very, very strong impression of props [propellers]”. Then, “I'm seeing fins…. They look like shark fins.” He continued, “I'm seeing what looks like part of a submarine in this building.” The entire transcript was forty-seven pages long.

McMoneagle's report was passed on to the National Security Council, which dismissed it because it didn't make any sense for the Soviets to build a huge submarine in a building located one hundred metres from the water. McMoneagle had described a canal between the building and the shore, but the satellite imagery showed no such structure. Then, four months later, in January 1980, another KH-9 pass showed a large submarine at a dock at Severodvinsk, along with a canal between the mystery building and the sea, which had been constructed in the interim. This was the prototype of the new Typhoon class ballistic missile submarine, which was a complete surprise to Western analysts, but not Joe McMoneagle. This is what was referred to as an “eight martini result”. When McMoneagle retired in 1984, he was awarded the Legion of Merit for exceptionally meritorious service in the field of human intelligence.

A decade later the U.S. Customs Service approached the remote viewing unit for assistance in tracking down a rogue agent accused of taking bribes from cocaine smugglers in Florida. He had been on the run for two years, and appeared on the FBI's Most Wanted List. He was believed to be in Florida or somewhere in the Caribbean. Self-taught remote viewer Angela Dellafiora concentrated on the case and immediately said, “He's in Lowell, Wyoming.” Wyoming? There was no reason for him to be in such a place. Further, there was no town named Lowell in the state. Agents looked through an atlas and found there was, however, a Lovell, Wyoming. Dellafiora said, “Well, that's probably it.” Several weeks later, she was asked to work the case again. Her notes include, “If you don't get him now you'll lose him. He's moving from Lowell.” She added that he was “at or near a campground that had a large boulder at its entrance”, and that she “sensed an old Indian burial ground is located nearby”. After being spotted by a park ranger, the fugitive was apprehended at a campground next to an Indian burial ground, about fifty miles from Lovell, Wyoming, where he had been a few weeks before. Martinis all around.

A total of 417 operational sessions were run in 1989 and 1990 for the counter-narcotics mission; 52% were judged as producing results of intelligence value while 47% were of no value. Still, what was produced was considered of sufficient value that the customers kept coming back.

Most of this work and its products were classified, in part to protect the program from ridicule by journalists and politicians. Those running the projects were afraid of being accused of dabbling in the occult, so they endorsed an Army doctrine that remote viewing, like any other military occupational specialty, was a normal human facility which could be taught to anybody with a suitable training process, and a curriculum was developed to introduce new people to the program. This was despite abundant evidence that the ability to remote view, if it exists at all, is a rare trait some people acquire at birth, and cannot be taught to randomly selected individuals any more than they can be trained to become musical composers or chess grand masters.

Under a similar shroud of secrecy, paranormal research for military applications appears to have been pursued in the Soviet Union and China. From time to time information would leak out into the open literature, such as the Soviet experiments with Ninel Kulagina. In China, H. S. Tsien (Qian Xuesen), a co-founder of the Jet Propulsion Laboratory in the United States who, after being stripped of his security clearance and moving to mainland China in 1955, led the Chinese nuclear weapons and missile programs, became a vocal and powerful advocate of research into the paranormal which, in accordance with Chinese Communist doctrine, was called “Extraordinary Human Body Functioning” (EHBF), and linked to the concept of qi, an energy field which is one of the foundations of traditional Chinese medicine and martial arts. It is likely this work continues today in China.

The U.S. remote viewing program came to an end in June 1995, when the CIA ordered the Defense Intelligence Agency to shut down the Stargate project. Many documents relating to the project have since been declassified but, oddly for a program which many claimed produced no useful results, others remain secret to this day. The paranormal continues to appeal to some in the military. In 2014, the Office of Naval Research launched a four year project funded with US$ 3.85 million to investigate premonitions, intuition, and hunches—what the press release called “Spidey sense”. In the 1950s, during a conversation between physicist Wolfgang Pauli and psychiatrist Carl Jung about psychic phenomena, Jung remarked, “As is only to be expected, every conceivable kind of attempt has been made to explain away these results, which seem to border on the miraculous and frankly impossible. But all such attempts come to grief on the facts, and the facts refuse so far to be argued out of existence.” A quarter century later in 1975, a CIA report concluded “A large body of reliable experimental evidence points to the inescapable conclusion that extrasensory perception does exist as a real phenomenon.”

To those who have had psychic experiences, there is no doubt of the reality of the phenomena. But research into them or, even more shockingly, attempts to apply them to practical ends, runs squarely into a paradigm of modern science which puts theory ahead of observation and experiment. A 1986 report by the U.S. Army said that its research had “succeeded in documenting general anomalies worthy of scientific interest,” but that “in the absence of a confirmed paranormal theory…paranormality could be rejected a priori.” When the remote viewing program was cancelled in 1995, a review of its work stated that “a statistically significant effect has been observed in the laboratory…[but] the laboratory studies do not provide evidence regarding the sources or origins of the phenomenon.” In other words, experimental results can be discarded if there isn't a theory upon which to hang them, and there is no general theory of paranormal phenomena. Heck, they could have asked me.

One wonders where many currently mature fields of science would be today had this standard been applied during their formative phases: rejecting experimental results due to lack of a theory to explain them. High-temperature superconductivity was discovered in 1986 and earned its discoverers the Nobel Prize in 1987, yet to this day there is no accepted theory that explains how it works. Perhaps it is only because it is so easily demonstrated with a desktop experiment that it, too, has not been relegated to the realm of “fringe science”.

This book provides a comprehensive history of the postwar involvement of the military and intelligence communities with the paranormal, focusing on the United States. The author takes a neutral stance: both believers and skeptics are given their say. One notes a consistent tension between scientists who reject the phenomena because “it can't possibly work” and intelligence officers who couldn't care less about how it works as long as it is providing them useful results.

The author has conducted interviews with many of the principals still alive, and documented the programs with original sources, many obtained by her under the Freedom of Information Act. Extensive end notes and source citations are included. I wish I could be more confident in the accuracy of the text, however. Chapter 7 relates astronaut Edgar Mitchell's Apollo 14 mission to the Moon, during which he conducted, on his own initiative, some unauthorised ESP experiments. But most of the chapter is about the mission itself, and it is riddled with errors, all of which could be corrected with no more research than consulting Wikipedia pages about the mission and the Apollo program. When you read something you know about and discover much of it is wrong, you have to guard against what Michael Crichton called the Gell-Mann amnesia effect: turning the page and assuming what you read there, about which you have no personal knowledge, is to be trusted. When dealing with spooky topics and programs conducted in secret, one should be doubly cautious. The copy editing is only of fair quality, and the Kindle edition has no index (the print edition does have an index).

Napoléon Bonaparte said, “There are but two powers in the world, the sword and the mind. In the long run, the sword is always beaten by the mind.” The decades of secret paranormal research were an attempt to apply this statement literally, and provide a fascinating look inside a secret world where nothing was dismissed as absurd if it might provide an edge over the adversary. Almost nobody knew about this work at the time. One wonders what is going on today.

May 2017 Permalink

Johnson, Steven. The Ghost Map. New York: Riverhead Books, 2006. ISBN 1-59448-925-4.
From the dawn of human civilisation until sometime in the nineteenth century, cities were net population sinks—the increased mortality from infectious diseases, compounded by the unsanitary conditions, impure water, and food transported from the hinterland and stored without refrigeration, so shortened the lives of city-dwellers (except for the ruling class and the wealthy, a small fraction of the population) that a city's population was maintained only by a constant net migration to it from the countryside. In densely-packed cities, not only does an infected individual come into contact with many more potential victims than in a rural environment, but highly virulent strains of infectious agents which would “burn out” due to rapidly killing their hosts in farm country or a small village can also prosper in a city, since each infected host still has the opportunity to infect many others before succumbing. Cities can be thought of as Petri dishes for evolving killer microbes.

No civic culture medium was as hospitable to pathogens as London in the middle of the 19th century. Its population, 2.4 million in 1851, had exploded from just one million at the start of the century, and all of these people had been accommodated in a sprawling metropolis almost devoid of what we would consider a public health infrastructure. Sewers, where they existed, were often open and simply dumped into the Thames, whence other Londoners, downstream, drew their drinking water. Other residences dumped human waste in cesspools, emptied occasionally (or maybe not) by “night-soil men”. Imperial London was a smelly, and a deadly, place. Observing it first-hand is what motivated Friedrich Engels to document and deplore The Condition of the Working Class in England (January 2003).

Among the diseases which cut down inhabitants of cities, one of the most feared was cholera. In 1849, an outbreak killed 14,137 in London, and nobody knew when or where it might strike next. The prevailing theory of disease at this epoch was that infection was caused by and spread through “miasma”: contaminated air. Given how London stank and how deadly it was to its inhabitants, this would have seemed perfectly plausible to people living before the germ theory of disease was propounded. Edwin Chadwick, head of the General Board of Health in London at the epoch, went so far as to assert (p. 114) “all smell is disease”. Chadwick was, in many ways, one of the first advocates and implementers of what we have come to call “big government”—that the state should take an active role in addressing social problems and providing infrastructure for public health. Relying upon the accepted “miasma” theory and empowered by an act of Parliament, he spent the 1840s trying to eliminate the stink of the cesspools by connecting them to sewers which drained their offal into the Thames. Chadwick was, by doing so, to provide one of the first demonstrations of that universal concomitant of big government, unintended consequences: “The first defining act of a modern, centralized public-health authority was to poison an entire urban population.” (p. 120).

When, in 1854, a singularly virulent outbreak of cholera struck the Soho district of London, physician and pioneer in anæsthesia John Snow found himself at the fulcrum of a revolution in science and public health toward which he had been working for years. Based upon his studies of the 1849 cholera outbreak, Snow had become convinced that the pathogen spread through contamination of water supplies by the excrement of infected individuals. He had published a monograph laying out this theory in 1849, but it swayed few readers from the prevailing miasma theory. He was continuing to document the case when cholera exploded in his own neighbourhood. Snow's mind was not only prepared to consider a waterborne infection vector, he was also one of the pioneers of the emerging science of epidemiology: he was a founding member of the London Epidemiological Society in 1850. Snow's real-time analysis of the epidemic caused him to believe that the vector of infection was contaminated water from the Broad Street pump, and his persuasive presentation of the evidence to the Board of Governors of St. James Parish caused them to remove the handle from that pump, after which the contagion abated. (As the author explains, the outbreak was already declining at the time, and in all probability the water from the Broad Street pump was no longer contaminated then. However, due to subsequent events and discoveries made later, had the handle not been removed there would have likely been a second wave of the epidemic, with casualties comparable to the first.)

Afterward, Snow, with the assistance of initially-sceptical clergyman Henry Whitehead, whose intimate knowledge of the neighbourhood and its residents allowed compiling the data which not only confirmed Snow's hypothesis but identified what modern epidemiologists would call the “index case” and “vector of contagion”, revised his monograph to cover the 1854 outbreak, illustrated by a map of its casualties which has become a classic of on-the-ground epidemiology and the graphical presentation of data. Most brilliant was Snow's use (and apparent independent invention) of a Voronoi diagram to show the boundary, traced street by street and measured not by Euclidean distance but by walking time, of the area closer to the Broad Street pump than to any other pump in the neighbourhood. (Oddly, the complete map with this crucial detail does not appear in the book: only a blow-up of the central section without the boundary. The full map is here; depending on your browser, you may have to click on the map image to display it at full resolution. The dotted and dashed line is the Voronoi cell enclosing the Broad Street pump.)
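
The idea behind Snow's boundary is easy to state, even if drawing it by hand was not: measure distance along the street network rather than as the crow flies, and assign each address to whichever pump is the fewest minutes away on foot. Here is a minimal sketch of that computation using Dijkstra's shortest-path algorithm; the street graph, pump locations, and walking times below are invented for illustration and are not Snow's data.

    import heapq

    def walking_minutes(streets, source):
        """Dijkstra's algorithm: shortest walking time from source to every corner."""
        best = {source: 0.0}
        queue = [(0.0, source)]
        while queue:
            time, corner = heapq.heappop(queue)
            if time > best.get(corner, float("inf")):
                continue
            for neighbour, minutes in streets[corner].items():
                candidate = time + minutes
                if candidate < best.get(neighbour, float("inf")):
                    best[neighbour] = candidate
                    heapq.heappush(queue, (candidate, neighbour))
        return best

    # Invented street network: corners joined by edges weighted in walking minutes.
    streets = {
        "Broad & Cambridge": {"Broad & Poland": 2, "Silver & Poland": 4},
        "Broad & Poland":    {"Broad & Cambridge": 2, "Silver & Poland": 1, "Rupert & Brewer": 5},
        "Silver & Poland":   {"Broad & Cambridge": 4, "Broad & Poland": 1, "Rupert & Brewer": 2},
        "Rupert & Brewer":   {"Broad & Poland": 5, "Silver & Poland": 2},
    }
    pumps = {"Broad Street": "Broad & Cambridge", "Rupert Street": "Rupert & Brewer"}

    # Assign every corner to the pump that is the fewest minutes away on foot.
    times = {name: walking_minutes(streets, corner) for name, corner in pumps.items()}
    for corner in streets:
        nearest = min(pumps, key=lambda p: times[p].get(corner, float("inf")))
        print(f"{corner}: nearest pump is {nearest}")

The set of corners assigned to the Broad Street pump is exactly the kind of cell Snow drew: a Voronoi region computed with walking time, rather than straight-line distance, as the metric.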

In the following years, London embarked upon a massive program to build underground sewers to transport the waste of its millions of residents downstream to the tidal zone of the Thames and, later, directly to the sea. There would be one more cholera outbreak in London in 1866—in an area not yet connected to the new sewers and water treatment systems. Afterward, there has not been a single epidemic of cholera in London. Other cities in the developed world learned this lesson and built the infrastructure to provide their residents clean water. In the developing world, cholera continues to take its toll: in the 1990s an outbreak in South America infected more than a million people and killed almost 10,000. Fortunately, administration of rehydration therapy (with electrolytes) has drastically reduced the likelihood of death from a cholera infection. Still, you have to wonder why, in a world where billions of people lack access to clean water and third world mega-cities are drawing millions to live in conditions not unlike London in the 1850s, some believe that laptop computers are the top priority for children growing up there.

A paperback edition is now available.

December 2007 Permalink

Jones, Peter. The 1848 Revolutions. 2nd ed. Harlow, England: Pearson Education, 1991. ISBN 0-582-06106-7.

January 2002 Permalink

Judd, Denis. Someone Has Blundered. London: Phoenix, [1973] 2007. ISBN 0-7538-2181-8.
One of the most amazing things about the British Empire was not how much of the world it ruled, but how small was the army which maintained dominion over so large a portion of the globe. While the Royal Navy enjoyed unchallenged supremacy on the high seas in the 19th century, it was of little use in keeping order in the colonies, and the ground forces available were, not just by modern standards, but by those of contemporary European powers, meagre. In the 1830s, the British regular army numbered only about 100,000, and rose to just 200,000 by the end of the century. When the Indian Mutiny (or “Sepoy Rebellion”) erupted in 1857, there were just 45,522 European troops in the entire subcontinent.

Perhaps the stolid British at home were confident that the military valour and discipline of their meagre legions would prevail, or that superior technology would carry the day:

Whatever happens,
we have got,
the Maxim gun,
and they have not.
            — Joseph Hilaire Pierre René Belloc, “The Modern Traveller”, 1898
but when it came to a fight, as happened surprisingly often in what one thinks of as the Pax Britannica era (the Appendix [pp. 174–176] lists 72 conflicts and military expeditions in the Victorian era), a small, tradition-bound force, accustomed to peace and the parade ground, too often fell victim to (p. xix) “a devil's brew of incompetence, unpreparedness, mistaken and inappropriate tactics, a reckless underestimating of the enemy, a brash overconfidence, a personal or psychological collapse, a difficult terrain, useless maps, raw and panicky recruits, skilful or treacherous opponents, diplomatic hindrance, and bone-headed leadership.”

All of these are much in evidence in the campaigns recounted here: the 1838–1842 invasion of Afghanistan, the 1854–1856 Crimean War, the 1857–1859 Indian Mutiny, the Zulu War of 1879, and the first (1880–1881) and second (1899–1902) Boer Wars. Although this book was originally published more than thirty years ago and its subtitle, “Calamities of the British Army in the Victorian Age”, suggests it is a chronicle of a quaint and long-departed age, there is much to learn in these accounts of how highly-mobile, superbly trained, excellently equipped, and technologically superior military forces were humiliated and sometimes annihilated by indigenous armies with the power of numbers, knowledge of the terrain, and the motivation to defend their own land.

April 2007 Permalink

Kaiser, David. How the Hippies Saved Physics. New York: W. W. Norton, 2011. ISBN 978-0-393-07636-3.
From its origin in the early years of the twentieth century until the outbreak of World War II, quantum theory inspired deeply philosophical reflection as to its meaning and implications for concepts rarely pondered before in physics, such as the meaning of “measurement”, the rôle of the “observer”, the existence of an objective reality apart from the result of a measurement, and whether the randomness of quantum measurements was fundamental or due to our lack of knowledge of an underlying stratum of reality. Quantum theory seemed to imply that the universe could not be neatly reduced to isolated particles which interacted only locally, but admitted “entanglement” among separated particles which seemed to verge upon mystic conceptions of “all is one”. These weighty issues occupied the correspondence and conference debates of the pioneers of quantum theory including Planck, Heisenberg, Einstein, Bohr, Schrödinger, Pauli, Dirac, Born, and others.

And then the war came, and then the war came to an end, and with it ended the inquiry into the philosophical foundations of quantum theory. During the conflict, physicists on all sides were central to war efforts including nuclear weapons, guided missiles, radar, and operations research, and after the war they were perceived by governments as a strategic resource—subsidised in their education and research and provided with lavish facilities in return for having them on tap when their intellectual capacities were needed. In this environment, the education and culture of physics underwent a fundamental change. Suddenly the field was much larger than before, filled with those interested more in their own careers than in probing the bottom of deep questions, and oriented toward, in Richard Feynman's words, “getting the answer out”. Instead of debating what their equations said about the nature of reality, physicists adopted the motto of the age, “shut up and calculate”, and those who didn't found their career prospects severely constrained.

Such was the situation from the end of World War II through the 1960s, when the defence (and later space program) funding gravy train came to an end due to crowding out of R&D budgets by the Vietnam War and the growing financial crisis due to debasement of the dollar. Suddenly, an entire cohort of Ph.D. physicists who, a few years before could expect to choose among a variety of tenure-track positions in academia or posts in government or industry research laboratories, found themselves superbly qualified to do work which nobody seemed willing to pay them to do. Well, whatever you say about physicists, they're nothing if they aren't creative, so a small group of out of the box thinkers in the San Francisco Bay area self-organised into the Fundamental Fysiks Group and began to re-open the deep puzzles in quantum mechanics which had laid fallow since the 1930s. This group, founded by Elizabeth Rauscher and George Weissmann, whose members came to include Henry Stapp, Philippe Eberhard, Nick Herbert, Jack Sarfatti, Saul-Paul Sirag, Fred Alan Wolf, John Clauser, and Fritjof Capra, came to focus on Bell's theorem and its implications for quantum entanglement, what Einstein called “spooky action at a distance”, and the potential for instantaneous communications not limited by the speed of light.

The author argues that the group's work, communicated through samizdat circulation of manuscripts, the occasional publication in mainstream journals, and contact with established researchers open to considering foundational questions, provided the impetus for today's vibrant theoretical and experimental investigation of quantum information theory, computing, and encryption. There is no doubt whatsoever from the trail of citations that Nick Herbert's attempts to create a faster-than-light signalling device led directly to the quantum no-cloning theorem.

Not only did the group reestablish the prewar style of doing physics, more philosophical than computational, they also rediscovered the way science had been funded from the Medicis until the advent of Big Science. While some group members held conventional posts, others were supported by wealthy patrons interested in their work purely from its intellectual value. We encounter a variety of characters who probably couldn't have existed in any decade other than the 1970s including Werner Erhard, Michael Murphy, Ira Einhorn, and Uri Geller.

The group's activities ranged far beyond the classrooms and laboratories into which postwar physics had been confined, to the thermal baths at Esalen and outreach to the public through books which became worldwide bestsellers and remain in print to this day. Their curiosity also wandered well beyond the conventional bounds of physics, encompassing ESP (and speculating as to how quantum processes might explain it). This caused many mainstream physicists to keep members at arm's length, even as their insights on quantum processes were infiltrating the journals.

Many of us who lived through (I prefer the term “endured”) the 1970s remember them as a dull brown interlude of broken dreams, ugly cars, funny money, and malaise. But, among a small community of thinkers orphaned from the career treadmill of mainstream physics, it was a renaissance of investigation of the most profound questions in physics, and the spark which lit today's research into quantum information processing.

The Kindle edition has the table of contents and notes properly linked, but the index is just a useless list of terms. An interview of the author, Jack Sarfatti, and Fred Alan Wolf by George Knapp on “Coast to Coast AM” is available.

November 2011 Permalink

Karsh, Efraim. Islamic Imperialism. New Haven, CT: Yale University Press, 2006. ISBN 0-300-10603-3.
A great deal of conflict and tragedy might have been avoided in recent years had only this 2006 book been published a few years earlier and read by those contemplating ambitious adventures to remake the political landscape of the Near East and Central Asia. The author, a professor of history at King's College, University of London, traces the repeated attempts, beginning with Muhammad and his immediate successors, to establish a unified civilisation under the principles of Islam, in which the Koranic proscription of conflict among Muslims would guarantee permanent peace.

In the century following the Prophet's death in the year 632, Arab armies exploded out of the birthplace of Islam and conquered a vast territory from present-day Iran to Spain, including the entire north African coast. This was the first of a succession of great Islamic empires, which would last until the dismantling of the Ottoman Empire in the aftermath of World War I. But, as this book thoroughly documents, over this entire period, the emphasis was on the word “empire” and not “Islamic”. While the leaders identified themselves as Muslims and exhorted their armies to holy war, the actual empires were very much motivated by a quest for temporal wealth and power, and behaved much like the previous despotisms they supplanted. Since the Arabs had neither experience in administering an empire nor a cadre of people trained in those arts, they ended up assimilating the bureaucratic structure and personnel of the Persian empire after conquering it, and much the same happened in the West after the fall of the Byzantine empire.

While soldiers might have seen themselves as spreading the word of Islam by the sword, in fact the conquests were mostly about the traditional rationale for empire: booty and tribute. (The Prophet's injunction against raiding other Muslims does appear to have been one motivation for outward-directed conquest, especially in the early years.) Not only was there relatively little aggressive proselytising of Islam; on a number of occasions conversion to Islam by members of dhimmi populations was discouraged or prohibited outright because the imperial treasury depended heavily on the special taxes non-Muslims were required to pay. Nor did these empires resemble the tranquil Dar al-Islam envisaged by the Prophet—in fact, only 24 years would elapse after his death before the Caliph Uthman was assassinated by his rivals, and that would be the first of many murders, revolutions, plots, and conflicts between Muslim factions within the empires to come.

Nor were the Crusades, seen through contemporary eyes, the cataclysmic clash of civilisations they are frequently described as today. The kingdoms established by the crusaders rapidly became seen as regional powers like any other, and often found themselves in alliance with Muslims against Muslims. Pan-Arabists in modern times who identify their movement with opposition to the hated crusader often fail to note that there was never any unified Arab campaign against the crusaders; when they were finally ejected, it was by the Turks, and their great hero Saladin was, himself, a Kurd.

The latter half of the book recounts the modern history of the Near East, from Churchill's invention of Iraq, through Nasser, Khomeini, and the emergence of Islamism and terror networks directed against Israel and the West. What is simultaneously striking and depressing about this long and detailed history of strife, subversion, oppression, and conflict is that you can open it up to almost any page and apart from a few details, it sounds like present-day news reports from the region. Thirteen centuries of history with little or no evidence for indigenous development of individual liberty, self-rule, the rule of law, and religious tolerance does not bode well for idealistic neo-Jacobin schemes to “implant democracy” at the point of a bayonet. (Modern Turkey can be seen as a counter-example, but it is worth observing that Mustafa Kemal explicitly equated modernisation with the importation and adoption of Western values, and simultaneously renounced imperial ambitions. In this, he was alone in the region.)

Perhaps the lesson one should draw from this long and tragic narrative is that this unfortunate region of the world, which was a fiercely-contested arena of human conflict thousands of years before Muhammad, has resisted every attempt by every actor, the Prophet included, to pacify it over those long millennia. Rather than commit lives and fortune to yet another foredoomed attempt to “fix the problem”, one might more wisely and modestly seek ways to keep it contained and not aggravate the situation.

October 2006 Permalink

Kauffman, Bill. Forgotten Founder, Drunken Prophet. Wilmington: ISI Books, 2008. ISBN 978-1-933859-73-6.
It is a cliché to observe that history is written by the victors, but rarely is it as evident as in the case of the drafting and ratification of the United States Constitution, where the proponents of a strong national government, some of whom, Alexander Hamilton among them, wished to “annihilate the State distinctions and State operations” (p. 30), not only conducted the proceedings in secret and carefully managed the flow of information to the public, but also concealed their nationalist, nay imperial, ambitions from the state conventions which were to vote on ratification. Indeed, just like modern-day collectivists in the U.S. who have purloined the word “liberal”, which used to mean a champion of individual freedom, the covert centralisers at the Constitutional Convention styled themselves “Federalists”, while promoting a supreme government which was anything but federal in nature. The genuine champions of a federal structure allowed themselves to be dubbed “Anti-Federalists” and, as always, were slandered as opposing “progress” (but toward what?). The Anti-Federalists counted among their ranks men such as Samuel Adams, Patrick Henry, George Mason, Samuel Chase, and Elbridge Gerry: these were not reactionary bumpkins but heroes, patriots, and intellectuals the equal of any of their opponents. And then there was Luther Martin, fervent Anti-Federalist and perhaps the least celebrated of the Founding Fathers.

Martin's long life was a study in contradictions. He was considered one of the most brilliant trial lawyers of his time, and yet his courtroom demeanour was universally described as long-winded, rambling, uncouth, and ungrammatical. He often appeared in court obviously inebriated, was slovenly in appearance and dress, when excited would flick spittle from his mouth, and let's not get into his table manners. At the Constitutional Convention he was a fierce opponent of the Virginia Plan, which became the basis of the Constitution, and, with Samuel Adams and Mason, urged the adoption of a Bill of Rights. He argued vehemently for the inclusion of an immediate ban on the importation of slaves and a plan to phase out slavery while, as of 1790, owning six slaves himself yet serving as Honorary-Counselor to a Maryland abolitionist society.

After the Constitution was adopted by the convention (Martin had walked out by that time and did not sign the document), he led the fight against its ratification by Maryland. Maryland ratified the Constitution over his opposition, but he did manage to make the ratification conditional upon the adoption of a Bill of Rights.

Martin was a man with larger-than-life passions. Although philosophically close to Thomas Jefferson in his view of government, he detested the man because he believed Jefferson had slandered one of his wife's ancestors as a murderer of Indians. When Jefferson became President, Martin the Anti-Federalist became Martin the ardent Federalist, bent on causing Jefferson as much anguish as possible. When a law student studying with him eloped with and married his daughter, Martin turned incandescent, wrote and self-published a 163-page full-tilt tirade against the bounder titled Modern Gratitude.

Lest Martin come across as a kind of buffoon, bear in mind that after his singular performance at the Constitutional Convention, he went on to serve as Attorney General of the State of Maryland for thirty years (a tenure never equalled in all the years which followed), argued forty cases before the U.S. Supreme Court, and appeared for the defence in two of the epochal trials of early U.S. jurisprudence: the impeachment trial of Supreme Court Justice Samuel Chase before the U.S. Senate, and the treason trial of Aaron Burr—and won acquittals on both occasions.

The author is an unabashed libertarian, and considers Martin's diagnosis of how the Constitution would inevitably lead to the concentration of power in a Federal City (which his fellow Anti-Federalist George Clinton foresaw, “would be the asylum of the base, idle, avaricious, and ambitious” [p. xiii]) to the detriment of individual liberty as prescient. One wishes that Martin had been listened to, while sympathising with those who actually had to endure his speeches.

The author writes with an exuberantly vast vocabulary which probably would have sent the late William F. Buckley to the dictionary on several occasions: every few pages you come across a word like “roorback”, “eftsoons”, “sennight”, or “fleer”. For a complete list of those which stumped me, open the vault of the spoilers.

Spoiler warning: Plot and/or ending details follow.  
Here are the delightfully obscure words used in this book. To avoid typographic fussiness, I have not quoted them. Each is linked to its definition. Vocabulary ho!

malison, exordium, eristic, roorback, tertium quid, bibulosity, eftsoons, vendue, froward, pococurante, disprized, toper, cerecloth, sennight, valetudinarian, variorum, concinnity, plashing, ultimo, fleer, recusants, scrim, flagitious, indurated, truckling, linguacious, caducity, prepotency, natheless, dissentient, placemen, lenity, burke, plangency, roundelay, hymeneally, mesalliance, divagation, parti pris, anent, comminatory, descry, minatory
Spoilers end here.  

This is a wonderful little book which, if your view of the U.S. Constitution has been solely based on the propaganda of those who promulgated it, is an excellent and enjoyable antidote.

November 2008 Permalink

Keegan, John. The Face of Battle. New York: Penguin, 1976. ISBN 978-0-14-004897-1.
As the author, a distinguished military historian, observes in the extended introduction, the topic of much of military history is battles, but only rarely do historians delve into the experience of battle itself—instead they treat the chaotic and sanguinary events on the battlefield as a kind of choreography or chess game, with commanders moving pieces on a board. But what do those pieces, living human beings in the killing zone, actually endure in battle? What motivates them to advance in the face of the enemy or, on the other hand, turn and run away? What do they see and hear? What wounds do they suffer, what are their most common causes, and how are the wounded treated during and after the battle? How do the various military specialities (infantry, cavalry, artillery, and armour) combat one another, and how can they be used together to achieve victory?

To answer these questions, the author examines three epic battles of their respective ages: Agincourt, Waterloo, and the first day of the Somme Offensive. Each battle is described in painstaking detail, not from the viewpoint of the commanders, but from that of the combatants on the field. Modern analysis of the weapons employed and the injuries they inflict is used to reconstruct the casualties suffered and their consequences for the victims. Although spanning almost five centuries, all of these battles took place in northwest Europe between European armies, which allows cultural influences to be held roughly constant (although, of course, they evolved over time) as the expansion of state authority and technology increased the size and lethality of the battlefield by orders of magnitude. (Henry's entire army at Agincourt numbered less than 6,000 and suffered 112 deaths during the battle, while on the first day of the Somme, British forces alone lost 57,470 men, with 19,240 killed.)

The experiences of some combatants in these set piece battles are so alien to normal human life that it is difficult to imagine how they were endured. Consider the Inniskilling Regiment, which arrived at Waterloo after the battle was already underway. Ordered by Wellington to occupy a position in the line, they stood there in static formation for four hours, while receiving cannon fire from French artillery several hundred yards away. During those hours, 450 of the regiment's 750 officers and men were killed and wounded, including 17 of the 18 officers. The same regiment, a century later, suffered devastating losses in a futile assault on the first day of the Somme.

Battles are decided when the intolerable becomes truly unendurable, and armies dissolve into the crowds from which they were formed. The author examines this threshold in various circumstances, and what happens when it is crossed and cohesion is lost. In a concluding chapter he explores how modern mechanised warfare (recall that when this book was published the threat of a Soviet thrust into Western Europe with tanks and tactical nuclear weapons was taken with deadly seriousness by NATO strategists) may have so isolated the combatants from one another and subjected them to such a level of lethality that armies might disintegrate within days of the outbreak of hostilities. Fortunately, we never got to see whether this was correct, and hopefully we never will.

I read the Kindle edition using the iPhone Kindle application. It appears to have been created by OCR scanning a printed copy of the book and passing it through a spelling checker, but with no further editing. Unsurprisingly, the errors one is accustomed to in scanned documents abound. The word “modern”, for example, appears more than a dozen times as “modem”. Now I suppose cybercommand does engage in “modem warfare”, but this is not what the author means to say. The Kindle edition costs only a dollar less than the paperback print edition, and such slapdash production values are unworthy of a publisher with the reputation of Penguin.

July 2009 Permalink

Kennedy, Gregory P. The Rockets and Missiles of White Sands Proving Ground, 1945–1958. Atglen, PA: Schiffer Military History, 2009. ISBN 978-0-7643-3251-7.
Southern New Mexico has been a centre of American rocketry from its origin to the present day. After being chased out of Massachusetts due to his inventions' proclivity for making ear-shattering detonations and starting fires, Robert Goddard moved his liquid fuel rocket research to a site near Roswell, New Mexico in 1930 and continued to launch increasingly advanced rockets from that site until 1943, when he left to do war work for the Navy. Faced with the need for a range to test the missiles developed during World War II, in February 1945 the U.S. Army acquired a site stretching 100 miles north from the Texas-New Mexico border near El Paso and 41 miles east-west at the widest point, designated the “White Sands Proving Ground”: taking its name from the gypsum sands found in the region, also home to the White Sands National Monument.

Although established before the end of the war to test U.S. missiles, the first large rockets launched at the site were captured German V-2s (December 2002), with the first launched (unsuccessfully) in April 1946. Over the next six years, around seventy V-2s lifted off from White Sands, using the V-2's massive (for the time) one ton payload capacity to carry a wide variety of scientific instruments into the upper atmosphere and the edge of space. In the Bumper project, the V-2 was used as the booster for the world's first two stage liquid rocket, with its WAC Corporal second stage attaining an altitude of 248 miles: higher than some satellites orbit today (it did not, of course, attain anything near orbital velocity, and quickly fell back to Earth).
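
A back-of-envelope check (mine, not the book's) makes the distinction between altitude and orbit concrete: the velocity needed for a circular orbit at a 248-mile altitude works out to roughly 7.7 km/s, far more than the V-2/WAC Corporal combination delivered on its essentially vertical trajectory, which is why the vehicle simply coasted to apogee and fell back. The constants below are standard textbook values.

    import math

    MU_EARTH = 3.986e14           # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0         # mean Earth radius, m
    ALTITUDE = 248 * 1_609.344    # the 248-mile apogee cited above, in metres

    # Circular orbital velocity: v = sqrt(mu / r)
    v_orbit = math.sqrt(MU_EARTH / (R_EARTH + ALTITUDE))
    print(f"Circular orbital velocity at 248 miles: {v_orbit / 1000:.1f} km/s")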

Simultaneously with launches of the V-2, U.S. rocketeers arrived at White Sands to test their designs—almost every U.S. missile of the 1940s and 1950s made its first flight there. These included research rockets such as Viking and Aerobee (the latter first launched in 1948 and remaining in service until 1985, with a total of 1037 launched); the Corporal, Sergeant, and Redstone ballistic missiles; the Loki, Nike, and Hawk anti-aircraft missiles; and a variety of tactical missiles including the unguided (!) nuclear-tipped Honest John.

White Sands in the forties and fifties was truly the Wild West of rocketry. Even by the standards of fighter aircraft development in the epoch, this was by guess and by gosh engineering in its purest incarnation. Consider Viking 8, which broke loose from the launch pad during a static test when hold-down fittings failed, and was allowed to fly to 20,000 feet to see what would happen (p. 97). Or Viking 10, whose engine exploded on the launch pad and then threatened a massive explosion because leaking fuel was causing the tankage to crumple as it left a vacuum. An intrepid rocketeer was sent out of the blockhouse with a carbine to shoot a hole in the top of the fuel tank and allow air to enter (p. 100)—problem solved! (The rocket was rebuilt and later flew successfully.) Then there was the time they ran out of 90% hydrogen peroxide and were told the first Viking launch would have to be delayed for two weeks until a new shipment could arrive by rail. Can't have that! So two engineers drove a drum of the highly volatile and corrosive substance in the back of a station wagon from Buffalo, New York to White Sands to meet the launch deadline (p. 79). In the Nike program, people worried about whether its aniline fuel would be sufficiently available under tactical conditions, so they tried using gasoline as fuel instead—BOOM! Nope, guess not (p. 132). With all this “innovation” going on, they needed a suitable place from which to observe it, so the pyramid-shaped blockhouse had reinforced concrete walls ten feet thick with a roof 27 feet thick at the peak. This was designed to withstand a direct impact from a V-2 falling from an altitude of 100 miles. “Once the rockets are up, who cares where they come down?”

And the pace of rockets going up was absolutely frenetic, almost inconceivable by the standards of today's hangar queens and launch pad prima donnas (some years ago, a booster which sat on the pad for more than a year was nicknamed the “civil servant”: it won't work and you can't fire it). By contrast, a single development program, the Loki anti-aircraft missile, conducted a total of 2282 launches at White Sands in 1953 and 1954 (p. 115)—that's an average of more than three a day, counting weekends and holidays!

The book concludes in 1958 when White Sands Proving Ground became White Sands Missile Range (scary pop-up at this link), which remains a centre of rocket development and testing to this day. With the advent of NASA and massively funded, long-term military procurement programs, much of the cut, try, and run like Hell days of rocketry came to a close; this book covers that period which, if not a golden age, was a heck of a lot of fun for engineers who enjoy making loud noises and punching holes in the sky.

The book is gorgeous, printed on glossy paper, with hundreds of illustrations. I noted no typographical or factual errors. A complete list of all U.S. V-2, WAC Corporal, and Viking launches is given in appendices at the end.

May 2010 Permalink

Kershaw, Ian. The End. New York: Penguin Press, 2011. ISBN 978-1-59420-314-5.
Ian Kershaw is the author of the definitive two-volume biography of Hitler: Hitler: 1889–1936 Hubris and Hitler: 1936–1945 Nemesis (both of which I read before I began keeping this list). In the present volume he tackles one of the greatest puzzles of World War II: why did Germany continue fighting to the bitter end, when the Red Army was only blocks from Hitler's bunker, and long after it was apparent to those in the Nazi hierarchy, senior military commanders, industrialists, and the general populace that the war was lost and continuing the conflict would only prolong the suffering, inflict further casualties, and further devastate the infrastructure upon which survival in a postwar world would depend? It is, as the author notes, quite rare in the history of human conflict that the battle has to be taken all the way to the leader of an opponent in his capital city: Mussolini was deposed by his own Grand Council of Fascism and the king of Italy, and Japan surrendered before a single Allied soldier set foot upon the Home Islands (albeit after the imposition of a total blockade, the entry of the Soviet Union into the war against Japan, and the destruction of two cities by atomic bombs).

In addressing this question, the author recounts the last year of the war in great detail, starting with the Stauffenberg plot, which attempted unsuccessfully to assassinate Hitler on July 20th, 1944. In the aftermath of this plot, a ruthless purge of those considered unreliable in the military and party ensued (in the Wehrmacht alone, around 700 officers were arrested and 110 executed), those who survived were forced to swear personal allegiance to Hitler, and additional informants and internal repression were unleashed to identify and mete out summary punishment for any perceived disloyalty or defeatist sentiment. This, in effect, aligned those who might have opposed Hitler with his own personal destiny and made any overt expression of dissent from his will to hold out to the end tantamount to suicide.

But the story does not end there. Letters from soldiers at the front, meticulously catalogued by the censors of the SD and summarised in reports to Goebbels's propaganda ministry, indicate that while morale deteriorated in the last year of the war, fear of the consequences of a defeat, particularly at the hands of the Red Army, motivated many to keep on fighting. Propaganda highlighted the atrocities committed by the “Asian Bolshevik hordes” but, if exaggerated, was grounded in fact, as the Red Army was largely given a free hand if not encouraged to exact revenge for German war crimes on Soviet territory.

As the dénouement approached, those in Hitler's inner circle, who might have otherwise moved against him under other circumstances, were paralysed by the knowledge that their own authority flowed entirely from him, and that any hint of disloyalty would cause them to be dismissed or worse (as had already happened to several). With the Party and its informants and enforcers having thoroughly infiltrated the military and civilian population, there was simply no chance for an opposition movement to establish itself. Certainly there were those, particularly on the Western front, who did as little as possible and waited for the British and Americans to arrive (the French—not so much: reprisals under the zones they occupied had already inspired fear among those in their path). But finally, as long as Hitler was determined to resist to the very last and willing to accept the total destruction of the German people who he deemed to have “failed him”, there was simply no counterpoise which could oppose him and put an end to the conflict. Tellingly, only a week after Hitler's death, his successor, Karl Dönitz, ordered the surrender of Germany.

This is a superb, thoughtful, and thoroughly documented (indeed, almost 40% of the book is source citations and notes) account of the final days of the Third Reich and an enlightening and persuasive argument as to why things ended as they did.

As with all insightful works of history, the reader may be prompted to see parallels in other epochs and current events. Personally, I gained a great deal of insight into the ongoing financial crisis and the increasingly futile efforts of those who brought it about to (as the tired phrase, endlessly repeated) “kick the can down the road” rather than make the structural changes which might address the actual causes of the problem. Now, I'm not calling the central bankers, politicians, or multinational bank syndicates Nazis—I'm simply observing that as the financial apocalypse approaches they're behaving in much the same way as the Hitler regime did in its own final days: trying increasingly desperate measures to buy first months, then weeks, then days, and ultimately hours before “The End”. Much as was the case with Hitler's inner circle, those calling the shots in the international financial system simply cannot imagine a world in which it no longer exists, or their place in such a world, so they continue to buy time, whatever the cost or how small the interval, to preserve the reference frame in which they exist. The shudder of artillery can already be felt in the bunker.

February 2012 Permalink

King, David. The Commissar Vanishes. New York: Henry Holt, 1997. ISBN 0-8050-5295-X.

June 2003 Permalink

Klemperer, Victor. I Will Bear Witness. Vol. 1. New York: Modern Library, [1933–1941, 1995] 1998. ISBN 978-0-375-75378-7.
This book is simultaneously tedious, depressing, and profoundly enlightening. The author (a cousin of the conductor Otto Klemperer) was a respected professor of Romance languages and literature at the Technical University of Dresden when Hitler came to power in 1933. Although the son of a Reform rabbi, Klemperer had been baptised in a Christian church and considered himself a protestant Christian and entirely German. He volunteered for the German army in World War I and served at the front in the artillery and later, after recovering from a serious illness, in the army book censorship office on the Eastern front. As a fully assimilated German, he opposed all appeals to racial identity politics, Zionist as well as Nazi.

Despite his conversion to protestantism, military service to Germany, exalted rank as a professor, and decades of marriage to a woman deemed “Aryan” under the racial laws promulgated by the Nazis, Klemperer was considered a “full-blooded Jew” and was subject to ever-escalating harassment, persecution, humiliation, and expropriation as the Nazis tightened their grip on Germany. As civil society spiralled toward barbarism, Klemperer lost his job, his car, his telephone, his house, his freedom of movement, the right to shop in “Aryan stores”, access to public and lending libraries, and even the typewriter on which he continued to write in the hope of maintaining his sanity. His world shrank from that of a cosmopolitan professor fluent in many European languages to a single “Jews' house” in Dresden, shared with other once-prosperous families similarly evicted from their homes. His family and acquaintances dwindle as, one after another, they opt for emigration, leaving only the author and his wife still in Germany (due to lack of opportunities, but also to an inertia and sense of fatalism evident in the narrative). Slowly the author's sense of Germanness dissipates as he comes to believe that what is happening in Germany is not an aberration but somehow deeply rooted in the German character, and that Hitler embodies beliefs widespread among the population which were previously invisible before becoming so starkly manifest. Klemperer is imprisoned for eight days in 1941 for a blackout violation for which a non-Jew would have received a warning or a small fine, and his prison journal, written a few days after his release, is a matter of fact portrayal of how an encounter with the all-powerful and arbitrary state reduces the individual to a mental servitude more pernicious than physical incarceration.

I have never read any book which provides such a visceral sense of what it is like to live in a totalitarian society and how quickly all notions of justice, rights, and human dignity can evaporate when a charismatic leader is empowered by a mob in thrall to his rhetoric. Apart from the description of the persecution the author's family and acquaintances suffered themselves, he turns a keen philologist's eye on the language of the Third Reich, and observes how the corruption of the regime is reflected in the corruption of the words which make up its propaganda. Ayn Rand's fictional (although to some extent autobiographical) We the Living provides a similar sense of life under tyranny, but this is the real thing, written as events happened, with no knowledge of how it was all going to come out, and is, as a consequence, uniquely compelling. Klemperer wrote these diaries with no intention of their being published: they were, at most, the raw material for an autobiography he hoped eventually to write, so when you read these words you're perceiving how a Jew in Nazi Germany perceived life day to day, and how what historians consider epochal events in retrospect are quite naturally interpreted by those hearing of them for the first time in the light of “What does this mean for me?”

The author was a prolific diarist who wrote thousands of pages from the early 1900s throughout his long life. The original 1995 German publication of the 1933–1945 diaries as Ich will Zeugnis ablegen bis zum letzten was a substantial abridgement of the original document and even so ran to almost 1700 pages. This English translation further abridges the diaries and still often seems repetitive. End notes provide historical context, identify the many people who figure in the diary, and translate the foreign phrases the author liberally sprinkles among the text.

I will certainly read Volume 2, which covers the years 1942–1945, but probably not right away—after this powerful narrative, I'm inclined toward lighter works for a while.

February 2009 Permalink

Klemperer, Victor. I Will Bear Witness. Vol. 2. New York: Modern Library, [1942–1945, 1995, 1999] 2001. ISBN 978-0-375-75697-9.
This is the second volume in Victor Klemperer's diaries of life as a Jew in Nazi Germany. Volume 1 (February 2009) covers the years from 1933 through 1941, in which the Nazis seized and consolidated their power, persecuted the Jewish population with ever-increasing severity, and rearmed in preparation for the military conquests which began with the invasion of Poland in September 1939.

I described that book as “simultaneously tedious, depressing, and profoundly enlightening”. The author (a cousin of the conductor Otto Klemperer) was a respected professor of Romance languages and literature at the Technical University of Dresden when Hitler came to power in 1933. Although the son of a Reform rabbi, Klemperer had been baptised in a Christian church and considered himself a protestant Christian and entirely German. He volunteered for the German army in World War I and served at the front in the artillery and later, after recovering from a serious illness, in the army book censorship office on the Eastern front. As a fully assimilated German, he opposed all appeals to racial identity politics, Zionist as well as Nazi.

Despite his conversion to protestantism, military service to Germany, exalted rank as a professor, and decades of marriage to a woman deemed “Aryan” under the racial laws promulgated by the Nazis, Klemperer was considered a “full-blooded Jew” and was subject to ever-escalating harassment, persecution, humiliation, and expropriation as the Nazis tightened their grip on Germany. As civil society spiralled toward barbarism, Klemperer lost his job, his car, his telephone, his house, his freedom of movement, the right to shop in “Aryan stores”, access to public and lending libraries, and even the typewriter on which he continued to write in the hope of maintaining his sanity. His world shrank from that of a cosmopolitan professor fluent in many European languages to a single “Jews' house” in Dresden, shared with other once-prosperous families similarly evicted from their homes.

As 1942 begins, it is apparent to many in Germany, even Jews deprived of the “privilege” of reading newspapers and listening to the radio, not to mention foreign broadcasts, that the momentum of German conquest in the East had stalled and that the Soviet winter counterattack had begun to push the ill-equipped and -supplied German troops back from the lines they held in the fall of 1941. This was reported with euphemisms such as “shortening our line”, but it was obvious to everybody that the Soviets, not long ago reported breathlessly as “annihilated”, were nothing of the sort and that the Nazi hope of a quick victory in the East, like the fall of France in 1940, was not in the cards.

In Dresden, where Klemperer and his wife Eva remained after being forced out of their house (to which, in formalism-obsessed Germany, he retained title and responsibility for maintenance), Jews were subjected to a never-ending ratchet of abuse, oppression, and terror. Klemperer was forced to wear the yellow star (concealing it meant immediate arrest and likely “deportation” to the concentration camps in the East) and was randomly abused by strangers on the street (but would get smiles and quiet words of support from others), with each event shaking or bolstering his confidence in those whom, before Hitler, he had considered his “fellow Germans”.

He is prohibited from riding the tram, and must walk long distances, avoiding crowded streets where the risk of abuse from passers-by was greater. Another blow falls when Jews are forbidden to use the public library. With his typewriter seized long ago, he can only pursue his profession with pen, ink, and whatever books he can exchange with other Jews, including those left behind by those “deported”. As ban follows ban, even the simplest things such as getting shoes repaired, obtaining coal to heat the house, doing laundry, and securing food to eat become major challenges. Jews are subject to random “house searches” by the Gestapo, in which the discovery of something like his diaries might mean immediate arrest—he arranges to store the work with an “Aryan” friend of Eva, who deposits pages as they are completed. The house searches in many cases amount to pure shakedowns, where rationed and difficult-to-obtain goods such as butter, sugar, coffee, and tobacco, even if purchased with the proper coupons, are simply stolen by the Gestapo goons.

By this time every Jew knows individuals and families who have been “deported”, and the threat of joining them is ever present. Nobody seems to know precisely what is going on in those camps in the East (whose names are known: Auschwitz, Dachau, Theresienstadt, etc.) but what is obvious is that nobody sent there has ever been seen again. Sometimes relatives receive a letter saying the deportee died of disease in the camp, which seemed plausible, while others get notices their loved one was “killed while trying to escape”, which was beyond belief in the case of elderly prisoners who had difficulty walking. In any case, being “sent East” was considered equivalent to a death sentence which, for most, it was. As a war veteran and married to an “Aryan”, Klemperer was more protected than most Jews in Germany, but there was always the risk that the slightest infraction might condemn him to the camps. He knew many others who had been deported shortly after the death of their Aryan wives.

As the war in the East grinds on, it becomes increasingly clear that Germany is losing. The back-and-forth campaign in North Africa was first to show cracks in the Nazi aura of invincibility, but after the disaster at Stalingrad in the winter of 1942–1943, it is obvious the situation is dire. Goebbels proclaims “total war”, and all Germans begin to feel the privation brought on by the war. The topic on everybody's lips in whispered, covert conversations is “How long can it go on?” With each reverse there are hopes that perhaps a military coup will depose the Nazis and seek peace with the Allies.

For Klemperer, such grand matters of state and history are of relatively little concern. Much more urgent is obtaining the necessities of life which, as the economy deteriorates and oppression of the Jews increases, often amount to coal to stay warm and potatoes to eat, hauled long distances by manual labour. Klemperer, like all able-bodied Jews (the definition of which is flexible: he suffers from heart disease and often has difficulty walking long distances or climbing stairs, and has vision problems as well), is assigned “war work”, which in his case amounts to menial labour tending machines producing stationery and envelopes in a paper factory. Indeed, what appear in retrospect as the pivotal moments of the war in Europe (the battles of Stalingrad and Kursk, Axis defeat and evacuation of North Africa, the fall of Mussolini and Italy's leaving the Axis, the Allied D-day landings in Normandy, the assassination plot against Hitler, and more) almost seem to occur off-stage here, with news filtering in bit by bit after the fact and individuals trying to piece it together and make sense of it all.

One event which is not off stage is the bombing of Dresden between February 13 and 15, 1945. The Klemperers were living at the time in the Jews' house they shared with several other families, which was located some distance from the city centre. There was massive damage in the area, but it was outside the firestorm which consumed the main targets. Victor and Eva became separated in the chaos, but were reunited near the end of the attack. Given the devastation and collapse of infrastructure, Klemperer decided to bet his life on the hope that the attack would have at least temporarily put the Gestapo out of commission: he removed the yellow star, discarded all identity documents marking him as a Jew, and joined the mass of refugees, many also without papers, fleeing the ruins of Dresden. He and Eva made their way on what remained of the transportation system toward Bavaria and southern Germany, where they had friends who might accommodate them, at least temporarily. Despite some close calls, the ruse worked, and they survived the end of the war, fall of the Nazi regime, and arrival of United States occupation troops.

After a period in which he discovered that the American occupiers, while meaning well, were completely overwhelmed trying to meet the needs of the populace amid the ruins, the Klemperers decided to make it on their own back to Dresden, which was in the Soviet zone of occupation, where they hoped their house still stood and would be restored to them as their property. The book concludes with a description of this journey across ruined Germany and final arrival at the house they occupied before the Nazis came to power.

After the war, Victor Klemperer was appointed a professor at the University of Leipzig and resumed his academic career. As political life resumed in what was then the Soviet sector and later East Germany, he joined the Socialist Unity Party of Germany, usually referred to in English as the East German Communist Party, which was under the thumb of Moscow. Subsequently, he became a cultural ambassador of sorts for East Germany. He seems to have been a loyal communist, although in his later diaries he expressed frustration at the impotence of the “parliament” in which he was a delegate for eight years. Not to be unkind to somebody who survived as much oppression and adversity as he did, but he didn't seem to have much of a problem with a totalitarian, one-party, militaristic, intrusive-surveillance police state as long as it wasn't directly persecuting him.

The author was a prolific diarist who wrote thousands of pages from the early 1900s throughout his long life. The original 1995 German publication of the 1933–1945 diaries as Ich will Zeugnis ablegen bis zum letzten was a substantial abridgement of the original document and even so ran to almost 1700 pages. This English translation further abridges the diaries and still often seems repetitive. End notes provide historical context, identify the many people who figure in the diary, and translate the foreign phrases the author liberally sprinkles among the text.

December 2019 Permalink

Kotkin, Stephen. Stalin, Vol. 1: Paradoxes of Power, 1878–1928. New York: Penguin Press, 2014. ISBN 978-0-14-312786-4.
In a Levada Center poll in 2017, Russians who responded named Joseph Stalin the “most outstanding person” in world history. Now, you can argue about the meaning of “outstanding”, but it's pretty remarkable that the citizens of a country would regard as an “outstanding person”, rather than a super-villain, a chief of government (albeit several regimes ago) who presided over an entirely avoidable famine which killed millions of the citizens of his country, who ordered purges which executed more than 700,000 people, including senior military leadership, leaving his nation unprepared for the German attack in 1941, which would, until the final victory, claim the lives of around 27 million Soviet citizens, military and civilian.

The story of Stalin's career is even less plausible, and should give pause to those who believe history can be predicted without the contingency of things that “just happen”. Ioseb Besarionis dze Jughashvili (the author uses Roman alphabet transliterations of all individuals' names in their native languages, which can occasionally be confusing when they later Russified their names) was born in 1878 in the town of Gori in the Caucasus. Gori, part of the territory of Georgia which had long been ruled by the Ottoman Empire, had been seized by Imperial Russia in a series of bloody conflicts ending in the 1860s with complete incorporation of the territory into the Czar's empire. Ioseb, who was called by the Georgian diminutive “Soso” throughout his youth, was the third son born to his parents, but, as both of his older brothers had died not long after birth, was raised as an only child.

Soso's father, Besarion Jughashvili (often written in the Russian form, Vissarion), was a shoemaker with his own shop in Gori but, as the years passed, his business fell on hard times; he closed the shop and sought other work, ending his life as a vagrant. Soso's mother, Ketevan “Keke” Geladze, was ambitious and wanted the best for her son; she left her husband and took a variety of jobs to support the family. She arranged for eight-year-old Soso to attend Russian language lessons given to the children of a priest in whose house she was boarding. Knowledge of Russian was the key to advancement in Czarist Georgia, and he had a head start when Keke arranged for him to be enrolled in the parish school's preparatory and four-year programs. He was the first member of either side of his family to attend school, and he rose to the top of his class under the patronage of a family friend, “Uncle Yakov” Egnatashvili. After graduation, his options were limited. The Russian administration, wary of the emergence of a Georgian intellectual class that might champion independence, refused to establish a university in the Caucasus. Soso's best option was the highly selective Theological Seminary in Tiflis, where a six-year course would prepare him for life as a parish priest or teacher in Georgia but, for those who graduated near the top, could lead to a scholarship at a university in another part of the empire.

He took the examinations and easily passed, gaining admission, then petitioned for and won a partial scholarship that paid most of his fees. “Uncle Yakov” paid the rest, and he plunged into his studies. Georgia was in the midst of an intense campaign of Russification, and Soso further perfected his skills in the Russian language. Although completely fluent in spoken and written Russian along with his native Georgian (the languages are completely unrelated, having no more in common than Finnish and Italian), he would speak Russian with a Georgian accent all his life and did not publish in the Russian language until he was twenty-nine years old.

Long a voracious reader, at the seminary Soso joined a “forbidden literature” society which smuggled in and read works not banned by the Russian authorities but deemed unsuitable for priests in training. He read classics of Russian, French, English, and German literature and science, including Capital by Karl Marx. The latter would transform his view of the world and his path in life. He made the acquaintance of a former seminarian and committed Marxist, Lado Ketskhoveli, who would guide his studies. In August 1898, he joined the newly formed “Third Group of Georgian Marxists”—many years later Stalin would date his “party card” to then.

Prior to 1905, imperial Russia was an absolute autocracy. The Czar ruled with no limitations on his power. What he decreed and ordered his functionaries to do was law. There was no parliament, political parties, elected officials of any kind, or permanent administrative state that did not serve at the pleasure of the monarch. Political activity and agitation were illegal, as were publishing and distributing any kind of political literature deemed to oppose imperial rule. As Soso became increasingly radicalised, it was only a short step from devout seminarian to underground agitator. He began to neglect his studies, became increasingly disrespectful to authority figures, and, in April 1899, left the seminary before taking his final examinations.

Saddled with a large debt to the seminary for leaving without becoming a priest or teacher, he drifted into writing articles for small, underground publications associated with the Social Democrat movement, at the time the home of most Marxists. He took to public speaking and, while eschewing fancy flights of oratory, spoke directly to the meetings of workers he addressed in their own dialect and terms. Inevitably, he was arrested for “incitement to disorder and insubordination against higher authority” in April 1902 and jailed. After fifteen months in prison at Batum, he was sentenced to three years of internal exile in Siberia. In January 1904 he escaped and made it back to Tiflis, in Georgia, where he resumed his underground career. By this time the Social Democratic movement had fractured into Lenin's Bolshevik faction and the larger Menshevik group. Sosa, who during his imprisonment had adopted the revolutionary nickname “Koba”, after the hero in a Georgian novel of revenge, continued to write and speak and, in 1905, after the Czar was compelled to cede some of his power to a parliament, organised Battle Squads which stole printing equipment, attacked government forces, and raised money through protection rackets targeting businesses.

In 1905, Koba Jughashvili was elected one of three Bolshevik delegates from Georgia to attend the Third Congress of the Russian Social Democratic Workers' Party in Tampere, Finland, then part of the Russian empire. It was there he first met Lenin, who had been living in exile in Switzerland. Koba had read Lenin's prolific writings and admired his leadership of the Bolshevik cause, but was unimpressed in this first in-person encounter. He vocally took issue with Lenin's position that Bolsheviks should seek seats in the newly-formed State Duma (parliament). When Lenin backed down in the face of opposition, he said, “I expected to see the mountain eagle of our party, a great man, not only politically but physically, for I had formed for myself a picture of Lenin as a giant, as a stately representative figure of a man. What was my disappointment when I saw the most ordinary individual, below average height, distinguished from ordinary mortals by, literally, nothing.”

Returning to Georgia, he resumed his career as an underground revolutionary, including, famously, organising a robbery of the Russian State Bank in Tiflis in which three dozen people were killed and two dozen more injured, “expropriating” 250,000 rubles for the Bolshevik cause. Koba did not participate directly, but he was the mastermind of the heist. This and other banditry, criminal enterprises, and unauthorised publications resulted in multiple arrests, imprisonments, exiles to Siberia, escapes, re-captures, and life underground in the years that followed. In 1912, while living underground in Saint Petersburg after yet another escape, he was named the first editor of the Bolshevik party's new daily newspaper, Pravda, although his name was kept secret. In 1913, with the encouragement of Lenin, he wrote an article titled “Marxism and the National Question” in which he addressed how a Bolshevik regime should approach the diverse ethnicities and national identities of the Russian Empire. As a Georgian Bolshevik, Jughashvili was seen as uniquely qualified and credible to address this thorny question. He published the article under the nom de plume “K. [for Koba] Stalin”, which, literally translated, meant “Man of Steel” and paralleled Lenin's pseudonym. He would use this name for the rest of his life, reverting to the Russified form of his given name, “Joseph”, instead of the nickname Koba (by which his close associates would continue to address him informally). I shall, like the author, refer to him subsequently as “Stalin”.

When Russia entered the Great War in 1914, events were set into motion which would lead to the end of Czarist rule, but Stalin was on the sidelines: in exile in Siberia, where he spent much of his time fishing. In late 1916, as manpower shortages became acute, exiled Bolsheviks including Stalin received notices of conscription into the army, but when he appeared at the induction centre he was rejected due to a crippled left arm, the result of a childhood injury. It was only after the abdication of the Czar in the February Revolution of 1917 that he returned to Saint Petersburg, now renamed Petrograd, and resumed his work for the Bolshevik cause. In April 1917, in elections to the Bolshevik Central Committee, Stalin came in third after Lenin (who had returned from exile in Switzerland) and Zinoviev. Despite having been out of circulation for several years, Stalin's reputation from his writings and editorship of Pravda, which he resumed, elevated him to among the top rank of the party.

As Kerensky's Provisional Government attempted to consolidate its power and continue the costly and unpopular war, Stalin and Trotsky joined Lenin's call for a Bolshevik coup to seize power, and Stalin was involved in all aspects of the eventual October Revolution, although often behind the scenes, while Lenin was the public face of the Bolshevik insurgency.

After seizing power, the Bolsheviks faced challenges from all directions. They had to disentangle Russia from the Great War without leaving the country open to attack and territorial conquest by Germany or Poland. Despite their ambitious name, they were a minority party and had to subdue domestic opposition. They took over a country which the debts incurred by the Czar to fund the war had effectively bankrupted. They had to exert their control over a sprawling, polyglot empire in which, outside of the big cities, their party had little or no presence. They needed to establish their authority over a military in which the officer corps largely regarded the Czar as their legitimate leader. They had to restore agricultural production, severely disrupted by levies of manpower for the war, before famine brought instability and the risk of a counter-coup. And for facing these formidable problems, all at the same time, they were utterly unprepared.

The Bolsheviks were, to a man (and they were all men), professional revolutionaries. Their experience was in writing and publishing radical tracts and works of Marxist theory, agitating and organising workers in the cities, carrying out acts of terror against the regime, and funding their activities through banditry and other forms of criminality. There was not a military man, agricultural expert, banker, diplomat, logistician, transportation specialist, or administrator among them, and suddenly they needed all of these skills and more, plus the ability to recruit and staff an administration for a continent-wide empire. Further, although Lenin's leadership was firmly established and undisputed, his subordinates were all highly ambitious men seeking to establish and increase their power in the chaotic and fluid situation.

It was in this environment that Stalin made his mark as the reliable “fixer”. Whether it was securing levies of grain from the provinces, putting down resistance from counter-revolutionary White forces, stamping out opposition from other parties, developing policies for dealing with the diverse nations incorporated into the Russian Empire (indeed, in a real sense, it was Stalin who invented the Soviet Union as a nominal federation of autonomous republics which, in fact, were subject to Party control from Moscow), or implementing Lenin's orders, even when he disagreed with them, Stalin was on the job. Lenin recognised Stalin's importance as his right hand man by creating the post of General Secretary of the party and appointing him to it.

This placed Stalin at the centre of the party apparatus. He controlled who was hired, fired, and promoted. He controlled access to Lenin (only Trotsky could see Lenin without going through Stalin). This was a finely tuned machine which allowed Lenin to exercise absolute power through a party organisation which Stalin had largely built and operated.

Then, in May of 1922, the unthinkable happened: Lenin was felled by a stroke which left him partially paralysed. He retreated to his dacha at Gorki to recuperate, and his communication with the other senior leadership was almost entirely through Stalin. There had been no thought of or plan for a succession after Lenin (he was only fifty-two at the time of his first stroke, although he had been unwell for much of the previous year). As Lenin's health declined, ending in his death in January 1924, Stalin increasingly came to run the party and, through it, the government. He had appointed loyalists in key positions, who saw their own careers as linked to that of Stalin. By the end of 1924, Stalin began to move against the “Old Bolsheviks” who he saw as rivals and potential threats to his consolidation of power. When confronted with opposition, on three occasions he threatened to resign, each exercise in brinksmanship strengthening his grip on power, as the party feared the chaos that would ensue from a power struggle at the top. His status was reflected in 1925 when the city of Tsaritsyn was renamed Stalingrad.

This ascent to supreme power was not universally applauded. Felix Dzierzynski (Polish born, he is often better known by the Russian spelling of his name, Dzerzhinsky) who, as the founder of the Soviet secret police (Cheka/GPU/OGPU) knew a few things about dictatorship, warned in 1926, the year of his death, that “If we do not find the correct line and pace of development our opposition will grow and the country will get its dictator, the grave digger of the revolution irrespective of the beautiful feathers on his costume.”

With or without feathers, the dictatorship was beginning to emerge. In 1926 Stalin published “On Questions of Leninism” in which he introduced the concept of “Socialism in One Country” which, presented as orthodox Leninist doctrine (which it wasn't), argued that world revolution was unnecessary to establish communism in a single country. This set the stage for the collectivisation of agriculture and rapid industrialisation which was to come. In 1928, what was to be the prototype of the show trials of the 1930s opened in Moscow, the Shakhty trial, complete with accusations of industrial sabotage (“wrecking”), denunciations of class enemies, and Andrei Vyshinsky presiding as chief judge. Of the fifty-three engineers accused, five were executed and forty-four imprisoned. A country desperately short on the professionals its industry needed to develop had begun to devour them.

It is a mistake to regard Stalin purely as a dictator obsessed with accumulating and exercising power and destroying rivals, real or imagined. The one consistent theme throughout Stalin's career was that he was a true believer. He was a devout believer in the Orthodox faith while at the seminary, and he seamlessly transferred his allegiance to Marxism once he had been introduced to its doctrines. He had mastered the difficult works of Marx and could cite them from memory (as he often did spontaneously to buttress his arguments in policy disputes), and went on to similarly internalise the work of Lenin. These principles guided his actions, and motivated him to apply them rigidly, whatever the cost may be.

Starting in 1921, Lenin had introduced the New Economic Policy, which lightened state control over the economy and, in particular, introduced market reforms in the agricultural sector, resulting in a mixed economy in which socialism reigned in big city industries, but in the countryside the peasants operated under a kind of market economy. This policy had restored agricultural production to pre-revolutionary levels and largely ended food shortages in the cities and countryside. But to a doctrinaire Marxist, it seemed to risk destruction of the regime. Marx believed that the political system was determined by the means of production. Thus, accepting what was essentially a capitalist economy in the agricultural sector was to infect the socialist government with its worst enemy.

Once Stalin had completed his consolidation of power, he then proceeded as Marxist doctrine demanded: abolish the New Economic Policy and undertake the forced collectivisation of agriculture. This began in 1928.

And it is with this momentous decision that the present volume comes to an end. This massive work (976 pages in the print edition) is just the first in a planned three volume biography of Stalin. The second volume, Stalin: Waiting for Hitler, 1929–1941, was published in 2017 and the concluding volume is not yet completed.

Reading this book, and the entire series, is a major investment of time in a single historical figure. But, as the author observes, if you're interested in the phenomenon of twentieth century totalitarian dictatorship, Stalin is the gold standard. He amassed more power, exercised by a single person with essentially no checks or limits, over more people and a larger portion of the Earth's surface than any individual in human history. He ruled for almost thirty years, transformed the economy of his country, presided over deliberate famines, ruthless purges, and pervasive terror that killed tens of millions, led his country to victory at enormous cost in the largest land conflict in history and ended up exercising power over half of the European continent, and built a military which rivaled that of the West in a bipolar struggle for global hegemony.

It is impossible to relate the history of Stalin without describing the context in which it occurred, and this is as much a history of the final days of imperial Russia, the revolutions of 1917, and the establishment and consolidation of Soviet power as of Stalin himself. Indeed, in this first volume, there are lengthy parts of the narrative in which Stalin is largely offstage: in prison, internal exile, or occupied with matters peripheral to the main historical events. The level of detail is breathtaking: the Bolsheviks seem to have been record-keepers as compulsive as the Germans are reputed to be, and not only are the votes of seemingly every committee meeting recorded, but also who voted which way and why. There are more than two hundred pages of end notes, source citations, bibliography, and index.

If you are interested in Stalin, the Soviet Union, the phenomenon of Bolshevism, totalitarian dictatorship, or how destructive madness can grip a civilised society for decades, this is an essential work. It is unlikely it will ever be equalled.

December 2018 Permalink

Kotkin, Stephen. Stalin, Vol. 2: Waiting for Hitler, 1929–1941. New York: Penguin Press, 2017. ISBN 978-1-59420-380-0.
This is the second volume in the author's monumental projected three-volume biography of Joseph Stalin. The first volume, Stalin: Paradoxes of Power, 1878–1928 (December 2018), covers the period from Stalin's birth through the consolidation of his sole power atop the Soviet state after the death of Lenin. The third volume, which will cover the period from the Nazi invasion of the Soviet Union in 1941 through the death of Stalin in 1953, has yet to be published.

As this volume begins in 1928, Stalin is securely in the supreme position of the Communist Party of the Soviet Union and, having over the years staffed the senior ranks of the party and the Soviet state (which the party operated like the puppet it was) with loyalists who owed their positions to him, has no serious rivals who might challenge him. (It is often claimed that Stalin was paranoid and feared a coup, but would a despot fearing for his position regularly take summer holidays, months in length, in Sochi, far from the capital?)

By 1928, the Soviet Union had largely recovered from the damage inflicted by the Great War, Bolshevik revolution, and subsequent civil war. Industrial and agricultural production were back to around their 1914 levels, and most measures of well-being had similarly recovered. To be sure, compared to the developed industrial economies of countries such as Germany, France, or Britain, Russia remained a backward economy largely based upon primitive agriculture, but at least it had undone the damage inflicted by years of turbulence and conflict.

But in the eyes of Stalin and his close associates, who were ardent Marxists, there was a dangerous and potentially deadly internal contradiction in the Soviet system as it then stood. In 1921, in response to the chaos and famine following the 1917 revolution and years-long civil war, Lenin had proclaimed the New Economic Policy (NEP), which tempered the pure collectivism of original Bolshevik doctrine by introducing a mixed economy, where large enterprises would continue to be owned and managed by the state, but small-scale businesses could be privately owned and run for profit. More importantly, agriculture, which had previously been managed under a top-down system of coercive requisitioning of grain and other products by the state, was replaced by a market system where farmers could sell their products freely, subject to a tax, payable in product, proportional to their production (and thus creating an incentive to increase production).

The NEP was a great success, and shortages of agricultural products were largely eliminated. There was grousing about the growing prosperity of the so-called NEPmen, but the results of freeing the economy from the shackles of state control were evident to all. But according to Marxist doctrine, it was a dagger pointed at the heart of the socialist state.

By 1928, the Soviet economy could be described, in Marxist terms, as socialism in the industrial cities and capitalism in the agrarian countryside. But, according to Marx, the form of politics was determined by the organisation of the means of production—paraphrasing Breitbart, politics is downstream of economics. This meant that preserving capitalism in a large sector of the country, one employing a large majority of its population and necessary to feed the cities, was an existential risk. In such a situation it would only be normal for the capitalist peasants to eventually prevail over the less numerous urbanised workers and destroy socialism.

Stalin was a Marxist. He was not an opportunist who used Marxism-Leninism to further his own ambitions. He really believed this stuff. And so, in 1928, he proclaimed an end to the NEP and began the forced collectivisation of Soviet agriculture. Private ownership of land would be abolished, and the 120 million peasants essentially enslaved as “workers” on collective or state farms, with planting, quotas to be delivered, and management effectively controlled by the party. After an initial lucky year, the inevitable catastrophe ensued. Between 1931 and 1933, famine and the epidemics resulting from it killed between five and seven million people. The country lost around half of its cattle and two thirds of its sheep. In 1929, the average family in Kazakhstan owned 22.6 cattle; in 1933, 3.7. This was a calamity on the same order as the Jewish Holocaust in Germany, and just as man-made: during this period there was a global glut of food, but Stalin refused to admit the magnitude of the disaster for fear of inciting enemies to attack and because doing so would concede the failure of his collectivisation project. In addition to the famine, the process of collectivisation resulted in between four and five million people being arrested, executed, deported to other regions, or jailed.

Many in the starving countryside said, “If only Stalin knew, he would do something.” But the evidence is overwhelming: Stalin knew, and did nothing. Marxist theory said that agriculture must be collectivised, and by pure force of will he pushed through the project, whatever the cost. Many in the senior Soviet leadership questioned this single-minded pursuit of a theoretical goal at horrendous human cost, but they did not act to stop it. But Stalin remembered their opposition and would settle scores with them later.

By 1936, it appeared that the worst of the period of collectivisation was over. The peasants, preferring to live in slavery rather than starve to death, had acquiesced to their fate and resumed production, and the weather co-operated in producing good harvests. And then, in 1937, a new horror was unleashed upon the Soviet people, also completely man-made and driven by the will of Stalin, the Great Terror. Starting slowly in the aftermath of the assassination of Sergey Kirov in 1934, by 1937 the absurd devouring of those most loyal to the Soviet regime, all over Stalin's signature, reached a crescendo. In 1937 and 1938, 1,557,259 people would be arrested and 681,692 executed, the overwhelming majority for political offences, this in a country with a working-age population of 100 million. Counting deaths from other causes as a result of the secret police, the overall death toll was probably around 830,000. This was so bizarre, and so unprecedented in human history, it is difficult to find any comparable situation, even in Nazi Germany. As the author remarks,

To be sure, the greater number of victims were ordinary Soviet people, but what regime liquidates colossal numbers of loyal officials? Could Hitler—had he been so inclined—have compelled the imprisonment or execution of huge swaths of Nazi factory and farm bosses, as well as almost all of the Nazi provincial Gauleiters and their staffs, several times over? Could he have executed the personnel of the Nazi central ministries, thousands of his Wehrmacht officers—including almost his entire high command—as well as the Reich's diplomatic corps and its espionage agents, its celebrated cultural figures, and the leadership of Nazi parties throughout the world (had such parties existed)? Could Hitler also have decimated the Gestapo even while it was carrying out a mass bloodletting? And could the German people have been told, and would the German people have found plausible, that almost everyone who had come to power with the Nazi revolution turned out to be a foreign agent and saboteur?

Stalin did all of these things. The damage inflicted upon the Soviet military, at a time of growing threats, was horrendous. The terror executed or imprisoned three of the five marshals of the Soviet Union, 13 of 15 full generals, 8 of the 9 admirals of the Navy, and 154 of 186 division commanders. Senior managers, diplomats, spies, and party and government officials were wiped out in comparable numbers in the all-consuming cataclysm. At the very moment the Soviet state was facing threats from Nazi Germany in the west and Imperial Japan in the east, it destroyed those most qualified to defend it in a paroxysm of paranoia and purification from phantasmic enemies.

And then, it all stopped, or largely tapered off. This did nothing for those who had been executed, or who were still confined in the camps spread all over the vast country, but at least there was a respite from the knocks in the middle of the night and the cascading denunciations for fantastically absurd imagined “crimes”. (In June 1937, eight high-ranking Red Army officers, including Marshal Tukhachevsky, were denounced as “Gestapo agents”. Three of those accused were Jews.)

But now the international situation took priority over domestic “enemies”. The Bolsheviks, and Stalin in particular, had always viewed the Soviet Union as surrounded by enemies. Since the Soviet Union was the vanguard of the proletarian revolution, by definition the states on its borders must be reactionary capitalist-imperialist or fascist regimes hostile to or actively bent upon the destruction of the peoples' state.

With Hitler on the march in Europe and Japan expanding its puppet state in China, potentially hostile powers were advancing toward Soviet borders from two directions. Worse, there was a loose alliance between Germany and Japan, raising the possibility of a two-front war which would engage Soviet forces in conflicts on both ends of its territory. What Stalin feared most, however, was an alliance of the capitalist states (in which he included Germany, despite its claim to be “National Socialist”) against the Soviet Union. In particular, he dreaded some kind of arrangement between Britain and Germany which might give Britain supremacy on the seas and its far-flung colonies, while acknowledging German domination of continental Europe and a free hand to expand toward the East at the expense of the Soviet Union.

Stalin was faced with an extraordinarily difficult choice: make some kind of deal with Britain (and possibly France) in the hope of deterring a German attack upon the Soviet Union, or cut a deal with Germany, linking the German and Soviet economies in a trade arrangement which the Germans would be loath to destroy by aggression, lest they lose access to the raw materials which the Soviet Union could supply to their war machine. Stalin's ultimate calculation, again grounded in Marxist theory, was that the imperialist powers were fated to eventually fall upon one another in a destructive war for domination, and that by standing aloof, the Soviet Union stood to gain by encouraging socialist revolutions in what remained of them after that war had run its course.

Stalin evaluated his options and made his choice. On August 23, 1939, a “non-aggression treaty” was signed in Moscow between Nazi Germany and the Soviet Union. But the treaty went far beyond what was made public. Secret protocols defined “spheres of influence”, including how Poland would be divided between the two parties in the case of war. Stalin viewed this treaty as a triumph: yes, doctrinaire communists (including many in the West) would be aghast at a deal with fascist Germany, but at a blow, Stalin had eliminated the threat of an anti-Soviet alliance between Germany and Britain, linked Germany and the Soviet Union in a trade arrangement whose benefits to Germany would deter aggression and, in the case of war between Germany and Britain and France (for which he hoped), might provide an opportunity to recover territory once in the czar's empire which had been lost after the 1917 revolution.

Initially, this strategy appeared to be working swimmingly. The Soviets were shipping raw materials they had in abundance to Germany and receiving high-technology industrial equipment and weapons which they could immediately put to work and/or reverse-engineer to make domestically. In some cases, they even received blueprints or complete factories for making strategic products. As the German economy became increasingly dependent upon Soviet shipments, Stalin perceived this as leverage over the actions of Germany, and responded to delays in delivery of weapons by slowing down shipments of raw materials essential to German war production.

On September 1st, 1939, Nazi Germany invaded Poland, just a week after the signing of the pact between Germany and the Soviet Union. On September 3rd, France and Britain declared war on Germany. Here was the “war among the imperialists” of which Stalin had dreamed. The Soviet Union could stand aside, continue to trade with Nazi Germany, while the combatants bled each other white, and then, in the aftermath, support socialist revolutions in their countries. On September 17th the Soviet Union, pursuant to the secret protocol, invaded Poland from the east and joined the Nazi forces in eradicating that nation. Ominously, greater Germany and the Soviet Union now shared a border.

After the start of hostilities, a state of “phoney war” existed until Germany struck against Denmark, Norway, and France in April and May 1940. At first, this appeared to be precisely what Stalin had hoped for: a general conflict among the “imperialist powers” with the Soviet Union not only uninvolved, but having reclaimed territory in Poland, the Baltic states, and Bessarabia which had once belonged to the Czars. Now there was every reason to expect a long war of attrition in which the Nazis and their opponents would grind each other down, as in the previous world war, paving the road for socialist revolutions everywhere.

But then, disaster ensued. In less than six weeks, France collapsed and Britain evacuated its expeditionary force from the Continent. Now, it appeared, Germany reigned supreme, and might turn its now largely idle army toward conquest in the East. After consolidating the position in the west and indefinitely deferring an invasion of Britain due to inability to obtain air and sea superiority in the English Channel, Hitler began to concentrate his forces on the eastern frontier. Disinformation, spread where Soviet spy networks would pick it up and deliver it to Stalin, whose prejudices it confirmed, said that the troop concentrations were in preparation for an assault on British positions in the Near East, or for an attempt to blackmail the Soviet Union into granting, for example, a long-term lease on its breadbasket, the Ukraine.

Hitler, acutely aware that it was a two-front war which had spelled disaster for Germany in the last war, rationalised his attack on the Soviet Union as follows. Yes, Britain had not been defeated, but their only hope was an eventual alliance with the Soviet Union, opening a second front against Germany. Knocking out the Soviet Union (which should be no more difficult than the victory over France, which took just six weeks) would preclude this possibility and force Britain to come to terms. Meanwhile, Germany would have secured access to raw materials in Soviet territory for which it was previously paying market prices, but which would now be available for the cost of extraction and shipping.

The volume concludes on June 21st, 1941, the eve of the Nazi invasion of the Soviet Union. There could not have been more signs that this was coming: Soviet spies around the world sent evidence, and Britain even shared (without identifying the source) decrypted German messages about troop dispositions and war plans. But none of this disabused Stalin of his idée fixe: Germany would not attack because Soviet exports were so important. Indeed, in 1940, the Soviet Union delivered 40 percent of the nickel, 55 percent of the manganese, 65 percent of the chromium, 67 percent of the asbestos, and 34 percent of the petroleum which supported the Nazi war machine, along with a million tonnes of grain and timber. Hours before the Nazi onslaught began, well after the order for it was given, a Soviet train delivering grain, manganese, and oil crossed the border between Soviet-occupied and German-occupied Poland, bound for Germany. Stalin's delusion persisted until reality intruded with dawn.

This is a magisterial work. It is unlikely it will ever be equalled. There is abundant rich detail on every page. Want to know what the telephone number for the Latvian consulate in Leningrad was in 1934? It's right here on page 206 (5-50-63). Too often, discussions of Stalin assume he was a kind of murderous madman. This book is a salutary antidote. Everything Stalin did made perfect sense when viewed in the context of the beliefs which Stalin held, shared by his Bolshevik contemporaries and those he promoted to the inner circle. Yes, they seem crazy, and they were, but no less crazy than politicians in the United States advocating the abolition of air travel and the extermination of cows in order to save a planet which has managed just fine for billions of years without the intervention of bug-eyed, arm-waving ignoramuses.

Reading this book is a major investment of time. It is 1154 pages, with 910 pages of main text and illustrations, and will noticeably bend spacetime in its vicinity. But there is so much wisdom, backed with detail, that you will savour every page and, when you reach the end, crave the publication of the next volume. If you want to understand totalitarian dictatorship, you have to ultimately understand Stalin, who succeeded at it for more than thirty years until ultimately felled by illness, not conquest or coup, and who built the primitive agrarian nation he took over into a superpower. Some of us thought that the death of Stalin and, decades later, the demise of the Soviet Union, brought an end to all that. And yet, today, in the West, we have politicians advocating central planning, collectivisation, and limitations on free speech which are entirely consistent with the policies of Uncle Joe. After reading this book and thinking about it for a while, I have become convinced that Stalin was a patriot who believed that what he was doing was in the best interest of the Soviet people. He was sure the (laughably absurd) theories he believed and applied were the best way to build the future. And he was willing to force them into being whatever the cost may be. So it is today, and let us hope those made aware of the costs documented in this history will be immunised against the siren song of collectivist utopia.

Author Stephen Kotkin did a two-part Uncommon Knowledge interview about the book in 2018. In the first part he discusses collectivisation and the terror. In the second, he discusses Stalin and Hitler, and the events leading up to the Nazi invasion of the Soviet Union.

May 2019 Permalink

Krakauer, Jon. Under the Banner of Heaven. New York: Anchor Books, [2003] 2004. ISBN 1-4000-3280-6.
This book uses the true-crime narrative of a brutal 1984 double murder committed by two Mormon fundamentalist brothers as the point of departure to explore the origin and sometimes violent early history of the Mormon faith, the evolution of Mormonism into a major mainstream religion, and the culture of present-day fundamentalist schismatic sects which continue to practice polygamy within a strictly hierarchical male-dominated society, and believe in personal revelation from God. (It should be noted that these sects, although referring to themselves as Mormon, have nothing whatsoever to do with the mainstream Church of Jesus Christ of Latter-day Saints, which excommunicates leaders of such sects and their followers, and has officially renounced the practice of polygamy since the Woodruff Manifesto of 1890. The “Mormon fundamentalist” sects believe themselves to be the true exemplars of the religion founded by Joseph Smith and reject the legitimacy of the mainstream church.)

Mormonism is almost unique among present-day large (more than 11 million members, about half in the United States) religions in having been established recently (1830) in a modern, broadly literate society, so its history is, for better or for worse, among the best historically documented of all religions. This can, of course, pose problems to any religion which claims absolute truth for its revealed messages, as the history of factionalism and schisms in Mormonism vividly demonstrates. The historical parallels between Islam and Mormonism are discussed briefly, and are well worth pondering: both were founded by new revelations building upon the Bible, both incorporated male domination and plural marriage at the outset, both were persecuted by the existing political and religious establishment, fled to a new haven in the desert, and developed in an environment of existential threats and violent responses. One shouldn't get carried away with such analogies—in particular Mormons never indulged in territorial conquest nor conversion at swordpoint. Further, the Mormon doctrine of continued revelation allows the religion to adapt as society evolves: discarding polygamy and, more recently, admitting black men to the priesthood (which, in the Mormon church, is comprised of virtually all adult male members).

Obviously, intertwining the story of the premeditated murder of a young mother and her infant committed by people who believed they were carrying out a divine revelation, with the history of a religion whose present-day believers often perceive themselves as moral exemplars in a decadent secular society is bound to be incendiary, and the reaction of the official Mormon church to the publication of the book was predictably negative. This paperback edition includes an appendix which reprints a review of a pre-publication draft of the original hardcover edition by senior church official Richard E. Turley, Jr., along with the author's response which acknowledges some factual errors noted by Turley (and corrected in this edition) while disputing his claim that the book “presents a decidedly one-sided and negative view of Mormon history” (p. 346). While the book is enlightening on each of the topics it treats, it does seem to me that it may try to do too much in too few pages. The history of the Mormon church, exploration of the present-day fundamentalist polygamous colonies in the western U.S., Canada, and Mexico, and the story of how the Lafferty brothers went from zealotry to murder and their apprehension and trials are all topics deserving of book-length treatment; combining them in a single volume invites claims that the violent acts of a few aberrant (and arguably insane) individuals are being used to slander a church of which they were not even members at the time of their crime.

All of the Mormon scriptures cited in the book are available on-line. Thanks to the reader who recommended this book; I'd never have otherwise discovered it.

December 2005 Permalink

Kuhns, Elizabeth. The Habit. New York: Doubleday, 2003. ISBN 0-385-50588-4.
For decades I've been interested in and worried about how well-intentioned “modernisations” might interrupt the chain of transmission of information and experience between generations and damage, potentially mortally, the very institutions modernisers were attempting to adapt to changing circumstances. Perhaps my concern with this somewhat gloomy topic stems from having endured both “new math” in high school and “new chemistry” in college, in both cases having to later re-learn the subject matter in the traditional way which enables one to, you know, actually solve problems.

Now that the radicals left over from the boomer generation are teachers and professors, we're into the second or third generation of a feedback cycle in which students either never learn the history of their own cultures or are taught contempt and hatred for it. The dearth of young people in the United States and U.K. who know how to think and have the factual framework from which to reason (or are aware what they don't know and how to find it out) is such that I worry about a runaway collapse of Western civilisation there. The very fact that it's impolitic to even raise such an issue in most of academia today only highlights how dire the situation is. (In continental Europe the cultural and educational situation is nowhere near as bad, but given that the population is aging and dying out it hardly matters. I read a prediction a couple of weeks ago that, absent immigration or change in fertility, the population of Switzerland, now more than seven million, could fall to about one million before the end of this century, and much the same situation obtains elsewhere in Europe. There is no precedent in human history for this kind of population collapse unprovoked by disaster, disease, or war.)

When pondering “macro, macro” issues like this, it's often useful to identify a micro-model to serve as a canary in the mineshaft for large-scale problems ahead. In 1965, the Second Vatican Council promulgated a top-to-bottom modernisation of the Roman Catholic Church. In that same year, there were around 180,000 Catholic nuns in the U.S.—an all-time historical high—whose lifestyle, strongly steeped in tradition, began immediately to change in many ways far beyond the clothes they wore. Increasingly, orders opted for invisibility—blending into the secular community. The result: an almost immediate collapse in their numbers, which has continued to the present day (graph). Today, there are only about 70,000 left, and with a mean age of 69, their numbers are sure to erode further in the future. Now, it's impossible to separate the consequences of modernisation of tradition from those of social changes in society at large, but it gives one pause to see an institution which, as this book vividly describes, has tenaciously survived two millennia of rising and falling empires, war, plague, persecution, inquisition, famine, migration, reformation and counter-reformation, disappearing like a puff of smoke within the space of one human lifetime. It makes you wonder about how resilient other, far more recent, components of our culture may be in the face of changes which discard the experience and wisdom of the past.

A paperback edition is scheduled for publication in April 2005.

February 2005 Permalink

Kurlansky, Mark. Salt: A World History. New York: Penguin Books, 2002. ISBN 0-14-200161-9.
You may think this a dry topic, but the history of salt is a microcosm of the history of human civilisation. Carnivorous animals and human tribes of hunters get all the salt they need from the meat they eat. But as soon as humans adopted a sedentary agricultural lifestyle and domesticated animals, they and their livestock had an urgent need for salt—a cow requires ten times as much salt as a human. The collection and production of salt was a prerequisite for human settlements and, as an essential commodity required by every individual, the first to be taxed and regulated by that chronic affliction of civilisation, government. Salt taxes supported the Chinese empire for almost two millennia, sustained the Venetian and Genoese trading empires and the Hanseatic League, and precipitated the French Revolution and India's struggle for independence from the British empire. Salt was a strategic commodity in the Roman Empire: most Roman cities were built near saltworks, and the words “salary” and “soldier” are both derived from the Latin word for salt. This and much more is covered in this fascinating look at human civilisation through the crystals of a tasty and essential inorganic compound composed of two poisonous elements. Recipes for salty specialities of cultures around the world and across the centuries are included, along with recommendations for surviving that “surprisingly pleasant” Swedish speciality surströmming (p. 139): “The only remaining problem is how to get the smell out of the house…”.

February 2005 Permalink

Kurlansky, Mark. 1968: The Year That Rocked the World. New York: Random House, 2004. ISBN 0-345-45582-7.
In the hands of an author who can make an entire book about Salt (February 2005) fascinating, the epochal year of 1968 abounds with people, events, and cultural phenomena which make for a compelling narrative. Many watershed events in history (war, inventions, plague, geographical discoveries, natural disasters, economic booms and busts) have causes which are reasonably easy to determine. But 1968, like the wave of revolutions which swept Europe in 1848 (January 2002), seems to have been driven by a zeitgeist—a spirit in the air which independently inspired people to act in a common way.

The nearly simultaneous “youthquake” shook societies as widespread and diverse as France, Poland, Mexico, Czechoslovakia, Spain, and the United States, and manifested itself in radical social movements: antiwar, feminism, black power, anti-authoritarianism, psychedelic instant enlightenment, revolutionary and subversive music, and the emergence of “the whole world is watching” wired planetary culture of live satellite television, all of which continue to reverberate today. It seemed so co-ordinated that politicians from Charles de Gaulle and Mexican el presidente Díaz Ordaz to Leonid Brezhnev were convinced it must be the result of deliberate subversion by their enemies, and were motivated to repressive actions which, in the short term, only fed the fire. In fact, most of the leaders of the various youth movements (to the extent they can be called “leaders”—in those individualistic and anarchistic days, most disdained the title) had never met, and knew about the actions of one another only from what they saw on television. Radicals in the U.S. were largely unaware of the student movement in Mexico before it exploded into televised violence in October.

However the leaders of 1968 may have viewed themselves, in retrospect they were for the most part fascinating, intelligent, well-educated, motivated by a desire to make the world a better place, and optimistic that they could—nothing like the dour, hateful, contemptuous, intolerant, and historically and culturally ignorant people one so often finds today in collectivist movements which believe themselves descended from those of 1968. Consider Mark Rudd's famous letter to Grayson Kirk, president of Columbia University, which ended with the memorable sentence, “I'll use the words of LeRoi Jones, whom I'm sure you don't like a whole lot: ‘Up against the wall, mother****er, this is a stick-up.’” (p. 197), a sentence which shocked his contemporaries with the (quoted) profanity, but strikes readers today mostly for the grammatically correct use of “whom”. Who among present-day radicals has the eloquence of Mario Savio, who said, “There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part, you can't even tacitly take part, and you've got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you've got to make it stop” (p. 92), yet had the politeness to remove his shoes to avoid damaging the paint before jumping on a police car to address a crowd? In the days of the Free Speech Movement, who would have imagined some of those student radicals, tenured professors four decades later, enacting campus speech codes and enforcing an intellectual monoculture on their own students?

It is remarkable to read on p. 149 how the French soixante-huitards were “dazzled” by their German contemporaries: “We went there and they had their banners and signs and their security forces and everything with militaristic tactics. It was new to me and the other French.” One suspects they weren't paying attention when their parents spoke of the spring of 1940! Some things haven't changed: when New Left leaders from ten countries finally had the opportunity to meet one another at a conference sponsored by the London School of Economics and the BBC (p. 353), the Americans dismissed the Europeans as all talk and no action, while the Europeans mocked the U.S. radicals' propensity for charging into battle without thinking through why, what the goal was supposed to be, or how it was to be achieved.

In the introduction, the author declares his sympathy for the radical movements of 1968 and says “fairness is possible but true objectivity is not”. And, indeed, the book is written from the phrasebook of the leftist legacy media: good guys are “progressives” and “activists”, while bad guys are “right wingers”, “bigots”, or “reactionaries”. (What's “progressive” ought to depend on your idea of progress. Was SNCC's expulsion of all its white members [p. 96] on racial grounds progress?) I do not recall a single observation which would be considered outside the box on the editorial page of the New York Times. While the book provides a thorough recounting of the events and acquaintance with the principal personalities involved, for me it failed to evoke the “anything goes”, “everything is possible” spirit of those days—maybe you just had to have been there. The summation is useful for correcting false memories of 1968, which ended with both Dubček and de Gaulle still in power; the only major world leader defeated in 1968 was Lyndon Johnson, and he was succeeded by Nixon. A “whatever became of” or “where are they now” section would be a useful addition; such information, when it's given, is scattered all over the text.

One wonders whether, in our increasingly interconnected world, something like 1968 could happen again. Certainly, that's the dream of greying radicals nostalgic for their days of glory and young firebrands regretful for having been born too late. Perhaps better channels of communication and the collapse of monolithic political structures have resulted in change becoming an incremental process which adapts to the evolving public consensus before a mass movement has time to develop. It could simply be that the major battles of “liberation” have all been won, and the next major conflict will be incited by those who wish to rein them in. Or maybe it's just that we're still trying to digest the consequences of 1968 and far from ready for another round.

April 2006 Permalink

Kurlansky, Mark. Cod. New York: Penguin Books, 1997. ISBN 978-0-14-027501-8.
There is nothing particularly glamorous about a codfish. It swims near the bottom of the ocean in cold continental shelf waters with its mouth open, swallowing whatever comes along, including smaller cod. While its white flesh is prized, the cod provides little sport for the angler: once hooked, it simply goes limp and must be hauled from the bottom to the boat. And its rather odd profusion of fins and blotchy colour lacks the elegance of marlin or swordfish or the menace of a shark. But the cod has, since the Middle Ages, played a part not only in the human diet but also in human history, being linked to the Viking exploration of the North Atlantic, the Basque nautical tradition, long-distance voyages in the age of exploration, transatlantic commerce, the Caribbean slave trade, the U.S. war of independence, the expansion of territorial waters from three to twelve and now 200 miles, conservation and the emerging international governance of the law of the sea, and more.

This delightful piece of reportage brings all of this together, from the biology and ecology of the cod, to the history of its exploitation by fishermen over the centuries, the commerce in cod and the conflicts it engendered, the cultural significance of cod in various societies and the myriad ways they have found to use it, and the shameful overfishing which has depleted what was once thought to be an inexhaustible resource (and should give pause to any environmentalist who believes government regulation is the answer to stewardship). But cod wouldn't have made so much history if people didn't eat them, and the narrative is accompanied by dozens of recipes from around the world and across the centuries (one dates from 1393), including many for parts of the fish other than its esteemed white flesh. Our ancestors could afford to let nothing go to waste, and their cleverness in turning what many today would consider offal into delicacies still cherished by various cultures is admirable. Since codfish has traditionally been sold salted and dried (in which form it keeps almost indefinitely, even in tropical climates, if kept dry, and is almost 80% protein by weight—a key enabler of long ocean voyages before the advent of refrigeration), you'll also want to read the author's work on Salt (February 2005).

September 2008 Permalink

Kurlansky, Mark. Paper. New York: W. W. Norton, 2016. ISBN 978-0-393-23961-4.
One of the things that makes us human is our use of extrasomatic memory: we invent ways to store and retrieve things outside our own brains. It's as if, when the evolutionary drive which caused the brains of our ancestors to grow over time reached its limit due to the physical constraints of the birth canal, we applied the cleverness of our bulging brains to figure out not only how to record things for ourselves, but how to pass them on to other individuals and transmit them through time to our successors.

This urge to leave a mark on our surroundings is deep-seated and as old as our species. Paintings at the El Castillo site in Spain have been dated to at least 40,800 years before the present. Complex paintings of animals and humans in the Lascaux Caves in France, dated around 17,300 years ago, seem strikingly modern to observers today. As anybody who has observed young children knows, humans do not need to be taught to draw: the challenge is teaching them to draw only where appropriate.

Nobody knows for sure when humans began to speak, but evidence suggests that verbal communication is at least as old as drawing and possibly appeared well before the first evidence of it. Once speech appeared, it was not only possible to transmit information from one human to another directly but, by memorising stories, poetry, and songs, to create an oral tradition passed on from one generation to the next. No longer need what one individual learned in their life die with them.

Given the human compulsion to communicate, and how long we've been doing it by speaking, drawing, singing, and sculpting, it's curious we only seem to have invented written language around 5000 years ago. (But recall that the archaeological record is incomplete and consists only of objects which survived through the ages. Evidence of early writing is from peoples who wrote on durable material such as stone or clay tablets, or lived in dry climates such as that of Egypt where more fragile media such as papyrus or parchment would be preserved. It is entirely possible writing was invented much earlier by any number of societies who wrote on more perishable surfaces and lived in climates where they would not endure.)

Once writing appeared, it remained the province of a small class of scribes and clerics who would read texts to the common people. Mass literacy did not appear for millennia, and would require a better medium for the written word and a less time-consuming and costly way to reproduce it. It was in China that the solutions to both of these problems would originate.

Legends date Chinese writing from much earlier, but the oldest known writing in China is dated around 3300 years ago, and was inscribed on bones and turtle shells. Already, the Chinese language used six hundred characters, and this number would only increase over time, with a phonetic alphabet never being adopted. The Chinese may not have invented bureaucracy, but as an ancient and largely stable society they became very skilled at it, and consequently produced ever more written records. These writings employed a variety of materials: stone, bamboo, and wood tablets; bronze vessels; and silk. All of these were difficult to produce, expensive, and many required special skills on the part of scribes.

Cellulose is a main component of the cell wall of plants, and forms the structure of many of the more complex members of the plant kingdom. It forms linear polymers which produce strong fibres. The cellulose content of plants varies widely: cotton is 90% cellulose, while wood is around half cellulose, depending on the species of tree. Sometime around A.D. 100, somebody in China (according to legend, a courtier named Cai Lun) discovered that through a process of cooking, hammering, and chopping, the cellulose fibres in material such as discarded cloth, hemp, and tree bark could be made to separate into a thin slurry of fibres suspended in water. If a frame containing a fine screen were dipped into a vat of this material, rocked back and forth in just the right way, then removed, a fine layer of fibres with random orientation would remain on the screen after the water drained away. This sheet could then be removed, pressed, and dried, yielding a strong, flat material composed of intertwined cellulose fibres. Paper had been invented.

Paper was found to be ideal for writing the Chinese language, which was, and is today, usually written with a brush. Since paper could be made from raw materials previously considered waste (rags, old ropes and fishing nets, rice and bamboo straw), water, and a vat and frame which were easily constructed, it was inexpensive and could be produced in quantity. Further, the papermaker could vary the thickness of the paper by adding more or less pulp to the vat or by the technique used in dipping the frame, and could produce paper with different surface properties by adding “sizing” material such as starch to the mix. In addition to sating the appetite of the imperial administration, paper was adopted as the medium of choice for artists, calligraphers, and makers of fans, lanterns, kites, and other objects.

Many technologies were invented independently by different societies around the world. Paper, however, appears to have been discovered only once in the eastern hemisphere, in China, and then diffused westward along the Silk Road. Prior to the Spanish conquest, the civilisations of Mesoamerica such as the Mayans, Toltecs, and Aztecs made extensive use of what was described as paper, but it is not clear whether this was true paper or a material made from reeds and bark. So thoroughly did the conquistadors obliterate the indigenous civilisations, burning thousands of books, that only three Mayan books and fifteen Aztec documents are known to have survived, and none of these are written on true paper.

Paper arrived in the Near East just as the Islamic civilisation was consolidating after its first wave of conquests. Now faced with administering an empire, the caliphs discovered, like the Chinese before them, that many documents were required, and the innovative new writing material met the need. Paper making requires a source of cellulose-rich material and abundant water, neither of which is found in the Arabian peninsula, so the first great Islamic paper mill was founded in Baghdad in A.D. 794, originally employing workers from China. It was the first water-powered paper mill, a design which would dominate paper making until the age of steam. The demand for paper continued to grow, and paper mills were established in Damascus and Cairo, each known for the particular style of paper they produced.

It was the Muslim invaders of Spain who brought paper to Europe, and paper produced by mills they established in the land they named al-Andalus found markets in the territories we now call Italy and France. Many Muslim scholars of the era occupied themselves producing editions of the works of Greek and Roman antiquity, and wrote them on paper. After the Christian reconquest of the Iberian peninsula, papermaking spread to Italy, arriving in time for the awakening of intellectual life which would be called the Renaissance and produce large quantities of books, sheet music, maps, and art: most of it on paper. Demand outstripped supply, and paper mills sprang up wherever a source of fibre and running water was available.

Paper provided an inexpensive, durable, and portable means of storing, transmitting, and distributing information of all kinds, but was limited in its audience as long as each copy had to be laboriously made by a scribe or artist (often introducing errors in the process). Once again, it was the Chinese who invented the solution. Motivated by the Buddhist religion, which values making copies of sacred texts, in the 8th century A.D. the first documents were printed in China and Japan. The first items to be printed were single pages, carved into a single wood block for the whole page, then printed onto paper in enormous quantities: tens of thousands in some cases. In the year 868, the first known dated book was printed, a volume of Buddhist prayers called the Diamond Sutra. It was published on paper in the form of a scroll five metres long, with each illustrated page printed from a wood block carved with its entire contents. Such a “block book” could be produced in quantity (limited only by wear on the wood block), but the process of carving the wood was laborious, especially since text and images had to be carved as a mirror image of the printed page.

The next breakthrough also originated in China, but had limited impact there due to the nature of the written language. By carving or casting an individual block for each character, it was possible to set any text from a collection of characters, print documents, then reuse the same characters for the next job. Unfortunately, by the time the Chinese began to experiment with printing from movable type in the twelfth and thirteenth centuries, it took 60,000 different characters to print the everyday language and more than 200,000 for literary works. This made the initial investment in a set of type forbidding. The Koreans began to use movable type cast from metal in the fifteenth century and were so impressed with its flexibility and efficiency that in 1444 a royal decree abolished the use of Chinese characters in favour of a phonetic alphabet called Hangul which is still used today.

It was in Europe that movable type found a burgeoning intellectual climate ripe for its adoption, and whence it came to change the world. Johannes Gutenberg was a goldsmith, originally working with his brother Friele in Mainz, Germany. Fleeing political unrest, the brothers moved to Strasbourg, where around 1440 Johannes began experimenting with movable type for printing. His background as a goldsmith equipped him with the required skills of carving, stamping, and casting metal; indeed, many of the pioneers of movable type in Europe began their careers as goldsmiths. Gutenberg carved letters into hard metal, forming what he called a punch. The punch was used to strike a copper plate, forming an impression called the matrix. Molten lead was then poured into the matrix, producing individual characters of type. Casting letters in a matrix allowed producing as many of each letter as needed to set pages of type, and for replacement of worn type as required. The roman alphabet was ideal for movable type: while the Chinese language required 60,000 or more characters, a complete set of upper and lower case letters, numbers, and punctuation for German came to only around 100 pieces of type. Accounting for duplicates of commonly used letters, Gutenberg's first book, the famous Gutenberg Bible, used a total of 290 pieces of type. Gutenberg also developed a special ink suited for printing with metal type, and adapted a press he acquired from a paper mill to print pages.

Gutenberg was secretive about his processes, likely aware he had competition, which he did. Movable type was one of those inventions which was “in the air”—had Gutenberg not invented and publicised it, his contemporaries working in Haarlem, Bruges, Avignon, and Feltre, all reputed by people of those cities to have gotten there first, doubtless would have. But it was the impact of Gutenberg's Bible, which demonstrated that movable type could produce book-length works of quality comparable to those written by the best scribes, which established the invention in the minds of the public and inspired others to adopt the new technology.

Its adoption was, by the standards of the time, swift. An estimated eight million books were printed and sold in Europe in the second half of the fifteenth century—more books than Europe had produced in all of history before that time. Itinerant artisans would take their type punches from city to city, earning money by setting up locals in the printing business, then moving on.

In early sixteenth century Germany, the printing revolution sparked a Reformation. Martin Luther, an Augustinian monk, completed his German translation of the Bible in 1534 (he had earlier published a translation of the New Testament in 1522). This was the first widely-available translation of the Bible into a spoken language, and reinforced the Reformation idea that the Bible was directly accessible to all, without need for interpretation by clergy. Beginning with his original Ninety-five Theses, Luther authored thirty publications, which it is estimated sold 300,000 copies (in a territory of around 14 million German speakers). Around a third of all publications in Germany in the era were related to the Reformation.

This was a new media revolution. While the incumbent Church reacted at the speed of sermons read occasionally to congregations, the Reformation produced a flood of tracts, posters, books, and pamphlets written in vernacular German and aimed directly at an increasingly literate population. Luther's pamphlets became known as Flugschriften: “flying writings”. One such document, written in 1520, sold 4000 copies in three weeks and 50,000 in two years. Whatever the merits of the contending doctrines, the Reformation had fully embraced and employed the new communication technology to speak directly to the people. In modern terms, you might say the Reformation was the “killer app” for movable type printing.

Paper and printing with movable type were the communication and information storage technologies the Renaissance needed to express and distribute the work of thinkers and writers across a continent, who were now able to read and comment on each other's work and contribute to a culture that knew no borders. Interestingly, the technology of paper making was essentially unchanged from that of China a millennium and a half earlier, and printing with movable type hardly different from that invented by Gutenberg. Both would remain largely the same until the industrial revolution. What changed was an explosion in the volume of printed material and, with increasing literacy among the general public, the audience and market for it. In the eighteenth century an innovation appeared: the daily newspaper. Between 1712 and 1757, the circulation of newspapers in Britain grew eightfold. By 1760, annual newspaper circulation in Britain was 9 million, and would increase to 24 million by 1811.

All of this printing required ever-increasing quantities of paper, and most paper in the West was produced from rags. Although the population was growing, its thirst for printed material expanded much more quickly, and people, however fastidious, produce only so many rags. Paper shortages became so acute that newspapers limited their size based on the availability and cost of paper. There were even cases of scavengers taking clothes from the dead on battlefields to sell to paper mills making newsprint used to report the conflict. Paper mills resorted to doggerel to exhort the public to save rags:

The scraps, which you reject, unfit
To clothe the tenant of a hovel,
May shine in sentiment and wit,
And help make a charming novel…

René Antoine Ferchault de Réaumur, a French polymath who published in numerous fields of science, observed in 1719 that wasps made their nests from what amounted to paper they produced directly from wood. If humans could replicate this vespidian technology, the forests of Europe and North America could provide an essentially unlimited and renewable source of raw material for paper. This idea was to lie fallow for more than a century. Some experimenters produced small amounts of paper from wood through various processes, but it was not until 1850 that paper was manufactured from wood in commercial quantities in Germany, and 1863 that the first wood-based paper mill began operations in America.

Wood is about half cellulose, while the fibres in rags run up to 90% cellulose. The other major component of wood is lignin, a cross-linked polymer which gives it its strength and is useless for paper making. In the 1860s a process was invented in which wood, first mechanically cut into small chips, was chemically treated to break down the fibrous structure in a device called a “digester”. This produced a pulp suitable for paper making, and allowed a dramatic expansion in the volume of paper produced. But the original wood-based paper still contained lignin, which turns brown over time. While this was acceptable for newspapers, it was undesirable for books and archival documents, for which rag paper remained preferred. In 1879, a German chemist invented a process to separate lignin from cellulose in wood pulp, which allowed producing paper that did not brown with age.

The processes used to make paper from wood involved soaking the wood pulp in acid to break down the fibres. Some of this acid remained in the paper, and many books printed on such paper between 1840 and 1970 are now in the process of slowly disintegrating as the acid eats away at the paper. Only around 1970 was it found that an alkali solution works just as well when processing the pulp, and since then acid-free paper has become the norm for book publishing.

Most paper is produced from wood today, and on an enormous, industrial scale. A single paper mill in China, not the largest, produces 600,000 tonnes of paper per year. And yet, for all of the mechanisation, that paper is made by the same process as the first sheet of paper produced in China: by reducing material to cellulose fibres, mixing them with water, extracting a sheet (now a continuous roll) with a screen, then pressing and drying it to produce the final product.

The combination of paper and printing is one of those technologies which is so simple, based upon readily-available materials, and potentially revolutionary that it inspires “what if” speculation. The ancient Egyptians, Greeks, and Romans each had everything they needed—raw materials, skills, and a suitable written language—so that a Connecticut Yankee-like time traveller could have explained to artisans already working with wood and metal how to make paper, cast movable type, and set up a printing press in a matter of days. How would history have differed had one of those societies unleashed the power of the printed word?

December 2016 Permalink

Lamont, Peter. The Rise of the Indian Rope Trick. New York: Thunder's Mouth Press, 2004. ISBN 1-56025-661-3.
Charmed by a mysterious swami, the end of a rope rises up of its own accord high into the air. A small boy climbs the rope and, upon reaching the top, vanishes. The Indian rope trick: ancient enigma of the subcontinent or 1890 invention by a Chicago newspaper embroiled in a circulation war? Peter Lamont, magician and researcher at the University of Edinburgh, traces the origin and growth of this pervasive legend. Along the way we encounter a cast of characters including Marco Polo; a Chief of the U.S. Secret Service; Madame Blavatsky; Charles Dickens; Colonel Stodare, an Englishman who impersonated a Frenchman performing Indian magic; William H. Seward, Lincoln's Secretary of State; Professor Belzibub; General Earl Haig and his aptly named aide-de-camp, Sergeant Secrett; John Nevil Maskelyne, conjurer, debunker of spiritualism, and inventor of the pay toilet; and a host of others. The author's style is occasionally too clever for its own good, but this is a popular book about the Indian rope trick, not a quantum field theory text after all, so what the heck. I read the U.K. edition.

January 2005 Permalink

Lansing, Alfred. Endurance. New York: Carroll & Graf [1959, 1986] 1999. ISBN 978-0-7867-0621-1.
Novels and dramatisations of interplanetary missions, whether (reasonably) scrupulously realistic, highly speculative, or utterly absurd, often focus on the privation of their hardy crews and the psychological and interpersonal stresses they must endure when venturing so distant from the embrace of the planetary nanny state.

Balderdash! Unless a century of socialism succeeds in infantilising its subjects into pathetic, dependent, perpetual adolescents (see the last item cited above as an example), such voyages of discovery will be crewed by explorers, that pinnacle of the human species who volunteers to pay any price, bear any burden, and accept any risk to be among the first to see what's over the horizon.

This chronicle of Ernest Shackleton's Imperial Trans-Antarctic Expedition will acquaint you with real explorers, and leave you in awe of what those outliers on the bell curve of our species can and will endure in circumstances which almost defy description on the printed page.

At the very outbreak of World War I, Shackleton's ship, the Endurance, named after the motto of his family, Fortitudine vincimus: “By endurance we conquer”, sailed for Antarctica. The mission was breathtaking in its ambition: to land a party in the Vahsel Bay area of the Weddell Sea, which would cross the entire continent of Antarctica, proceeding to the South Pole with the resources landed from their ship, and then crossing to the Ross Sea with the aid of caches of supplies emplaced by a second party landing at McMurdo Sound. So difficult was the goal Shackleton's expedition was attempting to accomplish that it was not achieved until 1957–1958, when the Commonwealth Trans-Antarctic Expedition made the crossing with the aid of motorised vehicles and aerial reconnaissance.

Shackleton's expedition didn't even manage to land on the Antarctic shore; the Endurance was trapped in the pack ice of the Weddell Sea in January 1915, and the crew were forced to endure the Antarctic winter on the ship, frozen in place. Throughout the long polar night, conditions were tolerable and morale was high, but much worse was to come. As the southern summer approached, the pack ice began to melt, break up, and grind floe against floe, and on 27th October 1915, pressure of the ice against the ship became unsustainable and Shackleton gave the order to abandon ship and establish a camp on the ice floe, floating on the Weddell Sea. The original plan was to use the sled dogs and the men to drag supplies and the ship's three lifeboats across the ice toward a cache of supplies known to have been left at Paulet Island by an earlier expedition, but pressure ridges in the sea ice soon made it evident that such an ambitious traverse would be impossible, and the crew resigned themselves to camping on the ice pack, whose drift was taking them north, until its breakup would allow them to use the boats to make for the nearest land. And so they waited, until April 8th, 1916, when the floe on which they were camped began to break up and they were forced into the three lifeboats to head for Elephant Island, a forbidding and uninhabited speck of land in the Southern Ocean. After a harrowing six day voyage, the three lifeboats arrived at the island, and for the first time in 497 days the crew of the Endurance were able to sleep on terra firma.

Nobody, not even sealers and whalers operating off Antarctica, ever visited Elephant Island: Shackleton's crew were the first to land there. So the only hope of rescue was for a party to set out from there to the nearest reachable inhabited location, South Georgia Island, 1,300 kilometres across the Drake Passage, the stormiest and most treacherous sea on Earth. (There were closer destinations, but due to the winds and currents of the Southern Ocean, none of them were achievable in a vessel with the limited capabilities of their lifeboat.) Well, it had to be done, and so they did it. In one of the most remarkable achievements of seamanship of all time, Frank Worsley sailed his small open boat through these forbidding seas, surviving hurricane-force winds, rogue waves, and unimaginable conditions at the helm, arriving at almost a pinpoint landing on a tiny island in a vast sea with only his sextant and a pocket chronometer, the last remaining of the 24 the Endurance carried when it sailed from the Thames, worn around his neck to keep it from freezing.

But even then it wasn't over. Shackleton's small party had landed on the other side of South Georgia Island from the whaling station, and the state of their boat and prevailing currents and winds made it impossible to sail around the coast to reach it. So, there was no alternative but to go cross-country, across terrain completely uncharted (all maps showed only the coast, as nobody had ventured inland). And, with no other option, they did it. Since Shackleton's party, there has been only one crossing of South Georgia Island, done in 1955 by a party of expert climbers with modern equipment and a complete aerial survey of their route. They found it difficult to imagine how Shackleton's party, in their condition and with their resources, managed to make the crossing, but of course it was because they had to.

Then it was a matter of rescuing the party left at the original landing site on South Georgia, and then mounting an expedition to relieve those waiting at Elephant Island. The latter was difficult and frustrating—it was not until 30th August 1916 that Shackleton was able to take those he left on Elephant Island back to civilisation. And every single person who departed from South Georgia on the Endurance survived the expedition and returned to civilisation. All suffered from the voyage, but only stowaway Perce Blackboro lost a foot to frostbite; all the rest returned without consequences from their ordeal.

Bottom line—there were men on this expedition, and if similarly demanding expeditions in the future are crewed by men and women equal to their mettle, they will come through just fine without any of the problems the touchy-feely inkblot drones worry about. People with the “born as victim” self-image instilled by the nanny state are unlikely to qualify for such a mission, and should the all-smothering state manage to reduce its subjects to such larvæ, it is unlikely in the extreme that it would mount such a mission, choosing instead to huddle in its green enclaves powered by sewage and the unpredictable winds until the giant rock from the sky calls down the curtain on their fruitless existence.

I read the Kindle edition; unless you're concerned with mass and volume when taking this book on a long trip (for which it couldn't be more appropriate!), I'd recommend the print edition, which is not only less expensive (neglecting shipping charges), but also reproduces with much higher quality the many photographs taken by expedition photographer Frank Hurley and preserved through the entire ordeal.

August 2010 Permalink

Large, Christine. Hijacking Enigma. Chichester, England: John Wiley & Sons, 2003. ISBN 0-470-86346-3.
The author, Director of the Bletchley Park Trust, recounts the story of the April 2000 theft and eventual recovery of Bletchley's rare Abwehr Enigma cipher machine, interleaved with a history of Bletchley's World War II exploits in solving the Enigma and its significance in the war. If the latter is your primary interest, you'll probably prefer Michael Smith's Station X (July 2001), which provides much more technical and historical detail. Readers who didn't follow the Enigma theft as it played out and aren't familiar with the names of prominent British news media figures may feel a bit at sea in places. A Web site devoted to the book is now available, and a U.S. edition is scheduled for publication later in 2003.

September 2003 Permalink

Larson, Erik. The Devil in the White City. New York: Vintage Books, 2003. ISBN 0-375-72560-1.
It's conventional wisdom in the publishing business that you never want a book to “fall into the crack” between two categories: booksellers won't know where to shelve it, promotional campaigns have to convey a complicated mixed message, and you run the risk of irritating readers who bought it solely for one of the two topics. Here we have a book which evokes the best and the worst of the Gilded Age of the 1890s in Chicago by interleaving the contemporary stories of the 1893 World's Columbian Exposition and the depraved series of murders committed just a few miles from the fairgrounds by the archetypal American psychopathic serial killer, the chillingly diabolical Dr. H. H. Holmes (the principal alias among many used by a man whose given name was Herman Webster Mudgett; his doctorate was a legitimate medical degree from the University of Michigan). Architectural and industrial history and true crime are two genres you might think wouldn't mix, but in the hands of the author they result in a compelling narrative which I found as difficult to put down as any book I have read in the last several years. For once, this is not just my eccentric opinion; at this writing the book has been on The New York Times Best-Seller list for more than two consecutive years and won the Edgar award for best fact crime in 2004. As I rarely frequent best-seller lists, it went right under my radar. Special thanks to the visitor to this page who recommended I read it!

Boosters saw the Columbian Exposition not so much as a commemoration of the 400th anniversary of the arrival of Columbus in the New World but as a brash announcement of the arrival of the United States on the world stage as a major industrial, commercial, financial, and military power. They viewed the 1889 Exposition Universelle in Paris (for which the Eiffel Tower was built) as a throwing down of the gauntlet by the Old World, and vowed to assert the preeminence of the New by topping the French and “out-Eiffeling Eiffel”. Once decided on by Congress, the site of the exposition became a bitterly contested struggle between partisans of New York, Washington, and Chicago, with the latter seeing its victory as marking its own arrival as a peer of the Eastern cities who looked with disdain at what Chicagoans considered the most dynamic city in the nation.

Charged with building the Exposition, a city in itself, from scratch on barren, wind-swept, marshy land was architect Daniel H. Burnham, he who said, “Make no little plans; they have no magic to stir men's blood.” He made no little plans. The exposition was to have more than 200 buildings in a consistent neo-classical style, all in white, including the largest enclosed space ever constructed. While the electric light was still a novelty, the fair was to be illuminated by the first large-scale application of alternating current. Edison's kinetoscope amazed visitors with moving pictures, and a theatre presented live music played by an orchestra in New York and sent over telephone wires to Chicago. Nikola Tesla amazed fairgoers with huge bolts of electrical fire, and a giant wheel built by a man named George Washington Gale Ferris lifted more than two thousand people at once into the sky to look down upon the fair like gods. One of the army of workers who built the fair was a carpenter named Elias Disney, who later regaled his sons Roy and Walt with tales of the magic city; they must have listened attentively.

The construction of the fair in such a short time seemed miraculous to onlookers (and even more so to those accustomed to how long it takes to get anything built a century later), but the list of disasters, obstacles, obstructions, and outright sabotage which Burnham and his team had to overcome was so monumental you'd have almost thought I was involved in the project! (Although if you've ever set up a trade show booth in Chicago, you've probably gotten a taste of it.) A total of 27.5 million people visited the fair between May and October of 1893, and this in a country whose total population (1890 census) was just 62.6 million. Perhaps even more astonishing to those acquainted with comparable present-day undertakings, the exposition was profitable and retired all of its bank debt.

While the enchanted fair was rising on the shore of Lake Michigan and enthralling visitors from around the world, in a gloomy building the size of a city block not far away, Dr. H. H. Holmes was using his almost preternatural powers to charm the young, attractive, and unattached women who flocked to Chicago from the countryside in search of careers and excitement. He offered them the former in various capacities in the businesses, some legitimate and others bogus, in his “castle”, and the latter in his own person, until he killed them, disposed of their bodies, and in some cases sold their skeletons to medical schools. Were the entire macabre history of Holmes not thoroughly documented in court proceedings, investigators' reports, and reputable contemporary news items, he might seem to be a character from an over-the-top Gothic novel, like Jack the Ripper. But wait—Jack the Ripper was real too. However, Jack the Ripper is only believed to have killed five women; Holmes is known for certain to have killed nine men, women, and children. He confessed to killing 27 in all, but this was the third of three mutually inconsistent confessions all at variance with documented facts (some of those he named in the third confession turned up alive). Estimates ran as high as two hundred, but that seems implausible. In any case, he was a monster the likes of which no American imagined inhabited their cities until his crimes were uncovered. Remarkably, and of interest to libertarians who advocate the replacement of state power by insurance-like private mechanisms, Holmes never even came under suspicion by any government law enforcement agency during the entire time he committed his murder spree, nor did any of his other scams (running out on debts, forging promissory notes, selling bogus remedies) attract the attention of the law. His undoing came when he attempted insurance fraud (one of his favourite activities) and ended up with Nemesis-like private detective Frank Geyer on his trail. Geyer, through tireless tracking and the expenditure of large quantities of shoe leather, got the goods on Holmes, who met his end on the gallows in May of 1896. His jailers considered him charming.

I picked this book up expecting an historical recounting of a rather distant and obscure era. Was I ever wrong—I finished the whole thing in two and a half days; the story is that fascinating and the writing that good. More than 25 pages of source citations and bibliography are included, but this is not a dry work of history; it reads like a novel. In places, the author has invented descriptions of events for which no eyewitness account exists; he says that in doing this, his goal is to create a plausible narrative as a prosecutor does at a trial. Most such passages are identified in the end notes and justifications given for the inferences made therein. The descriptions of the Exposition cry out for many more illustrations than are included: there isn't even a picture of the Ferris wheel! If you read this book, you'll probably want to order the Dover Photographic Record of the Fair—I did.

March 2006 Permalink

Larson, Erik. In the Garden of Beasts. New York: Crown Publishers, 2011. ISBN 978-0-307-40884-6.
Ambassadors to high-profile postings are usually chosen from among the political patrons and contributors of the president who appoints them, and depend upon career Foreign Service officers to provide the in-country expertise needed to carry out their mandate. Newly-elected Franklin Roosevelt intended to follow this tradition in choosing his ambassador to Germany, where Hitler had just taken power, but discovered that none of the candidates he approached were interested in being sent to represent the U.S. in Nazi Germany. William E. Dodd, a professor of history and chairman of the history department at the University of Chicago, was growing increasingly frustrated that his administrative duties were preventing him from completing his life's work, a comprehensive history of the ante-bellum American South. He mentioned to a friend in Roosevelt's inner circle that he'd be interested in an appointment as ambassador to a country like Belgium or the Netherlands, where he thought his ceremonial obligations would be sufficiently undemanding that he could concentrate on his scholarly work.

Dodd was astonished when Roosevelt contacted him directly and offered him the ambassadorship to Germany. Roosevelt appealed to Dodd's fervent New Deal sympathies, and argued that in such a position he could be an exemplar of American liberal values in a regime hostile to them. Dodd realised from the outset that a mission to Berlin would doom his history project, but accepted because he agreed with Roosevelt's goal and also because FDR was a very persuasive person. His nomination was sent to the Senate and confirmed the very same day.

Dodd brought his whole family along on the adventure: wife Mattie and adult son and daughter Bill and Martha. Dodd arrived in Berlin with an open mind toward the recently-installed Nazi regime. He was inclined to dismiss the dark view of the career embassy staff and instead adopt what might be called today “smart diplomacy”, deceiving himself into believing that by setting an example and scolding the Nazi slavers he could shame them into civilised behaviour. He immediately found himself at odds not only with the Nazis but also his own embassy staff: he railed against the excesses of diplomatic expense, personally edited the verbose dispatches composed by his staff to save telegraph charges, and drove his own aged Chevrolet, shipped from the U.S., to diplomatic functions where all of the other ambassadors arrived in stately black limousines.

Meanwhile, daughter Martha embarked upon her own version of Girl Gone Wild—Third Reich Edition. Initially exhilarated by the New Germany and swept into its social whirl, before long she was carrying on simultaneous affairs with the head of the Gestapo and a Soviet NKVD agent operating under diplomatic cover in Berlin, among others. Those others included Ernst “Putzi” Hanfstaengl, who tried to set her up with Hitler (nothing came of it; they met at lunch and that was it). Martha's trajectory through life was extraordinary. After affairs with the head of the Gestapo and one of Hitler's inner circle, she was recruited by the NKVD and spied on behalf of the Soviet Union in Berlin and after her return to the U.S. It is not clear that she provided anything of value to the Soviets, as she had no access to state secrets during this period. With investigations of her Soviet affiliations intensifying in the early 1950s, in 1956 she fled with her American husband and son to Prague, Czechoslovakia where they lived until her death in 1990 (they may have spent some time in Cuba, and apparently applied for Soviet citizenship and were denied it).

Dodd père was much quicker to figure out the true nature of the Nazi regime. Following Roosevelt's charge to represent American values, he spoke out against the ever-increasing Nazi domination of every aspect of German society, and found himself at odds with the patrician “Pretty Good Club” at the State Department who wished to avoid making waves, regardless of how malevolent and brutal the adversary might be. Today, we'd call them the “reset button crowd”. Even Dodd found the daily influence of immersion in Gleichschaltung difficult to resist. On several occasions he complained of the influence of Jewish members of his staff and the difficulties they posed in dealing with the Nazi regime.

This book focuses upon the first two years of Dodd's tenure as ambassador in Berlin, as that was the time in which the true nature of the regime became apparent to him and he decided upon his policy of distancing himself from it: for example, refusing to attend any Nazi party-related events such as the Nuremberg rallies. It provides an insightful view of how seductive a totalitarian regime can be to outsiders who see only its bright-eyed marching supporters, while ignoring the violence which sustains it, and how utterly futile “constructive engagement” is with barbarians that share no common values with civilisation.

Thanks to James Lileks for suggesting this book.

December 2011 Permalink

Lawrie, Alan. Sacramento's Moon Rockets. Charleston, SC: Arcadia Publishing, 2015. ISBN 978-1-4671-3389-0.
In 1848 gold was discovered in California, setting off a gold rush which would bring a wave of prospectors and fortune seekers into one of the greatest booms in American history. By the early 20th century, the grizzled prospector panning for gold had given way to industrial extraction of the metal. In an age before anybody had heard the word “environmentalism”, this was accomplished in the most direct way possible: man-made lakes were created on gold-bearing land, then a barge would dredge up the bottom and mix it with mercury, which would form an amalgam with the gold. The gold could later be separated, purified, and sold.

The process effectively destroyed the land on which it was used. The topsoil was ripped out, vegetation killed, and the jumbled remains after extraction dumped in barren hills of tailings. Half a century later, the mined-out land was considered unusable for either agriculture or residential construction. Some described it as a “moonscape”.

It was perhaps appropriate that, in the 1960s, this stark terrain became home to the test stands on which the upper stages of NASA's Saturn rockets were developed and tested before flight. Every Saturn upper stage, including those which launched Apollo flights to the Moon, underwent a full-duration flight qualification firing there before being shipped to Florida for launch.

When the Saturn project was approved, Douglas Aircraft Company won the contract to develop the upper stage, which would be powered by liquid hydrogen and liquid oxygen (LH2/LOX) and have the ability to restart in space, allowing the Apollo spacecraft to leave Earth orbit on a trajectory bound for the Moon. The initial upper stage was called the S-IV, and was used as the second stage of the Saturn I launcher flown between 1961 and 1965 to demonstrate heavy-lift booster operations and do development work related to the Apollo project. The S-IV used a cluster of six RL10 engines, at the time the largest operational LH2/LOX engine. The Saturn I had eight engines on its first stage and six engines on the S-IV. Given the reliability of rocket engines at the time, many engineers were dubious about getting fourteen engines to work on every launch (although the Saturn I did have a limited engine-out capability). Skeptics called it “Cluster's last stand.”

The S-IV stages were manufactured at the Douglas plant in Huntington Beach, California, but there was no suitable location near the plant where they could be tested. The abandoned mining land near Sacramento had been acquired by Aerojet for rocket testing, and Douglas purchased a portion for its own use. The outsized S-IV stage was very difficult to transport by road, so the ability to ship it by water from southern California to the test site via San Francisco Bay and the Sacramento River was a major advantage of the location.

The operational launchers for Apollo missions would be the Saturn IB and Saturn V, with the Saturn IB used for Earth orbital missions and the Saturn V for Moon flights and launching space stations. An upgraded upper stage, the S-IVB, would be used by these launchers, as the second stage of the Saturn IB and the third stage of the Saturn V. (S-IVBs for the two launchers differed in details, but the basic configuration was the same.) The six RL10 engines of the S-IV were replaced by a single, much more powerful J-2 engine which had, by that time, become available.

The Sacramento test facility was modified to do development and preflight testing of the S-IVB, and proceeded to test every flight stage. No rocket firing is ever routine, and in 1965 and 1967 explosions destroyed an S-IV test article and a flight S-IVB stage which was scheduled to be used in Apollo 8. Fortunately, there were no casualties from these spectacular accidents, and they provided the first data on the effects of large-scale LH2/LOX explosions, which proved to be far more benign than had been feared. It had been predicted that an LH2/LOX explosion would produce a blast equivalent to 65% of the propellant mass in TNT when, in fact, the measured blast was equivalent to just 5% of the propellant mass. It's nice to know, but an expensive way to learn.
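To get a feel for what that difference means in practice, here is a minimal back-of-the-envelope sketch (mine, not the book's), assuming a fully fuelled S-IVB carried roughly 105 tonnes of propellant; that figure is an assumption for illustration only.

    # Rough TNT-equivalent yield of an LH2/LOX stage explosion under the
    # predicted (65%) and measured (5%) equivalence factors cited above.
    propellant_kg = 105_000  # assumed S-IVB propellant load, illustrative only
    for label, fraction in (("predicted (65%)", 0.65), ("measured (5%)", 0.05)):
        print(f"{label}: about {propellant_kg * fraction / 1000:.0f} tonnes of TNT equivalent")

On those assumptions the predicted figure works out to roughly 68 tonnes of TNT equivalent, against about 5 tonnes actually measured.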

This book is not a detailed history of the Sacramento test facility but rather a photo gallery showing the construction of the site; transportation of stages by sea, road, and later by the amazing Super Guppy airplane; testing of S-IV and S-IVB stages; explosions and their aftermath; and a visit to the site fifty years later. The photos have well-researched and informative captions.

When you think of the Apollo program, the Cape, Houston, Huntsville, and maybe Slidell come to mind, but rarely Sacramento. And yet every Apollo mission relied upon a rocket stage tested at the Rancho Cordova site near that city. Here is a part of the grandiose effort to go to the Moon you probably haven't seen before. The book is just 96 pages and expensive (a small print run and colour on almost every page will do that), but there are many pictures collected here I've seen nowhere else.

September 2015 Permalink

LeBlanc, Steven A. with Katherine E. Register. Constant Battles. New York: St. Martin's Griffin, 2003. ISBN 0-312-31090-0.
Steven LeBlanc is the Director of Collections at Harvard University's Peabody Museum of Archaeology and Ethnology. When he began his fieldwork career in the early 1970s, he shared the opinion of most of the archaeologists and anthropologists of his generation, and of present-day laymen, that traditional societies in the hunter-gatherer and tribal farming eras were mostly peaceful and lived in balance with their environments. It was, according to this view, only with the emergence of large chiefdoms and state-level societies that environmental degradation and mass conflict began to appear, culminating in the industrialised slaughter of the 20th century.

But to the author, a dispassionate scientist looking at the evidence on the ground (or dug up from beneath it) in expeditions in the American Southwest, Turkey, and Peru, as well as in the published literature, there were many discrepancies with this consensus narrative. In particular, why would “peaceful” farming people build hilltop walled citadels far from their fields and sources of water if not for defensibility? And why would hard-working farmers obsess over defence were there not an active threat from their neighbours?

Further investigations argue convincingly that the human experience, inherited directly from our simian ancestors, has been one of relentless population growth beyond the carrying capacity of our local environment, degradation of the ecosystem, and the inevitable conflict with neighbouring bands over scarce resources. Ironically, many of the reports of early ethnographers which appeared to confirm perennially-wrong philosopher Rousseau's vision of the “noble savage” were based upon observations of traditional societies which had recently been impacted by contact with European civilisation: population collapse due to exposure to European diseases to which they had no immunity, and increases in carrying capacity of the land thanks to introduction of European technologies such as horses, steel tools, and domestic animals, which had temporarily eased the Malthusian pressure upon these populations and suspended resource wars. But the archaeological evidence is that such wars are the norm, not an aberration.

In fact, notwithstanding the horrific death toll of twentieth century warfare, the rate of violent death among the human population has fallen to an all-time low in the nation-state era. Hunter-gatherer (or, as the authors prefer to call them, “forager”) and tribal farming societies typically lose about 25% of their male population and 5% of the females to warfare with neighbouring bands. Even the worst violence of the nation-state era, averaged over a generation, has a death toll only one eighth this level.

Are present-day humans (or, more specifically, industrialised Western humans) unprecedented despoilers of our environment and aggressors against inherently peaceful native people? Nonsense argues this extensively documented book. Unsustainable population growth, resource exhaustion, environmental degradation, and lethal conflict with neighbours are as human as bipedalism and speech. Conflict is not inevitable, and civilisation, sustainable environmental policy, and yield-improving and resource-conserving technology are the best course to reducing the causes of conflict. Dreaming of a nonexistent past of peaceful people living in harmony with their environment isn't.

You can read any number of books about military history, from antiquity to the present, without ever encountering a discussion of “Why we fight”—that's the subtitle of this book, and I've never encountered a better source to begin to understand the answer to this question than you'll find here.

August 2007 Permalink

Lehto, Steve. Chrysler's Turbine Car. Chicago: Chicago Review Press, 2010. ISBN 978-1-56976-549-4.
There were few things so emblematic of the early 1960s as the jet airliner. Indeed, the period was often referred to contemporarily as the “jet age”, and products from breakfast cereal to floor wax were positioned as modern wonders of that age. Anybody who had experienced travel in a piston-powered airliner and then took their first flight in a jet felt that they had stepped into the future: gone were the noise, rattling, and shaking from the cantankerous and unreliable engines that would knock the fillings loose in your teeth, replaced by a smooth whoosh which (although, in the early jets, deafening to onlookers outside) allowed carrying on a normal conversation inside the cabin. Further, notwithstanding some tragic accidents in the early days as pilots became accustomed to the characteristics of the new engines and airframes, it soon became apparent that these new airliners were a great deal safer and more reliable than their predecessors: they crashed a lot less frequently, and flights delayed and cancelled due to mechanical problems became the rare exception rather than something air travellers put up with only because the alternative was so much worse.

So, if the jet age had arrived, and jet power had proven itself to be so superior to the venerable and hideously overcomplicated piston engine, where were the jet cars? This book tells the long and tangled story of just how close we came to having turbine powered automobiles in the 1960s, how a small group of engineers plugging away at problem after problem over twenty years managed to produce an automotive powerplant so clearly superior to contemporary piston engines that almost everybody who drove a vehicle powered by it immediately fell in love and wished they could have one of their own, and ultimately how financial problems and ill-considered government meddling destroyed the opportunity to replace automotive powerplants dependent upon petroleum-based fuels (which, at the time, contained tetraethyl lead) with one which would run on any combustible liquid, emit far less pollution from the tailpipe, run for hundreds of thousands of miles without an oil change or need for a tune-up, start instantly and reliably regardless of the ambient temperature, and run so smoothly and quietly that for the first time passengers were aware of the noise of the tires rolling over the road.

In 1945, George Huebner, who had worked on turboprop aircraft for Chrysler during World War II, returned to the civilian automotive side of the company as war work wound down. A brilliant engineer as well as a natural-born promoter of all things he believed in, himself most definitely included, by 1946 he was named Chrysler's chief engineer and used his position to champion turbine propulsion, which he had already seen was the future in aviation, for automotive applications. The challenges were daunting: turboshaft engines (turbines which delivered power by turning a shaft coupled to the turbine rotor, as used in turboprop airplanes and helicopters) gulped fuel at a prodigious rate, including when at “idle”, took a long time to “spool up” to maximum power, required expensive exotic materials in the high-temperature section of the engine, and had tight tolerances which required parts to be made by costly, low-production-rate investment casting, which could not produce parts in the quantities, or at a cost, acceptable for a mass market automotive powerplant.

Like all of the great engineers, Huebner was simultaneously stubborn and optimistic: stubborn in his belief that a technology so much simpler and inherently more thermodynamically efficient must eventually prevail, and optimistic that with patient engineering, tackling one problem after another and pursuing multiple solutions in parallel, any challenge could be overcome. By 1963, coming up on the twentieth year of the effort, progress had been made on all fronts to the extent that Huebner persuaded Chrysler management that the time had come to find out whether the driving public was ready to embrace the jet age in their daily driving. In one of the greatest public relations stunts of all time, Chrysler ordered 55 radically styled (for the epoch) bodies from the Ghia shop in Italy, and mated them with turbine drivetrains and chassis in a Michigan factory previously used to assemble taxicabs. Fifty of these cars (the other five being retained for testing and promotional purposes) were loaned, at no charge, for periods of three months each, to a total of 203 drivers and their families. Delivery of one of these loaners became a media event, and the lucky families instant celebrities in their communities: a brief trip to the grocery store would turn into several hours fielding questions about the car and offering rides around the block to gearheads who pleaded for them.

The turbine engines, as turbine engines are wont to do once the bugs have been wrung out, performed superbly. Drivers of the loaner cars put more than a million miles on them with only minor mechanical problems. One car was rear-ended at a stop light, but you can't blame the engine for that. (Well, perhaps the guilty party was transfixed by the striking design of the rear of the car!) Drivers did notice slower acceleration from a stop due to “turbine lag” (the need for the turbine to spool up in RPM from idle) and poorer fuel economy in city driving. Fuel economy on the highway was comparable to contemporary piston engine cars. What few drivers noticed, in the era of four-gallons-a-buck gasoline, was that the turbine could run on just about any fuel you can imagine: unleaded gasoline, kerosene, heating oil, ethanol, methanol, aviation jet fuel, diesel, or any mix thereof. As a stunt, a Chrysler Turbine was filled up with peanut oil at a peanut festival in Georgia, with tequila during a tour through Mexico, and with perfume at a French auto show; in each case the engine ran perfectly on the eccentric fuel (albeit with a distinctive aroma imparted to the exhaust).

So, here we are all these many years later in the twenty-first century. Where are our jet cars? That's an interesting story which illustrates the unintended consequences of well-intended public policy. Just as the turbine engine was being refined and perfected as an automotive power plant, the U.S. government started to obsess about air quality, and decided, in the spirit of the times, to impose detailed mandates upon manufacturers which constrained the design of their products. (As opposed, say, to imposing an excise tax upon vehicles based upon their total emissions and allowing manufacturers to weigh the trade-offs across their entire product line, or leaving it to states and municipalities most affected by pollution to enforce their own standards on vehicles licensed in their jurisdiction.) Since almost every vehicle on the road was piston engine powered, it was inevitable that regulators would draft their standards around the characteristics of that powerplant. In doing so, they neglected to note that the turbine engine already met all of the most stringent emissions standards they then envisioned for piston engines (and in addition, ran on unleaded fuels, completely eliminating the most hazardous emission of piston engines) with a single exception: oxides of nitrogen (NOx). The latter was a challenge for turbine engineers, because the continuous combustion in a turbine provides a longer time for nitrogen to react with oxygen. Engineers were sure they'd be able to find a way to work around this single remaining challenge, having already solved all of the emission problems the piston engine still had to overcome.

But they never got the chance. The government regulations were imposed with such short times for compliance that automakers were compelled to divert all of their research, development, and engineering resources to modifying their existing engines to meet the new standards, which proved to be ever-escalating: once a standard was met, it was made more stringent with another near-future deadline. At Chrysler, the smallest of the Big Three, this hit particularly hard, and the turbine project found its budget and engineering staff cannibalised to work on making ancient engines run rougher, burn more fuel, perform more anæmically, and increase their cost and frequency of maintenance to satisfy a tailpipe emission standard written into law by commissars in Washington who probably took the streetcar to work. Then the second part of the double whammy hit: the oil embargo and the OPEC cartel's hike in the price of oil, which led to federal fuel economy standards, which pulled in the opposite direction from the emissions standards and consumed all resources which might have been devoted to breakthroughs in automotive propulsion which would have transcended the increasingly baroque tweaks to the piston engine. A different time had arrived, and increasingly people who once eagerly awaited the unveiling of the new models from Detroit each fall began to listen to their neighbours who'd bought one of those oddly-named Japanese models and said, “Well, it's tiny and it looks odd, but it costs a whole lot less, goes almost forever on a gallon of gas, and it never, ever breaks”. From the standpoint of the mid-1970s, this began to sound pretty good to a lot of folks, and Detroit, the city and the industry which built it, began its descent from apogee to the ruin it is today.

If we could go back and change a few things in history, would we all be driving turbine cars today? I'm not so sure. At the point the turbine was undone by ill-advised public policy, one enormous engineering hurdle remained, and in retrospect it isn't clear that it could have been overcome. All turbine engines, to the present day, require materials and manufacturing processes which have never been scaled up to the volumes of passenger car manufacturing. The pioneers of the automotive turbine were confident that could be done, but they conceded that it would require at least the investment of building an entire auto plant from scratch, and that is something that Chrysler could not remotely fund at the time. It's much like building a new semiconductor fabrication facility with a new scaling factor, but without the confidence that if it succeeds a market will be there for its products. At the time the Chrysler Turbine cars were tested, Huebner estimated their cost of manufacturing at around US$50,000: roughly half of that the custom-crafted body and the rest the powertrain—the turbine engines were essentially hand-built. Such has been the depreciation of the U.S. dollar that this is equivalent to about a third of a million present-day greenbacks. Then or now, getting this cost down to something the average car buyer could afford was a formidable challenge, and it isn't obvious that the problem could have been solved, even without the resources needed to do so having been expended to comply with emissions and fuel economy diktats.

Further, turbine engines become less efficient as you scale them down—in the turbine world, the bigger the better, and they work best when run at a constant load over a long period of time. Consequently, turbine power would seem optimal for long-haul trucks, which require more power than a passenger car, run at near-constant speed over highways for hours on end, and already run on the diesel fuel which is ideal for turbines. And yet, despite research and test turbine vehicles having been built by manufacturers in the U.S., Britain, and Sweden, the diesel powerplant remains supreme. Truckers and trucking companies understand long-term investment and return, and yet the apparent advantages of the turbine haven't allowed it to gain a foothold in that market. Perhaps the turbine passenger car was one of those great ideas for which, in the final analysis, the numbers just didn't work.

I actually saw one of these cars on the road in 1964, doubtless driven by one of the lucky drivers chosen to test it. I could say there was something sweet about seeing the Jet Car of the Future waiting to enter a congested tunnel while we blew past it in our family Rambler station wagon, but that's just cruel. In the final chapter, we get to vicariously accompany the author on a drive in the Chrysler Turbine owned by Jay Leno, who contributes the foreword to this book.

Mark Olson's turbinecar.com has a wealth of information, photographs, and original documents relating to the Chrysler Turbine Car. The History Channel's documentary, The Chrysler Turbine, is available on DVD.

January 2011 Permalink

Lelièvre, Dominique. L'Empire américain en échec sous l'éclairage de la Chine impériale. Chatou, France: Editions Carnot, 2004. ISBN 2-84855-097-X.
This is a very odd book. About one third of the text is a fairly conventional indictment of the emerging U.S. “virtuous empire” along the lines of America the Virtuous (earlier this month), along with the evils of globalisation, laissez-faire capitalism, cultural imperialism, and the usual scélérats du jour. But the author, who has published three earlier books of Chinese history, anchors his analysis of current events in parallels between the present day United States and the early Ming dynasty in China, particularly the reign of Zhu Di (朱棣), the Emperor Yongle (永樂), A.D. 1403-1424. (Windows users: if you didn't see the Chinese characters in the last sentence and wish to, you'll need to install Chinese language support using the Control Panel / Regional Options / Language Settings item, enabling “Simplified Chinese”. This may require you to load the original Windows install CD, reboot your machine after the installation is complete, and doubtless will differ in detail from one version of Windows to another. It may be a global village, but it can sure take a lot of work to get from one hut to the next.) Similarities certainly exist, some of them striking: both nations had overwhelming naval superiority and command of the seas, believed themselves to be the pinnacle of civilisation, sought large-scale hegemony (from the west coast of Africa to east Asia in the case of China, global for the U.S.), preferred docile vassal states to allies, were willing to intervene militarily to preserve order and their own self-interests, but for the most part renounced colonisation, annexation, territorial expansion, and religious proselytising. Both were tolerant, multi-cultural, multi-racial societies which believed their values universal and applicable to all humanity. Both suffered attacks from Islamic raiders, the Mongols under Tamerlane (Timur) and his successors in the case of Ming China. And both even fought unsuccessful wars in what is now Vietnam which ended in ignominious withdrawals. All of this is interesting, but how useful it is in pondering the contemporary situation is problematic, for along with the parallels, there are striking differences in addition to the six centuries of separation in time and all that implies for cultural and technological development including communications, weapons, and forms of government. Ming dynasty China was the archetypal oriental despotism, where the emperor's word was law, and the administrative and military bureaucracy was in the hands of eunuchs. The U.S., on the other hand, seems split right about down the middle regarding its imperial destiny, and many observers of U.S. foreign and military policy believe it suffers a surfeit of balls, not their absence. Fifteenth century China was self-sufficient in everything except horses, and its trade with vassal states consisted of symbolic potlatch-type tribute payments in luxury goods. The U.S., on the other hand, is the world's largest debtor nation, whose economy is dependent not only on an assured supply of imported petroleum, but also a wide variety of manufactured goods, access to cheap offshore labour, and the capital flows which permit financing its chronic trade deficits. I could go on listing fundamental differences which make any argument by analogy between these two nations highly suspect, but I'll close by noting that China's entire career as would-be hegemon began with Yongle and barely outlasted his reign—six of the seven expeditions of the great Ming fleet occurred during his years on the throne. 
Afterward China turned inward and largely ignored the rest of the world until the Europeans came knocking in the 19th century. Is it likely the U.S. drift toward empire which occupied most of the last century will end so suddenly and permanently? Stranger things have happened, but I wouldn't bet on it.

August 2004 Permalink

Levinson, Marc. The Box. Princeton: Princeton University Press, [2006] 2008. ISBN 978-0-691-13640-0.
When we think of developments in science and technology which reshape the world economy, we often concentrate upon those which build on fundamental breakthroughs in our understanding of the world we live in, or technologies which employ them to do things never imagined. Examples of these are electricity and magnetism, which gave us the telegraph, electric power, the telephone, and wireless communication. Semiconductor technology, the foundation of the computer and Internet revolutions, is grounded in quantum mechanics, elaborated only in the early 20th century. The global positioning satellites which you use to get directions when you're driving or walking wouldn't work if they did not compensate for the effects of special and general relativity upon the rate at which clocks tick in moving objects and those in gravitational fields.

But sometimes a revolutionary technology doesn't require a scientific breakthrough, nor a complicated manufacturing process to build, but just the realisation that people have been looking at a problem all wrong, or have been earnestly toiling away trying to solve some problem other than the one which people are ready to pay vast sums of money to have solved, once the solution is placed on the market.

The cargo shipping container may be, physically, one of the least impressive technological achievements of the 20th century, right up there with the inanimate carbon rod, as it required no special materials, fabrication technologies, or design tools which did not exist a century before. And yet its widespread adoption in the latter half of the 20th century was fundamental to the restructuring of the global economy which we now call “globalisation”, and it changed assumptions about the relationship between capital, natural resources, labour, and markets which had existed since the start of the industrial revolution.

Ever since the start of ocean commerce, ships handled cargo in much the same way. The cargo was brought to the dock (often after waiting for an extended period in a dockside warehouse for the ship to arrive), then stevedores (or longshoremen, or dockers) would load the cargo into nets, or onto pallets hoisted by nets into the hold of the ship, where other stevedores would unload it and stow the individual items, which might consist of items as varied as bags of coffee beans, boxes containing manufactured goods, barrels of wine or oil, and preserved food items such as salted fish or meat. These individual items were stored based upon the expertise of the gangs working the ship to make the most of the irregular space of the ship's cargo hold, and if the ship was to call upon multiple ports, in an order so cargo could be unloaded with minimal shifting of that bound for subsequent destinations on the voyage. Upon arrival at a port, this process was reversed to offload cargo bound there, and then the loading began again. It was not unusual for a cargo ship to spend 6 days or more in each port, unloading and loading, before the next leg on its voyage.

Shipping is both capital- and labour-intensive. The ship has to be financed and incurs largely fixed maintenance costs, and the crew must be paid regardless of whether they're at sea or waiting in port for cargo to be unloaded and loaded. This means that what engineers call the “duty cycle” of the ship is critical to its cost of operation and, consequently, what the shipowner must charge shippers to make a profit. A ship operating coastal routes in the U.S., say between New York and a port in the Gulf, could easily spend half its time in ports, running up costs but generating no revenue. This model of ocean transport, called break bulk cargo, prevailed from the age of sail until the 1970s.
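
A toy calculation makes the point about duty cycle; every number below is a hypothetical round figure chosen only for illustration, not data from the book.

    # Toy model: how days spent in port inflate the cost of delivering a ton of cargo.
    # All figures are hypothetical round numbers for illustration.
    daily_fixed_cost = 10_000        # financing, maintenance, and crew, $/day (hypothetical)
    days_at_sea = 6                  # sailing time for one leg (hypothetical)
    cargo_tons = 5_000               # tons carried per leg (hypothetical)

    def cost_per_ton(days_in_port):
        total_days = days_at_sea + days_in_port
        return daily_fixed_cost * total_days / cargo_tons

    print(cost_per_ton(12))          # break bulk: ~6 days loading + 6 unloading -> $36 per ton
    print(cost_per_ton(1))           # containerised: about a day in port        -> $14 per ton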

Under the break bulk model, ocean transport was very expensive. Further, with cargos sitting in warehouses waiting for ships to arrive on erratic schedules, delivery times were not just long but also unpredictable. Goods shipped from a factory in the U.S. midwest to a destination in Europe would routinely take three months to arrive end to end, with an uncertainty measured in weeks, accounting for trucking, railroads, and ocean shipping involved in getting them to their destination. This meant that any importation of time-sensitive goods required keeping a large local inventory to compensate for unpredictable delivery times, and paying the substantial shipping cost included in their price. Economists, going back to Ricardo, often modelled shipping as free, but it was nothing of the kind, and was often the dominant factor in the location and structure of firms.

When shipping is expensive, firms have an advantage in being located in proximity to both their raw materials (or component suppliers) and customers. Detroit became the Motor City in large part because its bulk inputs, iron ore and coal, could be transported at low cost from mines to factories by ships plying the Great Lakes. Industries dependent on imports and exports would tend to cluster around major ports, since otherwise the cost of transporting their inputs and outputs overland from the nearest port would be prohibitive. And many companies simply concentrated on their local market, where transportation costs were not a major consideration in their cost structure. In 1964, when break bulk shipping was the norm, 40% of exports from Britain originated within 25 miles of their port of export, and two thirds of all imports were delivered to destinations a similar distance from their port of arrival.

But all of this was based upon the cost structure of break bulk ocean cargo shipping, and a similarly archaic way of handling rail and truck cargo. A manufacturing plant in Iowa might pack its goods destined for a customer in Belgium into boxes which were loaded onto a truck, driven to a terminal in Chicago where they were unloaded and reloaded into a boxcar, then sent by train to New Jersey, where they were unloaded and put onto a small ship to take them to the port of New York, where after sitting in a warehouse they'd be put onto a ship bound for a port in Germany. After arrival, they'd be transported by train, then trucked to the destination. Three months or so later, plus or minus a few, the cargo would arrive—at least that which wasn't stolen en route.

These long delays, and the uncertainty in delivery times, required those engaging in international commerce to maintain large inventories, which further increased the cost of doing business overseas. Many firms opted for vertical integration in their own local region.

Malcom McLean started his trucking company in 1934 with one truck and one driver, himself. What he lacked in capital (he often struggled to pay bridge tolls when delivering to New York), he made up in ambition, and by 1945, his company operated 162 trucks. He was a relentless cost-cutter, and from his own experience waiting for hours on New York docks for his cargo to be unloaded onto ships, in 1953 asked why shippers couldn't simply put the entire truck trailer on a ship rather than unload its cargo into the ship's hold, then unload it piece by piece at the destination harbour and load it back onto another truck. War surplus Liberty ships were available for almost nothing, and they could carry cargo between the U.S. northeast and south at a fraction of the cost of trucks, especially in the era before expressways.

McLean immediately found himself in a tangled web of regulatory and union constraints. Shipping, trucking, and railroads were all considered completely different businesses, each of which had accreted its own, often bizarre, government regulation and union work rules. The rate a carrier could charge for hauling a ton of cargo from point A to point B depended not upon its mass or volume, but what it was, with radically different rates for say, coal as opposed to manufactured goods. McLean's genius was in seeing past all of this obstructionist clutter and realising that what the customer—the shipper—wanted was not to purchase trucking, railroad, and shipping services, but rather delivery of the shipment, however accomplished, at a specified time and cost.

The regulatory mess made it almost impossible for a trucking company to own ships, so McLean created a legal structure which would allow his company to acquire a shipping line which had fallen on hard times. He then proceeded to convert a ship to carry containers, which would not be opened from the time they were loaded on trucks at the shipper's location until they arrived at the destination, and could be transferred between trucks and ships rapidly. Working out the details of the construction of the containers, setting their size, and shepherding all of this through a regulatory gauntlet which had never heard of such concepts was daunting, but the potential payoff was enormous. Loading break bulk cargo onto a ship the size of McLean's first container vessel cost US$ 5.83 per ton. Loading freight in containers cost US$ 0.16 per ton. This reduction in cost, passed on to the shipper, made containerised freight compelling, and sparked a transformation in the global economy.

Consider Barbie. Her body is manufactured in China, using machines from Japan and Europe and moulds designed in the U.S. Her hair comes from Japan, the plastic for her body from Taiwan, dyed with U.S. pigments, and her clothes are produced in other factories in China. The final product is shipped worldwide. There are no large inventories anywhere in the supply chain: every step depends upon reliable delivery of containers of intermediate products. Managers setting up such a supply chain no longer care whether the products are transported by truck, rail, or sea, and since transportation costs for containers are so small compared to the value of their contents (and trade barriers such as customs duties have fallen), the location of suppliers and factories is based almost entirely upon cost, with proximity to resources and customers almost irrelevant. We think of the Internet as having abolished distance, but the humble ocean cargo container has done so for things as much as the Internet has for data.

This is a thoroughly researched and fascinating look at how the seemingly most humble technological innovation can have enormous consequences, and also how the greatest barriers to restructuring economies may be sclerotic government and government-enabled (union) structures which preserve obsolete models long after they have become destructive of prosperity. It also demonstrates how those who try to freeze innovation into a model fixed in the past will be bypassed by those willing to embrace a more efficient way of doing business. The container ports which handle most of the world's cargo are, for the most part, not the largest ports of the break bulk era. They are those which, unencumbered by history, were able to build the infrastructure required to shift containers at a rapid rate.

The Kindle edition has some flaws. In numerous places, spaces appear within words which don't belong there (perhaps words hyphenated across lines in the print edition and not re-joined?) and the index is just a list of searchable terms, not linked to references in the text.

October 2014 Permalink

Lewis, Bernard. What Went Wrong? New York: Perennial, 2002. ISBN 0-06-051605-4.
Bernard Lewis is the preeminent Western historian of Islam and the Middle East. In his long career, he has written more than twenty volumes (the list includes those currently in print) on the subject. In this book he discusses the causes of the centuries-long decline of Islamic civilisation from a once preeminent empire and culture to the present day. The hardcover edition was in press when the September 2001 terrorist attacks took place. So thoroughly does Lewis cover the subject matter that a three page Afterword added in October 2002 suffices to discuss their causes and consequences. This is an excellent place for anybody interested in the “clash of civilisations” to discover the historical context of Islam's confrontation with modernity. Lewis writes with a wit which is so dry you can easily miss it if you aren't looking. For example, “Even when the Ottoman Turks were advancing into southeastern Europe, they were always able to buy much needed equipment for their fleets and armies from Christian European suppliers, to recruit European experts, and even to obtain financial cover from Christian European banks. What is nowadays known as ‘constructive engagement’ has a long history.” (p. 13).

April 2005 Permalink

Lewis, Damien. The Ministry of Ungentlemanly Warfare. New York: Quercus, 2015. ISBN 978-1-68144-392-8.
After becoming prime minister in May 1940, one of Winston Churchill's first acts was to establish the Special Operations Executive (SOE), which was intended to conduct raids, sabotage, reconnaissance, and support resistance movements in Axis-occupied countries. The SOE was not part of the military: it was a branch of the Ministry of Economic Warfare and its very existence was a state secret, camouflaged under the name “Inter-Service Research Bureau”. Its charter was, as Churchill described it, to “set Europe ablaze”.

The SOE consisted, from its chief, Brigadier Colin McVean Gubbins, who went by the designation “M”, to its recruits, of people who did not fit well with the regimentation, hierarchy, and constraints of life in the conventional military branches. They could, in many cases, be easily mistaken for blackguards, desperadoes, and pirates, and that's precisely what they were in the eyes of the enemy—unconstrained by the rules of warfare, striking by stealth, and sowing chaos, mayhem, and terror among occupation troops who thought they were far from the front.

Leading some of the SOE's early exploits was Gustavus “Gus” March-Phillipps, founder of the British Army's Small Scale Raiding Force, and given the SOE designation “Agent W.01”, meaning the first agent assigned to the west Africa territory with the leading zero identifying him as “trained and licensed to use all means to liquidate the enemy”—a license to kill. The SOE's liaison with the British Navy, tasked with obtaining support for its operations and providing cover stories for them, was a fellow named Ian Fleming.

One of the SOE's first and most daring exploits was Operation Postmaster, with the goal of seizing German and Italian ships anchored in the port of Santa Isabel on the Spanish island colony of Fernando Po off the coast of west Africa. Given the green light by Churchill over the strenuous objections of the Foreign Office and Admiralty, who were concerned about the repercussions if British involvement in what amounted to an act of piracy in a neutral country were to be disclosed, the operation was mounted under the strictest secrecy and deniability, with a cover story prepared by Ian Fleming. Despite harrowing misadventures along the way, the plan was a brilliant success, capturing three ships and their crews and delivering them to the British-controlled port of Lagos without any casualties. Vindicated by the success, Churchill gave the SOE the green light to raid Nazi occupation forces on the Channel Islands and the coast of France.

On his first mission in Operation Postmaster was Anders Lassen, an aristocratic Dane who enlisted as a private in the British Commandos after his country was occupied by the Nazis. With his silver-blond hair, blue eyes, and accent easily mistaken for German, Lassen was apprehended by the Home Guard on several occasions while on training missions in Britain and held as a suspected German spy until his commanders intervened. Lassen was given a field commission, direct from private to second lieutenant, immediately after Operation Postmaster, and went on to become one of the most successful leaders of special operations raids in the war. As long as Nazis occupied his Danish homeland, he was possessed with a desire to kill as many Nazis as possible, wherever and however he could, and when in combat was animated by a berserker drive and ability to improvise that caused those who served with him to call him the “Danish Viking”.

This book provides a look into the operations of the SOE and its successor organisations, the Special Air Service and Special Boat Service, seen through the career of Anders Lassen. So numerous were special operations, conducted in many theatres around the world, that this kind of focus is necessary. Also, attrition in these high-risk raids, often far behind enemy lines, was so high there are few individuals one can follow throughout the war. As the war approached its conclusion, Lassen was the only surviving participant in Operation Postmaster, the SOE's first raid.

Lassen went on to lead raids against Nazi occupation troops in the Channel Islands, leading Churchill to remark, “There comes from the sea from time to time a hand of steel which plucks the German sentries from their posts with growing efficiency.” While these “butcher-and-bolt” raids could not liberate territory, they yielded prisoners, code books, and radio contact information valuable to military intelligence and, more importantly, forced the Germans to strengthen their garrisons in these previously thought secure posts, tying down forces which could otherwise be sent to active combat fronts. Churchill believed that the enemy should be attacked wherever possible, and SOE was a precision weapon which could be deployed where conventional military forces could not be used.

As the SOE was absorbed into the military Special Air Service, Lassen would go on to fight in North Africa, Crete, the Aegean islands, then occupied by Italian and German troops, and mainland Greece. His raid on a German airbase on occupied Crete took out fighters and bombers which could have opposed the Allied landings in Sicily. Later, his small group of raiders, unsupported by any other force, liberated the Greek city of Salonika, bluffing the German commander into believing Lassen's forty raiders and two fishing boats were actually a British corps of thirty thousand men, with armour, artillery, and naval support.

After years of raiding in peripheral theatres, Lassen hungered to get into the “big war”, and ended up in Italy, where his irregular form of warfare and disdain for military discipline created friction with his superiors. But he got results, and his unit was tasked with reconnaissance and pathfinding for an Allied crossing of Lake Comacchio (actually, more of a swamp) in Operation Roast in the final days of the war. It was there he was to meet his end, in a fierce engagement against Nazi troops defending the north shore. For this, he posthumously received the Victoria Cross, becoming the only non-Commonwealth citizen so honoured in World War II.

It is a cliché to say that a work of history “reads like a thriller”, but in this case it is completely accurate. The description of the raid on the Kastelli airbase on Crete would, if made into a movie, probably cause many viewers to suspect it to be fictionalised, but that's what really happened, based upon after action reports by multiple participants and aerial reconnaissance after the fact.

World War II was a global conflict, and while histories often focus on grand battles such as D-day, Stalingrad, Iwo Jima, and the fall of Berlin, there was heroism in obscure places such as the Greek islands which also contributed to the victory, and combatants operating in the shadows behind enemy lines who did their part and often paid the price for the risks they willingly undertook. This is a stirring story of this shadow war, told through the short life of one of its heroes.

February 2018 Permalink

Lewis, Michael. The Big Short. New York: W. W. Norton, 2010. ISBN 978-0-393-07223-5.
After concluding his brief career on Wall Street in the 1980s, the author wrote Liar's Poker, a memoir of a period of financial euphoria and insanity which he assumed would come crashing down shortly after his timely escape. Who could have imagined that the game would keep on going for two decades more, in the process raising the stakes from mere billions to trillions of dollars, extending its tendrils into financial institutions around the globe, and fuelling real estate and consumption bubbles in which individuals were motivated to lie to obtain money they couldn't pay back to lenders who were defrauded as to the risk they were taking?

Most descriptions of the financial crisis which erupted in 2007 and continues to play out at this writing gloss over the details, referring to “arcanely complex transactions that nobody could understand” or some such. But, in the hands of a master explainer like the author, what happened isn't at all difficult to comprehend. Irresponsible lenders (in some cases motivated by government policy) made mortgage loans to individuals which they could not afford, with an initial “teaser” rate of interest. The only way the borrower could avoid default when the interest rate “reset” to market rates was to refinance the property, paying off the original loan. But since housing prices were rising rapidly, and everybody knew that real estate prices never fall, by that time the house would have appreciated in value, giving the “homeowner” equity in the house which would justify a higher grade mortgage the borrower could afford to pay. Naturally, this flood of money into the housing market accelerated the bubble in housing prices, and encouraged lenders to create ever more innovative loans in the interest of “affordable housing for all”, including interest-only loans, those with variable payments where the borrower could actually increase the principal amount by underpaying, no-money-down loans, and “liar loans” which simply accepted the borrower's claims of income and net worth without verification.

But what financial institution would be crazy enough to undertake the risk of carrying these junk loans on its books? Well, that's where the genius of Wall Street comes in. The originators of these loans, immediately after collecting the loan fee, bundled them up into “mortgage-backed securities” and sold them to other investors. The idea was that by aggregating a large number of loans into a pool, the risk of default, estimated from historical rates of foreclosure, would be spread just as insurance spreads the risk of fire and other damages. Further, the mortgage-backed securities were divided into “tranches”: slices which bore the risk of default in serial order. If you assumed, say, a 5% rate of default on the loans making up the security, the top-level tranche would have little or no risk of default, and the rating agencies concurred, giving it the same AAA rating as U.S. Treasury Bonds. Buyers of the lower-rated tranches, all the way down to the lowest investment grade of BBB, were compensated for the risk they were assuming by higher interest rates on the bonds. In a typical deal, if 15% of the mortgages defaulted, the BBB tranche would be completely wiped out.
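
The tranche arithmetic is easy to see in a toy model; the pool size, the two-tranche structure, and the zero-recovery assumption below are hypothetical simplifications chosen only to match the rough proportions described above, not figures from the book.

    # Toy tranche "waterfall": losses from mortgage defaults are absorbed bottom-up.
    # Pool size, tranche boundaries, and zero recovery are illustrative assumptions.
    pool = 100_000_000                           # face value of the mortgage pool, $
    tranches = [("BBB", 0.15), ("AAA", 0.85)]    # bottom tranche listed first

    def tranche_losses(default_rate):
        loss = pool * default_rate               # assume defaulted loans recover nothing
        result = {}
        for name, thickness in tranches:         # allocate losses from the bottom up
            size = pool * thickness
            hit = min(loss, size)
            result[name] = hit / size            # fraction of the tranche destroyed
            loss -= hit
        return result

    print(tranche_losses(0.05))   # 5% defaults: BBB loses a third, AAA untouched
    print(tranche_losses(0.15))   # 15% defaults: BBB wiped out, AAA still untouched
    print(tranche_losses(0.30))   # 30% defaults: even the "riskless" AAA takes a hit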

Now, you may ask, who would be crazy enough to buy the BBB bottom-tier tranches? This indeed posed a problem to Wall Street bond salesmen (who are universally regarded as the sharpest-toothed sharks in the tank). So, they had the back-office “quants” invent a new kind of financial derivative, the “collateralised debt obligation” (CDO), which bundled up a whole bunch of these BBB tranche bonds into a pool, divided it into tranches, et voilà, the rating agencies would rate the lowest risk tranches of the pool of junk as triple A. How to get rid of the riskiest tranches of the CDO? Lather; rinse; repeat.
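
Applied recursively, the same trick is what made the CDO work on paper: pool the BBB slices and, if the rating model treats their defaults as roughly independent, most of the new pool comes out looking AAA. Here is a minimal Monte Carlo sketch of that assumption; the default probability, bond count, and attachment point are all hypothetical illustration values.

    # Why re-tranching BBB bonds looked safe on paper and wasn't.
    # All probabilities, bond counts, and attachment points are hypothetical.
    import random

    def senior_loss_probability(p_default, correlation, n_bonds=100,
                                senior_attach=0.30, trials=50_000):
        """Chance the 'AAA' senior tranche of a CDO of BBB bonds takes any loss.

        With probability `correlation` every bond shares one common outcome
        (a crude stand-in for a nationwide housing bust); otherwise each bond
        defaults independently with probability `p_default`. The senior tranche
        is hit once more than `senior_attach` of the bonds have defaulted.
        """
        hits = 0
        for _ in range(trials):
            if random.random() < correlation:
                defaults = n_bonds if random.random() < p_default else 0
            else:
                defaults = sum(random.random() < p_default for _ in range(n_bonds))
            if defaults / n_bonds > senior_attach:
                hits += 1
        return hits / trials

    print(senior_loss_probability(0.10, correlation=0.0))  # independent: ~0, looks AAA
    print(senior_loss_probability(0.10, correlation=1.0))  # correlated: ~0.10, BBB risk in AAA clothing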

Investors worried about the risk of default in these securities could insure against them by purchasing a “credit default swap”, which is simply an insurance contract which pays off if the bond it insures is not repaid in full at maturity. Insurance giant AIG sold tens of billions of these swaps, with premiums ranging from a fraction of a percent on the AAA tranches to on the order of two percent on BBB tranches. As long as the bonds did not default, these premiums were a pure revenue stream for AIG, which went right to the bottom line.
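
The asymmetry that made shorting these securities so attractive falls out of a one-line expected-value calculation. The 2% premium is the figure quoted above; the holding period and the probability of a total wipe-out are hypothetical assumptions standing in for the shorts' view of the bubble, not numbers from the book.

    # Expected value of buying default protection on a BBB tranche.
    # Premium is the ~2% quoted above; the other figures are hypothetical.
    notional = 10_000_000        # face value insured, $
    annual_premium = 0.02        # ~2% of notional per year for BBB protection
    years_paid = 3               # hypothetical years of premiums before the bonds fail
    p_wipeout = 0.5              # hypothetical probability the tranche goes to zero

    premiums_paid = notional * annual_premium * years_paid   # $600,000
    expected_payoff = notional * p_wipeout                   # $5,000,000
    print(expected_payoff / premiums_paid)                   # about 8x the premiums at risk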

As long as the housing bubble continued to inflate, this created an unlimited supply of AAA rated securities, rated as essentially without risk (historical rates of default on AAA bonds are about one in 100,000), ginned up on Wall Street from the flakiest and shakiest of mortgages. Naturally, this caused a huge flow of funds into the housing market, which kept the bubble expanding ever faster.

Until it popped.

Testifying before a hearing by the U.S. House of Representatives on October 22nd, 2008, Deven Sharma, president of Standard & Poor's, said, “Virtually no one—be they homeowners, financial institutions, rating agencies, regulators, or investors—anticipated what is occurring.” Notwithstanding the claim of culpable clueless clown Sharma, there were a small cadre of insightful investors who saw it all coming, had the audacity to take a position against the consensus of the entire financial establishment—in truth a bet against the Western world's financial system, and the courage to hang in there, against gnawing self-doubt (“Can I really be right and everybody else wrong?”) and skittish investors, to finally cash out on the trade of the century. This book is their story. Now, lots of people knew well in advance that the derivatives-fuelled housing bubble was not going to end well: I have been making jokes about “highly-leveraged financial derivatives” since at least 1996. But it's one thing to see an inevitable train wreck coming and entirely another to figure out approximately when it's going to happen, discover (or invent) the financial instruments with which to speculate upon it, put your own capital and reputation on the line making the bet, persist in the face of an overwhelming consensus that you're not only wrong but crazy, and finally cash out in a chaotic environment where there's a risk your bets won't be paid off due to bankruptcy on the other side (counterparty risk) or government intervention.

As the insightful investors profiled here dug into the details of the fairy castle of mortgage-backed securities, they discovered that it wouldn't even take a decline in housing prices to cause defaults sufficient to wipe out the AAA rated derivatives: a mere stagnation in real estate prices would suffice to render them worthless. And yet even after prices in the markets most affected by the bubble had already levelled off, the rating agencies continued to deem the securities based on their mortgages riskless, and insurance against their default could be bought at nominal cost. And those who bought it made vast fortunes as every other market around the world plummeted.

People who make bets like that tend to be way out on the tail of the human bell curve, and their stories, recounted here, are correspondingly fascinating. This book reads like one of Paul Erdman's financial thrillers, with the difference that the events described are simultaneously much less probable and absolutely factual. If this were a novel and not reportage, I doubt many readers would find the characters plausible.

There are many lessons to be learnt here. The first is that the human animal, and therefore the financial markets in which they interact, frequently mis-estimates and incorrectly prices the risk of outcomes with low probability: Black Swan (January 2009) events, and that investors who foresee them and can structure highly leveraged, long-term bets on them can do very well indeed. Second, Wall Street is just as predatory and ruthless as you've heard it to be: Goldman Sachs was simultaneously peddling mortgage-backed securities to its customers while its own proprietary traders were betting on them becoming worthless, and this is just one of a multitude of examples. Third, never assume that “experts”, however intelligent, highly credentialed, or richly compensated, actually have any idea what they're doing: the rating agencies grading these swampgas securities AAA had never even looked at the bonds from which they were composed, no less estimated the probability that an entire collection of mortgages made at the same time, to borrowers in similar circumstances, in the same bubble markets might all default at the same time.

We're still in the early phases of the Great Deleveraging, in which towers of debt which cannot possibly be repaid are liquidated through default, restructuring, and/or inflation of the currencies in which they are denominated. This book is a masterful and exquisitely entertaining exposition of the first chapter of this drama, and reading it is an excellent preparation for those wishing to ride out, and perhaps even profit from the ongoing tragedy. I have just two words to say to you: sovereign debt.

July 2010 Permalink

Lewis, Michael. Flash Boys. New York: W. W. Norton, 2014. ISBN 978-0-393-24466-3.
Back in the bad old days before regulation of financial markets, one of the most common scams perpetrated by stockbrokers against their customers was “front running”. When a customer placed an order to buy a large block of stock, which order would be sufficient to move the market price of the stock higher, the broker would first place a smaller order to buy the same stock for its own account which would be filled without moving the market very much. Then the customer order would be placed, resulting in the market moving higher. The broker would then immediately sell the stock it had bought at the higher market price and pocket the difference. The profit on each individual transaction would be small, but if you add this up over all the volume of a broker's trades it is substantial. (For a sell order, the broker simply inverts the sense of the transactions.) Front running amounts to picking the customer's pocket to line that of the broker: if the customer's order were placed directly, it would execute at a better price had it not been front run. Consequently, front running has long been illegal and market regulators look closely at transaction histories to detect evidence of such criminality.

In the first decade of the 21st century, traders in the U.S. stock market discovered the market was behaving in a distinctly odd fashion. They had been used to seeing the bids (offers to buy) and asks (offers to sell) on their terminals and were accustomed to placing an order and seeing it hit by the offers in the market. But now, when they placed an order, the offers on the other side of the trade would instantly evaporate, only to come back at a price adverse to them. Many people running hundreds of billions of dollars in hedge, mutual, and pension funds had no idea what was going on, but they were certain the markets were rigged against them. Brad Katsuyama, working at the Royal Bank of Canada's Wall Street office, decided to get to the bottom of the mystery, and eventually discovered the financial equivalent of what you see when you lift up a sheet of wet cardboard in your yard. Due to regulations intended to make financial markets more efficient and fair, the monolithic stock exchanges in the U.S. had fractured into dozens of computer-mediated exchanges which traded the same securities. A broker seeking to buy stock on behalf of a customer could route the order to any of these exchanges based upon its own proprietary algorithm, or might match the order with that of another customer within its own “dark pool”, whence the transaction was completely opaque to the outside market.

But there were other players involved. Often co-located in or near the buildings housing the exchanges (most of which are in New Jersey, which has such a sterling reputation for probity) were the servers of “high frequency traders” (HFTs), who placed and cancelled orders in times measured in microseconds. What the HFTs were doing was, in a nutshell, front running. Here's how it works: the HFT places orders of a minimum size (typically 100 shares) for a large number of frequently traded stocks on numerous exchanges. When one of these orders is hit, the HFT immediately blasts in orders to other exchanges, which have not yet reacted to the buy order, and acquires sufficient shares to fill the original order before the price moves higher. This will, in turn, move the market higher and once it does, the original buy order is filled at the higher price. The HFT pockets the difference. A millisecond in advance can, and does, turn into billions of dollars of profit looted from investors. And all of this is not only completely legal, many of the exchanges bend over backward to attract and support HFTs in return for the fees they pay, creating bizarre kinds of orders whose only purpose for existing is to facilitate HFT strategies.
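
The pattern is simple enough to sketch in a few lines of Python; the exchange names, prices, and two-cent repricing below are hypothetical, chosen only to illustrate the sequence of events described above.

    # Minimal sketch of the latency-arbitrage pattern described above.
    # Exchange names, prices, and the two-cent markup are hypothetical.
    # Prices are in cents to keep the arithmetic exact.

    offers = {"EX_A": 1000, "EX_B": 1000, "EX_C": 1000, "EX_D": 1000}   # 100 shares at $10.00 on each venue

    # An investor's 400-share buy order reaches EX_A first and fills 100 shares there.
    investor_fills = [("EX_A", 100, offers["EX_A"])]

    # The HFT's small resting order on EX_A is hit at the same instant; its servers,
    # co-located at the other exchanges, react before the rest of the order arrives:
    hft_cost = 0
    for ex in ("EX_B", "EX_C", "EX_D"):
        hft_cost += 100 * offers[ex]      # HFT buys the $10.00 offers itself...
        offers[ex] = 1002                 # ...and reposts the shares at $10.02.

    # The remainder of the investor's order arrives and pays the higher price.
    for ex in ("EX_B", "EX_C", "EX_D"):
        investor_fills.append((ex, 100, offers[ex]))

    hft_revenue = sum(100 * offers[ex] for ex in ("EX_B", "EX_C", "EX_D"))
    print(investor_fills)                          # 100 @ 1000 cents, then 300 @ 1002 cents
    print((hft_revenue - hft_cost) / 100)          # $6.00 skimmed from one 400-share order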

As Brad investigated the secretive world of HFTs, he discovered the curious subculture of Russian programmers who, having spent part of their lives learning how to game the Soviet system, took naturally to discovering how to game the much more lucrative world of Wall Street. Finally, he decided there was a business opportunity in creating an exchange which would distinguish itself from the others by not being crooked. This exchange, IEX (it was originally to be called “Investors Exchange”, but the founders realised that the obvious Internet domain name, investorsexchange.com, could be infelicitously parsed into three words as well as two), would include technological constraints (including 38 miles of fibre optic cable in a box to create latency between the point of presence where traders could attach and the servers which matched bids and asks) which rendered the strategies of the HFTs impotent and obsolete.
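
The 38-mile coil is nothing more exotic than propagation delay; assuming the usual rule of thumb that light in optical fibre travels at about two thirds of its speed in vacuum, the arithmetic is a one-liner.

    # Approximate delay introduced by IEX's 38 miles of coiled fibre.
    # Assumes the rule-of-thumb propagation speed of ~2/3 c in glass.
    c = 299_792_458                     # speed of light in vacuum, m/s
    fibre_speed = c * 2 / 3             # approximate speed in optical fibre, m/s
    coil_length_m = 38 * 1609.344       # 38 miles in metres

    delay_us = coil_length_m / fibre_speed * 1e6
    print(f"{delay_us:.0f} microseconds each way")   # ~300 microseconds: an eternity for an HFT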

Was it conceivable one could be successful on Wall Street by being honest? Perhaps one had to be a Canadian to entertain such a notion, but in the event, it was. But it wasn't easy. IEX rapidly discovered that Wall Street firms, given orders by customers to be executed on IEX, sent them elsewhere to venues more profitable to the broker. Confidentiality rules prohibited IEX from identifying the miscreants, but nothing prevented them, with the brokers' permission, from identifying those who weren't crooked. This worked quite well.

I'm usually pretty difficult to shock when it comes to the underside of the financial system. For decades, my working assumption is that anything, until proven otherwise, is a scam aimed at picking the pockets of customers, and sadly I have found this presumption correct in a large majority of cases. Still, this book was startling. It's amazing the creepy crawlers you see when you lift up that piece of cardboard, and to anybody with an engineering background the rickety structure and fantastic instability of what are supposed to be the capital markets of the world's leading economy is nothing less than shocking. It is no wonder such a system is prone to “flash crashes” and other excursions. An operating system designer who built such a system would be considered guilty of malfeasance (unless, I suppose, he worked for Microsoft, in which case he'd be a candidate for employee of the year), and yet it is tolerated at the heart of a financial system which, if it collapses, can bring down the world's economy.

Now, one can argue that it isn't such a big thing if somebody shaves a penny or two off the price of a stock you buy or sell. If you're a medium- or long-term investor, that'll make little difference in the results. But what will make your blood boil is that the stock broker with whom you're doing business may be complicit in this, and pocketing part of the take. Many people in the real world look at Wall Street and conclude “The markets are rigged; the banks and brokers are crooked; and the system is stacked against the investor.” As this book demonstrates, they are, for the most part, absolutely right.

May 2014 Permalink

Lowe, Keith. Savage Continent. New York: Picador, [2012] 2013. ISBN 978-1-250-03356-7.
On May 8th, 1945, World War II in Europe formally ended when the Allies accepted the unconditional surrender of Germany. In popular myth, especially among those too young to have lived through the war and its aftermath, the defeat of Italy and Germany ushered in, at least in Western Europe not occupied by Soviet troops, a period of rebuilding and rapid economic growth, spurred by the Marshall Plan. The French refer to the three decades from 1945 to 1975 as Les Trente Glorieuses. But that isn't what actually happened, as this book documents in detail. Few books cover the immediate aftermath of the war, or concentrate exclusively upon that chaotic period. The author has gone to great lengths to explore little-known conflicts and sort out conflicting accounts of what happened still disputed today by descendants of those involved.

The devastation wreaked upon cities where the conflict raged was extreme. In Germany, Berlin, Hanover, Duisburg, Dortmund, and Cologne lost more than half their habitable buildings, with the figure rising to 70% in the latter city. From Stalingrad to Warsaw to Caen in France, destruction was general with survivors living in the rubble. The transportation infrastructure was almost completely obliterated, along with services such as water, gas, electricity, and sanitation. The industrial plant was wiped out, and along with it the hope of employment. This was the state of affairs in May 1945, and the Marshall Plan did not begin to deliver assistance to Western Europe until three years later, in April 1948. Those three years were grim, and compounded by score-settling, revenge, political instability, and multitudes of displaced people returning to areas with no infrastructure to support them.

And this was in Western Europe. As is the case with just about everything regarding World War II in Europe, the further east you go, the worse things get. In the Soviet Union, 70,000 villages were destroyed, along with 32,000 factories. The redrawing of borders, particularly those of Poland and Germany, set the stage for a paroxysm of ethnic cleansing and mass migration as Poles were expelled from territory now incorporated into the Soviet Union and Germans from the western part of Poland. Reprisals against those accused of collaboration with the enemy were widespread, with murder not uncommon. Thirst for revenge extended to the innocent, including children fathered by soldiers of occupying armies.

The end of the War did not mean an end to the wars. As the author writes, “The Second World War was therefore not only a traditional conflict for territory: it was simultaneously a war of race, and a war of ideology, and was interlaced with half a dozen civil wars fought for purely local reasons.” Defeat of Germany did nothing to bring these other conflicts to an end. Guerrilla wars continued in the Baltic states annexed by the Soviet Union as partisans resisted the invader. An all-out civil war between communists and anti-communists erupted in Greece and was ended only through British and American aid to the anti-communists. Communist agitation escalated to violence in Italy and France. And country after country in Eastern Europe came under Soviet domination as puppet regimes were installed through coups, subversion, or rigged elections.

When reading a detailed history of a period most historians ignore, one finds oneself exclaiming over and over, “I didn't know that!”, and that is certainly the case here. This was a dark period, and no group seemed immune from regrettable acts, including Jews liberated from Nazi death camps and slave labourers freed as the Allies advanced: both sometimes took their revenge upon German civilians. As the author demonstrates, the aftermath of this period still simmers beneath the surface among the people involved—it has become part of the identity of ethnic groups which will outlive any person who actually remembers the events of the immediate postwar period.

In addition to providing an enlightening look at this neglected period, the book shows that the events in the years following 1945 have much to teach us about those playing out today around the globe. We are seeing long-simmering ethnic and religious strife boil into open conflict as soon as the system is perturbed enough to knock the lid off the kettle. Borders drawn by politicians mean little when people's identity is defined by ancestry or faith, and memories are very long, measured sometimes in centuries. Even after a cataclysmic conflict which levels cities and reduces populations to near-medieval levels of subsistence, many people do not long for peace but instead seek revenge. Economic growth and prosperity can, indeed, change the attitude of societies and allow for alliances among former enemies (imagine how odd the phrase “Paris-Berlin axis”, heard today in discussions of the European Union, would have sounded in 1946), but the results of a protracted conflict can prevent the emergence of the very prosperity which might allow consigning it to the past.

August 2014 Permalink

Lukacs, John. Five Days in London. New Haven, CT: Yale University Press, 1999. ISBN 0-300-08466-8.
Winston Churchill titled the fourth volume of his memoirs of The Second World War, describing the events of 1942, The Hinge of Fate. Certainly, in the military sense, it was in that year that the tide turned in favour of the allies—the entry of the United States into the war and the Japanese defeat in the Battle of Midway, Germany's failure at Stalingrad and the beginning of the disastrous consequences for the German army, and British defeat of Rommel's army at El Alamein together marked what Churchill described as, “…not the end, nor is it even the beginning of the end, but, it is perhaps, the end of the beginning.”

But in this book, distinguished historian John Lukacs argues that the true “hinge of fate” not only of World War II, but for Western civilisation against Nazi tyranny, occurred in the five days of 24–28 May of 1940, not on the battlefields in France, but in London, around conference tables, in lunch and dinner meetings, and walks in the garden. This was a period of unmitigated, accelerating disaster for the French army and the British Expeditionary Force in France: the channel ports of Boulogne and Calais fell to the Germans, the King of Belgium capitulated to the Nazis, and more than three hundred thousand British and French troops were surrounded at Dunkirk, the last channel port still in Allied hands. Despite plans for an evacuation, as late as May 28, Churchill estimated that at most about 50,000 could be evacuated, with all the rest taken prisoner and all the military equipment lost. In his statement in the House of Commons that day, he said, “Meanwhile, the House should prepare itself for hard and heavy tidings.” It was only in the subsequent days that the near-miraculous evacuation was accomplished, with a total of 338,226 soldiers rescued by June 3rd.

And yet it was in these darkest of days that Churchill vowed that Britain would fight on, alone if necessary (which seemed increasingly probable), to the very end, whatever the cost or consequences. On May 31st, he told French premier Paul Reynaud, “It would be better far that the civilisation of Western Europe with all of its achievements should come to a tragic but splendid end than that the two great democracies should linger on, stripped of all that made life worth living.” (p. 217).

From Churchill's memoirs and those of other senior British officials, contemporary newspapers, and most historical accounts of the period, one gains the impression of a Britain unified in grim resolve behind Churchill to fight on until ultimate victory or annihilation. But what actually happened in those crucial War Cabinet meetings as the disaster in France was unfolding? Oddly, the memoirs and collected papers of the participants are nearly silent on the period, with the author describing the latter as having been “weeded” after the fact. It was not until the minutes of the crucial cabinet meetings were declassified in 1970 (thanks to a decision by the British government to reduce the “closed period” of such records from fifty to thirty years), that it became possible to reconstruct what transpired there. This book recounts a dramatic and fateful struggle of which the public and earlier historians of the period were completely unaware—a moment when Hitler may have come closer to winning the war than at any other.

The War Cabinet was, in fact, deeply divided. Churchill, who had only been Prime Minister for two weeks, was in a precarious position, with his predecessor Neville Chamberlain and the Foreign Secretary Lord Halifax, whom King George VI had preferred to Churchill for Prime Minister, as members, along with Labour leaders Clement Attlee and Arthur Greenwood. Halifax did not believe that Britain could resist alone, and thought that fighting on would surely result in the loss of the Empire and perhaps independence and liberty in Britain as well. He argued vehemently for an approach, either by Britain and France together or Britain alone, to Mussolini, with the goal of keeping Italy out of the war and making some kind of deal with Hitler which would preserve independence and the Empire, and he met on several occasions with the Italian ambassador in London to explore such possibilities.

Churchill opposed any effort to seek mediation, either by Mussolini or Roosevelt, both because he thought the chances of obtaining acceptable terms from Hitler were “a thousand to one against” (May 28, p. 183) and because any approach would put Britain on a “slippery slope” (Churchill's words in the same meeting) from which it would be impossible to restore the resolution to fight rather than make catastrophic concessions. But this was a pragmatic decision, not a Churchillian declaration of “never, never, never, never”. In the May 26 War Cabinet meeting (p. 113), Churchill made the rather astonishing statement that he “would be thankful to get out of our present difficulties on such terms, provided we retained the essentials and the elements of our vital strength, even at the cost of some territory”. One can understand why the personal papers of the principals were so carefully weeded.

Speaking of another conflict where the destiny of Europe hung in the balance, the Duke of Wellington said of Waterloo that it was “the nearest run thing you ever saw in your life”. This account makes it clear that this moment in history was much the same. It is, of course, impossible to forecast what the consequences would have been had Halifax prevailed and Britain approached Mussolini to broker a deal with Hitler. The author argues forcefully that nothing less than the fate of Western civilisation was at stake. With so many “what ifs”, one can never know. (For example, it appears that Mussolini had already decided by this date to enter the war and he might have simply rejected a British approach.) But in any case this fascinating, thoroughly documented, and lucidly written account of a little-known but crucial moment in history makes for compelling reading.

February 2007 Permalink

Macdonald, Lyn. 1915: The Death of Innocence. London: Penguin Books, [1993] 1997. ISBN 0-14-025900-7.
I'm increasingly coming to believe that World War I was the defining event of the twentieth century: not only a cataclysm which destroyed the confident assumptions of the past, but one which set history inexorably on a path which would lead to even greater tragedies and horrors as that century ran its course. This book provides an excellent snapshot of what the British people, both at the front and back home, were thinking during the first full year of the war, as casualties mounted and hope faded for the quick victory almost all expected at the outset.

The book does not purport to be a comprehensive history of the war, nor even of the single year it chronicles. It covers only the British Army: the Royal Navy is mentioned only in conjunction with troop transport and landings, and the Royal Flying Corps scarcely at all. The forces of other countries, allied or enemy, are mentioned only in conjunction with their interaction with the British, and no attempt is made to describe the war from their perspective. Finally, the focus is almost entirely on the men in the trenches and their commanders in the field: there is little focus on the doings of politicians and the top military brass, nor on grand strategy, although there was little of that in evidence in the events of 1915 in any case.

Within its limited scope, however, the book succeeds superbly. About a third of the text is extended quotations from people who fought at the front, many from contemporary letters home. Not only do you get an excellent insight into how horrific conditions were in the field, but also how stoically those men accepted them, hardly ever questioning the rationale for the war or the judgement of those who commanded them. And this in the face of a human cost which is nearly impossible to grasp by the standards of present-day warfare. Between the western front and the disastrous campaign in Gallipoli, the British suffered more than half a million casualties (killed, wounded, and missing) (p. 597). In “quiet periods” when neither side was mounting attacks, simply manning their own trenches, British casualties averaged five thousand a week (p. 579), mostly from shelling and sniper fire.

And all of the British troops who endured these appalling conditions were volunteers—conscription did not begin in Britain until 1916. With the Regular Army having been largely wiped out in the battles of 1914, the trenches were increasingly filled with Territorial troops who volunteered for service in France; units from around the Empire: India, Canada, Australia, and New Zealand; and, as the year progressed, Kitchener's “New Army” of volunteer recruits, rushed through training and thrown headlong into the killing machine. The mindset that motivated these volunteers and the conclusions drawn from their sacrifice set the stage for the even greater subsequent horrors of the twentieth century.

Why? Because they accepted as given that their lives were, in essence, the property of the state which governed the territory in which they happened to live, and that the rulers of that state, solely on the authority of having been elected by a small majority of the voters in an era when suffrage was far from universal, had every right to order them to kill or be killed by subjects of other states with which they had no personal quarrel. (The latter point was starkly illustrated when, at Christmas 1914, British and German troops declared an impromptu cease-fire, fraternised, and played football matches in no man's land before, the holiday behind them, returning to the trenches to resume killing one another for King and Kaiser.) This was a widely shared notion, but the first year of the Great War demonstrated that the populations of the countries on both sides really believed it, and would charge to almost certain death even after being told by Lord Kitchener himself on the parade ground, “that our attack was in the nature of a sacrifice to help the main offensive which was to be launched ‘elsewhere’” (p. 493). That individuals would accept their rôle as property of the state was a lesson which the all-encompassing states of the twentieth century, both tyrannical and more or less democratic, would take to heart, and would manifest itself not only in conscription and total war, but also in expropriation, confiscatory taxation, and arbitrary regulation of every aspect of subjects' lives. Once you accept that the state is within its rights to order you to charge massed machine guns with a rifle and bayonet, you're unlikely to quibble over lesser matters.

Further, the mobilisation of the economy under government direction for total war was taken as evidence that central planning of an industrial economy was not only feasible but more efficient than the market. Unfortunately, few observed that there is a big difference between consuming capital to build the means of destruction over a limited period of time and creating new wealth and products in a productive economy. And finally, governments learnt that control of mass media could mould the beliefs of their subjects as the rulers wished: the comical Fritz with which British troops fraternised at Christmas 1914 had become the detested Boche whose trenches they shelled continuously on Christmas Day a year later (p. 588).

It is these disastrous “lessons” drawn from the tragedy of World War I which, I suspect, charted the tragic course of the balance of the twentieth century and the early years of the twenty-first. Even a year before the outbreak of World War I, almost nobody imagined such a thing was possible, or that it would have the consequences it did. One wonders what will be the equivalent defining event of the twenty-first century, when it will happen, and in what direction it will set the course of history.

A U.S. edition is also available.

November 2006 Permalink

Macintyre, Ben. Agent Zigzag. New York: Three Rivers Press, 2007. ISBN 978-0-307-35341-2.
I'm not sure I'd agree with the cover blurb by the Boston Globe reviewer who deemed this “The best book ever written”, but it's a heck of a great read and will keep you enthralled from start to finish. Imagine the best wartime espionage novel you've ever read, stir in exploits from a criminal caper yarn, leaven with an assortment of delightfully eccentric characters, and then make the whole thing totally factual, exhaustively documented from archives declassified decades later by MI5, and you have this compelling story.

The protagonist, Eddie Chapman, was, over his long and convoluted career, a British soldier; deserter; safecracker; elite criminal; prisoner of His Majesty, the government of the Isle of Jersey, and the Nazi occupation in Paris; volunteer spy and saboteur for the German Abwehr; parachute spy in Britain; double agent for MI5; instructor at a school for German spies in Norway; spy once again in Britain, deceiving the Germans about V-1 impact locations; participant in fixed dog track races; serial womaniser married to the same woman for fifty years; and for a while an “honorary crime correspondent” to the Sunday Telegraph. That's a lot to fit into even a life as long as Chapman's, and a decade after his death, those who remember him still aren't sure where his ultimate allegiance lay or even if the concept applied to him. If you simply look at him as an utterly amoral person who managed to always come up standing, even after intensive interrogations by MI5, the Abwehr, Gestapo, and SS, you miss his engaging charm, whether genuine or feigned, which engendered deeply-felt and long-lasting affection among his associates, both British and Nazi, criminal and police, all of whom describe him as a unique character.

Information on Chapman's exploits has been leaking out ever since he started publishing autobiographical information in 1953. Dodging the Official Secrets Act, in 1966 he published a more detailed account of his adventures, which was made into a very bad movie starring Christopher Plummer as Eddie Chapman. Since much of this information came from Chapman, it's not surprising that a substantial part of it was bogus. It is only with the release of the MI5 records, and through interviews with surviving participants in Chapman's exploits that the author was able to piece together an account which, while leaving many questions of motivation uncertain, at least pins down the facts and chronology.

This is a thoroughly delightful story of a totally ambiguous character: awarded the Iron Cross for his services to the Nazi Reich, having mistresses simultaneously supported in Britain and Norway by MI5 and the Abwehr, covertly pardoned for his high-profile criminal record for his service to the Crown, and unreconstructed rogue in his long life after the war. If published as spy fiction, this would be considered implausible in the extreme; the fact that it really happened makes this one of the most remarkable wartime stories I've read and an encounter with a character few novelists could invent.

November 2008 Permalink

Mallan, Lloyd. Russia and the Big Red Lie. Greenwich, CT: Fawcett, 1959. LCCN 59004006.
It is difficult for those who did not live through the era to appreciate the extent to which Sputnik shook the self-confidence of the West and defenders of the open society and free markets around the world. If the West's social and economic systems were genuinely superior to totalitarian rule and central planning, then how had the latter, starting from a base only a half century before where illiterate peasants were bound to the land as serfs, and in little more than a decade after their country was devastated in World War II, managed to pull off a technological achievement which had so far eluded the West and was evidence of a mastery of rocketry which could put the United States heartland at risk? Suddenly the fellow travellers and useful idiots in the West were energised: “Now witness the power of this fully armed and operational socialist economy!”

The author, a prolific writer on aerospace and technology, was as impressed as anybody else by the stunning Soviet accomplishment, and undertook the daunting task of arranging a visit to the Soviet Union to see for himself the prowess of Soviet science and technology. After a halting start, he secured a visa and introductions from prominent U.S. scientists to their Soviet counterparts, and journeyed to the Soviet Union in April of 1958, travelled extensively in the country, visiting, among other destinations, Moscow, Leningrad, Odessa, Yalta, Krasnodar, Rostov-on-Don, Yerevan, Kharkov, and Alma-Ata, leaving Soviet soil in June 1958. He had extensive, on the record, meetings with a long list of eminent Soviet scientists and engineers, many members of the Soviet Academy of Sciences. And he came back with a conclusion utterly opposed to that of the consensus in the West: Soviet technological prowess was about 1% military-style brute force and 99% bluff and hoax.

As one intimately acquainted with Western technology, what he saw in the Soviet Union was mostly comparable to the state of the art in the West a decade earlier, and in many cases obviously copied from Western equipment. The scientists he interviewed, who had been quoted in the Soviet press as forecasting stunning achievements in the near future, often, when interviewed in person, said “that's all just theory—nobody is actually working on that”. The much-vaunted Soviet jet and turboprop airliners he'd heard of were nowhere in evidence anywhere he travelled, and evidence suggested that Soviet commercial aviation lacked navigation and instrument landing systems which were commonplace in the West.

Faced with evidence that Soviet technological accomplishments were simply another front in a propaganda offensive aimed at persuading the world of the superiority of communism, the author dug deeper into the specifics of Soviet claims, and here (from the perspective of half a century on) he got some things right and goofed on others. He goes to great lengths to argue that the Luna 1 Moon probe was a total hoax, based both on Soviet technological capability and the evidence of repeated failure by Western listening posts to detect its radio signals. Current thinking is that Luna 1 was a genuine mission intended to impact the Moon, and that the Soviet claim it was deliberately launched into solar orbit as an “artificial planet” was propaganda aimed at covering up its missing the Moon due to a guidance failure. (This became obvious to all when the near-identical Luna 2 impacted the Moon eight months later.) The fact that the Soviets possessed the technology to conduct lunar missions was demonstrated when Luna 3 flew around the Moon in October 1959 and returned the first crude images of its far side (other Luna 3 images). Although Mallan later claimed these images were faked and contained brush strokes, we now know they were genuine, since they are strikingly similar to subsequent imagery, including the albedo map from the Clementine lunar orbiter. “Vas you dere, Ivan?” Well, actually, yes. Luna 3 was the “boomerang” mission around the Moon which Mallan had heard of before visiting the Soviet Union but was told was just a theory when he was there. And yet, had the Soviets had the ability to communicate with Luna 1 at the distance of the Moon, there would have been no reason to make Luna 3 loop around the Moon in order to transmit its pictures from closer to the Earth—enigmas, enigmas, enigmas.

In other matters, the author is dead on, where distinguished Western “experts” and “analysts” were completely taken in by the propaganda. He correctly identifies the Soviet “ICBM” from the 1957 Red Square parade as an intermediate range missile closer to the German V-2 than an intercontinental weapon. (The Soviet ICBM, the R-7, was indeed tested in 1957, but it was an entirely different design and could never have been paraded on a mobile launcher; it did not enter operational service until 1959.) He is also almost precisely on the money when he estimates the Soviet “ICBM arsenal” as on the order of half a dozen missiles, while the CIA was talking about hundreds of Soviet missiles aimed at the West and demagogues were ratcheting up rhetoric about a “missile gap”.

You don't read this for factual revelations: everything discussed here is now known much better, and there are many conclusions drawn in this text from murky contemporary evidence which have proven incorrect. But if you wish to immerse yourself in the Cold War and imagine yourself trying to figure it all out from the sketchy and distorted information coming from the adversary, it is very enlightening. One wishes more people had listened to Mallan—how much folly we might have avoided.

There is also wisdom in what he got wrong. Space spectaculars can be accomplished in a military manner by expending vast resources coercively taken from the productive sector on centrally-planned projects with narrow goals. Consequently, it isn't surprising a command economy such as that of the Soviet Union managed to achieve milestones in space (while failing to deliver adequate supplies of soap and toilet paper to workers toiling in their “paradise”). Indeed, in many ways, the U.S. Apollo program was even more centrally planned than its Soviet counterpart, and the pernicious example it set has damaged efforts to sustainably develop and exploit space ever since.

This “Fawcett Book” is basically an issue of Mechanix Illustrated containing a single long article. It even includes the usual delightful advertisements. This work is, of course, hopelessly out of print. Used copies are available, but often at absurdly elevated prices for what amounts to a pulp magazine. Is this work in the public domain and hence eligible to be posted on the Web? I don't know. It may well be: it was published before 1978, and unless its copyright was renewed in 1987 when its original 28 year term expired, it is public domain. Otherwise, as a publication by a “corporate author”, it will remain in copyright until 2079, which makes a mockery of the “limited Times to Authors” provision of the U.S. Constitution. If somebody can confirm this work is in the public domain, I'll scan it and make it available on the Web.

March 2012 Permalink

Manto, Cindy Donze. Michoud Assembly Facility. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-5316-6969-0.
In March, 1763, King Louis XV of France made a land grant of 140 square kilometres to Gilbert Antoine St Maxent, the richest man in Louisiana Territory and commander of the militia. The grant required St Maxent to build a road across the swampy property, develop a plantation, and reserve all the trees in forested areas for the use of the French navy. When the Spanish took over the territory five years later, St Maxent changed his first names to “Gilberto Antonio” and retained title to the sprawling estate. In the decades that followed, the property changed hands and nations several times, eventually, now part of the United States, being purchased by another French immigrant, Antoine Michoud, who had left France after the fall of Napoleon, whom his father had served as an official.

Michoud rapidly established himself as a prosperous businessman in bustling New Orleans, and after purchasing the large tract of land set about buying pieces which had been sold off by previous owners, re-assembling most of the original French land grant into one of the largest private land holdings in the United States. The property was mostly used as a sugar plantation, although territory and rights were ceded over the years for construction of a lighthouse, railroads, and telegraph and telephone lines. Much of the land remained undeveloped, and like other parts of southern Louisiana was a swamp or, as they now say, “wetlands”.

The land remained in the Michoud family until 1910, when it was sold in its entirety for US$410,000 in cash (around US$11 million today) to a developer who promptly defaulted, leading to another series of changes of ownership and dodgy plans for the land, which most people continued to refer to as the Michoud Tract. At the start of World War II, the U.S. government bought a large parcel, initially intended for construction of Liberty ships. Those plans quickly fell through, but eventually a huge plant was erected on the site which, starting in 1943, began to manufacture components for cargo aircraft and lifeboats, as well as components used in the Manhattan Project's isotope separation plants in Oak Ridge, Tennessee.

At the end of the war, the plant was declared surplus but, a few years later, with the outbreak of the Korean War, it was re-purposed to manufacture engines for Army tanks. It continued in that role until 1954 when it was placed on standby and, in 1958, once again declared surplus. There things stood until mid-1961 when NASA, charged by the new Kennedy administration to “put a man on the Moon”, was faced with the need to build rockets in sizes and quantities never before imagined, and to do so on a tight schedule, racing against the Soviet Union.

In June, 1961, Wernher von Braun, director of the NASA Marshall Space Flight Center in Huntsville, Alabama, responsible for designing and building those giant boosters, visited the then-idle Michoud Ordnance Plant and declared it ideal for NASA's requirements. It had 43 acres (17 hectares) under one roof, the air conditioning required for precision work in the Louisiana climate, and was ready to occupy. Most critically, it was located adjacent to navigable waters which would allow the enormous rocket stages, far too big to be shipped by road, rail, or air, to be transported on barges to and from Huntsville for testing and Cape Canaveral in Florida to be launched.

In September 1961 NASA officially took over the facility, renaming it “Michoud Operations”, to be managed by NASA Marshall as the manufacturing site for the rockets they designed. Work quickly got underway to set up manufacturing of the first stage of the Saturn I and 1B rockets and prepare to build the much larger first stage of the Saturn V Moon rocket. Before long, new buildings dedicated to assembly and test of the new rockets, occupied both by NASA and its contractors, began to spring up around the original plant. In 1965, the installation was renamed the Michoud Assembly Facility, which name it bears to this day.

With the end of the Apollo program, it looked like Michoud might once again be headed for white elephant status, but the design selected for the Space Shuttle included a very large External Tank comparable in size to the first stage of the Saturn V which would be discarded on every flight. Michoud's fabrication and assembly facilities, and its access to shipping by barge were ideal for this component of the Shuttle, and a total of 135 tanks built at Michoud were launched on Shuttle missions between 1981 and 2011.

The retirement of the Space Shuttle once again put the future of Michoud in doubt. It was originally tapped to build the core stage of the Constellation program's Ares V booster, which was similar in size and construction to the Shuttle External Tank. The cancellation of Constellation in 2010 brought that to a halt, but then Congress and NASA rode to the rescue with the absurd-as-a-rocket but excellent-as-a-jobs-program Space Launch System (SLS), whose centre core stage also resembles the External Tank and Ares V. SLS first stage fabrication is presently underway at Michoud. Perhaps when the schedule-slipping, budget-busting SLS is retired after a few flights (if, in fact, it ever flies at all), bringing to a close the era of giant taxpayer-funded throwaway rockets, the Michoud facility can be repurposed to more productive endeavours.

This book is largely a history of Michoud in photos and captions, with text introducing chapters on each phase of the facility's history. All of the photos are in black and white, and are well-reproduced. In the Kindle edition many can be expanded to show more detail. There are a number of copy-editing and factual errors in the text and captions, but not too many to distract or mislead the reader. The unidentified “visitors” shown touring the Michoud facility in July 1967 (chapter 3, Kindle location 392) are actually the Apollo 7 crew, Walter Schirra, Donn Eisele, and Walter Cunningham, who would fly on a Michoud-built Saturn 1B in October 1968.

For a book of just 130 pages, most of which are black and white photographs, the hardcover is hideously expensive (US$29 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.

June 2019 Permalink

Mayer, Milton. They Thought They Were Free. 2nd. ed. Chicago: University of Chicago Press, [1955] 1966. ISBN 0-226-51192-8.
The author, a journalist descended from German Jewish immigrants to the United States, first visited Nazi Germany in 1935, spending a month in Berlin attempting to obtain, unsuccessfully, an interview with Hitler, notwithstanding the assistance of his friend, the U.S. ambassador, then travelled through the country reporting for a U.S. magazine. It was then that he first discovered, meeting with ordinary Germans, that Nazism was not, as many perceived it then and now, “the tyranny of a diabolical few over helpless millions” (p. xviii), but rather a mass movement grounded in the “little people” with a broad base of non-fanatic supporters.

Ten years after the end of the war, Mayer arranged a one year appointment as a visiting professor at the University of Frankfurt and moved, with his family, to a nearby town of about 20,000 he calls “Kronenberg”. There, he spent much of his time cultivating the friendship of ten men he calls “my ten Nazi friends”, all of whom joined the party for various reasons ranging from ideology to assistance in finding or keeping employment to admiration of what they saw as Hitler's success (before the war) in restoring the German economy and position in the world. A large part of the book is reconstructed conversations with these people, exploring the motivations of those who supported Hitler (many of whom continued, a decade after Germany's disastrous defeat in the war he started, to believe the years of his rule prior to the war were Germany's golden age). Together they provide a compelling picture of life in a totalitarian society as perceived by people who liked it.

This is simultaneously a profoundly enlightening and disturbing book. The author's Nazi friends come across as almost completely unexceptional, and one comes to understand how the choices they made, rooted in the situation in which they found themselves, made perfect sense to them. And then, one cannot help but ask, “What would I have done in the same circumstances?” Mayer has no truck with what has come to be called multiculturalism—he is a firm believer in national character (although, of course, only on the average, with large individual variation), and he explains how history, over almost two millennia, has forged the German character and why it is unlikely to be changed by military defeat and a few years of occupation.

Apart from the historical insights, this book is highly topical when a global superpower is occupying a very different country, with a tradition and history far more remote from its own than was Germany's, and trying to instill institutions with no historical roots there. People forget, but ten years after the end of World War II many, Mayer included, considered the occupation of Germany to have been a failure. He writes (p. 303):

The failure of the Occupation could not, perhaps, have been averted in the very nature of the case. But it might have been mitigated. Its mitigation would have required the conquerors to do something they had never had to do in their history. They would have had to stop doing what they were doing and ask themselves some questions, hard questions, like, What is the German character? How did it get that way? What is wrong with its being that way? What way would be better, and what, if anything, could anybody do about it?
Wise questions, indeed, for any conqueror of any country.

The writing is so superb that you may find yourself re-reading paragraphs just to savour how they're constructed. It is also thought-provoking to ponder how many things, from the perspective of half a century later, the author got wrong. In his view, the occupation of West Germany would fail to permanently implant democracy, German re-militarisation and eventual aggression were almost certain unless blocked by force, and the project of European unification was a pipe dream of idealists, doomed to failure. And yet, today, things seem to have turned out pretty well for Germany, the Germans, and their neighbours. The lesson of this may be that national character can be changed, but changing it is the work of generations, not a few years of military occupation. That is also something modern-day conquerors, especially Western societies with a short attention span, might want to bear in mind.

September 2006 Permalink

Mazur, Joseph. Enlightening Symbols. Princeton: Princeton University Press, 2014. ISBN 978-0-691-15463-3.
Sometimes an invention is so profound and significant yet apparently obvious in retrospect that it is difficult to imagine how people around the world struggled over millennia to discover it, and how slow it was to diffuse from its points of origin into general use. Such is the case for our modern decimal system of positional notation for numbers and the notation for algebra and other fields of mathematics which permits rapid calculation and transformation of expressions. This book, written with the extensive source citations of a scholarly work yet accessible to any reader familiar with arithmetic and basic algebra, traces the often murky origins of this essential part of our intellectual heritage.

From prehistoric times humans have had the need to count things, for example, the number of sheep in a field. This could be done by establishing a one-to-one correspondence between the sheep and something else more portable such as one's fingers (for a small flock), or pebbles kept in a sack. To determine whether a sheep was missing, just remove a pebble for each sheep and if any remained in the sack, that indicates how many are absent. At a slightly more abstract level, one could make tally marks on a piece of bark or clay tablet, one for each sheep. But all of this does not imply number as an abstraction independent of individual items of some kind or another. Ancestral humans don't seem to have required more than the simplest notion of numbers: until the middle of the 20th century several tribes of Australian aborigines had no words for numbers in their languages at all, but counted things by making marks in the sand. Anthropologists discovered tribes in remote areas of the Americas, Pacific Islands, and Australia whose languages had no words for numbers greater than four.

With the emergence of settled human populations and the increasingly complex interactions of trade between villages and eventually cities, a more sophisticated notion of numbers was required. A merchant might need to compute how much of one good to exchange for another and to keep records of his inventory of various items. The earliest known written records of numerical writing are Sumerian cuneiform clay tablets dating from around 3400 B.C. These tablets show number symbols formed from two distinct kinds of marks pressed into wet clay with a stylus. While the smaller numbers seem clearly evolved from tally marks, larger numbers are formed by complicated combinations of the two symbols representing numbers from 1 to 59. Larger numbers were written as groups of powers of 60 separated by spaces. This was the first known instance of a positional number system, but there is no evidence it was used for complicated calculations—just as a means of recording quantities.
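
To see how such a base-60 place-value scheme works (an illustration of mine, not an example drawn from the tablets), a quantity we would write as 3661 would be recorded as a single unit sign in each of three successive positions, since

\begin{eqnarray*}
    3661 & = & 1\times 60^2+1\times 60^1+1\times 60^0
\end{eqnarray*}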

Ancient civilisations, including Egypt, the Hebrews, Greece, China, Rome, and the Aztecs and Mayas in the Western Hemisphere, all invented ways of writing numbers, some sophisticated and capable of representing large quantities. Many of these systems were additive: they used symbols, sometimes derived from letters in their alphabets, and composed numbers by writing symbols which summed to the total. To write the number 563, a Greek would write “φξγ”, where φ=500, ξ=60, and γ=3. By convention, numbers were written with letters in descending order of the value they represented, but the system was not positional. This made the system clumsy for representing large numbers, requiring letters with accent marks to represent thousands and an entirely different convention for ten thousands.

How did such advanced civilisations get along using number systems in which it is almost impossible to compute? Just imagine a Roman faced with multiplying MDXLIX by XLVII (1549 × 47)—where do you start? You don't: all of these civilisations used some form of mechanical computational aid: an abacus, counting rods, stones in grooves, and so on to actually manipulate numbers. The Sun Zi Suan Jing, dating from fifth century China, provides instructions (algorithms) for multiplication, division, and square and cube root extraction using bamboo counting sticks (or written symbols representing them). The result of the computation was then written using the numerals of the language. The written language was thus a way to represent numbers, but not compute with them.

Many of the various forms of numbers and especially computational tools such as the abacus came ever-so-close to stumbling on the place value system, but it was in India, probably before the third century B.C., that a positional decimal number system, including zero as a place holder and with digit forms recognisably ancestral to those we use today, emerged. This was a breakthrough in two regards. Now, by memorising tables of addition, subtraction, multiplication, and division and simple algorithms once learned by schoolchildren before calculators supplanted that part of their brains, it was possible to directly compute from written numbers. (Despite this, the abacus remained in common use.) But, more profoundly, this was a universal representation of whole numbers. Earlier number systems (with the possible exception of that invented by Archimedes in The Sand Reckoner [but never used practically]) either had a limit on the largest number they could represent or required cumbersome and/or lengthy conventions for large numbers. The Indian number system needed only ten symbols to represent any non-negative number, and only the single convention that each digit counts the power of ten corresponding to its position.
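
To make the advantage concrete (my illustration, not the author's), the number a Roman would laboriously write as MDXLIX decomposes in positional notation as

\begin{eqnarray*}
    1549 & = & 1\times 10^3+5\times 10^2+4\times 10^1+9\times 10^0
\end{eqnarray*}

and the familiar digit-by-digit long multiplication algorithm then yields 1549 × 47 = 72,803 directly from the written symbols, with no abacus or counting rods required.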

Knowledge diffused slowly in antiquity, and despite India being on active trade routes, it was not until the 13th century A.D. that Fibonacci introduced the new number system, which had been transmitted via Islamic scholars writing in Arabic, to Europe in his Liber Abaci. This book not only introduced the new number system, it provided instructions for a variety of practical computations and applications to higher mathematics. As revolutionary as this book was, in an era of hand-copied manuscripts, its influence spread very slowly, and it was not until the 16th century that the new numbers became almost universally used. The author describes this protracted process, about which a great deal of controversy remains to the present day.

Just as the decimal positional number system was becoming established in Europe, another revolution in notation began which would transform mathematics, how it was done, and our understanding of the meaning of numbers. Algebra, as we now understand it, was known in antiquity, but it was expressed in a rhetorical way—in words. For example, proposition 7 of book 2 of Euclid's Elements states:

If a straight line be cut at random, the square of the whole is equal to the squares on the segments and twice the rectangle contained by the segments.

Now, given such a problem, Euclid or any of those following in his tradition would draw a diagram and proceed to prove from the axioms of plane geometry the correctness of the statement. But it isn't obvious how to apply this identity to other problems, or how it illustrates the behaviour of general numbers. Today, we'd express the problem and proceed as follows:

\begin{eqnarray*}
    (a+b)^2 & = & (a+b)(a+b) \\
    & = & a(a+b)+b(a+b) \\
    & = & aa+ab+ba+bb \\
    & = & a^2+2ab+b^2 \\
    & = & a^2+b^2+2ab
\end{eqnarray*}

Once again, faced with the word problem, it's difficult to know where to begin, but once expressed in symbolic form, it can be solved by applying rules of algebra which many master before reaching high school. Indeed, the process of simplifying such an equation is so mechanical that computer tools are readily available to do so.
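
As a present-day illustration of just how mechanical the process has become (my example, not one from the book), the SymPy computer algebra library for Python performs the expansion above in a single call:

# A minimal sketch using SymPy; the symbol names a and b are arbitrary.
from sympy import symbols, expand

a, b = symbols('a b')
print(expand((a + b)**2))   # prints: a**2 + 2*a*b + b**2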

Or consider the following brain-twister posed in the 7th century A.D. about the Greek mathematician and father of algebra Diophantus: how many years did he live?

“Here lies Diophantus,” the wonder behold.
Through art algebraic, the stone tells how old;
“God gave him his boyhood one-sixth of his life,
One twelfth more as youth while whiskers grew rife;
And then one-seventh ere marriage begun;
In five years there came a bounding new son.
Alas, the dear child of master and sage
After attaining half the measure of his father's life chill fate took him.
After consoling his fate by the science of numbers for four years, he ended his life.”

Oh, go ahead, give it a try before reading on!

Today, we'd read through the problem and write a system of two simultaneous equations, where x is the age of Diophantus at his death and y the number of years his son lived. Then:

\begin{eqnarray*}
    x & = & (\frac{1}{6}+\frac{1}{12}+\frac{1}{7})x+5+y+4 \\
    y & = & \frac{x}{2}
\end{eqnarray*}

Plug the second equation into the first, do a little algebraic symbol twiddling, and the answer, 84, pops right out. Note that not only are the rules for solving this equation the same as for any other, with a little practice it is easy to read the word problem and write down the equations ready to solve. Go back and re-read the original problem and the equations and you'll see how straightforwardly they follow.
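
For readers who would like the twiddling spelled out (my working, not quoted from the book): since 1/6 + 1/12 + 1/7 = 11/28, substituting y = x/2 into the first equation gives

\begin{eqnarray*}
    x & = & \frac{11}{28}x+\frac{x}{2}+9 \\
    \frac{3}{28}x & = & 9 \\
    x & = & 84
\end{eqnarray*}

so Diophantus lived 84 years and his son 42.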

Once you have transformed a mass of words into symbols, they invite you to discover new ways in which they apply. What is the solution of the equation x+4=0? In antiquity many would have said the equation is meaningless: there is no number you can add to four to get zero. But that's because their conception of number was too limited: negative numbers such as −4 are completely valid and obey all the laws of algebra. By admitting them, we discovered we'd overlooked half of the real numbers. What about the solution to the equation x² + 4 = 0? This was again considered ill-formed, or imaginary, since the square of any real number, positive or negative, is positive. Another leap of imagination, admitting the square root of minus one to the family of numbers, expanded the number line into the complex plane, yielding the solutions ±2i as we'd now express them, and extending our concept of number into one which is now fundamental not only in abstract mathematics but also science and engineering. And in recognising negative and complex numbers, we'd come closer to unifying algebra and geometry by bringing rotation into the family of numbers.
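
As a quick check (mine, not the book's), these values do satisfy the equation, since

\begin{eqnarray*}
    (\pm 2i)^2+4 & = & 4i^2+4 = -4+4 = 0
\end{eqnarray*}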

This book explores the groping over centuries toward a symbolic representation of mathematics which hid the specifics while revealing the commonality underlying them. As one who learned mathematics during the height of the “new math” craze, I can't recall a time when I didn't think of mathematics as a game of symbolic transformation of expressions which may or may not have any connection with the real world. But what one discovers in reading this book is that while this is a concept very easy to brainwash into a 7th grader, it was extraordinarily difficult for even some of the most brilliant humans ever to have lived to grasp in the first place. When Newton invented calculus, for example, he always expressed his “fluxions” as derivatives of time, and did not write of the general derivative of a function of arbitrary variables.

Also, notation is important. Writing something in a more expressive and easily manipulated way can reveal new insights about it. We benefit not just from the discoveries of those in the past, but from those who created the symbolic language in which we now express them.

This book is a treasure chest of information about how the language of science came to be. We encounter a host of characters along the way, not just great mathematicians and scientists, but scoundrels, master forgers, chauvinists, those who preserved precious manuscripts and those who burned them, all leading to the symbolic language in which we so effortlessly write and do mathematics today.

January 2015 Permalink

McCullough, David. The Wright Brothers. New York: Simon & Schuster, 2015. ISBN 978-1-4767-2874-2.
On December 8th, 1903, all was in readiness. The aircraft was perched on its launching catapult, the brave airman at the controls. The powerful internal combustion engine roared to life. At 16:45 the catapult hurled the craft into the air. It rose straight up, flipped, and with its wings coming apart, plunged into the Potomac river just 20 feet from the launching point. The pilot was initially trapped beneath the wreckage but managed to free himself and swim to the surface. After being rescued from the river, he emitted what one witness described as “the most voluble series of blasphemies” he had ever heard.

So ended the last flight of Samuel Langley's “Aerodrome”. Langley was a distinguished scientist and secretary of the Smithsonian Institution in Washington D.C. Funded by the U.S. Army and the Smithsonian for a total of US$ 70,000 (equivalent to around 1.7 million present-day dollars), the Aerodrome crashed immediately on both of its test flights, and was the subject of much mockery in the press.

Just nine days later, on December 17th, two brothers, sons of a churchman, with no education beyond high school, and proprietors of a bicycle shop in Dayton, Ohio, readied their own machine for flight near Kitty Hawk, on the windswept sandy hills of North Carolina's Outer Banks. Their craft, called just the Flyer, took to the air with Orville Wright at the controls. With the 12 horsepower engine driving the twin propellers and brother Wilbur running alongside to stabilise the machine as it moved down the launching rail into the wind, Orville lifted the machine into the air and achieved the first manned heavier-than-air powered flight, demonstrating the Flyer was controllable in all three axes. The flight lasted just 12 seconds and covered a distance of 120 feet.

After the first flight, the brothers took turns flying the machine three more times on the 17th. On the final flight Wilbur flew a distance of 852 feet in a flight of 59 seconds (a strong headwind was blowing, and this flight was over half a mile through the air). After the fourth flight, while the machine was being prepared to fly again, a gust of wind caught it and dragged it, along with assistant John T. Daniels, down the beach toward the ocean. Daniels escaped, but the Flyer was damaged beyond repair and never flew again. (The Flyer which can be seen in the Smithsonian's National Air and Space Museum today has been extensively restored.)

Orville sent a telegram to his father in Dayton announcing the success, and the brothers packed up the remains of the aircraft to be shipped back to their shop. The 1903 season was at an end. The entire budget for the project between 1900 through the successful first flights was less than US$ 1000 (24,000 dollars today), and was funded entirely by profits from the brothers' bicycle business.

How did two brothers with no formal education in aerodynamics or engineering succeed on a shoestring budget while Langley, with public funds at his disposal and the resources of a major scientific institution, failed so embarrassingly? Ultimately it was because the Wright brothers identified the key problem of flight and patiently worked on solving it through a series of experiments. Perhaps it was because they were in the bicycle business. (Although they are often identified as proprietors of a “bicycle shop”, they also manufactured their own bicycles and had acquired the machine tools, skills, and co-workers for the business, later applied to building the flying machine.)

The Wrights believed the essential problem of heavier than air flight was control. The details of how a bicycle is built don't matter much: you still have to learn to ride it. And the problem of control in free flight is much more difficult than riding a bicycle, where the only controls are the handlebars and, to a lesser extent, shifting the rider's weight. In flight, an airplane must be controlled in three axes: pitch (up and down), yaw (left and right), and roll (wings' angle to the horizon). The means for control in each of these axes must be provided, and what's more, just as for a child learning to ride a bike, the would-be aeronaut must master the skill of using these controls to maintain his balance in the air.

Through a patient program of subscale experimentation, first with kites controlled from the ground by lines manipulated by the operators, then gliders flown by a pilot on board, the Wrights developed their system of pitch control by a front-mounted elevator, yaw by a rudder at the rear, and roll by warping the wings of the craft. Further, they needed to learn how to fly using these controls and verify that the resulting plane would be stable enough that a person could master the skill of flying it. With powerless kites and gliders, this required a strong, consistent wind. After inquiries to the U.S. Weather Bureau, the brothers selected the Kitty Hawk site on the North Carolina coast. Just getting there was an adventure, but the wind was as promised and the sand and lack of large vegetation was ideal for their gliding experiments. They were definitely “roughing it” at this remote site, and at times were afflicted by clouds of mosquitos of Biblical plague proportions, but starting in 1900 they tested a series of successively larger gliders and by 1902 had a design which provided three axis control, stability, and the controls for a pilot on board. In the 1902 season they made more than 700 flights and were satisfied the control problem had been mastered.

Now all that remained was to add an engine and propellers to the successful glider design, again scaling it up to accommodate the added weight. In 1903, you couldn't just go down to the hardware store and buy an engine, and automobile engines were much too heavy, so the Wrights' resourceful mechanic, Charlie Taylor, designed and built the four cylinder motor from scratch, using the new-fangled material aluminium for the engine block. The finished engine weighed just 152 pounds and produced 12 horsepower. The brothers could find no references for the design of air propellers and argued intensely over the topic, but eventually concluded they'd just have to make a best guess and test it on the real machine.

The Flyer worked on the second attempt (an earlier try on December 14th ended in a minor crash when Wilbur over-controlled at the moment of take-off). But this stunning success was the product of years of incremental refinement of the design, practical testing, and mastery of airmanship through experience.

Those four flights in December of 1903 are now considered one of the epochal events of the twentieth century, but at the time they received little notice. Only a few accounts of the flights appeared in the press, and some of them were garbled and/or sensationalised. The Wrights knew that the Flyer (whose wreckage was now in storage crates at Dayton), while a successful proof of concept and the basis for a patent filing, was not a practical flying machine. It could only take off into the strong wind at Kitty Hawk and had not yet demonstrated long-term controlled flight including aerial maneuvers such as turns or flying around a closed course. It was just too difficult travelling to Kitty Hawk, and the facilities of their camp there didn't permit rapid modification of the machines based upon experimentation.

They arranged to use an 84 acre cow pasture called Huffman Prairie located eight miles from Dayton along an interurban trolley line which made it easy to reach. The field's owner let them use it without charge as long as they didn't disturb the livestock. The Wrights devised a catapult to launch their planes, powered by a heavy falling weight, which would allow them to take off in still air. It was here, in 1904, that they refined the design into a practical flying machine and fully mastered the art of flying it over the course of about fifty test flights. Still, there was little note of their work in the press, and the first detailed account was published in the January 1905 edition of Gleanings in Bee Culture. Amos Root, the author of the article and publisher of the magazine, sent a copy to Scientific American, saying they could republish it without fee. The editors declined, and a year later mocked the achievements of the Wright brothers.

For those accustomed to the pace of technological development more than a century later, the leisurely progress of aviation and the lack of public interest in the achievement of what had been a dream of humanity since antiquity seem odd. Indeed, the Wrights, who had continued to refine their designs, would not become celebrities nor would their achievements be widely acknowledged until a series of demonstrations Wilbur would perform at Le Mans in France in the summer of 1908. Le Figaro wrote, “It was not merely a success, but a triumph…a decisive victory for aviation, the news of which will revolutionize scientific circles throughout the world.” And it did: stories of Wilbur's exploits were picked up by the press on the Continent, in Britain, and, belatedly, by papers in the U.S. Huge crowds came out to see the flights, and the intrepid American aviator's name was on every tongue.

Meanwhile, Orville was preparing for a series of demonstration flights for the U.S. Army at Fort Myer, Virginia. The army had agreed to buy a machine if it passed a series of tests. Orville's flights also began to draw large crowds from nearby Washington and extensive press coverage. All doubts about what the Wrights had wrought were now gone. During a demonstration flight on September 17, 1908, a propeller broke in flight. Orville tried to recover, but the machine plunged to the ground from an altitude of 75 feet, severely injuring him and killing his passenger, Lieutenant Thomas Selfridge, who became the first person to die in an airplane crash. Orville's recuperation would be long and difficult, aided by his sister, Katharine.

In early 1909, Orville and Katharine would join Wilbur in France, where he was to do even more spectacular demonstrations in the south of the country, training pilots for the airplanes he was selling to the French. Upon their return to the U.S., the Wrights were awarded medals by President Taft at the White House. They were feted as returning heroes in a two-day celebration in Dayton. The diligent Wrights continued their work in the shop between events.

The brothers would return to Fort Myer, the scene of the crash, and complete their demonstrations for the army, securing the contract for the sale of an airplane for US$ 30,000. The Wrights would continue to develop their company, defend their growing portfolio of patents against competitors, and innovate. Wilbur was to die of typhoid fever in 1912, aged only 45 years. Orville sold his interest in the Wright Company in 1915 and, in his retirement, served for 28 years on the National Advisory Committee for Aeronautics, the precursor of NASA. He died in 1948. Neither brother ever married.

This book is a superb evocation of the life and times of the Wrights and their part in creating, developing, promoting, and commercialising one of the key technologies of the modern world.

February 2016 Permalink

McDonald, Allan J. and James R. Hansen. Truth, Lies, and O-Rings. Gainesville, FL: University Press of Florida, 2009. ISBN 978-0-8130-3326-6.
More than two decades have elapsed since Space Shuttle Challenger met its tragic end on that cold Florida morning in January 1986, and a shelf-full of books have been written about the accident and its aftermath, ranging from the five-volume official report of the Presidential commission convened to investigate the disaster to conspiracy theories and accounts of religious experiences. Is it possible, at this remove, to say anything new about Challenger? The answer is unequivocally yes, as this book conclusively demonstrates.

The night before Challenger was launched on its last mission, Allan McDonald attended the final flight readiness review, held the day before launch at the Kennedy Space Center, representing Morton Thiokol, manufacturer of the solid rocket motors, where he was Director of the Space Shuttle Solid Rocket Motor Project. McDonald initially presented Thiokol's judgement that the launch should be postponed because the temperatures forecast for launch day were far below the experience base of the shuttle program and an earlier flight at the lowest temperature to date had shown evidence of blow-by past the O-ring seals in the solid rocket field joints. Thiokol engineers were concerned that low temperatures would reduce the resiliency of the elastomeric rings, causing them to fail to seal during the critical ignition transient. McDonald was astonished when NASA personnel, in a reversal of their usual rôle of challenging contractors to prove why their hardware was safe to fly, demanded that Thiokol prove the solid motor was unsafe in order to scrub the launch. Thiokol management requested a five-minute offline caucus back at the plant in Utah (in which McDonald did not participate) which stretched to thirty minutes and ended up with a recommendation to launch. NASA took the unprecedented step of requiring a written approval to launch from Thiokol, which McDonald refused to provide, but which was supplied by his boss in Utah.

After the loss of the shuttle and its crew, and the discovery shortly thereafter that the proximate cause was almost certainly a leak in the aft field joint of the right solid rocket booster, NASA and Thiokol appeared to circle the wagons, trying to deflect responsibility from themselves and obscure the information available to decision makers in a position to stop the launch. It was not until McDonald's testimony to the Presidential Commission chaired by former Secretary of State William P. Rogers that the truth began to come out. This thrust McDonald, up to then an obscure engineering manager, into the media spotlight and the political arena, which he quickly discovered was not at all about his priorities as an engineer: finding out what went wrong and fixing it so it could never happen again.

This memoir, composed by McDonald from contemporary notes and documents with the aid of space historian James R. Hansen (author of the bestselling authorised biography of Neil Armstrong) takes the reader through the catastrophe and its aftermath, as seen by an insider who was there at the decision to launch, on a console in the firing room when disaster struck, before the closed and public sessions of the Presidential commission, pursued by sensation-hungry media, testifying before congressional committees, and consumed by the redesign and certification effort and the push to return the shuttle to flight. It is a personal story, but told, as engineers are wont to do, in terms grounded in the facts of the hardware, the experimental evidence, and the recollection of the meetings which made the key decisions before and after the tragedy.

Anybody whose career may eventually land them, intentionally or not (the latter almost always the case), in the public arena can profit from reading this book. Even if you know nothing about and have no interest in solid rocket motors, O-rings, space exploration, or NASA, the story of a sincere, dedicated engineer bent on doing the right thing encountering the ravenous media and preening politicians is a cautionary tale for anybody who finds themselves in a similar position. I wish I'd had the opportunity to read this book before my own Dark Night of the Soul encounter with a reporter from the legacy media. I do not mean to equate my own mild experience with the Hell that McDonald experienced—just to say that his narrative would have been a bracing preparation for what was to come.

The chapters on the Rogers Commission investigation provided, for me, a perspective I'd not previously encountered. Many people think of William P. Rogers primarily as Nixon's first Secretary of State who was upstaged and eventually replaced by Henry Kissinger. But before that Rogers was a federal prosecutor going after organised crime in New York City and then was Attorney General in the Eisenhower administration from 1957 to 1961. Rogers may have aged, but his skills as an interrogator and cross-examiner never weakened. In the sworn testimony quoted here, NASA managers, who come across like the kids who were the smartest in their high school class and then find themselves on the left side of the bell curve when they show up as freshmen at MIT, are pinned like specimen bugs to their own viewgraphs when they try to spin Rogers and his tag team of technical takedown artists including Richard Feynman, Neil Armstrong, and Sally Ride.

One thing which is never discussed here, but should be, is just how totally insane it is to use large solid rockets, in any form, in a human spaceflight program. Understand: solid rockets are best thought of as “directed bombs” which, if ignited at an inopportune time or when not in launch configuration, can cause catastrophe. A simple spark of static electricity can suffice to ignite the propellant in a solid rocket, and once ignited there is no way to extinguish it until it is entirely consumed. Consider: in the Shuttle era, there are usually one or more Shuttle stacks in the Vehicle Assembly Building (VAB), and if NASA's Constellation Program continues, this building will continue to stack solid rocket motors in decades to come. Sooner or later, the inevitable is going to happen: a static spark, a crane dropping a segment, or an interference fit of two segments sending a hot fragment into the propellant below. The consequence: destruction of the VAB, all hardware inside, and the death of all people working therein. The expected stand-down of the U.S. human spaceflight program after such an event is on the order of a decade. Am I exaggerating the risks here? Well, maybe; you decide. But in 1985–1986 alone, three separate disasters struck the production of large solid motors. I shall predict: if NASA continues to use large solid motors in its human spaceflight program, there will be a decade-long gap in U.S. human spaceflight sometime in the next twenty years.

If you're sufficiently interested in these arcane matters to have read this far, you should read this book. Based upon notes, it's a bit repetitive, as many of the same matters were discussed in the various venues in which McDonald testified. But if you want to read a single book to prepare you for being unexpectedly thrust into the maw of ravenous media and politicians, I know of none better.

September 2009 Permalink

McGovern, Patrick E. Uncorking the Past. Berkeley: University of California Press, 2009. ISBN 978-0-520-25379-7.
While a variety of animals are attracted to and consume the alcohol in naturally fermented fruit, only humans have figured out how to promote the process, producing wine from fruit and beer from cereal crops. And they've been doing it since at least the Neolithic period: the author discovered convincing evidence of a fermented beverage in residues on pottery found at the Jiahu site in China, inhabited between 7000 and 5800 B.C.

Indeed, almost every human culture which had access to fruits or grains which could be turned into an alcoholic beverage did so, and made the production and consumption of spirits an important part of their economic and spiritual life. (One puzzle is why the North American Indians, who lived among an abundance of fermentable crops, never did—there are theories that tobacco and hallucinogenic mushrooms supplanted alcohol for shamanistic purposes, but basically nobody really knows.)

The author is a pioneer in the field of biomolecular archæology and head of the eponymous laboratory at the University of Pennsylvania Museum of Archæology and Anthropology; in this book he takes us on a tour around the world and across the centuries exploring, largely through his own research and that of associates, the history of fermented beverages in a variety of cultures and what we can learn from this evidence about how they lived, were organised, and interacted with other societies. Only in recent decades has biochemical and genetic analysis progressed to the point that it is possible to determine from some gunk found at the bottom of an ancient pot not only that it was some kind of beer or wine, but also from what species of fruit and grain it was produced, how it was prepared and fermented, and what additives it may have contained and whence they originated. Calling on experts in related disciplines such as palynology (the study of pollen and spores, not of the Alaskan politician), the author is able to reconstruct the economics of the bustling wine trade across the Mediterranean (already inferred from shipwrecks carrying large numbers of casks of wine) and the diffusion of the ancestral cultivated grape around the world, displacing indigenous grapes which were less productive for winemaking.

While the classical period around the Mediterranean is pretty much soaked in wine, and it'd be difficult to imagine the Vikings and other North Europeans without their beer and grogs, much less was known about alcoholic beverages in China, South America, and Africa. Once again, the author is on their trail, and not only reports upon his original research, but also attempts, in conjunction with micro-brewers and winemakers, to reconstruct the ancestral beverages of yore.

The biochemical anthropology of booze is not exactly a crowded field, and in this account written by one of its leaders, you get the sense of having met just about all of the people pursuing it. A great deal remains to be learnt—parts of the book read almost like a list of potential Ph.D. projects for those wishing to follow in the author's footsteps. But that's the charm of opening a new window into the past: just as DNA and other biochemical analyses revolutionised the understanding of human remains in archæology, the arsenal of modern analytical tools allows reconstructing humanity's almost universal companion through the ages, fermented beverages, and, through them, uncorking the way in which those cultures developed and interacted.

A paperback edition will be published in December 2010.

October 2010 Permalink

Mercer, Ilana. Into the Cannibal's Pot. Mount Vernon, WA, 2011. ISBN 978-0-9849070-1-4.
The author was born in South Africa, the daughter of Rabbi Abraham Benzion Isaacson, a leader among the Jewish community in the struggle against apartheid. Due to her father's activism, the family, forced to leave the country, emigrated to Israel, where the author grew up. In the 1980s, she moved back to South Africa, where she married, had a daughter, and completed her university education. In 1995, following the first elections with universal adult suffrage which resulted in the African National Congress (ANC) taking power, she and her family emigrated to Canada with the proceeds of the sale of her apartment hidden in the soles of her shoes. (South Africa had adopted strict controls to prevent capital flight in the aftermath of the election of a black majority government.) After initially settling in British Columbia, her family subsequently emigrated to the United States where they reside today.

From the standpoint of a member of a small minority (the Jewish community) of a minority (whites) in a black majority country, Mercer has reason to be dubious of the much-vaunted benefits of “majority rule”. Describing herself as a “paleolibertarian”, her outlook is shaped not by theory but the experience of living in South Africa and the accounts of those who remained after her departure. For many in the West, South Africa scrolled off the screen as soon as a black majority government took power, but that was the beginning of the country's descent into violence, injustice, endemic corruption, expropriation of those who built the country and whose ancestors lived there since before the founding of the United States, and what can only be called a slow-motion genocide against the white farmers who were the backbone of the society.

Between 1994 and 2005, the white population of South Africa fell from 5.22 million to 4.37 million. Chief among the motivations for emigration have been an explosion of violent crime, often racially motivated and directed against whites; a policy of affirmative action which amounts to overt racial discrimination against whites; endemic corruption; and expropriation of businesses in the interest of “fairness”.

In the forty-four years of apartheid in South Africa from 1950 to 1993, there were a total of 309,583 murders in the country: an average of 7,036 per year. In the first eight years after the end of apartheid (1994–2001), under one-party black majority rule, 193,649 murders were reported, or 24,206 per year. And the latter figure is according to the statistics of the ANC-controlled South Africa Police Force, which both Interpol and the South African Medical Research Council say may be understated by as much as a factor of two. The United States is considered to be a violent country, with around 4.88 homicides per 100,000 people (by comparison, the rate in the United Kingdom is 0.92 and in Switzerland is 0.69). In South Africa, the figure is 34.27 (all estimates are 2015 figures from the United Nations Office on Drugs and Crime). And it isn't just murder: in South Africa, where 65 people are murdered every day, around 200 are raped and 300 are victims of assault and violent robbery.
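
For readers who want to check the arithmetic, the per-year averages quoted above follow directly from the cited totals; here is a minimal Python sketch that simply reproduces the division, using only the figures given in the text:

    # Per-year murder averages implied by the totals cited above (figures from the text)
    apartheid_total, apartheid_years = 309_583, 44        # 1950-1993
    post_apartheid_total, post_apartheid_years = 193_649, 8   # 1994-2001
    print(round(apartheid_total / apartheid_years))        # 7036 murders per year
    print(round(post_apartheid_total / post_apartheid_years))  # 24206 murders per year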

White farmers, mostly Afrikaner, have frequently been targets of violence. In the periods 1996–2007 and 2010–2016 (no data were published for the years 2008 and 2009), according to statistics from the South African Police Service (which may be understated), there were 11,424 violent attacks on farms in South Africa, with a total of 1609 homicides, in some cases killing entire farm families and some of their black workers. The motives for these attacks remain a mystery according to the government, whose leaders have been known to sing the stirring anthem “Kill the Boer” at party rallies. Farm attacks follow the pattern in Zimbabwe, where such attacks, condoned by the Mugabe regime, resulted in the emigration of almost all white farmers and the collapse of the country's agricultural sector (only 200 white farmers remain in the country, 5% of the number before black majority rule). In South Africa, white farmers who have not already emigrated find themselves trapped: they cannot sell to other whites who fear they would become targets of attacks and/or eventual expropriation without compensation, nor to blacks who expect they will eventually receive the land for free when it is expropriated.

What is called affirmative action in the U.S. is implemented in South Africa under the Black Economic Empowerment (BEE) programme, a set of explicitly racial preferences and requirements which cover most aspects of business operation including ownership, management, employment, training, supplier selection, and internal investment. Mining companies must cede co-ownership to blacks in order to obtain permits for exploration. Not surprisingly, in many cases the front men for these “joint ventures” are senior officials of the ruling ANC and their family members. So corrupt is the entire system that Archbishop Desmond Tutu, one of the most eloquent opponents of apartheid, warned that BEE has created a “powder keg”, where benefits accrue only to a small, politically-connected, black elite, leaving others in “dehumanising poverty”.

Writing from the perspective of one who got out of South Africa just at the point where everything started to go wrong (having anticipated in advance the consequences of pure majority rule) and settled in the U.S., Mercer then turns to the disturbing parallels between the two countries. Their histories are very different, and yet there are similarities and trends which are worrying. One fundamental problem with democracy is that people who would otherwise have to work for a living discover that they can vote for a living instead, and are encouraged in this by politicians who realise that a dependent electorate is a reliable electorate as long as the benefits continue to flow. Back in 2008, I wrote about the U.S. approaching a tipping point where nearly half of those who file income tax returns owe no income tax. At that point, among those who participate in the economy, there is a near-majority who pay no price for voting for increased government benefits paid for by others. It's easy to see how this can set off a positive feedback loop where the dependent population burgeons, the productive minority shrinks, the administrative state which extracts the revenue from that minority becomes ever more coercive, and those who channel the money from the producers to the dependent grow in numbers and power.

Another way to look at the tipping point is to compare the number of voters to taxpayers (those with income tax liability). In the U.S., this ratio is around two to one, which already leaves the system dangerously exposed to the calamity described above. Now consider that in South Africa, this ratio is eleven to one. Is it any wonder that under universal adult suffrage the economy of that country is in a down-spiral?

South Africa prior to 1994 was in an essentially intractable position. By encouraging black and later Asian immigration over its long history (most of the ancestors of black South Africans arrived after the first white settlers), it arrived at a situation where a small white population (less than 10%) controlled the overwhelming majority of the land and wealth, and retained almost all of the political power. This situation, and the apartheid system which sustained it (which the author and her family vehemently opposed) was unjust and rightly was denounced and sanctioned by countries around the globe. But what was to replace it? The experience of post-colonial Africa was that democracy almost always leads to “One man, one vote, one time”: a leader of the dominant ethnic group wins the election, consolidates power, and begins to eliminate rival groups, often harking back to the days of tribal warfare which preceded the colonial era, but with modern weapons and a corresponding death toll. At the same time, all sources of wealth are plundered and “redistributed”, not to the general population, but to the generals and cronies of the Great Man. As the country sinks into savagery and destitution, whites and educated blacks outside the ruling clique flee. (Indeed, South Africa has a large black illegal immigrant population made of those who fled the Mugabe tyranny in Zimbabwe.)

Many expected this down-spiral to begin in South Africa soon after the ANC took power in 1994. The joke went, “What's the difference between Zimbabwe and South Africa? Ten years.” That it didn't happen immediately and catastrophically is a tribute to Nelson Mandela's respect for the rule of law and for his white partners in ending apartheid. But now he is gone, and a new generation of more radical leaders has replaced him. Increasingly, it seems like the punch line might be revised to be “Twenty-five years.”

The immediate priority one takes away from this book is the need to address the humanitarian crisis faced by the Afrikaner farmers who are being brutally murdered and face expropriation of their land without compensation as the regime becomes ever more radical. Civilised countries need to open immigration to this small, highly-productive, population. Due to persecution and denial of property rights, they may arrive penniless, but are certain to quickly become the backbone of the communities they join.

In the longer term, the U.S. and the rest of the Anglosphere and civilised world should be cautious and never indulge in the fantasy “it can't happen here”. None of these countries started out with the initial conditions of South Africa, but it seems that, over the last fifty years, much of their ruling class has been bent on importing masses of third world immigrants with no tradition of consensual government, rule of law, or respect for property rights, concentrating them in communities where they can preserve the culture and language of the old country, and ensnaring them in a web of dependency which keeps them from climbing the ladder of assimilation and economic progress by which previous immigrant populations entered the mainstream of their adopted countries. With some politicians bent on throwing the borders open to savage, medieval, inbred “refugees” who breed much more rapidly than the native population, it doesn't take a great deal of imagination to see how the tragedy now occurring in South Africa could foreshadow the history of the latter part of this century in countries foolish enough to lay the groundwork for it now.

This book was published in 2011, but the trends it describes have only accelerated in subsequent years. It's an eye-opener to the risks of democracy without constraints or protection of the rights of minorities, and a warning to other nations of the grave risks they face should they allow opportunistic politicians to recreate the dire situation of South Africa in their own lands.

May 2018 Permalink

Miller, John J. and Mark Molesky. Our Oldest Enemy. New York: Doubleday, 2004. ISBN 0-385-51219-8.
In this history of relations between America and France over three centuries (starting in 1704, well before the U.S. existed), the authors argue that the common perception of sympathy and shared interest between the “two great republics” from Lafayette to “Lafayette, we are here” and beyond is not borne out by the facts, and that the recent tension between the U.S. and France over Iraq is consistent with centuries of French scheming in quest of its own, now forfeit, status as a great power. Starting with French-incited and led Indian raids on British settlements in the 18th century, through the undeclared naval war of 1798–1800, Napoleon's plans to invade New Orleans, Napoleon III's adventures in Mexico, Clemenceau's subverting Wilson's peace plans after being rescued by U.S. troops in World War I, Eisenhower's having to fight his way through Vichy French troops in North Africa in order to get to the Germans, Stalinist intellectuals in the Cold War, Suez, de Gaulle's pulling out of NATO, Chirac's long-term relationship with his “personal friend” Saddam Hussein, through recent perfidy at the U.N., the case is made that, with rare exceptions, France has been the most consistent opponent of the U.S. over all of their shared history. The authors don't hold France and the French in very high esteem, and there are numerous zingers and turns of phrase such as “Time and again in the last two centuries, France has refused to come to grips with its diminished status as a country whose greatest general was a foreigner, whose greatest warrior was a teenage girl, and whose last great military victory came on the plains of Wagram in 1809” (p. 10). The account of Vichy in chapter 9 is rather sketchy and one-dimensional; readers interested in that particular shameful chapter in French history will find more details in Robert Paxton's Vichy France and Marc Ferro's biography, Pétain, or the eponymous movie made from it.

November 2004 Permalink

Miller, Richard L. Under The Cloud. The Woodlands, TX: Two Sixty Press, [1986] 1991. ISBN 978-1-881043-05-8.
Folks born after the era of atmospheric nuclear testing, and acquainted with it only through accounts written decades later, are prone to react with bafflement—“What were they thinking?” This comprehensive, meticulously researched, and thoroughly documented account of the epoch not only describes what happened and what the consequences were for those in the path of fallout, but also places events in the social, political, military, and even popular culture context of that very different age. A common perception about the period is “nobody really understood the risks”. Well, it's quite a bit more complicated than that, as you'll understand after reading this exposition. As early as 1953, when ranchers near Cedar City, Utah lost more than 4000 sheep and lambs after they grazed on grass contaminated by fallout, investigators discovered the consequences of ingestion of Iodine-131, which is concentrated by the body in the thyroid gland, where it can lead not only to thyroid cancer but also to faster-developing metabolic diseases. The AEC reacted immediately to this discovery. Commissioner Eugene Zuckert observed that “In the present frame of mind of the public, it would only take a single illogical and unforeseeable incident to preclude holding any future tests in the United States”, and hence the author of the report on the incident was ordered to revise the document, “eliminating any reference to radiation damage or effects”. In subsequent meetings with the farmers, the AEC denied any connection between fallout and the death of the sheep and denied compensation, claiming that the sheep, including grotesquely malformed lambs born to irradiated ewes, had died of “malnutrition”.

It was obvious to others that something serious was happening. Shortly after bomb tests began in Nevada, the Eastman Kodak plant in Rochester, New York, which manufactured X-ray film, discovered that when a fallout cloud was passing overhead, their film batches would be ruined by pinhole fogging due to fallout radiation, and that they could not even package the film in cardboard supplied by a mill whose air and water supplies were contaminated by fallout. Since it was already known that radiologists with occupational exposure to X-rays had mean lifespans several years shorter than the general public, it was pretty obvious that exposing much of the population of a continent (and to a lesser extent the entire world) to a radiation dose which could ruin X-ray film had to be problematic at best and recklessly negligent at worst. And yet the tests continued, both in Nevada and the Pacific, until the Limited Test Ban Treaty between the U.S., USSR, and Great Britain was adopted in 1963. France and China, not signatories to the treaty, continued atmospheric tests until 1974 and 1980 respectively.

What were they thinking? Well, this was a world in which the memory of a cataclysmic war which had killed tens of millions of people was fresh, which appeared to be on the brink of an even more catastrophic conflict, which might be triggered if the adversary developed a weapon believed to permit a decisive preemptive attack or victory through intimidation. In such an environment where everything might be lost through weakness and dilatory progress in weapons research, the prospect of an elevated rate of disease among the general population was weighed against the possibility of tens of millions of deaths in a general conflict and the decision was made to pursue the testing. This may very well have been the correct decision—since you can't test a counterfactual, we'll never know—but there wasn't a general war between the East and West, and to this date no nuclear weapon has been used in war since 1945. But what is shocking and reprehensible is that the élites who made this difficult judgement call did not have the courage to share the facts with the constituents and taxpayers who paid their salaries and bought the bombs that irradiated their children's thyroids with Iodine-131 and bones with Strontium-90. (I'm a boomer. If you want to know just how many big boom clouds a boomer lived through as a kid, hold a sensitive radiation meter up to one of the long bones of the leg; you'll see the elevated beta radiation from the Strontium-90 ingested in milk and immured in the bones [Strontium is a chemical analogue of Calcium].) Instead, they denied the obvious effects, suppressed research which showed the potential risks, intimidated investigators exploring the effects of low level radiation, and covered up assessments of fallout intensity and effects upon those exposed. Thank goodness such travesties of science and public policy could not happen in our enlightened age! An excellent example of mid-fifties AEC propaganda is the Atomic Test Effects in the Nevada Test Site Region pamphlet, available on this site: “Your best action is not to be worried about fall-out. … We can expect many reports that ‘Geiger counters were going crazy here today.’ Reports like this may worry people unnecessarily. Don't let them bother you.”

This book describes U.S. nuclear testing in Nevada in detail, even giving the precise path the fallout cloud from most detonations took over the country. Pacific detonations are covered in less detail, concentrating on major events and fallout disasters such as Castle Bravo. Soviet tests and the Chelyabinsk-40 disaster are covered more sketchily (fair enough—most details remained secret when the book was written), and British, French, and Chinese atmospheric tests are mentioned only in passing.

The paperback edition of this book has the hefty cover price of US$39.95, which is a lot for a book of 548 pages with just a few black and white illustrations. I read the Kindle edition, priced at US$11.99 at this writing, which is, on its merits, even more overpriced. It is a sad, sorry, and shoddy piece of work, which appears to be the result of scanning a printed edition of the book with an optical character recognition program and transferring it to Kindle format without any proofreading whatsoever. Numbers and punctuation are uniformly garbled, words are mis-recognised, random words are jammed into the text as huge raster images, page numbers and chapter headings are interleaved into the text, and hyphenated words are not joined while pairs of unrelated words are run together. The abundant end note citations are randomly garbled and not linked to the notes at the end of the book. The index is just a scan of that in the printed book, garbled, unlinked to the text, and utterly useless. Most public domain Kindle books sold for a dollar have much better production values than this full price edition. It is a shame that such an excellent work on which the author invested such a great amount of work doing the research and telling the story has been betrayed by this slapdash Kindle edition which will leave unwary purchasers feeling their pockets have been picked. I applaud Amazon's providing a way for niche publishers and independent authors to bring their works to market on the Kindle, but I wonder if their lack of quality control on the works published (especially at what passes for full price on the Kindle) might, in the end, injure the reputation of Kindle books among the customer base. After this experience, I know for sure that I will never again purchase a Kindle book from a minor publisher before checking the comments to see if the transfer merits the asking price. Amazon might also consider providing a feedback mechanism for Kindle purchasers to rate the quality of the transfer to the Kindle, which would appear along with the content-based rating of the work.

September 2010 Permalink

Miller, Roland. Abandoned in Place. Albuquerque: University of New Mexico Press, 2016. ISBN 978-0-8263-5625-3.
Between 1945 and 1970 humanity expanded from the surface of Earth into the surrounding void, culminating in 1969 with the first landing on the Moon. Centuries from now, when humans and their descendants populate the solar system and exploit resources dwarfing those of the thin skin and atmosphere of the home planet, these first steps may be remembered as the most significant event of our age, with all of the trivialities that occupy our quotidian attention forgotten. Not only were great achievements made, but grand structures built on Earth to support them; these may be looked upon in the future as we regard the pyramids or the great cathedrals.

Or maybe not. The launch pads, gantry towers, assembly buildings, test facilities, blockhouses, bunkers, and control centres were not built as monuments for the ages, but rather to accomplish time-sensitive goals under tight budgets, by the lowest bidder, and at the behest of a government famous for neglecting infrastructure. Once the job was done, the mission accomplished, and the program concluded, the facilities that supported it were simply left at the mercy of the elements, which, in locations like coastal Florida, immediately began to reclaim them. Indeed, half of the facilities pictured here no longer exist.

For more than two decades, author and photographer Roland Miller has been documenting this heritage before it succumbs to rust, crumbling concrete, and invasive vegetation. With unparalleled access to the sites, he has assembled this gallery of artefacts of a great age of exploration. In a few decades, this may be all we'll have to remember them. Although there is rudimentary background information from a variety of authors, this is a book of photography, not a history of the facilities. In some cases, unless you know from other sources what you're looking at, you might interpret some of the images as abstract.

The hardcover edition is a “coffee table book”: large format and beautifully printed, with a corresponding price. The Kindle edition is, well, a Kindle book, and grossly overpriced for 193 pages with screen-resolution images and a useless index consisting solely of search terms.

A selection of images from the book may be viewed on the Abandoned in Place Web site.

May 2016 Permalink

Milosz, Czeslaw. The Captive Mind. New York: Vintage, [1951, 1953, 1981] 1990. ISBN 0-679-72856-2.
This book is an illuminating exploration of life in a totalitarian society, written by a poet and acute observer of humanity who lived under two of the tyrannies of the twentieth century and briefly served one of them. The author was born in Lithuania in 1911 and studied at the university in Vilnius, a city he describes (p. 135) as “ruled in turn by the Russians, Germans, Lithuanians, Poles, again the Lithuanians, again the Germans, and again the Russians”—and now again the Lithuanians. An ethnic Pole, he settled in Warsaw after graduation, and witnessed the partition of Poland between Nazi Germany and the Soviet Union at the outbreak of World War II, conquest and occupation by Germany, “liberation” by the Red Army, and the imposition of Stalinist rule under the tutelage of Moscow. After working with the underground press during the war, the author initially supported the “people's government”, even serving as a cultural attaché at the Polish embassies in Washington and Paris. As Stalinist terror descended upon Poland and the rigid dialectical “Method” was imposed upon intellectual life, he saw tyranny ascendant once again and chose exile in the West, initially in Paris and finally the U.S., where he became a professor at the University of California at Berkeley in 1961—imagine, an anti-communist at Berkeley!

In this book, he explores the various ways in which the human soul comes to terms with a regime which denies its very existence. Four long chapters explore the careers of four Polish writers he denotes as “Alpha” through “Delta” and the choices they made when faced with a system which offered them substantial material rewards in return for conformity with a rigid system which put them at the service of the State, working toward ends prescribed by the “Center” (Moscow). He likens acceptance of this bargain to swallowing a mythical happiness pill, which, by eliminating the irritations of creativity, scepticism, and morality, guarantees those who take it a tranquil place in a well-ordered society. In a powerful chapter titled “Ketman”—a Persian word denoting fervent protestations of faith by nonbelievers, not only in the interest of self-preservation, but of feeling superior to those they so easily deceive—Milosz describes how an entire population can become actors who feign belief in an ideology and pretend to believe the earnest affirmations of orthodoxy on the part of others while sharing scorn for the few true believers.

The author received the 1980 Nobel Prize in Literature.

December 2006 Permalink

Ministry of Information. What Britain Has Done. London: Atlantic Books, [1945] 2007. ISBN 978-1-84354-680-1.
Here is government propaganda produced by the organisation upon which George Orwell (who worked there in World War II) based the Ministry of Truth in his novel Nineteen Eighty-Four. This slim volume (126 pages in this edition) was originally published in May of 1945, after the surrender of Germany, but with the war against Japan still underway. (Although there are references to Germany's capitulation, some chapters appear to have been written before the end of the war in Europe.)

The book is addressed to residents of the United Kingdom, and seeks to show how important their contributions were to the overall war effort, seemingly to dispel the notion that the U.S. and Soviet Union bore the brunt of the effort. To that end, it is as craftily constructed a piece of propaganda as you're likely to encounter. While subtitled “1939–1945: A Selection of Outstanding Facts and Figures”, it might equally as well be described as “Total War: Artfully Chosen Factoids”. Here is an extract from pp. 34–35 to give you a flavour.

Between September 1939 and February 1943, HM Destroyer Forester steamed 200,000 miles, a distance equal to nine times round the world.

In a single year the corvette Jonquil steamed a distance equivalent to more than three times round the world.

In one year and four months HM Destroyer Wolfhound steamed over 50,000 miles and convoyed 3,000 ships.

The message of British triumphalism is conveyed in part by omission: you will find only the barest hints in this narrative of the disasters of Britain's early efforts in the war, the cataclysmic conflict on the Eastern front, or the Pacific war waged by the United States against Japan. (On the other hand, the title is “What Britain Has Done”, so one might argue that tasks which Britain either didn't do or failed to accomplish do not belong here.) But this is not history, but propaganda, and as the latter it is a masterpiece. (Churchill's history, The Second World War, although placing Britain at the centre of the story, treats all of these topics candidly, except those relating to matters still secret, such as the breaking of German codes during the war.)

This reprint edition includes a new introduction which puts the document into historical perspective and seven maps which illustrate operations in various theatres of the war.

April 2008 Permalink

Muravchik, Joshua. Heaven on Earth: The Rise and Fall of Socialism. San Francisco: Encounter Books, 2002. ISBN 1-893554-45-7.

November 2002 Permalink

Murray, Charles and Catherine Bly Cox. Apollo. Burkittsville, MD: South Mountain Books, [1989, 2004] 2010. ISBN 978-0-9760008-0-8.
On November 5, 1958, NASA, only four months old at the time, created the Space Task Group (STG) to manage its manned spaceflight programs. Although there had been earlier military studies of manned space concepts and many saw eventual manned orbital flights growing out of the rocket plane projects conducted by NASA's predecessor, the National Advisory Committee for Aeronautics (NACA), and the U.S. Air Force, at the time of the STG's formation the U.S. had no formal manned space program. The initial group, staffed largely with people from the NACA's Langley Research Center and initially headquartered there, numbered 45 in all, including eight secretaries and “computers”: operators of electromechanical desk calculators. There were no firm plans for manned spaceflight, no budget approved to pay for it, no spacecraft, no boosters, no launch facilities, no mission control centre, no astronauts, no plans to select and train them, and no experience either with human flight above the Earth's atmosphere or with more than a few seconds of weightlessness. And yet this team, the core of an effort which would grow to include around 400,000 people at NASA and its 20,000 industry and academic contractors, would, just ten years and nine months later, on July 20th, 1969, land two people on the surface of the Moon and then return them safely to the Earth.

Ten years is not a long time when it comes to accomplishing a complicated technological project. Development of the Boeing 787, a mid-sized commercial airliner which flew no farther, faster, or higher than its predecessors, and was designed and built using computer-aided design and manufacturing technologies, took eight years from project launch to entry into service, and the F-35 fighter entered service, and then only in small numbers of a single model, a full twenty-three years after the start of its development.

In November, 1958, nobody in the Space Task Group was thinking about landing on the Moon. Certainly, trips to the Moon had been discussed in fables from antiquity to Jules Verne's classic De la terre à la lune of 1865, and in 1938 members of the British Interplanetary Society published a (totally impractical) design for a Moon rocket powered by more than two thousand solid rocket motors bundled together, which would be discarded once burned out. But only a year after the launch of the first Earth satellite, and with nothing yet successfully returned from Earth orbit to the Earth, talk of manned Moon ships sounded like—lunacy.

The small band of stalwarts at the STG undertook the already daunting challenge of manned space flight with an incremental program they called Project Mercury, whose goal was to launch a single man into Earth orbit in a capsule (unable to change its orbit once released from the booster rocket, it barely deserved the term “spacecraft”) atop a converted Atlas intercontinental ballistic missile. In essence, the idea was to remove the warhead, replace it with a tiny cone-shaped can with a man in it, and shoot him into orbit. At the time the project began, the reliability of the Atlas rocket was around 75%, so NASA could expect around one in four launches to fail, with the Atlas known for spectacular explosions on the ground or on the way to space. When, in early 1960, the newly-chosen Mercury astronauts watched a test launch of the rocket they were to ride, it exploded less than a minute after launch. This was the fifth consecutive failure of an Atlas booster (although not all were so spectacular).

Doing things which were inherently risky on tight schedules with a shoestring budget (compared to military projects) and achieving an acceptable degree of safety by fanatic attention to detail and mountains of paperwork (NASA engineers quipped that no spacecraft could fly until the mass of paper documenting its construction and test equalled that of the flight hardware) became an integral part of the NASA culture. NASA was proceeding on its deliberate, step-by-step development of Project Mercury, and in 1961 was preparing for the first space flight by a U.S. astronaut (not into orbit on an Atlas, just a 15-minute suborbital hop on a version of the reliable Redstone rocket that had launched the first U.S. satellite in 1958) when, on April 12, 1961, it was sorely disappointed: the Soviet Union launched Yuri Gagarin into orbit on Vostok 1. Not only was the first man in space a Soviet, but the Soviets had accomplished an orbital mission, which NASA hadn't planned to attempt until at least the following year.

On May 5, 1961, NASA got back into the game, or at least the minor league, when Alan Shepard was launched on Mercury-Redstone 3. Sure, it was just a 15-minute up and down, but at least an American had been in space, if only briefly, and it was enough to persuade a recently-elected, young U.S. president smarting from being scooped by the Soviets to “take longer strides”. On May 25, less than three weeks after Shepard's flight, before a joint session of Congress, President Kennedy said, “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth.” Kennedy had asked his vice president, Lyndon Johnson, what goal the U.S. could realistically hope to achieve before the Soviets, and after consulting with the NASA administrator, James Webb, a lawyer and oil company executive, and no NASA technical people other than Wernher von Braun, he reported that a manned Moon landing was the only milestone the Soviets, with their heavy boosters and lead in manned space flight, were unlikely to achieve first. So, to the Moon it was.

The Space Task Group people who were ultimately going to be charged with accomplishing this goal, and who had no advance warning until they heard Kennedy's speech or got urgent telephone calls from colleagues who had also heard the broadcast, were, in the words of their leader, Robert Gilruth (who had no more warning than his staff), “aghast”. He and his team had, like von Braun in the 1950s, envisioned a deliberate, step-by-step development of space flight capability: manned orbital flight, then a more capable spacecraft with a larger crew able to maneuver in space, a space station to explore the biomedical issues of long-term space flight and serve as a base to assemble craft bound farther into space, perhaps a reusable shuttle craft to ferry crew and cargo to space without (wastefully and at great cost) throwing away rockets designed as long-range military artillery on every mission, followed by careful reconnaissance of the Moon by both unmanned and manned craft to map its surface, find safe landing zones, and then demonstrate the technologies that would be required to get people there and back safely.

All that was now clearly out the window. If Congress came through with the massive funds it would require, going to the Moon would be a crash project like the Manhattan Project to build the atomic bomb in World War II, or the massive industrial mobilisation to build Liberty Ships or the B-17 and B-29 bombers. The clock was ticking: when Kennedy spoke, there were just 3142 days until December 31, 1969 (yes, I know the decade actually ends at the end of 1970, since there was no year 0 in the Gregorian calendar, but explaining this to clueless Americans is a lost cause), around eight years and seven months. What needed to be done? Everything. How much time was there to do it? Not remotely enough. Well, at least the economy was booming, politicians seemed willing to pay the huge bills for what needed to be done, and there were plenty of twenty-something newly-minted engineering graduates ready and willing to work around the clock without a break to make real what they'd dreamed of since reading science fiction in their youth.
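
The 3142-day figure is straightforward to verify; here is a minimal Python check using the two dates given in the text (the speech on May 25, 1961 and the December 31, 1969 deadline):

    from datetime import date
    # Days remaining from Kennedy's speech to the end-of-decade deadline
    print((date(1969, 12, 31) - date(1961, 5, 25)).days)  # 3142, about eight years and seven months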

The Apollo Project was simultaneously one of the most epochal and inspiring accomplishments of the human species, far more likely to be remembered a thousand years hence than anything else that happened in the twentieth century, and a politically-motivated blunder which retarded human expansion into the space frontier. Kennedy's speech was at the end of May 1961. Perhaps because the Space Task Group was so small, it and NASA were able to react with a speed which is stunning to those accustomed to twenty-year development projects for hardware far less complicated than Apollo.

In June and July [1961], detailed specifications for the spacecraft hardware were completed. By the end of July, the Requests for Proposals were on the street.

In August, the first hardware contract was awarded to M.I.T.'s Instrumentation Laboratory for the Apollo guidance system. NASA selected Merritt Island, Florida, as the site for a new spaceport and acquired 125 square miles of land.

In September, NASA selected Michoud, Louisiana, as the production facility for the Saturn rockets, acquired a site for the Manned Spacecraft Center—the Space Task Group grown up—south of Houston, and awarded the contract for the second stage of the Saturn [V] to North American Aviation.

In October, NASA acquired 34 square miles for a Saturn test facility in Mississippi.

In November, the Saturn C-1 was successfully launched with a cluster of eight engines, developing 1.3 million pounds of thrust. The contract for the command and service module was awarded to North American Aviation.

In December, the contract for the first stage of the Saturn [V] was awarded to Boeing and the contract for the third stage was awarded to Douglas Aircraft.

By January of 1962, construction had begun at all of the acquired sites and development was under way at all of the contractors.

Such was the urgency with which NASA was responding to Kennedy's challenge and deadline that all of these decisions and work were done before deciding on how to get to the Moon—the so-called “mission mode”. There were three candidates: direct ascent, Earth orbit rendezvous (EOR), and lunar orbit rendezvous (LOR). Direct ascent was the simplest, and much like the idea of a Moon ship in golden age science fiction. One launch from Earth would send a ship to the Moon which would land there, then take off and return directly to Earth. There would be no need for rendezvous and docking in space (which had never been attempted, and nobody was sure was even possible), and no need for multiple launches per mission, which was seen as an advantage at a time when rockets were only marginally reliable and notorious for long delays from their scheduled launch time. The downside of direct ascent was that it would require an enormous rocket: planners envisioned a monster called Nova which would have dwarfed the Saturn V eventually used for Apollo and required new manufacturing, test, and launch facilities to accommodate its size. Also, it is impossible to design a ship which is optimised both for landing under rocket power on the Moon and re-entering Earth's atmosphere at high speed. Still, direct ascent seemed to involve the fewest technological unknowns. Ever wonder why the Apollo service module had that enormous Service Propulsion System engine? When it was specified, the mission mode had not been chosen, and it was made powerful enough to lift the entire command and service module off the lunar surface and return them to the Earth after a landing in direct-ascent mode.

Earth orbit rendezvous was similar to what Wernher von Braun envisioned in his 1950s popular writings about the conquest of space. Multiple launches would be used to assemble a Moon ship in low Earth orbit, and then, when it was complete, it would fly to the Moon, land, and then return to Earth. Such a plan would not necessarily even require a booster as large as the Saturn V. One might, for example, launch the lunar landing and return vehicle on one Saturn I, the stage which would propel it to the Moon on a second, and finally the crew on a third, who would board the ship only after it was assembled and ready to go. This was attractive in not requiring the development of a giant rocket, but required on-time launches of multiple rockets in quick succession, orbital rendezvous and docking (and in some schemes, refuelling), and still had the problem of designing a craft suitable both for landing on the Moon and returning to Earth.

Lunar orbit rendezvous was originally considered a distant third in the running. A single large rocket (but smaller than Nova) would launch two craft toward the Moon. One ship would be optimised for flight through the Earth's atmosphere and return to Earth, while the other would be designed solely for landing on the Moon. The Moon lander, operating only in vacuum and the Moon's weak gravity, need not be streamlined or structurally strong, and could be potentially much lighter than a ship able to both land on the Moon and return to Earth. Finally, once its mission was complete and the landing crew safely back in the Earth return ship, it could be discarded, meaning that all of the hardware needed solely for landing on the Moon need not be taken back to the Earth. This option was attractive, requiring only a single launch and no gargantuan rocket, and allowed optimising the lander for its mission (for example, providing better visibility to its pilots of the landing site), but it required not only rendezvous and docking, but doing them in lunar orbit, where a failure would strand the lander crew in orbit around the Moon with no hope of rescue.

After a high-stakes technical struggle, in the latter part of 1962, NASA selected lunar orbit rendezvous as the mission mode, with each landing mission to be launched on a single Saturn V booster, making the decision final with the selection of Grumman as contractor for the Lunar Module in November of that year. Had another mission mode been chosen, it is improbable in the extreme that the landing would have been accomplished in the 1960s.

The Apollo architecture was now in place. All that remained was building machines which had never been imagined before, learning to do things (on-time launches, rendezvous and docking in space, leaving spacecraft and working in the vacuum, precise navigation over distances no human had ever travelled before, and assessing all of the “unknown unknowns” [radiation risks, effects of long-term weightlessness, properties of the lunar surface, ability to land on lunar terrain, possible chemical or biological threats on the Moon, etc.]) and developing plans to cope with them.

This masterful book is the story of how what is possibly the largest collection of geeks and nerds ever assembled and directed at a single goal, funded with the abundant revenue from an economic boom, spurred by a geopolitical competition against the sworn enemy of liberty, took on these daunting challenges and, one by one, overcame them, found a way around, or simply accepted the risk because it was worth it. They learned how to tame giant rocket engines that randomly blew up by setting off bombs inside them. They abandoned the careful step-by-step development of complex rockets in favour of “all-up testing” (stack all of the untested pieces the first time, push the button, and see what happens) because “there wasn't enough time to do it any other way”. People were working 16–18–20 hours a day, seven days a week. Flight surgeons in Mission Control handed out “go and whoa pills”—amphetamines and barbiturates—to keep the kids on the console awake at work and asleep those few hours they were at home—hey, it was the Sixties!

This is not a tale of heroic astronauts and their exploits. The astronauts, as they have been the first to say, were literally at the “tip of the spear” and would not have been able to complete their missions without the work of almost half a million uncelebrated people who made them possible, not to mention the hundred million or so U.S. taxpayers who footed the bill.

This was not a straight march to victory. Three astronauts died in a launch pad fire the investigation of which revealed shockingly slapdash quality control in the assembly of their spacecraft and NASA's ignoring the lethal risk of fire in a pure oxygen atmosphere at sea level pressure. The second flight of the Saturn V was a near calamity due to multiple problems, some entirely avoidable (and yet the decision was made to man the next flight of the booster and send the crew to the Moon). Neil Armstrong narrowly escaped death in May 1968 when the Lunar Landing Research Vehicle he was flying ran out of fuel and crashed. And the division of responsibility between the crew in the spacecraft and mission controllers on the ground had to be worked out before it would be tested in flight where getting things right could mean the difference between life and death.

What can we learn from Apollo, fifty years on? Other than standing in awe at what was accomplished given the technology and state of the art of the time, and on a breathtakingly short schedule, little or nothing that is relevant to the development of space in the present and future. Apollo was the product of a set of circumstances which happened to come together at one point in history and are unlikely to ever recur. Although some of those who worked on making it a reality were dreamers and visionaries who saw it as the first step into expanding the human presence beyond the home planet, to those who voted to pay the forbidding bills (at its peak, NASA's budget, mostly devoted to Apollo, was more than 4% of all Federal spending; in recent years, it has settled at around one half of one percent: a national commitment to space eight times smaller as a fraction of total spending) Apollo was seen as a key battle in the Cold War. Allowing the Soviet Union to continue to achieve milestones in space while the U.S. played catch-up or forfeited the game would reinforce the Soviet message to the developing world that their economic and political system was the wave of the future, leaving decadent capitalism in the dust.

A young, ambitious, forward-looking president, smarting from being scooped once again by Yuri Gagarin's orbital flight and the humiliation of the débâcle at the Bay of Pigs in Cuba, seized on a bold stroke that would show the world the superiority of the U.S. by deploying its economic, industrial, and research resources toward a highly visible goal. And, after being assassinated two and a half years later, his successor, a space enthusiast who had directed a substantial part of NASA's spending to his home state and those of his political allies, presented the program as the legacy of the martyred president and vigorously defended it against those who tried to kill it or reduce its priority. The U.S. was in an economic boom which would last through most of the Apollo program until after the first Moon landing, and was the world's unchallenged economic powerhouse. And finally, the federal budget had not yet been devoured by uncontrollable “entitlement” spending and national debt was modest and manageable: if the national will was there, Apollo was affordable.

This confluence of circumstances was unique to its time and has not been repeated in the half century thereafter, nor is it likely to recur in the foreseeable future. Space enthusiasts who look at Apollo and what it accomplished in such a short time often err in assuming that only a similar program (government funded, on a massive scale with lavish budgets, focussed on a single goal, and based on special-purpose disposable hardware suited only for its specific mission) can open the space frontier. They are not only wrong in this assumption, but they are dreaming if they think there is the public support and political will to do anything like Apollo today. In fact, Apollo was not even particularly popular in the 1960s: only at one point in 1965 did public support for funding of human trips to the Moon poll higher than 50%, and only around the time of the Apollo 11 landing did 50% of the U.S. population believe Apollo was worth what was being spent on it.

In fact, despite being motivated as a demonstration of the superiority of free people and free markets, Project Apollo was a quintessentially socialist space program. It was funded by money extracted by taxation, its priorities set by politicians, and its operations centrally planned and managed in a top-down fashion of which the Soviet functionaries at Gosplan could only dream. Its goals were set by politics, not economic benefits, science, or building a valuable infrastructure. This was not lost on the Soviets. Here is Soviet Minister of Defence Dmitriy Ustinov speaking at a Central Committee meeting in 1968, quoted by Boris Chertok in volume 4 of Rockets and People.

…the Americans have borrowed our basic method of operation—plan-based management and networked schedules. They have passed us in management and planning methods—they announce a launch preparation schedule in advance and strictly adhere to it. In essence, they have put into effect the principle of democratic centralism—free discussion followed by the strictest discipline during implementation.

This kind of socialist operation works fine in a wartime crash program driven by time pressure, where unlimited funds and manpower are available, and where there is plenty of capital which can be consumed or borrowed to pay for it. But it does not create sustainable enterprises. Once the goal is achieved, the war won (or lost), or it runs out of other people's money to spend, the whole thing grinds to a halt or stumbles along, continuing to consume resources while accomplishing little. This was the predictable trajectory of Apollo.

Apollo was one of the noblest achievements of the human species and we should celebrate it as a milestone in the human adventure, but trying to repeat it is pure poison to the human destiny in the solar system and beyond.

This book is a superb recounting of the Apollo experience, told mostly about the largely unknown people who confronted the daunting technical problems and, one by one, found solutions which, if not perfect, were good enough to land on the Moon in 1969. Later chapters describe key missions, again concentrating on the problem solving which went on behind the scenes to achieve their goals or, in the case of Apollo 13, get home alive. Looking back on something that happened fifty years ago, especially if you were born afterward, it may be difficult to appreciate just how daunting the idea of flying to the Moon was in May 1961. This book is the story of the people who faced that challenge, pulled it off, and are largely forgotten today.

Both the 1989 first edition and 2004 paperback revised edition are out of print and available only at absurd collectors' prices. The Kindle edition, which is based upon the 2004 edition with small revisions to adapt to digital reader devices, is available at a reasonable price, as is an unabridged audio book, which is a reading of the 2004 edition. You'd think there would have been a paperback reprint of this valuable book in time for the fiftieth anniversary of the landing of Apollo 11 (and the thirtieth anniversary of its original publication), but there wasn't.

Project Apollo is such a huge, sprawling subject that no book can possibly cover every aspect of it. For those who wish to delve deeper, here is a reading list of excellent sources. I have read all of these books and recommend every one. For those I have reviewed, I link to my review; for others, I link to a source where you can obtain the book.

If you wish to commemorate the landing of Apollo 11 in a moving ceremony with friends, consider hosting an Evoloterra celebration.

July 2019 Permalink

Nury, Fabien and Thierry Robin. La Mort de Staline. Paris: Dargaud, [2010, 2012] 2014. ISBN 978-2-205-07351-5.
The 2017 film, The Death of Stalin, was based upon this French bande dessinée (BD, graphic novel, or comic). The story centres on the death of Stalin and the events that ensued: the scheming and struggle for power among the members of his inner circle, the reactions and relationships of his daughter Svetlana and wastrel son Vasily, the conflict between the Red Army and NKVD, the maneuvering over the arrangements for Stalin's funeral, and the all-encompassing fear and suspicion that Stalin's paranoia had infused into Soviet society. This is a fictional account, grounded in documented historical events, in which the major characters were real people. But the authors are forthright in saying they invented events and dialogue to tell a story which is intended to give one a sense of the «folie furieuse de Staline et de son entourage» (the raging madness of Stalin and his entourage) rather than provide a historical narrative.

The film adaptation is listed as a comedy and, particularly if you have a taste for black humour, is quite funny. This BD is not explicitly funny, except in an ironic sense, illustrating the pathological behaviour of those surrounding Stalin. Many of the sequences in this work could have been used as storyboards for the movie, but there are significant events here which did not make it into the screenplay. The pervasive strong language which earned the film an R rating is little in evidence here.

The principal characters and their positions are introduced by boxes overlaying the graphics, much as was done in the movie. Readers who aren't familiar with the players in Stalin's Soviet Union, such as Beria, Zhukov, Molotov, Malenkov, Khrushchev, Mikoyan, and Bulganin, may miss some of the nuances of their behaviour here, which is driven by this back-story. Their names are given using the French transliteration of Russian, which is somewhat different from that used in English (for example, “Krouchtchev” instead of “Khrushchev”). The artwork is intricately drawn in a realistic style, with a few comic idioms used sparingly to illustrate things like gunshots.

I enjoyed both the movie (which I saw first, not knowing until the end credits that it was based upon this work) and the BD. They're different takes on the same story, and both work on their own terms. This is not the kind of story for which “spoilers” apply, so you'll lose nothing by enjoying both in either order.

The album cited above contains both volumes of the original print edition. The Kindle edition continues to be published in two volumes (Vol. 1, Vol. 2). An English translation of the graphic novel is available. I have not looked at it beyond the few preview pages available on Amazon.

June 2018 Permalink

Okrent, Daniel. Last Call: The Rise and Fall of Prohibition. New York: Scribner, 2010. ISBN 978-0-7432-7702-0.
The ratification of the Eighteenth Amendment to the U.S. Constitution in 1919, prohibiting the “manufacture, sale, or transportation of intoxicating liquors”, marked the transition of the U.S. Federal government into a nanny state, which occupied itself with the individual behaviour of its citizens. Now, certainly, attempts to legislate morality and regulate individual behaviour were commonplace in North America long before the United States came into being, but these were enacted at the state, county, or municipal level. When the U.S. Constitution was ratified, it exclusively constrained the actions of government, not of individual citizens, and with the sole exception of the Thirteenth Amendment, which abridged the “freedom” to hold people in slavery and involuntary servitude, this remained the case into the twentieth century. While bans on liquor were adopted in various jurisdictions as early as 1840, it simply never occurred to many champions of prohibition that a nationwide ban, written into the federal constitution, was either appropriate or feasible, especially since taxes on alcoholic beverages accounted for as much as forty percent of federal tax revenue in the years prior to the introduction of the income tax, and imposition of total prohibition would zero out the second largest source of federal income after the tariff.

As the Progressive movement gained power, with its ambitions of continental scale government and imposition of uniform standards by a strong, centralised regime, it found itself allied with an improbable coalition including the Woman's Christian Temperance Union; the Methodist, Baptist and Presbyterian churches; advocates of women's suffrage; the Anti-Saloon League; Henry Ford; and the Ku Klux Klan. Encouraged by the apparent success of “war socialism” during World War I and empowered by enactment of the Income Tax via the Sixteenth Amendment, providing another source of revenue to replace that of excise taxes on liquor, these players were motivated in the latter years of the 1910s to impose their agenda upon the entire country in as permanent a way as possible: by a constitutional amendment. Although the supermajorities required were daunting (two thirds in the House and Senate to submit, three quarters of state legislatures to ratify), if a prohibition amendment could be pushed over the bar (if you'll excuse the term), opponents would face what was considered an insuperable task to reverse it, as it would only take 13 dry states to block repeal.

Further motivating the push not just for a constitutional amendment, but enacting one as soon as possible, were the rapid demographic changes underway in the U.S. Support for prohibition was primarily rural, in southern and central states, Protestant, and Anglo-Saxon. During the 1910s, population was shifting from farms to urban areas, from the midland toward the coasts, and the immigrant population of Germans, Italians, and Irish who were famously fond of drink was burgeoning. This meant that the electoral landscape following reapportionment after the 1920 census would be far less receptive to the foes of Demon Rum.

One must never underestimate the power of an idea whose time has come, regardless of how stupid and counterproductive it might be. And so it came to pass that the Eighteenth Amendment was ratified by the 36th state, Nebraska, on January 16th, 1919, with nationwide Prohibition to come into effect a year hence. From the outset, it was pretty obvious to many astute observers what was about to happen. An Army artillery captain serving in France wrote to his fiancée in Missouri, “It looks to me like the moonshine business is going to be pretty good in the land of the Liberty Loans and Green Trading Stamps, and some of us want to get in on the ground floor. At least we want to get there in time to lay in a supply for future consumption.” Captain Harry S. Truman ended up pursuing a different (and probably less lucrative) career, but was certainly prescient about the growth industry of the coming decade.

From the very start, Prohibition was a theatre of the absurd. Because it was implemented by a federal statute, the Volstead Act, enforcement, especially in states which did not have their own state Prohibition laws, was the responsibility of federal agents within the Treasury Department, whose head, Andrew Mellon, was a staunch opponent of Prohibition. Enforcement was always absurdly underfunded compared to the magnitude of the bootlegging industry and their customers (the word “scofflaw” entered the English language to describe them). Federal Prohibition officers were paid little, but their posts were nonetheless highly prized patronage jobs, as their holders could often pocket ten times their salary in bribes to look the other way.

Prohibition unleashed the American talent for ingenuity, entrepreneurship, and the do-it-yourself spirit. While it was illegal to manufacture liquor for sale or to sell it, possession and consumption were perfectly legal, and families were allowed to make up to 200 gallons (which should suffice even for the larger, more thirsty households of the epoch) for their own use. This led to a thriving industry in California shipping grapes eastward for householders to mash into “grape juice” for their own use, being careful, of course, not to allow it to ferment or to sell some of their 200 gallon allowance to the neighbours. Later on, the “Vino Sano Grape Brick” was marketed nationally. It contained dried crushed grapes, complete with the natural yeast on the skins: you just added water, waited a while, and hoisted a glass to American innovation. Brewers, not to be outdone, introduced “malt syrup”, which, with the addition of yeast and water, turned into beer in the home brewer's basement. Grocers stocked everything the thirsty householder needed to brew up case after case of Old Frothingslosh, and brewers remarked upon how profitable it was to outsource fermentation and bottling to the customers.

For those more talented in manipulating the law than fermenting fluids, there were a number of opportunities as well. Sacramental wine was exempted from Prohibition, and wineries which catered to the Catholic and Jewish congregations that distributed such wines prospered. Indeed, Prohibition enforcers noted they'd never seen so many rabbis before, including some named Patrick Houlihan and James Maguire. Physicians and dentists were entitled to prescribe liquor for medicinal purposes, and the lucrative fees for writing such prescriptions and for pharmacists to fill them rapidly caused hard liquor to enter the materia medica for numerous maladies, far beyond the traditional prescription as snakebite medicine. While many pre-Prohibition bars re-opened as speakeasies, others prospered by replacing “Bar” with “Drug Store” and filling medicinal whiskey prescriptions for the same clientele.

Apart from these dodges, the vast majority of Americans slaked their thirst with bootleg booze, either domestic (and sometimes lethal), or smuggled from Canada or across the ocean. The obscure island of St. Pierre, a French possession off the coast of Canada, became a prosperous entrepôt for reshipment of Canadian liquor legally exported to “France”, then re-embarked on ships headed for “Rum Row”, just outside the territorial limit of the U.S. East Coast. Rail traffic into Windsor, Ontario, just across the Detroit River from the eponymous city, exploded, as boxcar after boxcar unloaded cases of clinking glass bottles onto boats bound for…well, who knows? Naturally, with billions and billions of dollars of tax-free income to be had, it didn't take long for criminals to stake their claims to it. What was different, and deeply appalling to the moralistic champions of Prohibition, is that a substantial portion of the population who opposed Prohibition did not despise them, but rather respected them as making their “money by supplying a public demand”, in the words of one Alphonse Capone, whose public relations machine kept him in the public eye.

As the absurdity of the almost universal scorn and disobedience of Prohibition grew (at least among the urban chattering classes, which increasingly dominated journalism and politics at the time), opinion turned toward ways to undo its increasingly evident pernicious consequences. Many focussed upon amending the Volstead Act to exempt beer and light wines from the definition of “intoxicating liquors”—this would open a safety valve, and at least allow recovery of the devastated legal winemaking and brewing industries. The difficulty of actually repealing the Eighteenth Amendment deterred many of the most ardent supporters of that goal. As late as September 1930, Senator Morris Sheppard, who drafted the Eighteenth Amendment, said “There is as much chance of repealing the Eighteenth Amendment as there is for a hummingbird to fly to the planet Mars with the Washington Monument tied to its tail.”

But when people have had enough (I mean, of intrusive government, not illicit elixir), it's amazing what they can motivate a hummingbird to do! Less than three years later, the Twenty-first Amendment, repealing Prohibition, was passed by the Congress, and on December 5th, 1933, it was ratified by the 36th state (appropriately, but astonishingly, Utah), thus putting an end to what had come to be generally seen not only as a farce, but also as a direct cause of sanguinary lawlessness and scorn for the rule of law. The cause of repeal was greatly aided not only by the thirst of the populace, but also by the thirst of their government for revenue, which had collapsed due to plunging income tax receipts as the Great Depression deepened, along with falling tariff income as international trade contracted. Reinstating liquor excise taxes and collecting corporate income tax from brewers, winemakers, and distillers could help ameliorate the deficits from New Deal spending programs.

In many ways, the adoption and repeal of Prohibition represented a phase transition in the relationship between the federal government and its citizens. In its adoption, they voted, by the most difficult of constitutional standards, to enable direct enforcement of individual behaviour by the national government, complete with its own police force independent of state and local control. But at least they acknowledged that this breathtaking change could only be accomplished by a direct revision of the fundamental law of the republic, and that reversing it would require the same—a constitutional amendment, duly proposed and ratified. In the years that followed, the federal government used its power to tax (many partisans of Repeal expected the Sixteenth Amendment to also be repealed but, alas, this was not to be) to promote and deter all kinds of behaviour through tax incentives and charges, and before long the federal government was simply enacting legislation which directly criminalised individual behaviour without a moment's thought about its constitutionality, and those who challenged it were soon considered nutcases.

As the United States increasingly comes to resemble a continental scale theatre of the absurd, there may be a lesson to be learnt from the final days of Prohibition. When something is unsustainable, it won't be sustained. It's almost impossible to predict when the breaking point will come—recall the hummingbird with the Washington Monument in tow—but when things snap, it doesn't take long for the unimaginable new to supplant the supposedly secure status quo. Think about this when you contemplate issues such as immigration, the Euro, welfare state spending, bailouts of failed financial institutions and governments, and the multitude of big and little prohibitions and intrusions into personal liberty of the pervasive nanny state—and root for the hummingbird.

In the Kindle edition, all of the photographic illustrations are collected at the very end of the book, after the index—don't overlook them.

June 2010 Permalink

Orwell, George. Homage to Catalonia. San Diego: Harcourt Brace, [1938, 1952] 1987. ISBN 0-15-642117-8.
The orwell.ru site makes available electronic editions of this work in both English and Russian, which you can read online or download to read at your leisure. All of Orwell's works are in the public domain under Russia's 50-year copyright law.

January 2003 Permalink

Outzen, James D., ed. The Dorian Files Revealed. Chantilly, VA: Center for the Study of National Reconnaissance, 2015. ISBN 978-1-937219-18-5.
We often think of the 1960s as a “can do” time, when technological progress, societal self-confidence, and burgeoning economic growth allowed attempting and achieving great things: landing on the Moon, global communications by satellite, and mass continental and intercontinental transportation by air. But the 1960s were also a time, not just of conflict and the dissolution of the postwar consensus, but also of some grand-scale technological boondoggles and disasters. There were the XB-70 bomber and its companion F-108 fighter plane, the Boeing 2707 supersonic passenger airplane, the NERVA nuclear rocket, the TFX/F-111 swing-wing hangar queen aircraft, and plans for military manned space programs. Each consumed billions of taxpayer dollars with little or nothing to show for the expenditure of money and effort lavished upon it. The present volume, consisting of previously secret information declassified in July 2015, chronicles the history of the Manned Orbiting Laboratory, the U.S. Air Force's second attempt to launch its own astronauts into space to do military tasks there.

The creation of NASA in 1958 took the wind out of the sails of the U.S. military services, who had assumed it would be they who would lead on the road into space and in exploiting space-based assets in the interest of national security. The designation of NASA as a civilian aerospace agency did not preclude military efforts in space, and the Air Force continued with its X-20 Dyna-Soar, a spaceplane which would be launched on a Titan rocket, then return to Earth and land on a conventional runway. Simultaneous with the cancellation of Dyna-Soar in December 1963, a new military space program, the Manned Orbiting Laboratory (MOL), was announced.

MOL would use a modified version of NASA's Gemini spacecraft to carry two military astronauts into orbit atop a laboratory facility which they could occupy for up to 60 days before returning to Earth in the Gemini capsule. The Gemini and laboratory would be launched by a Titan III booster, requiring only a single launch and no orbital rendezvous or docking to accomplish the mission. The purpose of the program was stated as to “evaluate the utility of manned space flight for military purposes”. This was a cover story or, if you like, a bald-faced lie.

In fact, MOL was a manned spy satellite, intended to produce reconnaissance imagery of targets in the Soviet Union, China, and the communist bloc in the visual, infrared, and radar bands, plus electronic information in much higher resolution than contemporary unmanned spy satellites. Spy satellites operating in the visual spectrum lost on the order of half their images to cloud cover. With a man on board, exposures would be taken only when skies were clear, and images could be compensated for motion of the spacecraft, largely eliminating motion blur. Further, the pilots could scan for “interesting” targets and photograph them as they appeared, and conduct wide-area ocean surveillance.

None of the contemporary drawings showed the internal structure of the MOL, and most people assumed it was a large pressurised structure for various experiments. In fact, most of it was an enormous telescope aimed at the ground, with a 72 inch (1.83 metre) mirror and secondary optics capable of very high resolution photography of targets on the ground. When this document was declassified in 2015, all references to its resolution capability were replaced with statements such as {better than 1 foot}. It is, in fact, a simple geometrical optics calculation to determine that the diffraction-limited resolution of a 1.83 metre mirror in the visual band is around 0.066 arc seconds. In a low orbit suited to imaging in detail, this would yield a resolution of around 4 cm (1.6 inches) as a theoretical maximum. Taking optical imperfections, atmospheric seeing, film resolution, and imperfect motion compensation into account, the actual delivered resolution would be about half this (8 cm, 3.2 inches). Once they state the aperture of the primary mirror, this is easy to work out, so they wasted a lot of black redaction ink in this document. And then, on page 102, they note (not redacted), “During times of crisis the MOL could be transferred from its nominal 80-mile orbit to one of approximately 200–300 miles. In this higher orbit the system would have access to all targets in the Soviet Bloc approximately once every three days and be able to take photographs at resolutions of about one foot.” All right, if they have one foot (12 inch) resolution at 200 miles, then they have 4.8 inch (12 cm) resolution at 80 miles (or, if we take 250 miles altitude, 3.8 inches [9.7 cm]), entirely consistent with my calculation from mirror aperture.
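
If you want to check this geometry yourself, here is a minimal Python sketch of the diffraction-limit estimate, assuming the Rayleigh criterion and a visual-band wavelength of roughly 500 nm (neither the wavelength nor the criterion is stated above, so treat the outputs as approximations):

    import math

    def ground_resolution(aperture_m, altitude_m, wavelength_m=500e-9):
        """Diffraction-limited ground resolution of a downward-looking telescope."""
        theta = 1.22 * wavelength_m / aperture_m   # Rayleigh limit, in radians
        return theta * altitude_m                  # smallest resolvable ground detail, in metres

    MIRROR = 1.83  # 72 inch primary mirror, in metres
    for miles in (80, 200, 250):
        altitude = miles * 1609.344                # statute miles to metres
        r = ground_resolution(MIRROR, altitude)
        print(f"{miles:3d} mile orbit: ~{100 * r:.0f} cm ({r / 0.0254:.1f} inches)")

Degrading these ideal figures by a factor of about two for optics, film, atmospheric seeing, and imperfect motion compensation puts you right in the neighbourhood of the numbers quoted above.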

This document is a management, financial, and political history of the MOL program, with relatively little engineering detail. Many of the technological developments of the optical system were later used in unmanned reconnaissance satellite programs and remain secret. What comes across in the sorry history of this program, which, between December 1963 and its cancellation in June of 1969, burned through billions of taxpayer dollars, is that the budgeting, project management, and definition and pursuit of well-defined project goals were just as incompetent as the redaction of technical details discussed in the previous paragraph. There are almost Marx Brothers episodes in which Florida politicians attempted to keep jobs in their constituencies by blocking launches into polar orbit from Vandenberg Air Force Base while the Air Force could not disclose that polar orbits were essential to overflying targets in the Soviet Union because the reconnaissance mission of MOL was a black program.

Along with this history, a large collection of documents and pictures, all previously secret (and many soporifically boring), has been released. As a publication of the U.S. government, this work is in the public domain.

November 2015 Permalink

Page, Joseph T., II. Vandenberg Air Force Base. Charleston, SC: Arcadia Publishing, 2014. ISBN 978-1-4671-3209-1.
Prior to World War II, the sleepy rural part of the southern California coast between Santa Barbara and San Luis Obispo was best known as the location where, in September 1923, despite a lighthouse having been in operation at Point Arguello since 1901, the U.S. Navy suffered its worst peacetime disaster, when seven destroyers, travelling at 20 knots, ran aground at Honda Point, resulting in the loss of all seven ships and the deaths of 23 crewmembers. In the 1930s, following additional wrecks in the area, a lifeboat station was established in conjunction with the lighthouse.

During World War II, the Army acquired 92,000 acres (372 km²) in the area for a training base which was called Camp Cooke, after a cavalry general who served in the Civil War, in wars with Indian tribes, and in the Mexican-American War. The camp was used for training Army troops in a variety of weapons and in tank maneuvers. After the end of the war, the base was closed and placed on inactive status, but was re-opened after the outbreak of war in Korea to train tank crews. It was once again mothballed in 1953, and remained inactive until 1957, when 64,000 acres were transferred to the U.S. Air Force to establish a missile base on the West Coast, initially called Cooke Air Force Base, intended to train missile crews and also serve as the U.S.'s first operational intercontinental ballistic missile (ICBM) site. On October 4th, 1958, the base was renamed Vandenberg Air Force Base in honour of the late General Hoyt Vandenberg, former Air Force Chief of Staff and Director of Central Intelligence.

On December 15, 1958, a Thor intermediate range ballistic missile was launched from the new base, the first of hundreds of launches which would follow and continue up to the present day. Starting in September 1959, three Atlas ICBMs armed with nuclear warheads were deployed on open launch pads at Vandenberg, the first U.S. intercontinental ballistic missiles to go on alert. The Atlas missiles remained part of the U.S. nuclear force until their retirement in May 1964.

With the advent of Earth satellites, Vandenberg became a key part of the U.S. military and civil space infrastructure. Launches from Cape Canaveral in Florida are restricted to a corridor directed eastward over the Atlantic Ocean. While this is fine for satellites bound for equatorial orbits, such as the geostationary orbits used by many communication satellites, a launch into polar orbit, preferred by military reconnaissance satellites and Earth resources satellites because it allows them to overfly and image locations anywhere on Earth, would result in the rockets used to launch them dropping spent stages on land, which would vex taxpayers to the north and hotheaded Latin neighbours to the south.

Vandenberg Air Force Base, however, situated on a point extending from the California coast, had nothing to the south but open ocean all the way to Antarctica. Launching southward, satellites could be placed into polar or Sun synchronous orbits without disturbing anybody but the fishes. Vandenberg thus became the prime launch site for U.S. reconnaissance satellites which, in the early days when satellites were short-lived and returned film to the Earth, required a large number of launches. The Corona spy satellites alone accounted for 144 launches from Vandenberg between 1959 and 1972.

With plans in the 1970s to replace all U.S. expendable launchers with the Space Shuttle, facilities were built at Vandenberg (Space Launch Complex 6) to process and launch the Shuttle, using a very different architecture than was employed in Florida. The Shuttle stack would be assembled on the launch pad, protected by a movable building that would retract prior to launch. The launch control centre was located just 365 metres from the launch pad (as opposed to 4.8 km away at the Kennedy Space Center in Florida), so the plan in case of a catastrophic launch accident on the pad essentially seemed to be “hope that never happens”. In any case, after more than US$4 billion had been spent on the facilities, plans for Shuttle launches from Vandenberg were abandoned following the Challenger disaster in 1986, and the facility was mothballed until being adapted, years later, to launch other rockets.

This book, part of the “Images of America” series, is a collection of photographs (all black and white) covering all aspects of the history of the site from before World War II to the present day. Introductory text for each chapter and detailed captions describe the items shown and their significance to the base's history. The production quality is excellent, and I noted only one factual error in the text (the names of the crew of Gemini 5). For a book of just 128 pages, the paperback is very expensive (US$22 at this writing). The Kindle edition is still pricey (US$13 list price), but may be read for free by Kindle Unlimited subscribers.

December 2019 Permalink

Pellegrino, Charles. Ghosts of the Titanic. New York: Avon, 2000. ISBN 0-380-72472-3.

August 2001 Permalink

Phares, Walid. Future Jihad. New York: Palgrave Macmillan, [2005] 2006. ISBN 1-4039-7511-6.
It seems to me that at the root of the divisive and rancorous dispute over the war on terrorism (or whatever you choose to call it) is an individual's belief in one of the following two mutually exclusive propositions.

  1. There is a broad-based, highly aggressive, well-funded, and effective jihadist movement which poses a dire threat not just to secular and pluralist societies in the Muslim world, but to civil societies in Europe, the Americas, and Asia.
  2. There isn't.

In this book, Walid Phares makes the case for the first of these two statements. Born in Lebanon, after immigrating to the United States in 1990, he taught Middle East studies at several universities, and is currently a professor at Florida Atlantic University. He is the author of a number of books on Middle East history, and appears as a commentator on media outlets ranging from Fox News to Al Jazeera.

Ever since the early 1990s, the author has been warning of what he argued was a constantly growing jihadist threat, which was being overlooked and minimised by the academic experts to whom policy makers turn for advice, largely due to Saudi-funded and -indoctrinated Middle East Studies programmes at major universities. Meanwhile, Saudi funding also financed the radicalisation of Muslim communities around the world, particularly the large immigrant populations in many Western European countries. In parallel to this top-down approach by the Wahabi Saudis, the Muslim Brotherhood and its affiliated groups, including Hamas and the Front Islamique du Salut in Algeria, pursued a bottom-up strategy of radicalising the population and building a political movement seeking to take power and impose an Islamic state. Since the Iranian revolution of 1979, a third stream of jihadism has arisen, principally within Shiite communities, promoted and funded by Iran, including groups such as Hezbollah.

The present-day situation is placed in historical context dating back to the original conquests of Mohammed and the spread of Islam from the Arabian peninsula across three continents, and subsequent disasters at the hands of the Mongols and Crusaders, the reconquista of the Iberian peninsula, and the ultimate collapse of the Ottoman Empire and Caliphate following World War I. This allows the reader to grasp the world-view of the modern jihadist which, while seemingly bizarre from a Western standpoint, is entirely self-consistent from the premises whence the believers proceed.

Phares stresses that modern jihadism (which he dates from the abolition of the Ottoman Caliphate in 1923, an event which permitted free-lance, non-state actors to launch jihad unconstrained by the central authority of a caliph) is a political ideology with imperial ambitions: the establishment of a new caliphate and its expansion around the globe. He argues that this is only incidentally a religious conflict: although the jihadists are Islamic, their goals and methods are much the same as those of believers in atheistic ideologies such as communism. And just as one could be an ardent Marxist without supporting Soviet imperialism, one can be a devout Muslim and oppose the jihadists and intolerant fundamentalists. Conversely, this may explain the curious convergence of the extreme collectivist left and puritanical jihadists: red diaper baby and notorious terrorist Carlos “the Jackal” now styles himself an Islamic revolutionary, and the corpulent caudillo of Caracas has been buddying up with the squinty dwarf of Tehran.

The author believes that since the terrorist strikes against the United States in September 2001, the West has begun to wake up to the threat and begin to act against it, but that far more, both in realising the scope of the problem and acting to avert it, remains to be done. He argues, and documents from post-2001 events, that the perpetrators of future jihadist strikes against the West are likely to be home-grown second generation jihadists radicalised and recruited among Muslim communities within their own countries, aided by Saudi financed networks. He worries that the emergence of a nuclear armed jihadist state (most likely due to an Islamist takeover of Pakistan or Iran developing its own bomb) would create a base of operations for jihad against the West which could deter reprisal against it.

Chapter thirteen presents a chilling scenario of what might have happened had the West not had the wake-up call of the 2001 attacks and begun to mobilise against the threat. The scary thing is that events could still go this way should the threat be real and the West, through fatigue, ignorance, or fear, cease to counter it. While defensive measures at home and direct action against terrorist groups are required, the author believes that only the promotion of democratic and pluralistic civil societies in the Muslim world can ultimately put an end to the jihadist threat. Toward this end, a good first step would be, he argues, for the societies at risk to recognise that they are not at war with “terrorism” or with Islam, but rather with an expansionist ideology with a political agenda which attacks targets of opportunity and adapts quickly to countermeasures.

In all, I found the arguments somewhat over the top, but then, unlike the author, I haven't spent most of my career studying the jihadists, nor read their publications and Web sites in the original Arabic as he has. His warnings of cultural penetration of the West, misdirection by artful propaganda, and infiltration of policy making, security, and military institutions by jihadist covert agents read something like J. Edgar Hoover's Masters of Deceit, but then history, in particular the Venona decrypts, has borne out many of Hoover's claims which were scoffed at when the book was published in 1958. But still, one wonders how a “movement” composed of disparate threads many of whom hate one another (for example, while the Saudis fund propaganda promoting the jihadists, most of the latter seek to eventually depose the Saudi royal family and replace it with a Taliban-like regime; Sunni and Shiite extremists view each other as heretics) can effectively co-ordinate complex operations against their enemies.

A thirty-page afterword in this paperback edition provides updates on events through mid-2006. There are some curious things: while transliteration of Arabic and Farsi into English involves a degree of discretion, the author seems very fond of the letter “u”. He writes the name of the leader of the Iranian revolution as “Khumeini”, for example, which I've never seen elsewhere. The book is not well-edited: occasionally he uses “Khomeini”, spells Sayid Qutb's last name as “Kutb” on p. 64, and on p. 287 refers to “Hezbollah” and “Hizbollah” in the same sentence.

The author maintains a Web site devoted to the book, as well as a personal Web site which links to all of his work.

September 2007 Permalink

Pipes, Richard. Communism: A History. New York: Doubleday, [2001] 2003. ISBN 978-0-8129-6864-4.
This slim volume (just 175 pages) provides, for its size, the best portrait I have encountered of the origins of communist theory, the history of how various societies attempted to implement it in the twentieth century, and the tragic consequences of those grand scale social experiments and their aftermath. The author, a retired professor of history at Harvard University, is one of the most eminent Western scholars of Russian and Soviet history. The book examines communism as an ideal, a program, and its embodiment in political regimes in various countries. Based on the ideals of human equality and subordination of the individual to the collective which date at least back to Plato, communism, first set out as a program of action by Marx and Engels, proved itself almost infinitely malleable in the hands of subsequent theorists and political leaders, rebounding from each self-evident failure (any one of which should, in a rational world, have sufficed to falsify a theory which proclaims itself “scientific”), morphing into yet another infallible and inevitable theory of history. In the words of the immortal Bullwinkle J. Moose, “This time for sure!”

Regardless of the nature of the society in which the communist program is undertaken and the particular variant of the theory adopted, the consequences have proved remarkably consistent: emergence of an elite which rules through violence, repression, and fear; famine and economic stagnation; and collapse of the individual enterprise and innovation which are the ultimate engine of progress of all kinds. There is no better example of this than the comparison of North and South Korea on p. 152. Here are two countries which started out identically devastated by Japanese occupation in World War II and then by the Korean War, with identical ethnic makeup, which diverged in the subsequent decades to such an extent that famine killed around two million people in North Korea in the 1990s, at which time the GDP per capita in the North was around US$900 versus US$13,700 in the South. Male life expectancy at birth in the North was 48.9 years compared to 70.4 years in the South, with an infant mortality rate in the North more than ten times that of the South. This appalling human toll was modest compared to the famines and purges of the Soviet Union and Communist China, or the apocalyptic fate of Cambodia under Pol Pot. The Black Book of Communism puts the total death toll due to communism in the twentieth century as between 85 and 100 million, which is half again greater than that of both world wars combined. To those who say “One cannot make an omelette without breaking eggs”, the author answers, “Apart from the fact that human beings are not eggs, the trouble is that no omelette has emerged from the slaughter.” (p. 158)

So effective were communist states in their “big lie” propaganda, and so receptive were many Western intellectuals to its idealistic message, that many in the West were unaware of this human tragedy as it unfolded over the better part of a century. This book provides an excellent starting point for those unaware of the reality experienced by those living in the lands of communism and those for whom that epoch is distant, forgotten history, but who remain, like every generation, susceptible to idealistic messages and unaware of the suffering of those who attempted to put them into practice in the past.

Communism proved so compelling to intellectuals (and, repackaged, remains so) because it promised hope for a new way of living together and change to a rational world where the best and the brightest—intellectuals and experts—would build a better society, shorn of all the conflict and messiness which individual liberty unavoidably entails. The author describes this book as “an introduction to Communism and, at the same time, its obituary.” Maybe—let's hope so. But this book can serve an even more important purpose: as a cautionary tale of how the best of intentions can lead directly to the worst of outcomes. When, for example, one observes in the present-day politics of the United States the creation, deliberate exacerbation, and exploitation of crises to implement a political agenda; use of engineered financial collapse to advance political control over the economy and pauperise and render dependent upon the state classes of people who would otherwise oppose it; the creation, personalisation, and demonisation of enemies replacing substantive debate over policy; indoctrination of youth in collectivist dogma; and a number of other strategies right out of Lenin's playbook, one wonders if the influence of that evil mummy has truly been eradicated, and wishes that the message in this book were more widely known there and around the world.

March 2009 Permalink

Podhoretz, Norman. World War IV. New York: Doubleday, 2007. ISBN 978-0-385-52221-2.
Whether you agree with it or not, here is one of the clearest expositions of the “neoconservative” (a term the author, who is one of the type specimens, proudly uses to identify himself) case for the present conflict between Western civilisation and the forces of what he identifies as “Islamofascism”, an aggressive, expansionist, and totalitarian ideology which is entirely distinct from Islam, the religion. The author considers the Cold War to have been World War III, and hence the present and likely as protracted a conflict, as World War IV. He deems it to be as existential a struggle for civilisation against the forces of tyranny as any of the previous three wars.

If you're sceptical of such claims (as am I, being very much an economic determinist who finds it difficult to believe a region of the world whose exports, apart from natural resources discovered and extracted largely by foreigners, are less than those of Finland, can truly threaten the fountainhead of the technologies and products without which its residents would remain in the seventh century utopia they seem to idolise), read Chapter Two for the contrary view: it is argued that since 1970, a series of increasingly provocative attacks were made against the West, not in response to Western actions but due to unreconcilably different world-views. Each indication of weakness by the West only emboldened the aggressors and escalated the scale of subsequent attacks.

The author argues the West is engaged in a multi-decade conflict with its own survival at stake, in which the wars in Afghanistan and Iraq are simply campaigns. This war, like the Cold War, will be fought on many levels: not just military, but also proxy conflicts, propaganda, covert action, economic warfare, and promotion of the Western model as the solution to the problems of states imperiled by Islamofascism. There is some discussion in the epilogue of the risk posed to Europe by the radicalisation of its own burgeoning Muslim population while its indigenes are in a demographic death spiral, but for the most part the focus is on democratising the Middle East, not the creeping threat to democracy in the West by an unassimilated militant immigrant population which a feckless, cringing political class is unwilling to confront.

This book is well written and argued, but colour me unpersuaded. Instead of spending decades spilling blood and squandering fortune in a region of the world which has been trouble for every empire foolish enough to try to subdue it over the last twenty centuries, why not develop domestic energy sources to render the slimy black stuff in the ground there impotent and obsolete, secure the borders against immigration from there (except those candidates who demonstrate themselves willing to assimilate to the culture of the West), and build a wall around the place and ignore what happens inside? Works for me.

July 2008 Permalink

Ponting, Clive. Gunpowder. London: Pimlico, 2005. ISBN 1-84413-543-8.
When I was a kid, we learnt in history class that gunpowder had been discovered in the thirteenth century by the English Franciscan monk Roger Bacon, who is considered one of the founders of Western science. The Chinese were also said to have known of gunpowder, but used it only for fireworks, as opposed to the applications in the fields of murder and mayhem the more clever Europeans quickly devised. In The Happy Turning (July 2003), H. G. Wells remarked that “truth has a way of heaving up through the cracks of history”, and so it has been with the origin of gunpowder, as recounted here.

It is one of those splendid ironies that gunpowder, which, along with its more recent successors, has contributed to the slaughter of more human beings than any other invention with the exception of government, was discovered in the 9th century A.D. by Taoist alchemists in China who were searching for an elixir of immortality (and, in fact, gunpowder continued to be used as a medicine in China for centuries thereafter). But almost as soon as the explosive potential of gunpowder was discovered, the Chinese began to apply it to weapons and, over the next couple of centuries had invented essentially every kind of firearm and explosive weapon which exists today.

Gunpowder is not a high explosive; it does not detonate in a supersonic shock wave as do substances such as nitroglycerine and TNT, but rather deflagrates, or burns rapidly, as the heat of combustion causes the release of the oxygen in the nitrate compound in the mix. If confined, of course, the rapid release of gases and heat can cause a container to explode, but the rapid combustion of gunpowder also makes it suitable as a propellant in guns and rockets. The early Chinese formulations used a relatively small amount of saltpetre (potassium nitrate), and were used in incendiary weapons such as fire arrows, fire lances (a kind of flamethrower), and incendiary bombs launched by catapults and trebuchets. Eventually the Chinese developed high-nitrate mixes which could be used in explosive bombs, rockets, guns, and cannon (which were perfected in China long before the West, where the technology of casting iron did not appear until two thousand years after it was known in China).

From China, gunpowder technology spread to the Islamic world, where bombardment by a giant cannon contributed to the fall of Constantinople to the Ottoman Empire. Knowledge of gunpowder almost certainly reached Europe via contact with the Islamic invaders of Spain. The first known European document giving its formula, whose disarmingly candid Latin title Liber Ignium ad Comburendos Hostes translates to “Book of Fires for the Burning of Enemies”, dates from about 1300 and contains a number of untranslated Arabic words.

Gunpowder weapons soon became a fixture of European warfare, but crude gun fabrication and weak powder formulations initially limited their use mostly to huge siege cannons which launched large stone projectiles against fortifications at low velocity. But as weapon designs and the strength of powder improved, the balance in siege warfare shifted from the defender to the attacker, and the consolidation of power in Europe began to accelerate.

The author argues persuasively that gunpowder played an essential part in the emergence of the modern European state, because the infrastructure needed to produce saltpetre, manufacture gunpowder weapons in quantity, equip, train, and pay ever-larger standing armies required a centralised administration with intrusive taxation and regulation which did not exist before. Once these institutions were in place, they conferred such a strategic advantage that the ruler was able to consolidate and expand the area of control at the expense of previously autonomous regions, until coming up against another such “gunpowder state”.

Certainly it was gunpowder weapons which enabled Europeans to conquer colonies around the globe and eventually impose their will on China, where centuries of political stability had caused weapons technology to stagnate by comparison with that of conflict-ridden Europe.

It was not until the nineteenth century that other explosives and propellants discovered by European chemists brought the millennium-long era of gunpowder to a close. Gunpowder shaped human history as have few other inventions. This excellent book recounts that story from gunpowder's accidental invention as an elixir to its replacement by even more destructive substances, and provides a perspective on a thousand years of world history in terms of the weapons with which so much of it was created.

January 2007 Permalink

Posner, Gerald L. Secrets of the Kingdom. New York: Random House, 2005. ISBN 1-4000-6291-8.
Most of this short book (196 pages of main text) is a straightforward recounting of the history of Saudi Arabia from its founding as a unified kingdom in 1932 under Ibn Saud, and of the petroleum-dominated relationship between the United States and the kingdom up to the present, based almost entirely upon secondary sources. Chapter 10, buried amidst the narrative and barely connected to the rest, and based on the author's conversations with an unnamed Mossad (Israeli intelligence) officer and an unidentified person claiming to be an eyewitness, describes a secret scheme called “Petroleum Scorched Earth” (“Petro SE”) which, it is claimed, was discovered by NSA intercepts of Saudi communications which were shared with the Mossad and then leaked to the author.

The claim is that the Saudis have rigged all of their petroleum infrastructure so that it can be destroyed from a central point should an invader be about to seize it, or the House of Saud fall due to an internal revolution. Oil and gas production facilities tend to be spread out over large areas and have been proven quite resilient—the damage done to Kuwait's infrastructure during the first Gulf War was extensive, yet reparable in a relatively short time, and the actual petroleum reserves are buried deep in the Earth and are essentially indestructible—if a well is destroyed, you simply sink another well; it costs money, but you make it back as soon as the oil starts flowing again. Refineries and storage facilities are more easily destroyed, but the real long-term wealth (and what an invader or revolutionary movement would covet most) lies deep in the ground. Besides, most of Saudi Arabia's export income comes from unrefined products (in the first ten months of 2004, 96% of Saudi Arabia's oil exports to the U.S. were crude), so even if all the refineries were destroyed (which is difficult—refineries are big and spread out over a large area) and took a long time to rebuild, the core of the export economy would be up and running as soon as the wells were pumping and pipelines and oil terminals were repaired.

So, it is claimed, the Saudis have mined their key facilities with radiation dispersal devices (RDDs), “dirty bombs” composed of Semtex plastic explosive mixed with radioactive isotopes of cesium, rubidium (huh?), and/or strontium which, when exploded, will disperse the radioactive material over a broad area, which (p. 127) “could render large swaths of their own country uninhabitable for years”. What's that? Do I hear some giggling from the back of the room from you guys with the nuclear bomb effects computers? Well, gosh, where shall we begin?

Let us commence by plinking an easy target, the rubidium. Metallic rubidium burns quite nicely in air, which makes it easy to disperse, but radioactively it's a dud. Natural rubidium contains about 28% of the radioactive isotope rubidium-87, but with a half-life of about 50 billion years, it's only slightly more radioactive than dirt when dispersed over any substantial area. The longest-lived artificially created isotope is rubidium-83 with a half-life of only 86 days, which means that once dispersed, you'd only have to wait a few months for it to decay away. In any case, something which decays so quickly is useless for mining facilities, since you'd need to constantly produce fresh batches of the isotope (in an IAEA inspected reactor?) and install it in the bombs. So, at least the rubidium part of this story is nonsense; how about the rest?
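
For readers who want to check the rubidium arithmetic, here is a minimal back-of-envelope sketch (my own, not from the book): specific activity scales inversely with half-life and atomic mass, so a gram of pure rubidium-87 undergoes roughly a billion times fewer decays per second than a gram of cesium-137, and natural rubidium is only about 28% rubidium-87 to begin with.

    # Specific activity A = ln(2) * N_A / (half-life * molar mass),
    # in decays per second (becquerels) per gram of the pure isotope.
    import math

    N_A = 6.022e23      # Avogadro's number, atoms per mole
    YEAR = 3.156e7      # seconds per year

    def specific_activity(half_life_years, molar_mass):
        return math.log(2) * N_A / (half_life_years * YEAR * molar_mass)

    rb87 = specific_activity(4.92e10, 87)    # rubidium-87, half-life ~49 billion years
    cs137 = specific_activity(30.1, 137)     # cesium-137, half-life ~30 years

    print(f"Rb-87:  {rb87:.1e} Bq/g")        # about 3e3 Bq/g
    print(f"Cs-137: {cs137:.1e} Bq/g")       # about 3e12 Bq/g
    print(f"Ratio:  {cs137/rb87:.0e}")       # roughly a billion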

Cesium-137 and strontium-90 both have half-lives of about 30 years and are readily taken up and stored in the human body, so they are suitable candidates for a dirty bomb. But while a dirty bomb is a credible threat for contaminating high-value, densely populated city centres in countries whose populations are wusses about radiation, a sprawling oil field or petrochemical complex is another thing entirely. The Federation of American Scientists report, “Dirty Bombs: Response to a Threat”, estimates that in the case of a cobalt-salted dirty bomb, residents who lived continuously in the contaminated area for forty years after the detonation would have a one in ten chance of death from cancer induced by the radiation. With the model cesium bomb, five city blocks would be contaminated at a level which would create a one in a thousand chance of cancer for residents.

But this is nothing! To get a little perspective on this, according to the U.S. Centers for Disease Control's Leading Causes of Death Reports, people in the United States never exposed to a dirty bomb have a 22.8% probability of dying of cancer. While the one in ten chance created by the cobalt dirty bomb is a substantial increase in this existing risk, that's the risk for people who live for forty years in the contaminated area. Working in a contaminated oil field is quite different. First of all, it's a lot easier to decontaminate steel infrastructure and open desert than a city, and oil field workers can be issued protective gear to reduce their exposure to the remaining radiation. In any case, they'd only be in the contaminated area for the work day, then return to a clean area at the end of the shift. You could restrict hiring to people 45 years and older, pay a hazard premium, and limit their contract to either a time period (say two years) or based on integrated radiation dose. Since radiation-induced cancers usually take a long time to develop, older workers are likely to die of some other cause before the effects of radiation get to them. (This sounds callous, but it's been worked out in detail in studies of post nuclear war decontamination. The rules change when you're digging out of a hole.)

Next, there is this dumb-as-a-bag-of-dirt statement on p. 127:

Saudi engineers calculated that the soil particulates beneath the surface of most of their three hundred known reserves are so fine that radioactive releases there would permit the contamination to spread widely through the soil subsurface, carrying the radioactivity far under the ground and into the unpumped oil. This gave Petro SE the added benefit of ensuring that even if a new power in the Kingdom could rebuild the surface infrastructure, the oil reserves themselves might be unusable for years.
Hey, you guys in the back—enough with the belly laughs! Did any of the editors at Random House think to work out, even if you stipulated that radioactive contamination could somehow migrate from the surface down through hundreds to thousands of metres of rock (how, due to the abundant rain?), just how much radioactive contaminant you'd have to mix with the estimated two hundred and sixty billion barrels of crude oil in the Saudi reserves to render it dangerously radioactive? In any case, even if you could magically transport the radioactive material into the oil bearing strata and supernaturally mix it with the oil, it would be easy to separate during the refining process.
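
To put a rough number on that dilution argument, here is a hedged sketch of my own (none of it from the book, and the contamination threshold is an arbitrary illustrative figure, not a regulatory limit): even granting every assumption in the scheme's favour, bringing the entire reserve up to even a modest activity concentration would take on the order of a tonne of pure cesium-137, vastly more than any plausible source of the isotope.

    # Back-of-envelope: mass of Cs-137 needed to contaminate the whole reserve.
    BARRELS = 260e9                    # estimated reserves, barrels (from the review)
    LITRES_PER_BARREL = 159.0
    CRUDE_DENSITY = 0.85               # kg per litre, typical crude (assumed)
    CS137_ACTIVITY = 3.2e12            # Bq per gram of pure Cs-137
    THRESHOLD = 100.0                  # Bq per gram of oil -- arbitrary stand-in
                                       # for "dangerously radioactive"

    oil_grams = BARRELS * LITRES_PER_BARREL * CRUDE_DENSITY * 1000.0
    cs137_tonnes = oil_grams * THRESHOLD / CS137_ACTIVITY / 1e6

    print(f"Oil mass:        {oil_grams:.1e} g")          # about 3.5e16 g
    print(f"Cs-137 required: {cs137_tonnes:.1f} tonnes")  # on the order of a tonne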

Finally, there's the question of why, if the Saudis have gone to all the trouble to rig their oil facilities to self-destruct, it has remained a secret waiting to be revealed in this book. From a practical standpoint, almost all of the workers in the Saudi oil fields are foreigners. Certainly some of them would be aware of such a massive effort and, upon retirement, say something about it which the news media would pick up. But even if the secret could be kept, we're faced with the same question of deterrence which arose in the conclusion of Dr. Strangelove with the Soviet doomsday machine—it's idiotic to build a doomsday machine and keep it a secret! Its only purpose is to deter a potential attack, and if attackers don't know there's a doomsday machine, they won't be deterred. Precisely the same logic applies to the putative Saudi self-destruct button.

Now none of this argumentation proves in any way that the Saudis haven't rigged their oil fields to blow up and scatter radioactive material on the debris, just that it would be a phenomenally stupid thing for them to try to do. But then, there are plenty of precedents for the Saudis doing dumb things—they have squandered the greatest fortune in the history of the human race and, while sitting on a quarter of all the world's oil, seen their per capita GDP erode to fall between that of Poland and Latvia. If, indeed, they have done something so stupid as this scorched earth scheme, let us hope they manage the succession to the throne, looming in the near future, in a far more intelligent fashion.

July 2005 Permalink

Post, David G. In Search of Jefferson's Moose. New York: Oxford University Press, 2009. ISBN 978-0-19-534289-5.
In 1787, while serving as Minister to France, Thomas Jefferson took time out from his diplomatic duties to arrange to have shipped from New Hampshire across the Atlantic Ocean the complete skeleton, skin, and antlers of a bull moose, which was displayed in his residence in Paris. Jefferson was involved in a dispute with the Comte de Buffon, who argued that the fauna of the New World were degenerate compared to those of Europe and Asia. Jefferson concluded that no verbal argument or scientific evidence would be as convincing of the “structure and majesty of American quadrupeds” as seeing a moose in the flesh (or at least the bone), so he ordered one up for display.

Jefferson was a passionate believer in the exceptionality of the New World and the prospects for building a self-governing republic in its expansive territory. If it took hauling a moose all the way to Paris to convince Europeans disdainful of the promise of his nascent nation, then so be it—bring on the moose! Among Jefferson's voluminous writings, perhaps none expressed these beliefs as strongly as his magisterial Notes on the State of Virginia. The present book, subtitled “Notes on the State of Cyberspace” takes Jefferson's work as a model and explores this new virtual place which has been built based upon a technology which simply sends packets of data from place to place around the world. The parallels between the largely unexplored North American continent of Jefferson's time and today's Internet are strong and striking, as the author illustrates with extensive quotations from Jefferson interleaved in the text (set in italics to distinguish them from the author's own words) which are as applicable to the Internet today as the land west of the Alleghenies in the late 18th century.

Jefferson believed in building systems which could scale to arbitrary size without either losing their essential nature or becoming vulnerable to centralisation and the attendant loss of liberty and autonomy. And he believed that free individuals, living within such a system and with access to as much information as possible and the freedom to communicate without restrictions would self-organise to perpetuate, defend, and extend such a polity. While Europeans, notably Montesquieu, believed that self-governance was impossible in a society any larger than a city-state, and organised their national and imperial governments accordingly, Jefferson's 1784 plan for the government of new Western territory set forth an explicitly power law fractal architecture which, he believed, could scale arbitrarily large without depriving citizens of local control of matters which directly concerned them. This architecture is stunningly similar to that of the global Internet, and the bottom-up governance of the Internet to date (which Post explores in some detail) is about as Jeffersonian as one can imagine.

As the Internet has become a central part of global commerce and the flow of information in all forms, the eternal conflict between the decentralisers and champions of individual liberty (with confidence that free people will sort things out for themselves)—the Jeffersonians—and those who believe that only strong central authority and the vigorous enforcement of rules can prevent chaos—the Hamiltonians—has emerged once again in the contemporary debate about “Internet governance”.

This is a work of analysis, not advocacy. The author, a law professor and regular contributor to The Volokh Conspiracy Web log, observes that, despite being initially funded by the U.S. Department of Defense, the development of the Internet to date has been one of the most Jeffersonian processes in history, and has scaled from a handful of computers in 1969 to a global network with billions of users and a multitude of applications never imagined by its creators, and all through consensual decision making and contractual governance with nary a sovereign gun-wielder in sight. So perhaps before we look to “fix” the unquestioned problems and challenges of the Internet by turning the Hamiltonians loose upon it, we should listen well to the wisdom of Jefferson, who has much to say which is directly applicable to exploring, settling, and governing this new territory which technology has opened up. This book is a superb way to imbibe the wisdom of Jefferson, while learning the basics of the Internet architecture and how it, in many ways, parallels that of aspects of Jefferson's time. Jefferson even spoke to intellectual property issues which read like today's news, railing against a “rascal” using an abusive patent of a long-existing device to extort money from mill owners (p. 197), and creating and distributing “freeware” including a design for a uniquely efficient plough blade based upon Newton's Principia which he placed in the public domain, having “never thought of monopolizing by patent any useful idea which happened to offer itself to me” (p. 196).

So astonishing was Jefferson's intellect that as you read this book you'll discover that he has a great deal to say about this new frontier we're opening up today. Good grief—did you know that the Oxford English Dictionary even credits Jefferson with being the first person to use the words “authentication” and “indecipherable” (p. 124)? The author's lucid explanations, deft turns of phrase, and agile leaps between the eighteenth and twenty-first centuries are worthy of the forbidding standard set by the man so extensively quoted here. Law professors do love their footnotes, and this is almost two books in one: the focused main text and the more rambling but fascinating footnotes, some of which span several pages. There is also an extensive list of references and sources for all of the Jefferson quotations in the end notes.

March 2009 Permalink

Powell, Jim. FDR's Folly. New York: Crown Forum, 2003. ISBN 0-7615-0165-7.

May 2004 Permalink

Pyle, Ernie. Brave Men. Lincoln, NE: Bison Books, [1944] 2001. ISBN 0-8032-8768-2.
Ernie Pyle is perhaps the most celebrated war correspondent of all time, and this volume amply illustrates why. A collection of his columns for the Scripps-Howard newspapers edited into book form, it covers World War II from the invasion of Sicily in 1943 through the Normandy landings and the liberation of Paris in 1944. This is the first volume of three collections of his wartime reportage: the second and third, Here is Your War and Ernie Pyle in England, are out of print, but used copies are readily available at a reasonable price.

While most readers today know Pyle only from his battle dispatches, he was, in fact, a renowned columnist even before the United States entered the war—in the 1930s he roamed the nation, filing columns about Americana and Americans which became as beloved as the very similar television reportage decades later by Charles Kuralt, who himself won an Ernie Pyle Award for his reporting.

Pyle's first love and enduring sympathy was with the infantry, and few writers have expressed so eloquently the experience of being “in the line” beyond what most would consider the human limits of exhaustion, exertion, and fear. But in this book he also shows the breadth of the Allied effort, profiling Navy troop transport and landing craft, field hospitals, engineering troops, air corps dive and light bombers, artillery, ordnance depots, quartermaster corps, and anti-aircraft guns (describing the “scientific magic” of radar guidance without disclosing how it worked).

Apart from the prose, which is simultaneously unaffected and elegant, the thing that strikes a reader today is that in this entire book, written by a superstar columnist for the mainstream media of his day, there is not a single suggestion that the war effort, whatever the horrible costs he so candidly documents, is misguided, or that there is any alternative or plausible outcome other than victory. How much things have changed…. If you're looking for this kind of with the troops on the ground reporting today, you won't find it in the legacy dead tree or narrowband one-to-many media, but rather in reader-supported front-line journalists such as Michael Yon—if you like what he's writing, hit the tip jar and keep him at the front; think of it like buying the paper with Ernie Pyle's column.

Above, I've linked to a contemporary reprint edition of this work. Actually, I read a hardbound sixth printing of the 1944 first edition which I found in a used bookstore in Marquette, Michigan (USA) for less than half the price of the paperback reprint; visit your local bookshop—there are wonderful things there to be discovered.

July 2007 Permalink

Radosh, Ronald. Commies. San Francisco: Encounter Books, 2001. ISBN 1-893554-05-8.

July 2001 Permalink

Radosh, Ronald and Joyce Milton. The Rosenberg File. 2nd. ed. New Haven, CT: Yale University Press, 1997. ISBN 0-300-07205-8.

August 2002 Permalink

Radosh, Ronald and Allis Radosh. Red Star over Hollywood. San Francisco: Encounter Books, 2005. ISBN 1-893554-96-1.
The Hollywood blacklist has become one of the most mythic elements of the mid-20th century Red scare. Like most myths, especially those involving tinseltown, it has been re-scripted into a struggle of good (falsely-accused artists defending free speech) versus evil (paranoid witch hunters bent on censorship) at the expense of a large part of the detail and complexity of the actual events. In this book, drawing upon contemporary sources, recently released documents from the FBI and House Committee on Un-American Activities (HUAC), and interviews with surviving participants in the events, the authors patiently assemble the story of what really happened, which is substantially different than the stories retailed by partisans of the respective sides. The evolution of those who joined the Communist Party out of idealism, were repelled by its totalitarian attempts to control their creative work and/or the cynicism of its support for the 1939–1941 Nazi/Soviet pact, yet who risked their careers to save those of others by refusing to name other Party members, is evocatively sketched, along with the agenda of HUAC, which FBI documents now reveal actually had lists of party members before the hearings began, and were thus grandstanding to gain publicity and intimidate the studios into firing those who would not deny Communist affiliations. History isn't as tidy as myth: the accusers were perfectly correct in claiming that a substantial number of prominent Hollywood figures were members of the Communist Party, and the accused were perfectly correct in their claim that apart from a few egregious exceptions, Soviet and pro-communist propaganda was not inserted into Hollywood films. A mystery about one of those exceptions, the 1943 Warner Brothers film Mission to Moscow, which defended the Moscow show trials, is cleared up here. I've always wondered why, since many of the Red-baiting films of the 1950s are cult classics, this exemplar of the ideological inverse (released, after all, when the U.S. and Soviet Union were allies in World War II) has never made it to video. Well, apparently those who currently own the rights are sufficiently embarrassed by it that apart from one of the rare prints being run on television, the only place you can see it is at the film library of the Museum of Modern Art in New York or in the archive of the University of Wisconsin. Ronald Radosh is author of Commies (July 2001) and co-author of The Rosenberg File (August 2002).

October 2005 Permalink

Rahe, Paul A. The Spartan Regime. New Haven, CT: Yale University Press, 2016. ISBN 978-0-300-21901-2.
This thin volume (just 232 pages in the hardcover edition, only around 125 of which are the main text and appendices—the rest being extensive source citations, notes, and indices of subjects and people and place names) is intended as the introduction to an envisioned three-volume work on Sparta covering its history from the archaic period through the second battle of Mantinea in 362 b.c., where the defeat of a Sparta-led alliance at the hands of the Thebans paved the way for the Macedonian conquest of Greece.

In this work, the author adopts the approach to political science used in antiquity by writers such as Thucydides, Xenophon, and Aristotle: that the principal factor in determining the character of a political community is its constitution, or form of government, the rules which define membership in the community and which its members were expected to obey, their character being largely determined by the system of education and moral formation which shape the citizens of the community.

Discerning these characteristics in any ancient society is difficult, but especially so in the case of Sparta, which was a society of warriors, not philosophers and historians. Almost all of the contemporary information we have about Sparta comes from outsiders who either visited the city at various times in its history or based their work upon the accounts of others who had. Further, the Spartans were famously secretive about the details of their society, so when ancient accounts differ, it is difficult to determine which, if any, is correct. One gets the sense that all of the direct documentary information we have about Sparta would fit on one floppy disc: everything else is interpretations based upon that meagre foundation. In recent centuries, scholars studying Sparta have seen it as everything from the prototype of constitutional liberty to a precursor of modern day militaristic totalitarianism.

Another challenge facing the modern reader and, one suspects, many ancients, in understanding Sparta was how profoundly weird it was. On several occasions whilst reading the book, I was struck that rarely in science fiction does one encounter a description of a society so thoroughly alien to those with which we are accustomed from our own experience or a study of history. First of all, Sparta was tiny: there were never as many as ten thousand full-fledged citizens. These citizens were descended from Dorians who had invaded the Peloponnese in the archaic period and subjugated the original inhabitants, who became helots: essentially serfs who worked the estates of the Spartan aristocracy in return for half of the crops they produced (about the same fraction of the fruit of their labour the helots of our modern enlightened self-governing societies are allowed to retain for their own use). Every full citizen, or Spartiate, was a warrior, trained from boyhood to that end. Spartiates not only did not engage in trade or work as craftsmen: they were forbidden to do so—such work was performed by non-citizens. With the helots outnumbering Spartiates by a factor of from four to seven (and even more as the Spartan population shrunk toward the end), the fear of an uprising was ever-present, and required maintenance of martial prowess among the Spartiates and subjugation of the helots.

How were these warriors formed? Boys were taken from their families at the age of seven and placed in a barracks with others of their age. Henceforth, they would return to their families only as visitors. They were subjected to a regime of physical and mental training, including exercise, weapons training, athletics, mock warfare, plus music and dancing. They learned the poetry, legends, and history of the city. All learned to read and write. After intense scrutiny and regular tests, the young man would face a rite of passage, krupteίa, in which, for a full year, armed only with a dagger, he had to survive on his own in the wild, stealing what he needed, and instilling fear among the helots, who he was authorised to kill if found in violation of curfew. Only after surviving this ordeal would the young Spartan be admitted as a member of a sussιtίon, a combination of a men's club, a military mess, and the basic unit in the Spartan army. A Spartan would remain a member of this same group all his life and, even after marriage and fatherhood, would live and dine with them every day until the age of forty-five.

From the age of twelve, a boy in training would usually have a patron, or surrogate father, who was expected to initiate him into the world of the warrior and instruct him in the duties of citizenship. It was expected that there would be a homosexual relationship between the two, and that this would further cement the bond of loyalty to his brothers in arms. Upon becoming a full citizen and warrior, the young man was expected to take on a boy and continue the tradition. As with many modern utopian social engineers, the family was seen as an obstacle to the citizen's identification with the community (or, in modern terminology, the state), and the entire process of raising citizens seems to have been designed to transfer this inherent biological solidarity with kin to peers in the army and the community as a whole.

The political structure which sustained and, in turn, was sustained by these cultural institutions was similarly alien and intricate—so much so that I found myself wishing that Professor Rahe had included a diagram to help readers understand all of the moving parts and how they interacted. After finishing the book, I found this one on Wikipedia.

Structure of Government in Sparta
Image by Wikipedia user Putinovac, licensed under the Creative Commons Attribution 3.0 Unported license.

The actual relationships are even more complicated and subtle than expressed in this diagram, and given the extent to which scholars dispute the details of the Spartan political institutions (which occupy many pages in the end notes), it is likely the author may find fault with some aspects of this illustration. I present it purely because it provides a glimpse of the complexity and helped me organise my thoughts about the description in the text.

Start with the kings. That's right, “kings”—there were two of them—both traditionally descended from Hercules, but through different lineages. The kings shared power and acted as a check on each other. They were commanders of the army in time of war, and high priests in peace. The kingship was hereditary and for life.

Five overseers, or ephors, were elected annually by the citizens as a whole. Scholars dispute whether ephors could serve more than one term, but the author notes that no ephor is known to have done so, and it is thus likely they were term-limited to a single year. During their year in office, the board of five ephors (one from each of the villages of Sparta) exercised almost unlimited power in both domestic and foreign affairs. Even the kings were not immune to their power: the ephors could arrest a king and bring him to trial on a capital charge just like any other citizen, and this happened. On the other hand, at the end of their one-year term, ephors were subject to a judicial examination of their acts in office and liable for misconduct. (Wouldn't it be great if present-day “public servants” received the same kind of scrutiny at the end of their terms in office? It would be interesting to see what a prosecutor could discover about how so many of these solons manage to amass great personal fortunes incommensurate with their salaries.) And then there was the “fickle meteor of doom” rule.

Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping they didn't pick a night in mid-August for their vigil!

The ephors could also summon the council of elders, or gerousίa, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders.

All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might take for the work of a Golden Age author, paid by the word, making ends meet by inventing fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S. Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage.

Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as many of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text.

In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.

August 2017 Permalink

Reagan, Ronald. The Reagan Diaries. Edited by Douglas Brinkley. New York: Harper Perennial, 2007. ISBN 978-0-06-155833-7.
What's it actually like to be the president of the United States? There is very little first-person testimony on this topic: among American presidents, only Washington, John Quincy Adams, Polk, and Hayes kept comprehensive diaries prior to the twentieth century, and the present work, an abridged edition of the voluminous diaries of Ronald Reagan, was believed, at the time of its publication, to be the only personal, complete, and contemporaneous account of a presidency in the twentieth century. Since its publication, a book purporting to be the White House diaries of Jimmy Carter has been published, but even if you believe the content, who cares about the account of the presidency of a feckless crapweasel whose damage to the republic redounds unto the present day?

Back in the epoch, the media (a couple of decades later to become the legacy media) portrayed Reagan as a genial dunce, bumbling through his presidency at the direction of his ideological aides. That illusion is dispelled in the first ten pages of these contemporaneous diary entries. In these pages, rife with misspellings (he jokes to himself that he always spells the Libyan dictator's name the last way he saw it spelt in the newspaper, and probably ended up with at least a dozen different spellings) and apostrophe abuse, you experience Reagan writing not for historians but rather memos to file about the decisions he was making from day to day.

As somebody who was unfortunate enough to spend a brief part of his life as CEO of an S&P 500 company in the Reagan years, I find the ability of Reagan, almost forty years my senior, to keep dozens of balls in the air, multitask among grave matters of national security and routine paperwork, meetings with heads of state of inconsequential countries, and criminal investigations of his subordinates, and schmooze with politicians staunchly opposed to his legislative agenda to win the votes needed to enact the parts he deemed most important, simply breathtaking. Here we see a chief executive, honed by eight years as governor of California, at the top of his game, deftly out-maneuvering his opponents in Congress not, as the media would have you believe, by his skills in communicating directly to the people (although that played a part), but mostly by plain old politics: faking to the left and then scoring the point from the right. Reading these abridged but otherwise unedited diary entries gives the lie to any claim that Reagan was in any way intellectually impaired or unengaged at any point of his presidency. This is a master politician getting done what he can in the prevailing political landscape and committing both his victories and teeth-gritting compromises to paper the very day they occurred.

One of the most stunning realisations I took away from this book is that when Reagan came to office, he looked upon his opposition in the Congress and the executive bureaucracy as people who shared his love of the country and hope for its future, but who simply disagreed as to the best course to achieve their shared goals. You can see it slowly dawning upon Reagan, as year followed year, that although there were committed New Dealers and Cold War Democrats among his opposition, there was a growing movement, both within the bureaucracy and among elected officials, who actually wanted to bring America down—if not to actually capitulate to Soviet hegemony, at least to take it down from superpower status to a peer of others in the “international community”. Could Reagan have imagined that the day would come when a president who bought into this agenda might actually sit in the Oval Office? Of course: Reagan was well-acquainted with worst case scenarios.

The Kindle edition is generally well-produced, but in lieu of a proper index substitutes a lengthy and entirely useless list of “searchable terms” which are not linked in any way to their appearances in the text.

Today is the hundredth anniversary of the birth of Ronald Reagan.

February 2011 Permalink

Reasoner, James. Draw: The Greatest Gunfights of the American West. New York: Berkley, 2003. ISBN 0-425-19193-1.
The author is best known as a novelist, author of a bookshelf full of yarns, mostly set in the Wild West, but also of the War Between the States and World War II. In this, his first work of nonfiction after twenty-five years as a writer, he sketches in 31 short chapters (of less than ten pages average length, with a number including pictures) the careers and climactic (and often career-ending) conflicts of the best known gunslingers of the Old West, as well as many lesser-known figures, some of which were just as deadly and, in their own time, notorious. Here are tales of Wyatt Earp, Doc Holliday, the Dalton Gang, Bat Masterson, Bill Doolin, Pat Garrett, John Wesley Hardin, Billy the Kid, and Wild Bill Hickok; but also Jim Levy, the Jewish immigrant from Ireland who was considered by both Earp and Masterson to be one of the deadliest gunfighters in the West; Henry Starr, who robbed banks from the 1890s until his death in a shoot-out in 1921, pausing in mid-career to write, direct, and star in a silent movie about his exploits, A Debtor to the Law; and Ben Thompson, who Bat Masterson judged to be the fastest gun in the West, who was, at various times, an Indian fighter, Confederate cavalryman, mercenary for Emperor Maximilian of Mexico, gambler, gunfighter,…and chief of police of Austin, Texas. Many of the characters who figure here worked both sides of the law, in some cases concurrently.

The author does not succumb to the temptation to glamorise these mostly despicable figures, nor the tawdry circumstances in which so many met their ends. (Many, but not all: Bat Masterson survived a career as deputy sheriff in Dodge City, sheriff of Ford County, Kansas, Marshal of Trinidad, Colorado, and as itinerant gambler in the wildest towns of the West, to live the last twenty years of his life in New York City, working as sports editor and columnist for a Manhattan newspaper.) Reasoner does, however, attempt to spice up the narrative with frontier lingo (whether genuine or bogus, I know not): lawmen and “owlhoots” (outlaws) are forever slappin' leather, loosing or dodging hails of lead, getting thrown in the hoosegow, or seeking the comfort of the soiled doves who plied their trade above the saloons. This can become tedious if you read the book straight through; it's better enjoyed a chapter at a time spread out over an extended period. The chapters are completely independent of one another (although there are a few cross-references), and may be read in any order. In fact, they read like a collection of magazine columns, but there is no indication in the book they were ever previously published. There is a ten-page bibliography citing sources for each chapter but no index—this is a substantial shortcoming since many of the chapter titles do not name the principals in the events they describe, and since the paths of the most famous gunfighters crossed frequently, their stories are spread over a number of chapters.

July 2006 Permalink

Regis, Ed. Monsters. New York: Basic Books, 2015. ISBN 978-0-465-06594-3.
In 1863, as the American Civil War raged, Count Ferdinand von Zeppelin, an ambitious young cavalry officer from the German kingdom of Württemberg, arrived in America to observe the conflict and learn its lessons for modern warfare. He arranged an audience with President Lincoln, who authorised him to travel among the Union armies. Zeppelin spent a month with General Joseph Hooker's Army of the Potomac. Accustomed to German military organisation, he was unimpressed with what he saw and left to see the sights of the new continent. While visiting Minnesota, he ascended in a tethered balloon and saw the landscape laid out below him like a military topographical map. He immediately grasped the advantage of such an eye in the sky for military purposes. He was impressed.

Upon his return to Germany, Zeppelin pursued a military career, distinguishing himself in the 1870 war with France, although being considered “a hothead”. It was this characteristic which brought his military career to an abrupt end in 1890. Chafing under what he perceived as stifling leadership by the Prussian officer corps, he wrote directly to the Kaiser to complain. This was a bad career move; the Kaiser “promoted” him into retirement. Adrift, looking for a new career, Zeppelin seized upon controlled aerial flight, particularly for its military applications. And he thought big.

By 1890, France was at the forefront of aviation. By 1885 the first dirigible, La France, had demonstrated aerial navigation over complex closed courses and carried passengers. Built for the French army, it was just a technology demonstrator, but to Zeppelin it demonstrated a capability with such potential that Germany must not be left behind. He threw his energy into the effort, formed a company, raised the money, and embarked upon the construction of Luftschiff Zeppelin 1 (LZ 1).

Count Zeppelin was not a man to make small plans. Eschewing sub-scale demonstrators or technology-proving prototypes, he went directly to a full scale airship intended to be militarily useful. It was fully 128 metres long, almost two and a half times the size of La France, longer than a football field. Its rigid aluminium frame contained 17 gas bags filled with hydrogen, and it was powered by two gasoline engines. LZ 1 flew just three times. An observer from the German War Ministry reported it to be “suitable for neither military nor for non-military purposes.” Zeppelin's company closed its doors and the airship was sold for scrap.

By 1905, Zeppelin was ready to try again. On its first flight, the LZ 2 lost power and control and had to make a forced landing. Tethered to the ground at the landing site, it was caught by the wind and destroyed. It was sold for scrap. Later the LZ 3 flew successfully, and Zeppelin embarked upon construction of the LZ 4, which would be larger still. While attempting a twenty-four hour endurance flight, it suffered motor failure, landed, and while tied down was caught by wind. Its gas bags rubbed against one another and static electricity ignited the hydrogen, which reduced the airship to smoking wreckage.

Many people would have given up at this point, but not the redoubtable Count. The LZ 5, delivered to the military, was lost when carried away by the wind after an emergency landing and dashed against a hill. LZ 6 burned in its hangar after an engine caught fire. LZ 7, the first civilian passenger airship, crashed into a forest on its first flight and was damaged beyond repair. LZ 8, its replacement, was destroyed by a gust of wind while being walked out of its hangar.

With the outbreak of war in 1914, the airship went to war. Germany operated 117 airships, using them for reconnaissance and even bombing targets in England. Of the 117, fully 81 were destroyed, about half due to enemy action and half by the woes which had wrecked so many airships prior to the conflict.

Based upon this stunning record of success, after the end of the Great War, Britain decided to embark in earnest on its own airship program, building even larger airships than Germany. Results were no better, culminating in the R100 and R101, built to provide air and cargo service on routes throughout the Empire. On its maiden flight to India in 1930, R101 crashed and burned in a storm while crossing France, killing 48 of the 54 on board. After the catastrophe, the R100 was retired and sold for scrap.

This did not deter the Americans, who, in addition to their technical prowess and “can do” spirit, had access to helium, produced as a by-product of their natural gas fields. Unlike hydrogen, helium is nonflammable, so the risk of fire, which had destroyed so many airships using hydrogen, was entirely eliminated. Helium does not provide as much lift as hydrogen, but this can be compensated for by increasing the size of the ship. Helium is also around fifty times more expensive than hydrogen, which makes managing an airship in flight more difficult. While the commander of a hydrogen airship can freely “valve” gas to reduce lift when required, doing this in a helium ship is forbiddingly expensive and restricted only to the most dire of emergencies.
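
To quantify that trade-off, a quick buoyancy comparison (my own sketch, not from the book): gross lift per cubic metre is simply the density of the displaced air minus the density of the lifting gas, so helium delivers roughly 93 percent of hydrogen's lift, a shortfall a designer can make up with a modestly larger envelope.

    # Gross lift per cubic metre = density of air minus density of lifting gas.
    # Densities at 0 deg C and one atmosphere, in kg per cubic metre.
    AIR, HYDROGEN, HELIUM = 1.293, 0.0899, 0.1786

    lift_h2 = AIR - HYDROGEN
    lift_he = AIR - HELIUM

    print(f"Hydrogen lift: {lift_h2:.2f} kg/m^3")       # about 1.20 kg/m^3
    print(f"Helium lift:   {lift_he:.2f} kg/m^3")       # about 1.11 kg/m^3
    print(f"Helium / hydrogen: {lift_he/lift_h2:.0%}")  # about 93%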

The U.S. Navy believed the airship to be an ideal platform for long-range reconnaissance, anti-submarine patrols, and other missions where its endurance, speed, and the ability to operate far offshore provided advantages over ships and heavier than air craft. Between 1921 and 1935 the Navy operated five rigid airships, three built domestically and two abroad. Four of the five crashed in storms or due to structural failure, killing dozens of crew.

This sorry chronicle leads up to a detailed recounting of the history of the Hindenburg. Originally designed to use helium, it was redesigned for hydrogen after it became clear the U.S., which had forbidden export of helium in 1927, would not grant a waiver, especially to a Germany by then under Nazi rule. The Hindenburg was enormous: at 245 metres in length, it was longer than the U.S. Capitol building and more than three times the length of a Boeing 747. It carried between 50 and 72 passengers who were served by a crew of 40 to 61, with accommodations (apart from the spartan sleeping quarters) comparable to first class on ocean liners. In 1936, the great ship made 17 transatlantic crossings without incident. On its first flight to the U.S. in 1937, it was destroyed by fire while approaching the mooring mast at Lakehurst, New Jersey. The disaster and its aftermath are described in detail. Remarkably, given the iconic images of the flaming airship falling to the ground and the structure glowing from the intense heat of combustion, of the 97 passengers and crew on board, 62 survived the disaster. (One of the members of the ground crew also died.)

Prior to the destruction of the Hindenburg, a total of twenty-six hydrogen filled airships had been destroyed by fire, excluding those shot down in wartime, with a total of 250 people killed. The vast majority of all rigid airships built ended in disaster—if not due to fire then structural failure, weather, or pilot error. Why did people continue to pursue this technology in the face of abundant evidence that it was fundamentally flawed?

The author argues that rigid airships are an example of a “pathological technology”, which he characterises as:

  1. Embracing something huge, either in size or effects.
  2. Inducing a state bordering on enthralment among its proponents…
  3. …who underplay its downsides, risks, unintended consequences, and obvious dangers.
  4. Having costs out of proportion to the benefits it is alleged to provide.

Few people would dispute that the pursuit of large airships for more than three decades in the face of repeated disasters was a pathological technology under these criteria. That is true even setting aside the risks from using hydrogen as a lifting gas (which I believe the author over-emphasises: prior to the Hindenburg accident nobody had ever been injured on a commercial passenger flight of a hydrogen airship, and nobody gives a second thought today about boarding an airplane with 140 tonnes of flammable jet fuel in the tanks and flying across the Pacific with only two engines). Seemingly hazardous technologies can be rendered safe with sufficient experience and precautions. Large lighter-than-air ships were, however, inherently unsafe because they were large and lighter than air: nothing could be done about that. They were at the mercy of the weather, and if they were designed to be strong enough to withstand whatever weather conditions they might encounter, they would have been too heavy to fly. As the experience of the U.S. Navy with helium airships demonstrated, it didn't matter if you were immune to the risks of hydrogen; the ship would eventually be destroyed in a storm.

The author then moves on from airships to discuss other technologies he deems pathological, and here, in my opinion, goes off the rails. The first of these technologies is Project Plowshare, a U.S. program to explore the use of nuclear explosions for civil engineering projects such as excavation, digging of canals, creating harbours, and fracturing rock to stimulate oil and gas production. With his characteristic snark, Regis mocks the very idea of Plowshare, and yet examination of the history of the program belies this ridicule. For the suggested applications, nuclear explosions were far more economical than chemical detonations and conventional earthmoving equipment. One principal goal of Plowshare was to determine the efficacy of such explosions and whether they would pose risks (for example, release of radiation) which were unacceptable. Over 11 years 26 nuclear tests were conducted under the program, most at the Nevada Test Site, and after a review of the results it was concluded the radiation risk was unacceptable and the results unpromising. Project Plowshare was shut down in 1977. I don't see what's remotely pathological about this. You have an idea for a new technology; you explore it in theory; conduct experiments; then decide it's not worth pursuing. Now maybe if you're Ed Regis, you may have been able to determine at the outset, without any of the experimental results, that the whole thing was absurd, but a great many people with in-depth knowledge of the issues involved preferred to run the experiments, take the data, and decide based upon the results. That, to me, seems the antithesis of pathological.

The next example of a pathological technology is the Superconducting Super Collider, a planned particle accelerator to be built in Texas which would have an accelerator ring 87.1 km in circumference and collide protons at a centre of mass energy of 40 TeV. The project was approved and construction begun in the 1980s. In 1993, Congress voted to cancel the project and work underway was abandoned. Here, the fit with “pathological technology” is even worse. Sure, the project was large, but it was mostly underground: hardly something to “enthral” anybody except physics nerds. There were no risks at all, apart from those in any civil engineering project of comparable scale. The project was cancelled because it overran its budget estimates but, even if completed, would probably have cost less than a tenth the expenditures to date on the International Space Station, which has produced little or nothing of scientific value. How is it pathological when a project, undertaken for well-defined goals, is cancelled when those funding it, seeing its schedule slip and budget balloon beyond that projected, pull the plug on it? Isn't that how things are supposed to work? Who were the seers who forecast all of this at the project's inception?

The final example of so-called pathological technology is pure spite. Ed Regis has a fine time ridiculing participants in the first 100 Year Starship symposium, a gathering to explore how and why humans might be able, within a century, to launch missions (robotic or crewed) to other star systems. This is not a technology at all, but rather an exploration of what future technologies might be able to do, and the limits imposed by the known laws of physics upon potential technologies. This is precisely the kind of “exploratory engineering” that Konstantin Tsiolkovsky engaged in when he worked out the fundamentals of space flight in the late 19th and early 20th centuries. He didn't know the details of how it would be done, but he was able to calculate, from first principles, the limits of what could be done, and to demonstrate that the laws of physics and properties of materials permitted the missions he envisioned. His work was largely ignored, which I suppose may be better than being mocked, as here.

You want a pathological technology? How about replacing reliable base load energy sources with inefficient sources at the whim of clouds and wind? Banning washing machines and dishwashers that work in favour of ones that don't? Replacing toilets with ones that take two flushes in order to “save water”? And all of this in order to “save the planet” from the consequences predicted by a theoretical model which has failed to predict measured results since its inception, through policies which impoverish developing countries and, even if you accept the discredited models, will have a negligible effect on the global climate. On this scandal of our age, the author is silent. He concludes:

Still, for all of their considerable faults and stupidities—their huge costs, terrible risks, unintended negative consequences, and in some cases injuries and deaths—pathological technologies possess one crucial saving grace: they can be stopped.

Or better yet, never begun.

Except, it seems, you can only recognise them in retrospect.

January 2016 Permalink

Reich, Eugenie Samuel. Plastic Fantastic. New York: St. Martin's Press, 2009. ISBN 978-0-230-62384-2.
Boosters of Big Science, and the politicians who rely upon its pronouncements to justify their policy prescriptions, often cite the self-correcting nature of the scientific process: peer review subjects the work of researchers to independent and dispassionate scrutiny before results are published, and should an incorrect result make it into print, the failure of independent researchers to replicate it will inevitably call it into question and eventually cause it to be refuted.

Well, that's how it works in theory. Theory is very big in contemporary Big Science. This book is about how things work in fact, in the real world, and it's quite a bit different. At the turn of the century, there was no hotter property in condensed matter physics than Hendrik Schön, a junior researcher at Bell Labs who, in rapid succession, reported breakthroughs in electronic devices fabricated from organic molecules, including:

  • Organic field effect transistors
  • Field-induced superconductivity in organic crystals
  • Fractional quantum Hall effect in organic materials
  • Organic crystal laser
  • Light emitting organic transistor
  • Organic Josephson junction
  • High temperature superconductivity in C60
  • Single electron organic transistors

In the year 2001, Schön published a paper in a peer reviewed journal at a rate of one every eight days, with many reaching the empyrean heights of Nature, Science, and Physical Review. Other labs were in awe of his results, and puzzled because every attempt they made to replicate his experiments failed, often in ways which seemed to indicate the descriptions of experiments he published were insufficient for others to replicate them. Theorists also raised their eyebrows at Schön's results, because he claimed breakdown properties of sputtered aluminium oxide insulating layers far beyond measured experimental results, and behaviour of charge transport in his organic substrates which didn't make any sense according to the known properties of such materials.

The experimenters were in a tizzy, trying to figure out why they couldn't replicate Schön's results, while the theorists were filling blackboards trying to understand how his incongruous results could possibly make sense. His superiors were basking in the reflected glory of his ascent into the élite of experimental physicists and the reflection of that glory upon their laboratory.

In April 2002, while waiting in the patent attorney's office at Bell Labs, researchers Julia Hsu and Lynn Loo were thumbing through copies of Schön's papers they'd printed out as background documentation for the patent application they were preparing, when Loo noticed that two graphs of inverter outputs, one in a Nature paper describing a device made of a layer of thousands of organic molecules, and another in a Science paper describing an inverter made of just one or two active molecules were identical, right down to the instrumental noise. When this was brought to the attention of Schön's manager and word of possible irregularities in Schön's publications began to make its way through the condensed matter physics grapevine, his work was subjected to intense scrutiny both within Bell Labs and by outside researchers, and additional instances of identical graphs re-labelled for entirely different experiments came to hand. Bell Labs launched a formal investigation in May 2002, which concluded, in a report issued the following September, that Schön had committed at least 16 instances of scientific misconduct, fabricating the experimental data he reported from mathematical functions, with no evidence whatsoever that he had ever built the devices he claimed to have, or performed the experiments described in his papers. A total of twenty-one papers authored by Schön in Science, Nature, and Physical Review were withdrawn, as well as a number in less prestigious venues.

What is fascinating in this saga of flat-out fraud and ultimate exposure and disgrace is how completely the much-vaunted system of checks and balances of industrial scale Big Science and peer review in the most prestigious journals completely fell on its face at the hands of a fraudster in a junior position with little or no scientific track record who was willing to make up data to confirm the published expectations of the theorists, and figured out how to game the peer review system, using criticisms of his papers as a guide to make up additional data to satisfy the objections of the referees. As a former manager of a group of ambitious and rambunctious technologists, what strikes me is how utterly Schön's colleagues and managers at Bell Labs failed in overseeing his work and vetting his results. “Extraordinary claims require extraordinary evidence”, and Schön was making and publishing extraordinary claims at the rate of almost one a week in 2001, and yet not once did anybody at Bell Labs insist on observing him perform one of the experiments he claimed to be performing, even after other meticulous experimenters in laboratories around the world reported that they were unable to replicate his results. Think about it—if a junior software developer in your company claimed to have developed a miraculous application, wouldn't you want to see a demo before issuing a press release about it and filing a patent application? And yet nobody at Bell Labs thought to do so with Schön's work.

The lessons from this episode are profound, and I see little evidence that they have been internalised by the science establishment. A great deal of experimental science is now guided by the expectations of theorists; it is difficult to obtain funding for an experimental program which looks for effects not anticipated by theory. In such an environment, an unscrupulous scientist willing to make up data that conforms to the prejudices of the theorists may be able to publish in prestigious journals and be considered a rising star of science based on an entirely fraudulent corpus of work. Because scientists, especially in the Anglo-Saxon culture, are loath to make accusations of fraud (as the author notes, in the golden age of British science such an allegation might well result in a duel being fought), failure to replicate experimental results is often assumed to be a failure by the replicator to precisely reproduce the circumstances of the original investigator, not to call into question the veracity of the reported work. Schön's work consisted of desktop experiments involving straightforward measurements of electrical properties of materials, which were about as simple as anything in contemporary science to evaluate and independently replicate. Now think of how vulnerable research on far less clear cut topics such as global climate, effects of diet on public health, and other topics would be to fraudulent, agenda-driven “research”. Also, Schön got caught only because he became sloppy in his frenzy of publication, duplicating graphs and data sets from one paper to another. How long could a more careful charlatan get away with it?

Quite aside from the fascinating story and its implications for the integrity of the contemporary scientific enterprise, this is a superbly written narrative which reads more like a thriller than an account of a regrettable episode in science. But it is entirely factual, and documented with extensive end notes citing original sources.

August 2010 Permalink

Richelson, Jeffrey T. Spying on the Bomb. New York: W. W. Norton, [2006] 2007. ISBN 978-0-393-32982-7.
I had some trepidation about picking up this book. Having read the author's The Wizards of Langley (May 2002), expecting an account of “Q Branch” spy gizmology and encountering instead a tedious (albeit well-written and thorough) bureaucratic history of the CIA's Directorate of Science and Technology, I was afraid this volume might also reduce one of the most critical missions of U.S. intelligence in the post World War II era to another account of interagency squabbling and budget battles. Not to worry—although such matters are discussed where appropriate (especially when they led to intelligence failures), the book not only does not disappoint, it goes well beyond the mission of its subtitle, “American Nuclear Intelligence from Nazi Germany to Iran and North Korea” in delivering not just an account of intelligence activity but also a comprehensive history of the nuclear programs of each of the countries upon which the U.S. has focused its intelligence efforts: Nazi Germany, the Soviet Union, China, France, Israel, South Africa, India, Pakistan, Taiwan, Libya, Iraq, North Korea, and Iran.

The reader gets an excellent sense of just how difficult it is, even in an age of high-resolution optical and radar satellite imagery, communications intelligence, surveillance of commercial and financial transactions, and active efforts to recruit human intelligence sources, to determine the intentions of states intent (or maybe not) on developing nuclear weapons. The ease with which rogue regimes seem to be able to evade IAEA safeguards and inspectors, and manipulate diplomats loath to provoke a confrontation, is illustrated on numerous occasions. An entire chapter is devoted to the enigmatic double flash incident of September 22nd, 1979 whose interpretation remains in dispute today. This 2007 paperback edition includes a new epilogue with information on the October 2006 North Korean “fissile or fizzle” nuclear test, and recent twists and turns in the feckless international effort to restrain Iran's nuclear program.

May 2008 Permalink

Rickards, James. Currency Wars. New York: Portfolio / Penguin, 2011. ISBN 978-1-59184-449-5.
Debasement of currency dates from antiquity (and doubtless from prehistory—if your daughter's dowry was one cow and three goats, do you think you'd choose them from the best in your herd?), but currency war in the modern sense first emerged in the 20th century in the aftermath of World War I. When global commerce—the first era of globalisation—became established in the 19th century, most of the trading partners were either on the gold standard or settled their accounts in a currency freely convertible to gold, with the British pound dominating as the unit of account in international trade. A letter of credit financing a shipload of goods exported from Argentina to Italy could be written by a bank in London and traded by an investor in New York without any currency risk during the voyage because all parties denominated the transaction in pounds sterling, which the Bank of England would exchange for gold on demand. This system of global money was not designed by “experts” nor managed by “maestros”—it evolved organically and adapted itself to the needs of its users in the marketplace.

All of this was destroyed by World War I. As described here, and in more detail in Lords of Finance (August 2011), in the aftermath of the war all of the European powers on both sides had expended their gold and foreign exchange reserves in the war effort, and the United States had amassed a large fraction of all of the gold in the world in its vaults and was creditor-in-chief to the allies to whom, in turn, Germany owed enormous reparation payments for generations to come. This set the stage for what the author calls Currency War I, from 1921 through 1936, in which central bankers attempted to sort out the consequences of the war, often making disastrous though well-intentioned decisions which, arguably, contributed to a decade of pre-depression malaise in Britain, the U.S. stock market bubble and 1929 crash, the Weimar hyperinflation in Germany, and its aftermath which helped bring Hitler to power.

At the end of World War II, the United States was in an even more commanding position than at the conclusion of the first war. With Europe devastated, it sat on an even more imposing hoard of gold, and when it convened the Bretton Woods conference in 1944, with the war still underway, despite the conference's list of attendees hailing from 44 allied nations, it was clear that the Golden Rule applied: he who has the gold makes the rules. Well, the U.S. had the gold, and the system adopted at the conference made the U.S. dollar central to the postwar monetary system. The dollar was fixed to gold at the rate of US$35/troy ounce, with the U.S. Treasury committed to exchanging dollars for gold at that rate in unlimited quantities. All other currencies were fixed to the dollar, and hence indirectly to gold, so that except in the extraordinary circumstance of a revaluation against the dollar, exchange rate risk would not exist. While the Bretton Woods system was more complex than the pre-World War I gold standard (in particular, it allowed central banks to hold reserves in other paper currencies in addition to gold), it tried to achieve the same stability in exchange rates as the pure gold standard.

Amazingly, this system, the brainchild of Soviet agent Harry Dexter White and economic charlatan John Maynard Keynes, worked surprisingly well until the late 1960s, when profligate deficit spending by the U.S. government began to cause foreign holders of an ever-increasing pile of dollars to trade them in for the yellow metal. This was the opening shot in what the author deems Currency War II, which ran from 1967 through 1987, ending in the adoption of the present system of floating exchange rates among currencies backed by nothing whatsoever.

The author believes we are now in the initial phase of Currency War III, in which a perfect storm of unsustainable sovereign debt, economic contraction, demographic pressure on social insurance schemes, and trade imbalances creates the preconditions for the kind of “beggar thy neighbour” competitive devaluations which characterised Currency War I. This is, in effect, a race to the bottom with each unanchored paper currency trying to become cheaper against the others to achieve a transitory export advantage. But, of course, as a moment's reflection will make evident, with currencies decoupled from any tangible asset, the only limit in a race to the bottom is zero, and in a world where trillions of monetary units can be created by the click of a mouse without even the need to crank up the printing press, this funny money is, in the words of Gerald Celente, “not worth the paper it isn't printed on”.

In financial crises, there is a progression:

  1. Currency war
  2. Trade war
  3. Shooting war

Currency War I led to all three phases. Currency War II was arrested at the “trade war” step, although had the Carter administration and Paul Volcker not administered the bitter medicine to the U.S. economy to extirpate inflation, it's entirely possible a resource war to seize oil fields might have ensued. Now we're in Currency War III (this is the author's view, with which I agree): where will it go from here? Well, nobody knows, and the author is the first to acknowledge that the best a forecaster can do is to sketch a number of plausible scenarios which might play out depending upon precipitating events and the actions of decision makers in time of crisis. Chapter 11 (how appropriate!) describes the four scenarios Rickards sees as probable outcomes and what they would mean for investors and companies engaged in international trade. Some of these may be breathtaking, if not heart-stopping, but as the author points out, all of them are grounded in precedents which have already occurred in the last century.

The book begins with a chilling wargame in which the author participated. Strategic planners often remain stuck counting ships, troops, and tanks, and forget that all of these military assets are worthless without the funds to keep them operating, and that these assets are increasingly integrated into a world financial system whose complexity (and hence systemic risk, either to an accidental excursion or a deliberate disruption) is greater than ever before. Analyses of the stability of global finance often assume players are rational and therefore would not act in a way which was ultimately damaging to their own self interest. This is ominously reminiscent of those who, as late as the spring of 1914, forecast that a general conflict in Europe was unthinkable because it would be the ruin of all of the combatants. Indeed, it was, and yet still it happened.

The Kindle edition has the table of contents and notes properly linked, but the index is just a list of unlinked terms.

November 2011 Permalink

Roberts, Andrew. Churchill: Walking with Destiny. New York: Viking, 2018. ISBN 978-1-101-98099-6.
By the time Andrew Roberts sat down to write a new biography of Winston Churchill, there were a total of 1009 biographies of the man in print, examining every aspect of his life from a multitude of viewpoints. Works include the encyclopedic three-volume The Last Lion (January 2013) by William Manchester and Paul Reid, and Roy Jenkins' single-volume Churchill: A Biography (February 2004), which concentrates on Churchill's political career. Such books may seem to many readers to say just about everything about Churchill there is to be said from the abundant documentation available for his life. What could a new biography possibly add to the story?

As the author demonstrates in this magnificent and weighty book (1152 pages, 982 of main text), a great deal. Earlier Churchill biographers laboured under the constraint that many of Churchill's papers from World War II and the postwar era remained under the seal of official secrecy. These included the extensive notes taken by King George VI during his weekly meetings with the Prime Minister during the war and recorded in his personal diary. The classified documents were made public only fifty years after the end of the war, and the King's wartime diaries were made available to the author by special permission granted by the King's daughter, Queen Elizabeth II.

The royal diaries are an invaluable source on Churchill's candid thinking as the war progressed. As a firm believer in constitutional monarchy, Churchill withheld nothing in his discussions with the King. Even the deepest secrets, such as the breaking of the German codes, the information obtained from decrypted messages, and atomic secrets, which were shared with only a few of the most senior and trusted government officials, were discussed in detail with the King. Further, while Churchill was constantly on stage trying to hold the Grand Alliance together, encourage Britons to stay in the fight, and advance his geopolitical goals which were often at variance with even the Americans, with the King he was brutally honest about Britain's situation and what he was trying to accomplish. Oddly, perhaps the best insight into Churchill's mind as the war progressed comes not from his own six-volume history of the war, but rather the pen of the King, writing only to himself. In addition, sources such as verbatim notes of the war cabinet, diaries of the Soviet ambassador to the U.K. from the 1930s through the war, and other recently disclosed material mean there is, as the author describes it, something new on almost every page.

The biography is written in an entirely conventional manner: the author eschews fancy stylistic tricks in favour of an almost purely chronological recounting of Churchill's life, flipping back and forth from personal life, British politics, the world stage and Churchill's part in the events of both the Great War and World War II, and his career as an author and shaper of opinion.

Winston Churchill was an English aristocrat, but not a member of the nobility. He was a direct descendant of John Churchill, the 1st Duke of Marlborough; his father, Lord Randolph Churchill, was the third son of the 7th Duke of Marlborough. As only the first son inherits the title, although Randolph bore the honorific “Lord”, he was a commoner and his children, including first-born Winston, received no title. Lord Randolph was elected to the House of Commons in 1874, the year of Winston's birth, and would serve until his death in 1895, having been Chancellor of the Exchequer, Leader of the House of Commons, and Secretary of State for India. His death, aged just forty-five (rumoured at the time to be from syphilis, but now attributed to a brain tumour, as his other symptoms were inconsistent with syphilis), along with the premature deaths of three aunts and uncles, convinced the young Winston his own life might be short and that if he wanted to accomplish great things, he had no time to waste.

In terms of his subsequent career, his father's early death might have been an unappreciated turning point in Winston Churchill's life. Had his father retired from the House of Commons prior to his death, he would almost certainly have been granted a peerage in return for his long service. When he subsequently died, Winston, as eldest son, would have inherited the title and hence not been entitled to serve in the House of Commons. It is thus likely that had his father not died while still an MP, the son would never have had the political career he did nor have become prime minister in 1940.

Young, from a distinguished family, wealthy (by the standards of the average Briton, but not compared to the landed aristocracy or titans of industry and finance), ambitious, and seeking novelty and adventures to the point of recklessness, Churchill believed he was meant to accomplish great things in however many years Providence might grant him on Earth. In 1891, at the age of just 16, he confided to a friend,

I can see vast changes coming over a now peaceful world, great upheavals, terrible struggles; wars such as one cannot imagine; and I tell you London will be in danger — London will be attacked and I shall be very prominent in the defence of London. … This country will be subjected, somehow, to a tremendous invasion, by what means I do not know, but I tell you I shall be in command of the defences of London and I shall save London and England from disaster. … I repeat — London will be in danger and in the high position I shall occupy, it will fall to me to save the capital and save the Empire.

He was, thus, from an early age, not one likely to be daunted by the challenges he assumed when, almost five decades later at an age (66) when many of his contemporaries retired, he faced a situation uncannily similar to that he imagined in boyhood.

Churchill's formal education ended at age 20 with his graduation from the military academy at Sandhurst and commissioning as a second lieutenant in the cavalry. A voracious reader, he educated himself in history, science, politics, philosophy, literature, and the classics, while ever expanding his mastery of the English language, both written and spoken. Seeking action, and finding no war in which he could participate as a British officer, he managed to persuade a London newspaper to hire him as a war correspondent and set off to cover an insurrection in Cuba against its Spanish rulers. His dispatches were well received, earning five guineas per article, and he continued to file dispatches as a war correspondent even while on active duty with British forces. By 1901, he was the highest-paid war correspondent in the world, having earned the equivalent of £1 million today from his columns, books, and lectures.

He subsequently saw action in India and the Sudan, participating in the last great cavalry charge of the British army in the Battle of Omdurman, which he described along with the rest of the Mahdist War in his book, The River War. In October 1899, funded by the Morning Post, he set out for South Africa to cover the Second Boer War. Covering the conflict, he was taken prisoner and held in a camp until, in December 1899, he escaped and crossed 300 miles of enemy territory to reach Portuguese East Africa. He later returned to South Africa as a cavalry lieutenant, participating in the Siege of Ladysmith and capture of Pretoria, continuing to file dispatches with the Morning Post which were later collected into a book.

Upon his return to Britain, Churchill found that his wartime exploits and writing had made him a celebrity. Eleven Conservative associations approached him to run for Parliament, and he chose to run in Oldham, narrowly winning. His victory was part of a massive landslide by the Unionist coalition, which won 402 seats versus 268 for the opposition. As the author notes,

Before the new MP had even taken his seat, he had fought in four wars, published five books,… written 215 newspaper and magazine articles, participated in the greatest cavalry charge in half a century and made a spectacular escape from prison.

This was not a man likely to disappear into the mass of back-benchers and not rock the boat.

Churchill's views on specific issues over his long career defy those who seek to put him in one ideological box or another, either to cite him in favour of their views or vilify him as an enemy of all that is (now considered) right and proper. For example, Churchill was often denounced as a bloodthirsty warmonger, but in 1901, in just his second speech in the House of Commons, he rose to oppose a bill proposed by the Secretary of War, a member of his own party, which would have expanded the army by 50%. He argued,

A European war cannot be anything but a cruel, heart-rending struggle which, if we are ever to enjoy the bitter fruits of victory, must demand, perhaps for several years, the whole manhood of the nation, the entire suspension of peaceful industries, and the concentrating to one end of every vital energy in the community. … A European war can only end in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors. Democracy is more vindictive than Cabinets. The wars of peoples will be more terrible than those of kings.

Bear in mind, this was a full thirteen years before the outbreak of the Great War, which many politicians and military men expected to be short, decisive, and affordable in blood and treasure.

Churchill, the resolute opponent of Bolshevism, who coined the term “Cold War”, was the same person who said, after Stalin's moves against Latvia, Lithuania, and Estonia in 1939, “In essence, the Soviet Government's latest actions in the Baltic correspond to British interests, for they diminish Hitler's potential Lebensraum. If the Baltic countries have to lose their independence, it is better for them to be brought into the Soviet state system than the German one.”

Churchill, the champion of free trade and free markets, was also the one who said, in March 1943,

You must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from the cradle to the grave. … [Everyone must work] whether they come from the ancient aristocracy, or the ordinary type of pub-crawler. … We must establish on broad and solid foundations a National Health Service.

And yet, just two years later, contesting the first parliamentary elections after victory in Europe, he argued,

No Socialist Government conducting the entire life and industry of the country could afford to allow free, sharp, or violently worded expressions of public discontent. They would have to fall back on some form of Gestapo, no doubt very humanely directed in the first instance. And this would nip opinion in the bud; it would stop criticism as it reared its head, and it would gather all the power to the supreme party and the party leaders, rising like stately pinnacles above their vast bureaucracies of Civil servants, no longer servants and no longer civil.

Among all of the apparent contradictions and twists and turns of policy and politics there were three great invariant principles guiding Churchill's every action. He believed that the British Empire was the greatest force for civilisation, peace, and prosperity in the world. He opposed tyranny in all of its manifestations and believed it must not be allowed to consolidate its power. And he believed in the wisdom of the people expressed through the democratic institutions of parliamentary government within a constitutional monarchy, even when the people rejected him and the policies he advocated.

Today, there is an almost reflexive cringe among bien pensants at any intimation that colonialism might have been a good thing, both for the colonial power and its colonies. In a paragraph drafted with such dry irony it might go right past some readers, and reminiscent of the “What have the Romans done for us?” scene in Life of Brian, the author notes,

Today, of course, we know imperialism and colonialism to be evil and exploitative concepts, but Churchill's first-hand experience of the British Raj did not strike him that way. He admired the way the British had brought internal peace for the first time in Indian history, as well as railways, vast irrigation projects, mass education, newspapers, the possibilities for extensive international trade, standardized units of exchange, bridges, roads, aqueducts, docks, universities, an uncorrupt legal system, medical advances, anti-famine coordination, the English language as the first national lingua franca, telegraphic communication and military protection from the Russian, French, Afghan, Afridi and other outside threats, while also abolishing suttee (the practice of burning widows on funeral pyres), thugee (the ritualized murder of travellers) and other abuses. For Churchill this was not the sinister and paternalist oppression we now know it to have been.

This is a splendid in-depth treatment of the life, times, and contemporaries of Winston Churchill, drawing upon a multitude of sources, some never before available to any biographer. The author does not attempt to persuade you of any particular view of Churchill's career. Here you see his many blunders (some tragic and costly) as well as the triumphs and prescient insights which made him a voice in the wilderness when so many others were stumbling blindly toward calamity. The very magnitude of Churchill's work and accomplishments would intimidate many would-be biographers: as a writer and orator he published thirty-seven books totalling 6.1 million words (more than Shakespeare and Dickens put together), plus another five million words of public speeches, and won the Nobel Prize in Literature for 1953. Even professional historians might balk at taking on a figure who, as a historian alone, had, at the time of his death, sold more history books than any historian who ever lived.

Andrew Roberts steps up to this challenge and delivers a work which makes a major contribution to understanding Churchill and will almost certainly become the starting point for those wishing to explore the life of this complicated figure whose life and works are deeply intertwined with the history of the twentieth century and whose legacy shaped the world in which we live today. This is far from a dry historical narrative: Churchill was a master of verbal repartee and story-telling, and there are a multitude of examples, many of which will have you laughing out loud at his wit and wisdom.

Here is an Uncommon Knowledge interview with the author about Churchill and this biography.

This is a lecture by Andrew Roberts on “The Importance of Churchill for Today” at Hillsdale College in March, 2019.

May 2019 Permalink

Ronald Reagan Presidential Foundation. Ronald Reagan: An American Hero. New York: Dorling Kindersley, 2001. ISBN 0-7894-7992-3.
This is basically a coffee-table book. There are a multitude of pictures, many you're unlikely to have seen before, but the text is sparse and lightweight. If you're looking for a narrative, try Peggy Noonan's When Character Was King (March 2002).

July 2004 Permalink

Roosevelt, Theodore. The Rough Riders. Philadelphia: Pavilion Press, [1899] 2004. ISBN 1-4145-0492-6.
This is probably, by present-day standards, the most politically incorrect book ever written by a United States President. The fact that it was published and became a best-seller before his election as Vice President in 1900 and President in 1904 indicates how different the world was in the age in which Theodore Roosevelt lived and helped define. T.R. was no chicken-hawk. After advocating war with Spain as Assistant Secretary of the Navy in the McKinley administration, as war approached, he left his desk job in Washington to raise a volunteer regiment from the rough-and-ready horse- and riflemen of his beloved Wild West, along with a number of his fellow Ivy Leaguers hungry for a piece of the action. This book chronicles his adventures in raising, equipping, and training the regiment, and its combat exploits in Cuba in 1898. The prose is pure T.R.: passionate purple; it was rumoured that when the book was originally typeset the publisher had to send out for more copies of the upper-case letter “I”. Almost every page contains some remark or other which would end the career of what passes for politicians in today's pale, emasculated world. What an age. What a man! The bloodthirsty warrior who wrote this book would go on to win the Nobel Peace Prize in 1906 for brokering an end to the war between Russia and Japan.

This paperback edition from Pavilion Press is a sorry thing physically. The text reads like something that's been OCR scanned and never spell-checked or proofread—on p. 171, for example, "antagonists" is printed as "antagon1sts", and this is one of many such errors. There's no excuse for this at all, since there's an electronic text edition of The Rough Riders freely available from Project Gutenberg which is free of these errors, and an on-line edition which lacks these flaws. The cover photo of T.R. on his horse is a blow-up of a low-resolution JPEG image with obvious pixels and compression artefacts.

Roosevelt's report to his commanding general (pp. 163–170) detailing the logistical and administrative screwups in the campaign is an excellent illustration of the maxim that the one area in which government far surpasses the capabilities of free enterprise is in the making of messes.

February 2005 Permalink

Rorabaugh, W. J. The Alcoholic Republic. New York: Oxford University Press, 1979. ISBN 978-0-19-502990-1.
This book was recommended to me by Prof. Paul Rahe after I had commented during a discussion on Ricochet about drug (and other forms of) prohibition, using the commonplace libertarian argument that regardless of what one believes about the principle of self-ownership and the dangers to society if its members ingest certain substances, from a purely utilitarian standpoint the evidence is that prohibition of anything simply makes the problem worse: in many cases it increases the profits of traffickers in the banned substance, spawns crime among those who contend to provide it to those who seek it in the absence of an open market, and promotes contempt for the law (the president of the United States, as of this writing, admitted in his autobiography to having used a substance whose possession, had he been apprehended, was a felony); most of all, post-prohibition, use of the forbidden substance increases, and hence, however satisfying prohibition may be to those who support, enact, and enforce it, it is ultimately counterproductive, as it increases the number of people who taste the forbidden fruit.

I read every book my readers recommend, albeit not immediately, and so I put this book on my queue, and have now digested it. This is a fascinating view of a very different America: a newly independent nation in the first two decades of the nineteenth century, still mostly a coastal nation with a vast wilderness to the West, but beginning to expand over the mountains into the fertile land beyond. The one thing all European visitors to America remarked upon was that people in this brave new republic, from strait-laced New Englanders, to Virginia patricians, to plantation barons of the South, to buckskin pioneers and homesteaders across the Appalachians, drank a lot, reaching a peak around 1830 of five gallons (19 litres) of hard spirits (in excess of 45% alcohol) per capita per annum—and that “per capita” includes children and babies in a rapidly growing population, so the adults, and particularly the men, disproportionately contributed to this aggregate.

As the author teases out of the sketchy data of the period, there were a number of social, cultural, and economic reasons for this. Prior to the revolution, America was a rum drinking nation, but after the break with Britain whiskey made from maize (corn, in the American vernacular) became the beverage of choice. As Americans migrated and settled the West, maize was their crop of choice, but before the era of canals and railroads, shipping their crop to the markets of the East cost more than its value. Distilling it into a much-sought beverage, however, made the arduous trek to market profitable, and justified the round trip. On the rugged western frontier, drinking water was not to be trusted, and a sip of contaminated water could condemn one to a debilitating and possibly fatal bout of dysentery or cholera. None of these bugs could survive in whiskey, and hence it was seen as the healthy beverage. Finally, whiskey provides 83 calories per fluid ounce, and is thus a compact way to store and transmit food value without need for refrigeration.

Some things never change. European visitors to America remarked upon the phenomenon of “rapid eating” or, as we now call it, “fast food”. With the fare at most taverns outside the cities limited to fried corn cakes, salt pork, and whiskey, there was precious little need to linger over one's meal, and hence it was in-and-out, centuries before the burger. But then, things change. Starting around 1830, alcohol consumption in the United States began to plummet, and temperance societies began to spring up across the land. From a peak of about 5 gallons per capita, distilled spirits consumption fell to between 1 and 2 gallons and has remained more or less constant ever since.

But what is interesting is that the widespread turn away from hard liquor was not in any way produced by top-down or coercive prohibition. Instead, it was a bottom-up social movement largely coupled with the Second Great Awakening. While this movement certainly did result in some forms of restrictions on the production and sale of alcohol, much more effective was its opprobrium against public drunkenness and those who enabled it.

This book is based on a Ph.D. thesis, and in places shows it. There is a painful attempt, based on laughably incomplete data, to quantify alcohol consumption during the early 19th century. This, I assume, is because at the epoch “social scientists” repeated the mantra “numbers are good”. This is all nonsense; ignore it. Far more credible are the reports of contemporary observers quoted in the text.

As to Prof. Rahe's assertion that prohibition reduces the consumption of a substance, I don't think this book advances that case. The collapse in the consumption of strong drink in the 1830s was a societal and moral revolution, and any restrictions on the availability of alcohol were the result of that change, not its cause. That said, I do not dispute that prohibition did reduce the reported level of alcohol consumption, but at the cost of horrific criminality and disdain for the rule of law and, after repeal, a return to the status quo ante.

If you're interested in prohibition in all of its manifestations, I recommend this book, even though it has little to do with prohibition. It is an object lesson in how a free society self-corrects from excess and re-orients itself toward behaviour which benefits its citizens.

November 2012 Permalink

Roth, Philip. The Plot Against America. New York: Vintage, 2004. ISBN 1-4000-7949-7.
Pulitzer Prize-winning mainstream novelist Philip Roth turns to alternative history in this novel, which also falls into the genre Rudy Rucker pioneered and named “transreal”—autobiographical fiction, in which the author (or a character clearly based upon him) plays a major part in the story. Here, the story is told in the first person by the author, as a reminiscence of his boyhood in the early 1940s in Newark, New Jersey. In this timeline, however, after a deadlocked convention, the Republican party chooses Charles Lindbergh as its 1940 presidential candidate who, running on an isolationist platform of “Vote for Lindbergh or vote for war”, defeats FDR's bid for a third term in a landslide.

After he takes office, Lindbergh's tilt toward the Axis becomes increasingly evident. He appoints the virulently anti-Semitic Henry Ford as Secretary of the Interior, flies to Iceland to sign a pact with Hitler, and concludes a treaty with Japan which accepts all its Asian conquests so far. Further, he cuts off all assistance to Britain and the USSR. On the domestic front, his Office of American Absorption begins encouraging “urban” children (almost all of whom happen to be Jewish) to spend their summers on farms in the “heartland” imbibing “American values”, and later escalates to “encouraging” the migration of entire families (who happen to be Jewish) to rural areas.

All of this, and its many consequences, ranging from trivial to tragic, are seen through the eyes of young Philip Roth, perceived as a young boy would who was living through all of this and trying to make sense of it. A number of anecdotes have nothing to do with the alternative history story line and may be purely autobiographical. This is a “mood novel” and not remotely a thriller; the pace of the story-telling is languid, evoking the time sense and feeling of living in the present of a young boy. As alternative history, I found a number of aspects implausible and unpersuasive. Most exemplars of the genre choose one specific event at which the story departs from recorded history, then spin out the ramifications of that event as the story develops. For example, in 1945 by Newt Gingrich and William Forstchen, after the attack on Pearl Harbor, Germany does not declare war on the United States, which only goes to war against Japan. In Roth's book, the point of divergence is simply the nomination of Lindbergh for president. Now, in the real election of 1940, FDR defeated Wendell Willkie by 449 electoral votes to 82, with the Republican carrying only 10 of the 48 states. But here, with Lindbergh as the nominee, we're supposed to believe that FDR would lose in forty-six states, carrying only his home state of New York and squeaking to a narrow win in Maryland. This seems highly implausible to me—Lindbergh's agitation on behalf of America First made him a highly polarising figure, and his apparent sympathy for Nazi Germany (in 1938 he accepted a gold medal decorated with four swastikas from Hermann Göring in Berlin) made him anathema in much of the media. All of these negatives would have been pounded home by the Democrats, who had firm control of the House and Senate as well as the White House, and all the advantages of incumbency. Turning a 38 state landslide into a 46 state wipeout simply by changing the Republican nominee stretches suspension of disbelief to the limit, at least for this reader, especially as Americans are historically disinclined to elect “outsiders” to the presidency.

If you accept this premise, then most of what follows is reasonably plausible and the descent of the country into a folksy all-American kind of fascism is artfully told. But then something very odd happens. As events are unfolding at their rather leisurely pace, on page 317 it's like the author realised he was about to run out of typewriter ribbon or something, and the whole thing gets wrapped up in ten pages, most of which is an unconfirmed account by one of the characters of behind-the-scenes events which may or may not explain everything, and then there's a final chapter to sort out the personal details. This left me feeling like Charlie Brown when Lucy snatches away the football; either the novel should be longer, or else the pace of the whole thing should be faster rather than this whiplash-inducing discontinuity right before the end—but who am I to give advice to somebody with a Pulitzer?

A postscript provides real-world biographies of the many historical figures who appear in the novel, and the complete text of Lindbergh's September 1941 Des Moines speech to the America First Committee which documents his contemporary sentiments for readers who are unaware of this unsavoury part of his career.

November 2006 Permalink

Rowsome, Frank, Jr. The Verse by the Side of the Road. New York: Plume, [1965] 1979. ISBN 0-452-26762-5.
In the years before the Interstate Highway System, long trips on the mostly two-lane roads in the United States could bore the kids in the back seat near unto death, and drive their parents to the brink of homicide by the incessant drone of “Are we there yet?” which began less than half an hour out of the driveway. A blessed respite from counting cows, license plate poker, and counting down the dwindling number of bottles of beer on the wall would be the appearance on the horizon of a series of six red and white signs, which all those in the car would strain their eyes to be the first to read.

WITHIN THIS VALE

OF TOIL

AND SIN

YOUR HEAD GROWS BALD

BUT NOT YOUR CHIN—USE

Burma-Shave

In the fall of 1925, the owners of the virtually unknown Burma-Vita company of Minneapolis came up with a new idea to promote the brushless shaving cream they had invented. Since the product would have particular appeal to travellers who didn't want to pack a wet and messy shaving brush and mug in their valise, what better way to pitch it than with signs along the highways frequented by those potential customers? Thus was born, at first only on a few highways in Minnesota, what was to become an American institution for decades and a fondly remembered piece of Americana, the Burma-Shave signs. As the signs proliferated across the landscape, so did sales; so rapid was the growth of the company in the 1930s that a director of sales said (p. 38), “We never knew that there was a depression.” At the peak the company had more than six million regular customers, who were regularly reminded to purchase the product by almost 7000 sets of signs—around 40,000 individual signs, all across the country.

While the first signs were straightforward selling copy, Burma-Shave signs quickly evolved into the characteristic jingles, usually rhyming and full of corny humour and outrageous puns. Rather than hiring an advertising agency, the company ran national contests which paid $100 for the best jingle and regularly received more than 50,000 entries from amateur versifiers.

Almost from the start, the company devoted a substantial number of the messages to highway safety; this was not the result of outside pressure from anti-billboard forces as I remember hearing in the 1950s, but based on a belief that it was the right thing to do—and besides, the sixth sign always mentioned the product! The set of signs above is the jingle that most sticks in my memory: it was a favourite of the Burma-Shave founders as well, having been re-run several times since its first appearance in 1933 and chosen by them to be immortalised in the Smithsonian Institution. Another that comes immediately to my mind is the following, from 1960, on the highway safety theme:

THIRTY DAYS

HATH SEPTEMBER

APRIL

JUNE AND THE

SPEED OFFENDER

Burma-Shave

Times change, and with the advent of roaring four-lane freeways, billboard bans or set-back requirements which made sequences of signs unaffordable, the increasing urbanisation of American society, and of course the dominance of television over all other advertising media, by the early 1960s it was clear to the management of Burma-Vita that the road sign campaign was no longer effective. They had already decided to phase it out before they sold the company to Philip Morris in 1963, after which the signs were quickly taken down, depriving the two-lane rural byways of America of some uniquely American wit and wisdom, but who ever drove them any more, since the Interstate went through?

The first half of this delightful book tells the story of the origin, heyday, and demise of the Burma-Shave signs, and the balance lists all of the six hundred jingles preserved in the records of the Burma-Vita Company, by year of introduction. This isn't something you'll probably want to read straight through, but it's great to pick up from time to time when you want a chuckle.

And then the last sign had been read: all the family exclaimed in unison, “Burma-Shave!”. It had been maybe sixty miles since the last set of signs, and so they'd recall that one and remember other great jingles from earlier trips. Then things would quiet down for a while. “Are we there yet?”

October 2006 Permalink

Royce, Kenneth W. Hologram of Liberty. Ignacio, CO: Javelin Press, 1997. ISBN 1-888766-03-4.
The author, who also uses the nom de plume “Boston T. Party”, provides a survey of the tawdry machinations which accompanied the drafting and adoption of the United States Constitution, making the case that the document was deliberately designed to permit arbitrary expansion of federal power, with cosmetic limitations on that power included to persuade the states to ratify it. It is striking how precisely not just vocal anti-federalists like Patrick Henry, but also Thomas Jefferson, anticipated the ways the federal government would slip its bonds—through judiciary power and the creation of debt, both of which were promptly put into effect by John Marshall and Alexander Hamilton, respectively. Writing on this topic seems to have, as an occupational hazard, a tendency to rant. While Royce never ascends to the coruscating rhetoric of Lysander Spooner's No Treason, there is a great deal of bold type here, as well as some rather curious conspiracy theories (which are, in all fairness, presented for the reader's consideration, not endorsed by the author). Oddly, although chapter 11 discusses the 27th amendment (Congressional Pay Limitation)—proposed in 1789 as part of the original Bill of Rights, but not ratified until 1992—it is missing from the text of the Constitution in appendix C.

July 2004 Permalink

Rutler, George William. Coincidentally. New York: Crossroad Publishing, 2006. ISBN 978-0-8245-2440-1.
This curious little book is a collection of the author's essays on historical coincidences originally published in Crisis Magazine. Each explores coincidences around a general theme. “Coincidence” is defined rather loosely and generously. Consider (p. 160), “Two years later in Missouri, the St. Louis Municipal Bridge was dedicated concurrently with the appointment of England's poet laureate, Robert Bridges. The numerical sum of the year of his birth, 1844, multiplied by 10, is identical to the length in feet of the Philadelphia-Camden Bridge over the Delaware River.”

Here is paragraph from p. 138 which illustrates what's in store for you in these essays.

Odd and tragic coincidences in maritime history render a little more plausible the breathless meters of James Elroy Flecker (1884–1915): “The dragon-green, the luminous, the dark, the serpent-haunted sea.” That sea haunts me too, especially with the realization that Flecker died in the year of the loss of 1,154 lives on the Lusitania. More odd than tragic is this: the United States Secretary of State William Jennings Bryan (in H. L. Mencken's estimation “The National Tear-Duct”) officially protested the ship's sinking on May 13, 1915 which was the 400th anniversary, to the day, of the marriage of the Duke of Suffolk to Mary, the widow of Louis XII and sister of Henry VIII, after she had spurned the hand of the Archduke Charles. There is something ominous even in the name of the great hydrologist of the Massachusetts Institute of Technology who set the standards for water purification: Thomas Drown (1842–1904). Swinburne capitalized on the pathos: “… the place of the slaying of Itylus / The feast of Daulis, the Thracian sea.” And a singularly melancholy fact about the sea is that Swinburne did not end up in it.
I noted several factual errors. For example, on p. 169, Chuck Yeager is said to have flown a “B-51 Mustang” in World War II (the correct designation is P-51). Such lapses make you wonder about the reliability of other details, which are far more arcane and difficult to verify.

The author is opinionated and not at all hesitant to share his acerbic perspective: on p. 94 he calls Richard Wagner a “master of Nazi elevator music”. The vocabulary will send almost all readers other than William F. Buckley (who contributed a cover blurb to the book) to the dictionary from time to time. This is not a book you'll want to read straight through—your head will end up spinning with all the details and everything will dissolve into a blur. I found a chapter or two a day about right. I'd sum it up with Abraham Lincoln's observation “Well, for those who like that sort of thing, I should think it is just about the sort of thing they would like.”

February 2008 Permalink

Ryan, Craig. Magnificent Failure. Washington: Smithsonian Books, 2003. ISBN 978-1-58834-141-9.
In his 1995 book, The Pre-Astronauts (which I read before I began keeping this list), the author masterfully explores the pioneering U.S. balloon flights into the upper atmosphere between the end of World War II and the first manned space flights, which brought both Air Force and Navy manned balloon programs to an abrupt halt. These flights are little remembered today (except for folks lucky enough to have an attic [or DVD] full of National Geographics from the epoch, which covered them in detail). Still less known is the story recounted here: one man's quest, fuelled only by ambition, determination, willingness to do whatever it took, persuasiveness, and sheer guts, to fly higher and free-fall farther than any man had ever done before. Without the backing of any military service, government agency, wealthy patron, or corporate sponsor, he achieved his first goal, setting an altitude record for lighter than air flight which remains unbroken more than four decades later, and tragically died from injuries sustained in his attempt to accomplish the second, after an in-flight accident which remains enigmatic and controversial to this day.

The term “American original” is over-used in describing exceptional characters that nation has produced, but if anybody deserves that designation, Nick Piantanida does. The son of immigrant parents from the Adriatic island of Korčula (now part of Croatia), Nick was born in 1932 and grew up on the gritty Depression-era streets of Union City, New Jersey in the very cauldron of the American melting pot, amid communities of Germans, Italians, Irish, Jews, Poles, Syrians, and Greeks. Although he was universally acknowledged to be extremely bright, his interests in school were mostly brawling and basketball. He excelled in the latter, sharing the 1953 YMCA All-America honours with some guy named Wilt Chamberlain. After belatedly finishing high school (bored, he had dropped out to start a scrap iron business, but was persuaded to return by his parents), he joined the Army, where he was All-Army in basketball for both years of his hitch and undefeated as a heavyweight boxer. After mustering out, he received a full basketball scholarship to Fairleigh Dickinson University, then abruptly quit a few months into his freshman year, finding the regimentation of college life as distasteful as that of the Army.

In search of fame, fortune, and adventure, Nick next set his sights on Venezuela, where he vowed to be the first to climb Devil's Mountain, from which Angel Falls plummets 807 metres. Penniless, he recruited one of his Army buddies as a climbing partner and lined up sponsors to fund the expedition. At the outset, he knew nothing about mountaineering, so he taught himself on the Hudson River Palisades with the aid of books from the library. Upon arrival in Venezuela, the climbers learnt to their dismay that another expedition had just completed the first ascent of the mountain, so Nick vowed to make the first ascent of the north face, just beside the falls, which was thought unclimbable. After an arduous trip through the jungle, during which their guide quit and left the climbers alone, Nick and his partner made the ascent by themselves and returned to the acclaim of all. Such was the determination of this man.

Nick was always looking for adventure, celebrity, and the big score. He worked for a while as a steelworker on the high iron of the Verrazano-Narrows Bridge, but most often supported himself and, after his marriage, his growing family, by contract truck driving and, occasionally, unemployment checks. Still, he never ceased to look for ways, always unconventional, to make his fortune, nor failed to recruit associates and find funding for his schemes. Many of his acquaintances use the word “hustler” to describe him in those days, and one doubts that Nick would be offended by the honorific. He opened an exotic animal import business, and ordered cobras, mongooses, goanna lizards, and other critters mail-order from around the world for resale to wealthy clients. When buyers failed to materialise, he staged gladiatorial contests of both animal versus animal and animal versus himself. Eventually he imported a Bengal tiger cub which he kept in his apartment until it had grown so large it could put its paws on his shoulders, whereupon he traded the tiger for a decrepit airplane (he had earned a pilot's license while still in his teens). Offered a spot on the New York Knicks professional basketball team, he turned it down because he thought he could make more money barnstorming in his airplane.

Nick finally found his life's vocation when, on a lark, he made a parachute jump. Soon, he had progressed from static line beginner jumps to free fall and increasingly advanced skydiving, making as many jumps as he could afford and find the time for. And then he had the Big Idea. In 1960, Joseph Kittinger had ridden a helium balloon to an altitude of 31,333 metres and bailed out, using a small drogue parachute to stabilise his fall until he opened his main parachute at an altitude of 5,330 metres. Although this was, at the time (and remains to this day), the highest-altitude parachute jump ever made, skydiving purists do not consider it a true free fall jump due to the use of the stabilising chute. In 1962, Eugene Andreev jumped from a Soviet balloon at an altitude of 25,460 metres and did a pure free fall descent, stabilising himself purely by skydiving techniques, setting an official free-fall altitude record which also remains unbroken. Nick vowed to claim both the highest-altitude ascent record and the longest free-fall jump record for himself, and set about it with his usual energy and single-minded determination.

Piantanida faced a daunting set of challenges in achieving his goal: at the outset he had neither balloon, gondola, spacesuit, life support system, suitable parachute, nor any knowledge of or experience with the multitude of specialities whose mastery is required to survive in the stratosphere, above 99% of the Earth's atmosphere. Kittinger and Andreev were supported by all the resources, knowledge, and funding of their respective superpowers' military establishments, while Nick had—well…Nick. But he was not to be deterred, and immediately set out educating himself and lining up people, sponsors, and gear necessary for the attempt.

The story of what became known as Project Strato-Jump reads like an early Heinlein novel, with an indomitable spirit pursuing a goal other, more “reasonable”, people considered absurd or futile. By will, guile, charm, pull, intimidation, or simply wearing down adversaries until they gave in just to make him go away, he managed to line up everything he needed, including having the company which supplied NASA with its Project Gemini spacesuits custom tailor one (Nick was built like an NBA star, not an astronaut) and loan it to him for the project.

Finally, on October 22, 1965, all was ready, and Nick took to the sky above Minnesota, bound for the edge of space. But only a few minutes after launch, at just 7,000 metres, the balloon burst, probably due to a faulty seam in the polyethylene envelope, triggered by a wind shear at that altitude. Nick rode down in the gondola under its recovery parachute, then bailed out at 3,200 metres, unglamorously landing in the Pig's Eye Dump in St. Paul.

Undeterred by the failure, Nick recruited a new balloon manufacturer and raised money for a second attempt, setting off for the stratosphere again on February 2, 1966. This time the ascent went flawlessly, and the balloon rose to an all-time record altitude of 37,643 metres. But as Nick proceeded through the pre-jump checklist, when he attempted to disconnect the oxygen hose that fed his suit from the gondola's supply and switch over to the “bail-out bottle” from which he would breathe during the descent, the disconnect fitting jammed, and he was unable to dislodge it. He was, in effect, tethered to the gondola by his oxygen line and had no option but to descend with it. Ground control cut the gondola's parachute loose from the balloon, and after a harrowing descent Nick and the gondola landed in a farm field with only minor injuries. The jump had failed, but Nick had flown higher than any manned balloon ever had. But because the flight had not been registered as an official altitude attempt, the record, although the altitude attained is undisputed, remains unofficial.

After the second failure, Nick's confidence appeared visibly shaken. Seeing all that expense, work, and risk come to nought due to a small detail with which nobody had been concerned prior to the flight underlined just how small the margin for error was in the extreme environment at the edge of space and, by implication, how the smallest error or oversight could lead to disaster. Still, he was bent on trying yet again, and on May 1, 1966 (since he was trying to break a Soviet record, he thought this date particularly appropriate), launched for the third time. Everything went normally as the balloon approached 17,375 metres, whereupon the ground crew monitoring the air-to-ground voice link heard what was described as a “whoosh” or hiss, followed by a call of “Emergen” from Nick, followed by silence. The ground crew immediately sent a radio command to cut the balloon loose, and the gondola, with Nick inside, began to descend under its cargo parachute.

Rescue crews arrived just moments after the gondola touched down and found it undamaged, but Nick was unconscious and unresponsive. He was rushed to the local hospital, treated without avail, and then transferred to a hospital in Minneapolis, where he was placed in a hyperbaric chamber and treated for decompression sickness, without improvement. On June 18th, he was transferred to the National Institutes of Health hospital in Bethesda, Maryland, where he was examined and treated by experts in decompression disease and hypoxia, but never regained consciousness. He died on August 25, 1966, with the autopsy finding the cause of death to be hypoxia and rupture of brain tissue due to decompression.

What happened to Nick up there in the sky? Within hours after the accident, rumours started to circulate that he was the victim of equipment failure: that his faceplate had blown out or that the pressure suit had failed in some other manner, leading to an explosive decompression. This story has been repeated so often it has become almost canon—consider this article from Wired from July 2002. Indeed, when rescuers arrived on the scene, Nick's “faceplate” was damaged, but this was just the sun visor which can be pivoted down to cover the pressure-retaining faceplate, which was intact and, in a subsequent test of the helmet, found to seal perfectly. Rescuers assumed the sun visor had been damaged by impact with part of the gondola during the landing; in any case, however damaged, it could not have caused a decompression.

Because the pressure suit had been cut off in the emergency room, it wasn't possible to perform a full pressure test, but meticulous inspection of the suit by the manufacturer discovered no flaws which could explain an explosive decompression. The oxygen supply system in the gondola was found to be functioning normally, with all pressure vessels and regulators operating within specifications.

So, what happened? We will never know for sure. Unlike a NASA mission, there was no telemetry, nor even a sequence camera recording what was happening in the gondola. And yet, within minutes after the accident occurred, many members of the ground crew came to a conclusion as to the probable cause, which those still alive today have seen no need to revisit. Such was their certainty that reporter Robert Vaughan gave it as the cause in the story he filed with Life magazine, which he was dismayed to see replaced with an ambiguous passage by the editors, because his explanation did not fit with the narrative chosen for the story. (The legacy media acted like the legacy media even when they were the only media and not yet legacy!)

Astonishingly, all the evidence (which, admittedly, isn't very much) seems to indicate that Nick opened his helmet visor at that extreme altitude, which allowed the air in the suit to rush out (causing the “whoosh”), forcing the air from his lungs (cutting off the call of “Emergency!”), and rapidly incapacitating him. The extended hypoxia and exposure to low pressure as the gondola descended under the cargo parachute caused irreversible brain damage well before the gondola landed. But why would Nick do such a crazy thing as open his helmet visor when in the physiological equivalent of space? Again, we can never know, but what is known is that he'd done it before, at lower altitudes, to the dismay of his crew, who warned him of the potentially dire consequences. There is abundant evidence that Piantanida violated the oxygen prebreathing protocol before high altitude exposure not only on this flight, but on a regular basis. He reported symptoms completely consistent with decompression sickness (the onset of “the bends”), and is quoted as saying that he could relieve the symptoms by deflating and reinflating his suit. Finally, about as close to a smoking gun as we're likely to find, the rescue crew found Nick's pressure visor unlatched and rotated away from the seal position. Since Nick would have been in a coma well before he descended into breathable atmosphere, it isn't possible he could have done this himself during the descent, and there is no way an impact upon landing could have performed the precise sequence of operations required to depressurise the suit and open the visor.

It is impossible to put oneself inside the mind of such an outlier in the human population as Nick, much less imagine what he was thinking and feeling when rising into the darkness above the dawn on the third attempt at achieving his dream. He was almost certainly suffering from symptoms of decompression sickness due to inadequate oxygen prebreathing, afflicted by chronic sleep deprivation in the rush to get the flight off, and under intense stress to complete the mission before his backers grew discouraged and the money ran out. All of these factors can cloud the judgement of even the most disciplined and best trained person, and, it must be said, Nick was neither. Perhaps the larger puzzle is why members of his crew who did understand these things did not speak up, pull the plug, or walk off the project when they saw what was happening. But then a personality like Nick can sweep people along through its own primal power, for better or for worse; in this case, to tragedy.

Was Nick a hero? Decide for yourself—my opinion is no. In pursuing his own ego-driven ambition, he ended up leaving his wife a widow and his three daughters without a father they remember, with only a meagre life insurance policy to support them. The project was basically a stunt, mounted with the goal of turning its success into money by sales of story, film, and celebrity appearances. Even had the jump succeeded, it would have yielded no useful aeromedical research data applicable to subsequent work apart from the fact that it was possible. (In Nick's defence on this account, he approached the Air Force and NASA, inviting them to supply instrumentation and experiments for the jump, and was rebuffed.)

This book is an exhaustively researched (involving many interviews with surviving participants in the events) and artfully written account of this strange episode which was, at the same time, the last chapter of the exploration of the black beyond by intrepid men in their floating machines and a kind of false dawn precursor of the private exploration of space which is coming to the fore almost half a century after Nick Piantanida set out to pursue his black sky dream. The only embarrassing aspect to this superb book is that on occasion the author equates state-sponsored projects with competence, responsibility, and merit. Well, let's see…. In a rough calculation, using 2007 constant dollars, NASA has spent northward of half a trillion dollars, killing a total of 17 astronauts (plus other employees in industrial accidents on the ground), with all of the astronaut deaths due to foreseeable risks which management failed to identify or mitigate in time.

Project Strato-Jump, funded entirely by voluntary contributions, without resort to the state's monopoly on the use of force, set an altitude record for lighter than air flight within the atmosphere which has stood from 1966 to this writing, and accomplished it in three missions with a total budget of less than (2007 constant) US$400,000, with the loss of a single life due to pilot error. Yes, NASA has achieved much, much more. But a million times more?

This is a very long review, so if you've made it to this point and found it tedious, please accept my excuses. Nick Piantanida has haunted me for decades. I followed his exploits as they happened and were reported on the CBS Evening News in the 1960s. I felt the frustration of the second flight (with that achingly so-far-and-yet-so-near view of the Earth from altitude, when he couldn't jump), and then the dismay at the calamity on the third, then the long vigil ending with his sad demise. Astronauts were, well, astronauts, but Nick was one of us. If a truck driver from New Jersey could, by main force, travel to the black of space, then why couldn't any of us? That was the real dream of the Space Age: Have Space Suit—Will Travel. Well, Nick managed to lay his hands on a space suit and travel he did!

Anybody who swallowed the bogus mainstream media narrative of Nick's “suit failure” had to watch the subsequent Gemini and Apollo EVA missions with a special sense of apprehension. A pressure suit is one of the few things in the NASA space program which has no backup: if the pressure garment fails catastrophically, you're dead before you can do anything about it. (A slow leak isn't a problem, since there's an oxygen purge system which can maintain pressure until you can get inside, but a major seam failure, or having a visor blow out or glove pop off is endsville.) Knowing that those fellows cavorting on the Moon were wearing pretty much the same suit as Nick caused those who believed the propaganda version of his death to needlessly catch their breath every time one of them stumbled and left a sitzmark or faceplant in the eternal lunar regolith.

November 2010 Permalink

Sacks, David. Language Visible: Unraveling the Mystery of the Alphabet. New York: Broadway Books, 2003. ISBN 0-7679-1172-5.
Whaddya gonna do? The hardcover is out of print and the paperback isn't scheduled for publication until August 2004. The U.K. hardback edition, simply titled The Alphabet, is currently available.

March 2004 Permalink

Salisbury, Harrison E. The 900 Days. New York: Da Capo Press, [1969, 1985] 2003. ISBN 978-0-306-81298-9.
On June 22, 1941, Nazi Germany, without provocation or warning, violated its non-aggression pact with the Soviet Union and invaded from the west. The German invasion force was divided into three army groups. Army Group North, commanded by Field Marshal Ritter von Leeb, was charged with advancing through and securing the Baltic states, then proceeding to take or destroy the city of Leningrad. Army Group Centre was to invade Byelorussia and take Smolensk, then advance to Moscow. After Army Group North had reduced Leningrad, it was to detach much of its force for the battle for Moscow. Army Group South's objective was to conquer the Ukraine, capture Kiev, and then seize the oil fields of the Caucasus.

The invasion took the Soviet government and military completely by surprise, despite abundant warnings from foreign governments of German troops massing along its western border and reports from Soviet spies indicating an invasion was imminent. A German invasion did not figure in Stalin's world view and, in the age of the Great Terror, nobody had the standing or courage to challenge Stalin. Indeed, Stalin rejected proposals to strengthen defences on the western frontiers for fear of provoking the Germans. The Soviet military was in near-complete disarray. The purges which began in the 1930s had wiped out not only most of the senior commanders, but the officer corps as a whole. By 1941, only 7 percent of Red Army officers had any higher military education and just 37 percent had any military instruction at all, even at a high school level.

Thus, it wasn't a surprise that the initial German offensive was even more successful than the optimistic German estimates had anticipated. Many Soviet aircraft were destroyed on the ground, and German air strikes deep into Soviet territory disrupted communications in the battle area and with senior commanders in Moscow. Stalin appeared to be paralysed by the shock; he did not address the Soviet people until the 3rd of July, a week and a half after the invasion, by which time large areas of Soviet territory had already been lost.

Army Group North's advance toward Leningrad was so rapid that the Soviets could hardly set up new defensive lines before they were overrun by German forces. The administration in Leningrad mobilised a million civilians (out of an initial population of around three million) to build fortifications around the city and on the approaches to it. By August, German forces were within artillery range of the city and shells began to fall throughout Leningrad. On August 21st, Hitler issued a directive giving priority to the encirclement of Leningrad and linking up with the advancing Finnish army over the capture of Moscow, so Army Group North would receive what it needed for the task. When the Germans captured the town of Mga on August 30, the last rail link between Leningrad and the rest of Russia was severed. Henceforth, the only way in or out of Leningrad was across Lake Ladoga, running the gauntlet of German ships and mines, or by air. The siege of Leningrad had begun. The battle for the city was now in the hands of the Germans' most potent allies: Generals Hunger, Cold, and Terror.

The civil authorities were as ill-prepared for what was to come as the military commanders had been to halt the German advance before it invested the city. The dire situation was compounded when, on September 8th, a German air raid burned to the ground the city's principal food warehouses, built of wood and packed next to one another, destroying all the reserves stored there. An inventory taken after the raid revealed that, at normal rates of consumption, only between two and three weeks' supply of food remained for the population. Rationing had already been imposed, and rations were immediately cut to 500 grams of bread per day for workers and 300 grams for office employees and children. This was to be just the start. The population of encircled Leningrad, civilian and military, totalled around 3.4 million.

While military events and the actions of the city government are described, most of the book recounts the stories of people who lived through the siege. The accounts are horrific, with the previously unimaginable becoming the quotidian experience of residents of the city. The frozen bodies of victims of starvation were often stacked like cordwood outside apartment buildings or hauled on children's sleds to common graves. Very quickly, Leningrad became exclusively a city of humans: dogs, cats, and pigeons disappeared, eaten as food supplies dwindled. Even rats vanished. While some were doubtless eaten, most seemed to have deserted the starving metropolis for the front, where food was more abundant. Cannibalism was not just rumoured, but documented, and parents were careful not to let children out of their sight.

Even as privation reached extreme levels (at one point, the daily bread ration for workers fell to 300 grams and for children and dependents 125 grams—and that is when bread was available at all), Stalin's secret police remained up and running, and people were arrested in the middle of the night on suspicion of espionage, contacts with foreigners, or shirking work, or for no reason at all. The citizenry observed that the NKVD seemed suspiciously well-fed throughout the famine, and they wielded the power of life and death when denial of a ration card was a sentence of death as certain as a bullet in the back of the head.

In the brutal first winter of 1941–1942, Leningrad was sustained largely by truck traffic over the “Road of Life”, constructed over the ice of frozen Lake Ladoga. Operating from November through April, and subject to attack by German artillery and aircraft, the road carried thousands of tons of supplies, civilian and military, into the city, while the wounded and noncombatants were evacuated over it. The road was rebuilt during the following winter and continued to be the city's lifeline.

The siege of Leningrad was unparalleled in the history of urban sieges. Counting from the completion of the German encirclement on September 8, 1941 until the lifting of the siege on January 27, 1944, the siege lasted 872 days. By comparison, the siege of Paris in 1870–1871 lasted just 121 days. The siege of Vicksburg in the American war of secession lasted 47 days and involved only 4000 civilians. Total civilian casualties during the siege of Paris were less than those in Leningrad every two or three winter days. Estimates of total deaths in Leningrad due to starvation, disease, and enemy action vary widely. Official Soviet sources tried to minimise the toll to avoid recriminations among Leningraders who felt they had been abandoned to their fate. The author concludes that starvation deaths in Leningrad and the surrounding areas were on the order of one million, with a total of all deaths, civilian and military, between 1.3 and 1.5 million.

The author, then a foreign correspondent for United Press, was one of the first reporters to visit Leningrad after the lifting of the siege. The people he met then and their accounts of life during the siege were unfiltered by the edifice of Soviet propaganda later erected over life in besieged Leningrad. On this and subsequent visits, he was able to reconstruct the narrative, both at the level of policy and strategy and of individual human stories, which makes up this book. After its initial publication in 1969, the book was fiercely attacked in the Soviet press, with Pravda publishing a full page denunciation. Salisbury's meticulously documented account of the lack of preparedness, military blunders largely due to Stalin's destruction of the officer corps in his purges, and bungling by the Communist Party administration of the city did not fit with the story of heroic Leningrad standing against the Nazi onslaught in the official Soviet narrative. The book was banned in the Soviet Union and copies brought by tourists seized by customs. The author, who had been Moscow bureau chief for The New York Times from 1949 through 1954, was for years denied a visa to visit the Soviet Union. It was only after the collapse of the Soviet Union that the work became generally available in Russia.

I read the Kindle edition, which is a shameful and dismaying travesty of this classic and important work. It's not a cheap knock-off: the electronic edition is issued by the publisher at a price (at this writing) of US$ 13, only a few dollars less than the paperback edition. It appears to have been created by optical character recognition of a print edition without the most rudimentary copy editing of the result of the scan. Hundreds of words which were hyphenated at the ends of lines in the print edition occur here with embedded hyphens. The numbers ‘0’ and ‘1’ are confused with the letters ‘o’ and ‘i’ in numerous places. Somebody appears to have accidentally done a global replace of the letters “charge” with “chargé”, both in stand-alone words and within longer words. Embarrassingly, for a book with “900” in its title, the number often appears in the text as “poo”. Poetry is typeset with one character per line. I found more than four hundred mark-ups in the text, which even a cursory examination by a copy editor would have revealed. The index is just a list of searchable items, not linked to their references in the text. I have compiled a list of my mark-ups to this text, which I make available to readers and the publisher, should the latter wish to redeem this electronic edition by correcting them. I applaud publishers who make valuable books from their back-lists available in electronic form. But respect your customers! When you charge us almost as much as the paperback and deliver a slapdash product which clearly hasn't been read by anybody on your staff before it reached my eyes, I'm going to savage it. Consider it savaged. Should the publisher supplant this regrettable edition with one worthy of its content, I will remove this notice.

October 2016 Permalink

Satter, David. Age of Delirium: The Decline and Fall of the Soviet Union. New Haven, CT: Yale University Press, [1996] 2001. ISBN 0-300-08705-5.

May 2002 Permalink

Schama, Simon. Citizens: A Chronicle of the French Revolution. New York: Vintage Books, 1989. ISBN 0-679-72610-1.
The French Revolution is so universally used as a metaphor in social and political writing that it's refreshing to come across a straight narrative history of what actually happened. The French Revolution is a huge, sprawling story, and this is a big, heavy book about it—more than nine hundred pages, with an enormous cast of characters—in large part because each successive set of new bosses cut off the heads of their predecessors. Schama stresses the continuity of many of the aspects of the Revolution with changes already underway in the latter decades of the ancien régime—Louis XVI comes across as a kind of Enlightenment Gorbachev—attempting to reform a bankrupt system from the top and setting in motion forces which couldn't be controlled. Also striking is how many of the most extreme revolutionaries were well-off before the Revolution and, in particular, the large number of lawyers in their ranks. Far from viewing the Terror as an aberration, Schama argues that from the very start, the summer of 1789, “violence was the motor of the Revolution”. With the benefit of two centuries of hindsight, you almost want to reach back across the years, shake these guys by the shoulders, and say “Can't you see where you're going with this?” But then you realise: this was all happening for the very first time—they had no idea of the inevitable outcome of their idealism! In a mere four years, they invented the entire malevolent machinery of the modern, murderous, totalitarian nation-state, and all with the best intentions, informed by the persuasively posed yet relentlessly wrong reasoning of Rousseau. Those who have since repeated the experiment, with the example of the French Revolution before them as a warning, have no such excuse.

October 2004 Permalink

Schlosser, Eric. Command and Control. New York: Penguin, 2013. ISBN 978-0-14-312578-5.
On the evening of September 18th, 1980, two U.S. Air Force airmen, members of a Propellant Transfer System (PTS) team, entered a Titan II missile silo near Damascus, Arkansas to perform a routine maintenance procedure. Earlier in the day they had been called to the site because a warning signal had indicated that pressure in the missile's second stage oxidiser tank was low. This was not unusual, especially for a missile which had recently been refuelled, as this one had, and the procedure of adding nitrogen gas to the tank to bring the pressure up to specification was considered straightforward. That is, if you consider any work involving a Titan II “routine” or “straightforward”. The missile, in an underground silo, protected by a door weighing more than 65 tonnes and able to withstand the 300 psi overpressure of a nearby nuclear detonation, stood more than 31 metres high and contained 143 tonnes of highly toxic fuel and oxidiser which, in addition to being poisonous to humans in small concentrations, were hypergolic: they burst into flames upon contact with one another, with no need of a source of ignition. Sitting atop this volatile fuel was a W-53 nuclear warhead with a yield of 9 megatons and high explosives in the fission primary which, unlike those in more modern nuclear weapons, were not insensitive to shock and fire. While it was unlikely in the extreme that detonation of these explosives due to an accident would result in a nuclear explosion, they could disperse the radioactive material in the bomb over the local area, requiring a massive clean-up effort.

The PTS team worked on the missile wearing what amounted to space suits with their own bottled air supply. One member was an experienced technician while the other was a 19-year old rookie receiving on the job training. Early in the procedure, the team was to remove the pressure cap from the side of the missile. While the lead technician was turning the cap with a socket wrench, the socket fell off the wrench and down the silo alongside the missile. The socket struck the thrust mount supporting the missile, bounced back upward, and struck the side of the missile's first stage fuel tank. Fuel began to spout outward as if from a garden hose. The trainee remarked, “This is not good.”

Back in the control centre, separated from the silo by massive blast doors, the two-man launch team, who had been following the servicing operation, saw their status panels light up like a Christmas tree decorated by somebody inordinately fond of the colour red. The warnings were contradictory and clearly not all correct. Had there indeed been both fuel and oxidiser leaks, as indicated, there would already have been an earth-shattering kaboom from the silo, and yet that had not happened. The technicians knew they had to evacuate the silo as soon as possible, but their evacuation route was blocked by dense fuel vapour.

The Air Force handles everything related to missiles by the book, but the book was silent about procedures for a situation like this, with massive quantities of toxic fuel pouring into the silo. Further, communications between the technicians and the control centre were poor, so it wasn't clear at first just what had happened. Before long, the commander of the missile wing, headquarters of the Strategic Air Command (SAC) in Omaha, and the missile's manufacturer, Martin Marietta, were in conference trying to decide how to proceed. The greatest risks were an electrical spark or other source of ignition setting the fuel on fire or, even greater, of the missile collapsing in the silo. With tonnes of fuel pouring from the fuel tank and no vent at its top, pressure in the tank would continue to fall. Eventually, it would drop below atmospheric pressure, and the tank would be crushed, likely leading the missile to crumple under the weight of the intact and fully loaded first stage oxidiser and second stage tanks. These tanks would then likely be breached, leading to an explosion. No Titan II had ever exploded in a closed silo, so there was no experience as to what the consequences of this might be.

As the night proceeded, all of the Carter era military malaise became evident. The Air Force lied to local law enforcement and media about what was happening, couldn't communicate with first responders, failed to send an evacuation helicopter for a gravely injured person because an irrelevant piece of equipment wasn't available, and could not come to a decision about how to respond as the situation deteriorated. Also on display was the heroism of individuals, in the Air Force and outside, who took matters into their own hands on the spot, rescued people, monitored the situation, evacuated nearby farms in the path of toxic clouds, and improvised as events required.

Amid all of this, nothing whatsoever had been done about the situation of the missile. Events inevitably took their course. In the early morning hours of September 19th, the missile collapsed, releasing all of its propellants, which exploded. The 65 tonne silo door was thrown 200 metres, shearing trees in its path. The nuclear warhead was thrown two hundred metres in another direction, coming to rest in a ditch. Its explosives did not detonate, and no radiation was released.

While there were plenty of reasons to worry about nuclear weapons during the Cold War, most people's concerns were about a conflict escalating to the deliberate use of nuclear weapons or the possibility of an accidental war. Among the general public there was little concern about the tens of thousands of nuclear weapons in depots, aboard aircraft, atop missiles, or on board submarines—certainly every precaution had been taken by the brilliant people at the weapons labs to make them safe and reliable, right?

Well, that was often the view among “defence intellectuals” until they were briefed in on the highly secret details of weapons design and the command and control procedures in place to govern their use in wartime. As documented in this book, which uses the Damascus accident as a backdrop (a ballistic missile explodes in rural Arkansas, sending its warhead through the air, because somebody dropped a socket wrench), the reality was far from reassuring, and it took decades, often against obstructionism and foot-dragging from the Pentagon, to remedy serious risks in the nuclear stockpile.

In the early days of the U.S. nuclear stockpile, it was assumed that nuclear weapons were the last resort in a wartime situation. Nuclear weapons were kept under the civilian custodianship of the Atomic Energy Commission (AEC), and would only be released to the military services by a direct order from the President of the United States. Further, the nuclear cores (“pits”) of weapons were stored separately from the rest of the weapon assembly, and would only be inserted in the weapon, in the case of bombers, in the air, after the order to deliver the weapon was received. (This procedure had been used even for the two bombs dropped on Japan.) These safeguards meant that the probability of an accidental nuclear explosion was essentially nil in peacetime, although the risk did exist of radioactive contamination if a pit were dispersed due to fire or explosion.

As the 1950s progressed, and fears of a Soviet sneak attack grew, pressure grew to shift the custodianship of nuclear weapons to the military. The development of nuclear tactical and air defence weapons, some of which were to be forward deployed outside the United States, added weight to this argument. If radar detected a wave of Soviet bombers heading for the United States, how practical would it be to contact the President, get him to sign off on transferring the anti-aircraft warheads to the Army and Air Force, have the AEC deliver them to the military bases, install them on the missiles, and prepare the missiles for launch? The missile age only compounded this situation. Now the risk existed for a “decapitation” attack which could take out the senior political and military leadership, leaving nobody with the authority to retaliate.

The result of all this was a gradual devolution of control over nuclear weapons from civilian to military commands, with fully-assembled nuclear weapons loaded on aircraft, sitting at the ends of runways in the United States and Europe, ready to take off on a few minutes' notice. As tensions continued to increase, B-52s, armed with hydrogen bombs, were on continuous “airborne alert”, ready at any time to head toward their targets.

The weapons carried by these aircraft, however, had not been designed for missions like this. They used high explosives which could be detonated by heat or shock, often contained few interlocks to prevent a stray electrical signal from triggering a detonation, were not “one point safe” (guaranteed that detonation of one segment of the high explosives could not cause a nuclear yield), and did not contain locks (“permissive action links”) to prevent unauthorised use of a weapon. Through much of the height of the Cold War, it was possible for a rogue B-52 or tactical fighter/bomber crew to drop a weapon which might start World War III; the only protection against this was rigid psychological screening and the enemy's air defence systems.

The resistance to introducing such safety measures stemmed from budget and schedule pressures, but also from what was called the “always/never” conflict. A nuclear weapon should always detonate when sent on a wartime mission. But it should never detonate under any other circumstances, including an airplane crash, technical malfunction, maintenance error, or through the deliberate acts of an insane or disloyal individual or group. These imperatives inevitably conflict with one another. The more safeguards you design into a weapon to avoid an unauthorised detonation, the greater the probability one of them may fail, rendering the weapon inert. SAC commanders and air crews were not enthusiastic about the prospect of risking their lives running the gauntlet of enemy air defences only to arrive over their target and drop a dud.

As documented here, it was only after the end of the Cold War, as nuclear weapon stockpiles were drawn down, that the more dangerous weapons were retired and command and control procedures put into place which seem (to the extent outsiders can assess such highly classified matters) to provide a reasonable balance between protection against a catastrophic accident or unauthorised launch and a reliable deterrent.

Nuclear command and control extends far beyond the design of weapons. The author also discusses in detail the development of war plans, how civilian and military authorities interact in implementing them, how emergency war orders are delivered, authenticated, and executed, and how this entire system must be designed not only to be robust against errors when intact and operating as intended, but in the aftermath of an attack.

This is a serious scholarly work and, at 632 pages, a long one. There are 94 pages of end notes, many of which expand substantially upon items in the main text. A Kindle edition is available.

November 2014 Permalink

Scott, Robert Falcon. Journals. Oxford: Oxford University Press, [1913, 1914, 1923, 1927] 2005. ISBN 978-0-19-953680-1.
Robert Falcon Scott, leading a party of five men hauling their supplies on sledges across the ice cap, reached the South Pole on January 17th, 1912. When he arrived, he discovered a cairn built by Roald Amundsen's party, which had reached the Pole on December 14th, 1911 using sledges pulled by dogs. After this crushing disappointment, Scott's polar party turned back toward their base on the coast. After crossing the high portion of the ice cap (which Scott refers to as “the summit”) without severe difficulties, they encountered unexpected, unprecedented, and, based upon subsequent meteorological records, extremely low temperatures on the Ross Ice Shelf (the “Barrier” in Scott's nomenclature). Immobilised by a blizzard, and without food or sufficient fuel to melt ice for water, Scott's party succumbed. Scott's last journal entry, dated March 29th, 1912, reads:

I do not think we can hope for any better things now. We shall stick it out to the end, but we are getting weaker, of course, and the end cannot be far. It seems a pity, but I do not think I can write more.
R. Scott.

For God's sake look after our people.

A search party found the bodies of Scott and the two other members of the expedition who died with him in the tent (the remaining two of the polar party had died earlier on the return journey; their remains were never found). His journals were found with him and, when returned to Britain, were prepared for publication, and proved a sensation. Amundsen's priority was almost forgotten in the English-speaking world, eclipsed by Scott's first-hand account of audacious daring, meticulous planning, heroic exertion, and dignity in the face of death.

A bewildering variety of Scott's journals were published over the years. They are described in detail and their differences curated in this Oxford World's Classics edition. In particular, Scott's original journals contained very candid and often acerbic observations about members of his expedition and other explorers, particularly Shackleton. These were elided or toned down in the published copies of the journals. In this edition, the published text is used, but the original manuscript text appears in an appendix.

Scott was originally considered a hero, then was subjected to a revisionist view that deemed him ill-prepared for the expedition and distracted by peripheral matters such as a study of the embryonic development of emperor penguins as opposed to Amundsen's single-minded focus on a dash to the Pole. The pendulum has now swung back somewhat, and a careful reading of Scott's own journals seems, at least to this reader, to support this more balanced view. Yes, in some ways Scott's expedition seems amazingly amateurish (I mean, if you were planning to ski across the ice cap, wouldn't you learn to ski before you arrived in Antarctica, rather than bring along a Norwegian to teach you after you arrived?), but ultimately Scott's polar party died due to a combination of horrific weather (present-day estimates are that only one year in sixteen has temperatures as low as those Scott experienced on the Ross Ice Shelf) and an equipment failure: leather washers on cans of fuel failed in the extreme temperatures, which caused loss of fuel Scott needed to melt ice to sustain the party on its return. And yet the same failure had been observed during Scott's 1901–1904 expedition, and nothing had been done to remedy it. The record remains ambiguous and probably always will.

The writing, especially when you consider the conditions under which it was done, makes you shiver. At the Pole:

The Pole. Yes, but under very different circumstances from those expected.

… Great God! this is an awful place and terrible enough for us to have laboured to it without the reward of priority.

and from his “Message to the Public” written shortly before his death:

We took risks, we knew we took them; things have come out against us, and therefore we have no cause for complaint, but bow to the will of Providence, determined still to do our best to the last.

Now that's an explorer.

February 2013 Permalink

Scurr, Ruth. Fatal Purity. London: Vintage Books, 2006. ISBN 0-09-945898-5.
In May 1791, Maximilien Robespierre, until not long before an obscure provincial lawyer from Arras in northern France who had been elected to the Estates General convened by Louis XVI in 1789, spoke before what had by then reconstituted itself as the National Assembly, then engaged in debating the penal code for the new Constitution of France. Before the Assembly were a number of proposals by a certain Dr. Guillotin, among which the second was, “In all cases of capital punishment (whatever the crime), it shall be of the same kind—i.e. beheading—and it shall be executed by means of a machine.” Robespierre argued passionately against all forms of capital punishment: “A conqueror that butchers his captives is called barbaric. Someone who butchers a perverse child that he could disarm and punish seems monstrous.” (pp. 133–136)

Just two years later, Robespierre had become synonymous not only with the French Revolution but with the Terror it had spawned. Either at his direction, with his sanction, or under the policy of summary arrest and execution without trial or appeal which he advocated, the guillotine claimed more than 2200 lives in Paris alone, 1376 of them between June 10th and July 27th of 1794, when Robespierre's power, and the Terror with it, abruptly ended with his own date with the guillotine.

How did a mild-mannered provincial lawyer who defended the indigent and disadvantaged, amused himself by writing poetry, studied philosophy, and was universally deemed, even by his sworn enemies, to merit his sobriquet, “The Incorruptible”, become an archetypal monster of the modern age, a symbol of the darkness beneath the Enlightenment?

This lucidly written, well-argued, and meticulously documented book traces Robespierre's life from birth through downfall and execution at just age 36, and places his life in the context of the upheavals which shook France and to which, in his last few years, he contributed mightily. The author shows the direct link between Rousseau's philosophy, Robespierre's inflexible, whatever-the-cost commitment to implementing it, and its horrific consequences for France. Too many people forget that it was Rousseau who wrote in The Social Contract, “Now, as citizen, no man is judge any longer of the danger to which the law requires him to expose himself, and when the prince says to him: ‘It is expedient for the state that you should die’, then he should die…”. Seen in this light, the madness of Robespierre's reign is not the work of a madman, but the rigorously rational application of a profoundly anti-human system of beliefs which some people persist in taking seriously even today.

A U.S. edition is available.

May 2007 Permalink

Shirer, William L. The Rise and Fall of the Third Reich. New York: Touchstone Books, [1959, 1960] 1990. ISBN 978-0-671-72868-7.
According to an apocryphal story, a struggling author asks his agent why his books aren't selling better, despite getting good reviews. The agent replies, “Look, the only books guaranteed to sell well are books about golf, books about cats, and books about Nazis.” Some authors have taken this too much to heart. When this massive cinder block of a book (1250 pages in the trade paperback edition) was published in 1960, its publisher did not believe a book about Nazis (or at least such a long one) would find a wide audience, and ordered an initial print run of just 12,500 copies. Well, it immediately went on to sell more than a million copies in hardback, and then another million in paperback (it was, at the time, the thickest paperback ever published). It has remained in print continuously for more than half a century, has been translated into a number of languages, and at this writing is in the top ten thousand books by sales rank on Amazon.com.

The author did not just do extensive research on Nazi Germany, he lived there from 1934 through 1940, working as a foreign correspondent based in Berlin and Vienna. He interviewed many of the principals of the Nazi regime and attended Nazi rallies and Hitler's Reichstag speeches. He was the only non-Nazi reporter present at the signing of the armistice between France and Germany in June 1940, and broke the news on CBS radio six hours before it was announced in Germany. Living in Germany, he was able to observe the relationship between ordinary Germans and the regime, but with access to news from the outside which was denied to the general populace by the rigid Nazi control of information. He left Germany in December 1940 when increasingly rigid censorship made it almost impossible to get accurate reporting out of Germany, and he feared the Gestapo were preparing an espionage case against him.

Shirer remarks in the foreword to the book that never before, and possibly never again, will historians have access to the kind of detailed information on the day-to-day decision making and intrigues of a totalitarian state that we have for Nazi Germany. Germans are, of course, famously meticulous record-keepers, and the rapid collapse and complete capitulation of the regime meant that those voluminous archives fell into the hands of the Allies almost intact. That, and the survival of diaries by a number of key figures in the senior leadership of Germany and Italy, provides a window into what those regimes were thinking as they drew plans which would lead to calamity for Europe and their ultimate downfall. The book is extensively footnoted with citations of primary sources, and footnotes expand upon items in the main text.

This book is precisely what its subtitle, “A History of Nazi Germany”, identifies it to be. It is not, and does not purport to be, an analysis of the philosophical origins of Nazism, investigation of Hitler's personality, or a history of Germany's participation in World War II. The war years occupy about half of the book, but the focus is not on the actual conduct of the war but rather the decisions which ultimately determined its outcome, and the way (often bizarre) those decisions were made. I first read this book in 1970. Rereading it four decades later, I got a great deal more out of it than I did the first time, largely because in the intervening years I'd read many other books which cover aspects of the period that Shirer's purely Germany-focused reportage does not explore in detail.

The book has stood up well to the passage of time. The only striking lacuna is that when the book was written the fact that Britain had broken the German naval Enigma cryptosystem, and was thus able to read traffic between the German admiralty and the U-boats, had not yet been declassified by the British. Shirer's coverage of the Battle of the Atlantic (which is cursory) thus attributes the success in countering the U-boat threat to radar, antisubmarine air patrols, and convoys, which were certainly important, but far from the whole story.

Shirer is clearly a man of the Left (he manages to work in a snarky comment about the Coolidge administration in a book about Nazi Germany), although no fan of Stalin, who he rightly identifies as a monster. But I find that the author tangles himself up intellectually in trying to identify Hitler and Mussolini as “right wing”. Again and again he describes the leftist intellectual and political background of key figures in the Nazi and Fascist movements, and then tries to persuade us they somehow became “right wing” because they changed the colour of their shirts, even though the official platform and policies of the Nazi and Fascist regimes differed only in the details from those of Stalin, and even Stalin believed, by his own testimony, that he could work with Nazi Germany to the mutual benefit of both countries. It's worth revisiting Liberal Fascism (January 2008) for a deeper look at how collectivism, whatever the colour of the shirts or the emblem on the flags, stems from the same intellectual roots and proceeds to the same disastrous end point.

But these are quibbles about a monument of twentieth century reportage which has the authenticity of having been written by an eyewitness to many of the events described therein, the scholarship of extensive citations and quotations of original sources, and accessibility to the general reader. It is a classic which has withstood the test of time, and if I'm still around forty years hence, I'm sure I'll enjoy reading it a third time.

October 2010 Permalink

Shlaes, Amity. The Forgotten Man. New York: Harper Perennial, [2007] 2008. ISBN 978-0-06-093642-6.
The conventional narrative of the Great Depression and New Deal is well-defined, and generations have been taught the story of how financial hysteria and lack of regulation led to the stock market crash of October 1929, which tipped the world economy into depression. The do-nothing policies of Herbert Hoover and his Republican majority in Congress allowed the situation to deteriorate until thousands of banks had failed, unemployment rose to around a quarter of the work force, collapsing commodity prices bankrupted millions of farmers, and world trade and credit markets froze, exporting the Depression from the U.S. to developed countries around the world. Upon taking office in 1933, Franklin Roosevelt embarked on an aggressive program of government intervention in the economy, going off the gold standard, devaluing the dollar, increasing government spending and tax rates on corporations and the wealthy by breathtaking amounts, imposing comprehensive regulation on every aspect of the economy, promoting trade unions, and launching public works and job creation programs on a massive scale. Although neither financial markets nor unemployment recovered to pre-crash levels, and full recovery did not occur until war production created demand for all that industry could produce, at least FDR's New Deal kept things from getting much worse, kept millions from privation and starvation, and just possibly, by interfering with the free market in ways never before imagined in America, preserved it, and democracy, from the kind of revolutionary upheaval seen in the Soviet Union, Italy, Japan, and Germany. The New Deal pitted plutocrats, big business, and Wall Street speculators against the “forgotten man”—the people who farmed their land, toiled in the factories, and strove to pay their bills and support their families; and, for once, allied with the Federal Government, the little guys won.

This is a story of which almost any student having completed an introductory course in American history can recount the key points. It is a tidy story, an inspiring one, and both a justification for an activist government and demonstration that such intervention can work, even in the most dire of economic situations. But is it accurate? In this masterful book, based largely on primary and often contemporary sources, the author makes a forceful argument that it is not—she does not dispute the historical events, most of which did indeed occur as described above, but rather the causal narrative which has been erected, largely after the fact, to explain them. Looking at what actually happened and when, the tidily wrapped up package begins to unravel and discordant pieces fall out.

For example, consider the crash of 1929. Prior to the crash, unemployment was around three percent (the Federal Government did not compile unemployment figures at the time, and available sources differ in methodology and hence in the precise figures). Following the crash, unemployment began to rise steeply and had reached around 9% by the end of 1929. But then the economy began to recover and unemployment fell. President Hoover was anything but passive: the Great Engineer launched a flurry of initiatives, almost all disastrously misguided. He signed the Hawley-Smoot Tariff (over the objection of an open letter signed by 1,028 economists and published in the New York Times). He raised taxes and, diagnosing the ills of the economy as due to inflation, encouraged the Federal Reserve to contract the money supply. To counter falling wages, he jawboned industry leaders to maintain wage levels which predictably resulted in layoffs instead of reduced wages. It was only after these measures took hold that the economy, which before seemed to be headed into a 1921-like recession, nosed over and began to collapse toward the depths of the Depression.

There was a great deal of continuity between the Hoover and early Roosevelt administrations. Roosevelt did not rescind Hoover's disastrous policies, but rather piled on intrusive regulation of agriculture and industry, vastly increased Federal spending (he almost doubled the Federal budget in his first term), increased taxes to levels before unimaginable in peacetime, and directly attacked private enterprise in sectors such as electrical power generation and distribution, which he felt should be government enterprises. Investment, the author contends, is the engine of economic recovery, and Roosevelt's policies resulted in a “capital strike” (a phrase used at the time), as investors weighed their options and decided to sit on their money. Look at it this way: suppose you're a plutocrat and have millions at your disposal. You can invest them in a business, knowing that if the business fails you're out your investment, but that if it generates a profit the government will tax away more than 75% of your gains. Or, you can put your money in risk- and tax-free government bonds and be guaranteed a return. Which would you choose?

The story of the Great Depression is told largely by following a group of individuals through the era. Many of the bizarre aspects of the time appear here: Father Divine; businesses and towns printing their own scrip currency; the Schechter Brothers kosher poultry butchers taking on FDR's NRA and utterly defeating it in the Supreme Court; the prosecution of Andrew Mellon, Treasury Secretary to three Presidents, for availing himself of tax deductions the government admitted were legal; and utopian “planned communities” such as Casa Grande in Arizona, where displaced farmers found themselves little more than tenants in a government operation resembling Stalin's collective farms.

From the tone of some of the reaction to the original publication of this book, you might think it a hard-line polemic longing to return to the golden days of the Coolidge administration. It is nothing of the sort. This is a fact-based re-examination of the Great Depression and the New Deal which, better than any other book I've read, re-creates the sense of those living through it, when nobody really understood what was happening and people acting with the best of intentions (and the author imputes nothing else to either Hoover or Roosevelt) could not see what the consequences of their actions would be. In fact, Roosevelt changed course so many times that it is difficult to discern a unifying philosophy from his actions—sadly, this very pragmatism created an uncertainty in the economy which quite likely lengthened and deepened the Depression. This paperback edition contains an afterword in which the author responds to the principal criticisms of the original work.

It is hard to imagine a more timely book. Since this book was published, the U.S. have experienced a debt crisis, real estate bubble collapse, sharp stock market correction, rapidly rising unemployment and economic contraction, with an activist Republican administration taking all kinds of unprecedented actions to try to avert calamity. A Democratic administration, radiating confidence in itself and the power of government to make things better, is poised to take office, having promised programs in its electoral campaign which are in many ways reminiscent of those enacted in FDR's “hundred days”. Apart from the relevance of the story to contemporary events, this book is a pure delight to read.

December 2008 Permalink

Shlaes, Amity. Coolidge. New York: Harper Perennial, [2013] 2014. ISBN 978-0-06-196759-7.
John Calvin Coolidge, Jr. was born in 1872 in Plymouth Notch, Vermont. His family were among the branch of the Coolidge clan who stayed in Vermont while others left its steep, rocky, and often bleak land for opportunity in the Wild West of Ohio and beyond when the Erie canal opened up these new territories to settlement. His father and namesake made his living by cutting wood, tapping trees for sugar, and small-scale farming on his modest plot of land. He diversified his income by operating a general store in town and selling insurance. There was a long tradition of public service in the family. Young Coolidge's great-grandfather was an officer in the American Revolution and his grandfather was elected to the Vermont House of Representatives. His father was justice of the peace and tax collector in Plymouth Notch, and would later serve in the Vermont House of Representatives and Senate.

Although many in the cities would consider their rural life, far from the nearest railroad terminal, hard-scrabble, the family was sufficiently prosperous to pay for young Calvin (the name he went by from boyhood) to attend private schools, boarding with families in the towns where they were located and infrequently returning home. He followed a general college preparatory curriculum and, after failing the entrance examination the first time, was admitted on his second attempt to Amherst College as a freshman in 1891. A loner, and already with a reputation for being taciturn, he joined none of the fraternities to which his classmates belonged, nor did he participate in the athletics which were a part of college life. He quickly perceived that Amherst had a class system, where the scions of old money families from Boston who had supported the college were elevated above nobodies from the boonies like himself. He concentrated on his studies, mastering Greek and Latin, and immersing himself in the works of the great orators of those cultures.

As his college years passed, Coolidge became increasingly interested in politics, joined the college Republican Club, and worked on the 1892 re-election campaign of Benjamin Harrison, whose Democrat opponent, Grover Cleveland, was seeking to regain the presidency he had lost to Harrison in 1888. Writing to his father after Harrison's defeat, he offered this analysis: “the reason seems to be in the never satisfied mind of the American and in the ever desire to shift in hope of something better and in the vague idea of the working and farming classes that somebody is getting all the money while they get all the work.”

His confidence growing, Coolidge began to participate in formal debates, finally, in his senior year, joined a fraternity, and ran for and won the honour of being an orator at his class's graduation. He worked hard on the speech, which was a great success, keeping his audience engaged and frequently laughing at his wit. While still quiet in one-on-one settings, he enjoyed public speaking and connecting with an audience.

After graduation, Coolidge decided to pursue a career in the law and considered attending law school at Harvard or Columbia University, but decided he could not afford the tuition, as he was still being supported by his father and had no prospects for earning sufficient money while studying the law. In that era, most states did not require a law school education; an aspiring lawyer could, instead, become an apprentice at an established law firm and study on his own, a practice called reading the law. Coolidge became an apprentice at a firm in Northampton, Massachusetts run by two Amherst graduates and, after two years, in 1897, passed the Massachusetts bar examination and was admitted to the bar. In 1898, he set out on his own and opened a small law office in Northampton; he had embarked on the career of a country lawyer.

While developing his law practice, Coolidge followed in the footsteps of his father and grandfather and entered public life as a Republican, winning election to the Northampton City Council in 1898. In the following years, he held the offices of City Solicitor and county clerk of courts. In 1903 he married Grace Anna Goodhue, a teacher at the Clarke School for the Deaf in Northampton. The next year, running for the local school board, he suffered the only defeat of his political career, in part because his opponents pointed out he had no children in the schools. Coolidge said, “Might give me time.” (The Coolidges went on to have two sons, John, born in 1906, and Calvin Jr., in 1908.)

In 1906, Coolidge sought state office for the first time, running for the Massachusetts House of Representatives and narrowly defeating the Democrat incumbent. He was re-elected the following year, but declined to run for a third term, returning to Northampton where he ran for mayor, won, and served two one-year terms. In 1912 he ran for the State Senate seat of the retiring Republican incumbent and won. In the presidential election of that year, when the Republican party split between the traditional wing favouring William Howard Taft and progressives backing Theodore Roosevelt, Coolidge, although identified as a progressive, having supported women's suffrage and the direct election of federal senators, among other causes, stayed with the Taft Republicans and won re-election. Coolidge sought a third term in 1914 and won, being named President of the State Senate with substantial influence on legislation in the body.

In 1915, Coolidge moved further up the ladder by running for the office of Lieutenant Governor of Massachusetts, balancing the Republican ticket led by a gubernatorial candidate from the east of the state with his own base of support in the rural west. In Massachusetts, the Lieutenant Governor does not preside over the State Senate, but rather fulfils an administrative role, chairing executive committees. Coolidge presided over the finance committee, which gave him experience in managing a budget and balancing competing demands from departments, experience that was to prove useful later in his career. After being re-elected to the office in 1916 and 1917 (statewide offices in Massachusetts at the time had a term of only one year), and with the governor announcing his retirement, Coolidge was unopposed for the Republican nomination for governor and narrowly defeated the Democrat in the 1918 election.

Coolidge took office at a time of great unrest between industry and labour. Prices in 1918 had doubled from their 1913 level; nothing of the kind had happened since the paper money inflation during the Civil War and its aftermath. Nobody seemed to know why: the rise was usually attributed to the war, but the mechanism of cause and effect was not understood. There doesn't seem to have been a single mainstream voice who observed that the rapid rise in prices (which was really a depreciation of the dollar) began precisely at the moment the Creature from Jekyll Island was unleashed upon the U.S. economy and banking system. What was obvious, however, was that in most cases industrial wages had not kept pace with the rise in the cost of living, and that large companies which had raised their prices had not correspondingly increased what they paid their workers. This gave a powerful boost to the growing union movement. In early 1919 an ugly general strike in Seattle idled workers across the city, and the United Mine Workers threatened a nationwide coal strike for November 1919, just as winter demand for coal would reach its peak. In Boston, police officers voted to unionise and affiliate with the American Federation of Labor, ignoring an order from the Police Commissioner forbidding officers to join a union. On September 9th, a majority of policemen defied the order and walked off the job.

Those who question the need for a police presence on the streets of big cities should consider the Boston police strike as a cautionary tale, at least as things were in Boston in 1919. As the Sun went down, the city erupted in chaos, mayhem, looting, and violence. A streetcar conductor was shot for no apparent reason. There were reports of rapes, murders, and serious injuries. The next day, more than a thousand residents applied for gun permits. Downtown stores boarded up their display windows and hired private security forces. Telephone operators and employees at the electric power plant threatened to walk out in sympathy with the police. From Montana, where he was campaigning in favour of ratification of the League of Nations treaty, President Woodrow Wilson issued a mealy-mouthed statement saying, “There is no use in talking about political democracy unless you have also industrial democracy”.

Governor Coolidge acted swiftly and decisively. He called up the State Guard and deployed it throughout the city, fired all of the striking policemen, and issued a statement saying “The action of the police in leaving their posts of duty is not a strike. It is a desertion. … There is nothing to arbitrate, nothing to compromise. In my personal opinion there are no conditions under which the men can return to the force.” He directed the police commissioner to hire a new force to replace the fired men. He publicly rebuked American Federation of Labor chief Samuel Gompers in a telegram released to the press which concluded, “There is no right to strike against the public safety by anybody, anywhere, any time.”

When the dust settled, the union was broken, peace was restored to the streets of Boston, and Coolidge had emerged onto the national stage as a decisive leader and champion of what he called the “reign of law.” Later in 1919, he was re-elected governor with seven times the margin of his first election. He began to be spoken of as a potential candidate for the Republican presidential nomination in 1920.

Coolidge's name was placed in nomination for president at the 1920 Republican convention, but he never came in above sixth in the balloting, in the middle of the pack of regional and favourite-son candidates. On the tenth ballot, Warren G. Harding of Ohio was chosen, and party bosses announced their choice for Vice President, a senator from Wisconsin. But when the time came for delegates to vote, a Coolidge wave among rank-and-file delegates tired of the bosses ordering them around gave him the nod. Coolidge did not attend the convention in Chicago; he got the news of his nomination by telephone. After he hung up, Grace asked him what it was all about. He said, “Nominated for vice president.” She responded, “You don't mean it.” “Indeed I do”, he answered. “You are not going to accept it, are you?” “I suppose I shall have to.”

Harding ran on a platform of “normalcy” after the turbulence of the war and Wilson's helter-skelter progressive agenda. He had expressed his philosophy in a speech several months earlier:

America's present need is not heroics, but healing; not nostrums, but normalcy; not revolution, but restoration; not agitation, but adjustment; not surgery, but serenity; not the dramatic, but the dispassionate; not experiment, but equipoise; not submergence in internationality, but sustainment in triumphant nationality. It is one thing to battle successfully against world domination by military autocracy, because the infinite God never intended such a program, but it is quite another to revise human nature and suspend the fundamental laws of life and all of life's acquirements.

The election was a blow-out. Harding and Coolidge won the largest electoral college majority (404 to 127) since James Monroe's unopposed re-election in 1820, and more than 60% of the popular vote. Harding carried every state except for the Old South, and was the first Republican to win Tennessee since Reconstruction. Republicans picked up 63 seats in the House, for a majority of 303 to 131, and 10 seats in the Senate, with 59 to 37. Whatever Harding's priorities, he was likely to be able to enact them.

The top priority in Harding's quest for normalcy was federal finances. The Wilson administration and the Great War had expanded the federal government into terra incognita. Between 1789 and 1913, when Wilson took office, the U.S. had accumulated a total of US$2.9 billion in public debt. When Harding was inaugurated in 1921, the debt stood at US$24 billion, more than a factor of eight greater. In 1913, total federal spending was US$715 million; by 1920 it had ballooned to US$6358 million, almost nine times more. The top marginal income tax rate, 7% before the war, was 70% when Harding took the oath of office, and the cost of living had approximately doubled since 1913, which shouldn't have been a surprise (although it was largely unappreciated at the time), because a complaisant Federal Reserve had doubled the money supply from US$22.09 billion in 1913 to US$48.73 billion in 1920.

At the time, federal spending worked much as it had in the early days of the Republic: individual agencies presented their spending requests to Congress, where they battled against other demands on the federal purse, with congressional advocates of particular agencies doing deals to get what they wanted. There was no overall budget process worthy of the name (or of the kind that existed in private companies a fraction of the size of the federal government), and the President, as chief executive, could only sign or veto individual spending bills, not an overall budget for the government. Harding had campaigned on introducing a formal budget process and made this his top priority after taking office. He called an extraordinary session of Congress and, making the most of the Republican majorities in the House and Senate, enacted a bill which created a Budget Bureau in the executive branch, empowered the president to approve a comprehensive budget for all federal expenditures, and even allowed the president to reduce agency spending of already appropriated funds. The budget would be a central focus for the next eight years.

Harding also undertook to dispose of surplus federal assets accumulated during the war, including naval petroleum reserves. This, combined with Harding's penchant for cronyism, led to a number of scandals which tainted the reputation of his administration. On August 2nd, 1923, while on a speaking tour of the country promoting U.S. membership in the World Court, he suffered a heart attack and died in San Francisco. Coolidge, who was visiting his family in Vermont, where there was no telephone service at night, was awakened to learn that he had succeeded to the presidency. He took the oath of office by kerosene light in his parents' living room, administered by his father, a Vermont notary public. As he left Vermont for Washington, he said, “I believe I can swing it.”

As Coolidge was in complete agreement with Harding's policies, if not his style and choice of associates, he interpreted “normalcy” as continuing on the course set by his predecessor. He retained Harding's entire cabinet (although he had his doubts about some of its more dodgy members), and began to work closely with his budget director, Herbert Lord, meeting with him weekly before the full cabinet meeting. Their goal was to continue to cut federal spending, generate surpluses to pay down the public debt, and eventually cut taxes to boost the economy and leave more money in the pockets of those who earned it. He had a powerful ally in these goals in Treasury secretary Andrew Mellon, who went further and advocated his theory of “scientific taxation”. He argued that the existing high tax rates not only hampered economic growth but actually reduced the amount of revenue collected by the government. Just as a railroad's profits would suffer from a drop in traffic if it set its freight rates too high, a high tax rate would deter individuals and companies from making more taxable income. What was crucial was the “top marginal tax rate”: the tax paid on the next additional dollar earned. With the tax rate on high earners at the postwar level of 70%, individuals got to keep only thirty cents of each additional dollar they earned; many would not bother putting in the effort.

Half a century later, Mellon would have been called a “supply sider”, and his ideas were just as valid in the 1920s as when they were applied in the Reagan administration in the 1980s. Coolidge wasn't sure he agreed with all of Mellon's theory, but he was 100% in favour of cutting the budget, paying down the debt, and reducing the tax burden on individuals and business, so he was willing to give it a try. It worked. The last budget submitted by the Coolidge administration (fiscal year 1929) was US$3.127 billion, less than half of fiscal year 1920's expenditures. The public debt had been paid down from US$24 billion to US$17.6 billion, and the top marginal tax rate had been more than halved from 70% to 31%.
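
Mellon's reasoning is essentially quantitative, and a toy calculation makes its shape concrete. The following Python sketch is purely illustrative: the functional form of the taxable base and the elasticity figure are invented assumptions chosen so that the behavioural response he described shows up in the numbers; they are not taken from the book or from 1920s Treasury data.

    # Illustrative toy model of the "scientific taxation" argument.
    # All numbers are assumptions for illustration, not historical figures.

    def model_revenue(rate, base_at_zero_tax=100.0, elasticity=2.0):
        """Revenue when the reported taxable base shrinks as the rate rises."""
        reported_base = base_at_zero_tax * (1.0 - rate) ** elasticity
        return rate * reported_base

    for pct in (24, 31, 50, 70):
        rate = pct / 100.0
        print(f"top rate {pct:2d}%: keep {100 - pct:2d} cents of the next dollar, "
              f"modelled revenue {model_revenue(rate):5.1f}")

With these assumed numbers, modelled revenue peaks at a rate of about one third and falls away steeply toward 70%, so cutting the top rate from 70% to 31% roughly doubles it. That is the qualitative pattern Mellon predicted; whether the real economy responded with anything like this assumed elasticity is, of course, exactly what the debate was about then and has been ever since.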

Achieving these goals required constant vigilance and an unceasing struggle with Congress, where politicians of both parties regarded any budget surplus or increase in revenue generated by lower tax rates and a booming economy as an invitation to spend, spend, spend. The Army and Navy argued for major expenditures to defend the nation from the emerging threat posed by aviation. Coolidge's head of defense aviation observed that the Great Lakes had been undefended for a century, yet Canada had not so far invaded and occupied the Midwest, and that “to create a defense system based upon a hypothetical attack from Canada, Mexico, or another of our near neighbors would be wholly unreasonable.” When devastating floods struck the states along the Mississippi, Coolidge was steadfast in insisting that relief and recovery were the responsibility of the states. The New York Times approved: “Fortunately, there are still some things that can be done without the wisdom of Congress and the all-fathering Federal Government.”

When Coolidge succeeded to the presidency, Republicans were unsure whether he would run in 1924, or would obtain the nomination if he sought it. By the time of the convention in June of that year, Coolidge's popularity was such that he was nominated on the first ballot. The 1924 election was another blow-out, with Coolidge winning 35 states and 54% of the popular vote. His Democrat opponent, John W. Davis, carried just the 12 states of the “solid South” and won 28.8% of the popular vote, the lowest popular vote percentage of any Democrat candidate to this day. Robert La Follette of Wisconsin, who had challenged Coolidge for the Republican nomination and lost, ran as a Progressive, advocating higher taxes on the wealthy and nationalisation of the railroads, and won 16.6% of the popular vote and carried the state of Wisconsin and its 13 electoral votes.

Tragedy struck the Coolidge family in the White House in 1924 when his second son, Calvin Jr., developed a blister while playing tennis on the White House courts. The blister became infected with Staphylococcus aureus, a bacterium which is readily treated today with penicillin and other antibiotics but in 1924 had no treatment other than hoping the patient's immune system would throw off the infection. The infection spread to the blood, and sixteen-year-old Calvin Jr. died on July 7th, 1924. The president was devastated by the loss and never forgave himself for bringing his son to Washington, where the injury occurred.

In his second term, Coolidge continued the policies of his first, opposing government spending programs, paying down the debt through budget surpluses, and cutting taxes. When the mayor of Johannesburg, South Africa, presented the president with two lion cubs, he named them “Tax Reduction” and “Budget Bureau” before donating them to the National Zoo. In 1927, on vacation in South Dakota, the president issued a characteristically brief statement, “I do not choose to run for President in nineteen twenty eight.” Washington pundits spilled barrels of ink parsing Coolidge's twelve words, but they meant exactly what they said: he had had enough of Washington and the endless struggle against big spenders in Congress, and (although re-election was considered almost certain given his previous landslide, his popularity, and the booming economy) considered ten years in office (longer than any previous president had served) too long for any individual to hold the office. Also, he was becoming increasingly concerned about speculation in the stock market, which had more than doubled during his administration and would continue to climb in its remaining months. He was opposed to government intervention in the markets and, in an era before the Securities and Exchange Commission, had few tools with which to intervene even had he wished to. Edmund Starling, his Secret Service bodyguard and frequent companion on walks, said, “He saw economic disaster ahead”, and as the 1928 election approached and it appeared that Commerce Secretary Herbert Hoover would be the Republican nominee, Coolidge said, “Well, they're going to elect that superman Hoover, and he's going to have some trouble. He's going to have to spend money. But he won't spend enough. Then the Democrats will come in and they'll spend money like water. But they don't know anything about money.” Coolidge may have spoken few words, but when he did he was worth listening to.

Indeed, Hoover was elected in 1928 in another Republican landslide (40 to 8 states, 444 to 87 electoral votes, and 58.2% of the popular vote), and things played out exactly as Coolidge had foreseen. The 1929 crash triggered a series of moves by Hoover which undid most of the patient economies of Harding and Coolidge, and by the time Hoover was defeated by Franklin D. Roosevelt in 1932, he had added 33% to the national debt and raised the top marginal personal income tax rate to 63% and corporate taxes by 15%. Coolidge, in retirement, said little about Hoover's policies and did his duty to the party, campaigning for him in the foredoomed 1932 re-election effort. After the election, he remarked to an editor of the New York Evening Mail, “I have been out of touch so long with political activities I feel that I no longer fit in with these times.” On January 5, 1933, Coolidge, while shaving, suffered a sudden heart attack and was found dead in his dressing room by his wife Grace.

Calvin Coolidge was arguably the last U.S. president to act in office as envisioned by the Constitution. He advanced no ambitious legislative agenda, leaving lawmaking to Congress. He saw his job as similar to an executive in a business, seeking economies and efficiency, eliminating waste and duplication, and restraining the ambition of subordinates who sought to broaden the mission of their departments beyond what had been authorised by Congress and the Constitution. He set difficult but limited goals for his administration and achieved them all, and he was popular while in office and respected after leaving it. But how quickly it was all undone is a lesson in how fickle the electorate can be, and how tempting ill-conceived ideas are in a time of economic crisis.

This is a superb history of Coolidge and his time, full of lessons for our age which has veered so far from the constitutional framework he so respected.

August 2019 Permalink

Shlaes, Amity. Great Society. New York: Harper, 2019. ISBN 978-0-06-170642-4.
Adam Smith wrote, “There is a great deal of ruin in a nation”—even though nations and their rulers may adopt ruinous policies for a while, a great nation has deep resources and usually has time to observe the consequences, change course, and restore sound governance. But, as this book shows, the amount of ruin in a nation is not unlimited, and well-intended policies which fundamentally change the character of the citizenry and their relationship to the state can have ruinous consequences that cast a long shadow and may not be reversible. Between 1960 and 1974, under three presidents (Kennedy, Johnson, and Nixon), the United States, starting from peace and prosperity unprecedented in the human experience, reached for greatness and tragically embraced top-down, centrally planned, deficit-financed, and socialist (in all but the forbidden name) policies which, by the mid-1970s, had destroyed prosperity, debased the dollar and unleashed ruinous inflation, wrecked the world's monetary system, incited urban riots and racial strife, created an unemployable underclass, destroyed neighbourhoods and built Soviet-style public housing in their place, and set into motion the destruction of domestic manufacturing and the middle class it supported. It is a tragic tale, an utterly unnecessary destruction of a once-great nation, as this magnificently written and researched but unavoidably depressing history of the era recounts.

May 2020 Permalink

Siddiqi, Asif A. Challenge to Apollo. Washington: National Aeronautics and Space Administration, 2000. NASA SP-2000-4408.
Prior to the collapse of the Soviet Union, accounts of the Soviet space program were a mix of legend, propaganda, and speculation by Western analysts, all based upon a scanty collection of documented facts. The 1990s saw a wealth of previously secret information come to light (although many primary sources remain unavailable), making it possible for the first time to write an authoritative scholarly history of Soviet space exploration from the end of World War II through the mid-1970s; this book, published by the NASA History Division in 2000, is that history.

Whew! Many readers are likely to find that reading this massive (1011 7×14 cm pages, 1.9 kg) book cover to cover tells them far, far more about the Soviet space effort than they ever wanted to know. I bought the book from the U.S. Government Printing Office when it was published in 2000 and have been using it as a reference since then, but decided finally, as the bloggers say, to “read the whole thing”. It was a chore (it took me almost three weeks to chew through it), but ultimately rewarding and enlightening.

Back in the 1960s, when observers in the West pointed out the failure of the communist system to feed its own people or provide them with the most basic necessities, apologists would point to the successes of the Soviet space program as evidence that central planning and national mobilisation in a military-like fashion could accomplish great tasks more efficiently than the chaotic, consumer-driven market economies of the West. Indeed, with the first satellite, the first man in space, long duration piloted flights, two simultaneous piloted missions, the first spacecraft with a crew of more than one, and the first spacewalk, the Soviets racked up an impressive list of firsts. The achievements were real, but based upon what we now know from documents released in the post-Soviet era which form the foundation of this history, the interpretation of these events in the West was a stunning propaganda success by the Soviet Union backed by remarkably little substance.

Indeed, in the 1945–1974 time period covered here, one might almost say that the Soviet Union never actually had a space program at all, in the sense one uses those words to describe the contemporary activities of NASA. The early Soviet space achievements were all spin-offs of ballistic missile technology driven by Army artillery officers turned rocket men. Space projects, and especially piloted flight, interested the military very little, and the space spectaculars were sold to senior political figures for their propaganda value, especially after the unanticipated impact of Sputnik on world opinion. But there was never a roadmap for the progressive development of space capability, such as NASA had for projects Mercury, Gemini, and Apollo. Instead, in most cases, it was only after a public success that designers and politicians would begin to think about what they could do next to top it.

Not only did this supposedly centrally planned economy not have a plan, but the execution of its space projects was anything but centralised. Throughout the 1960s, there were constant battles among independent design bureaux run by autocratic chief designers, each angling for political support and funding at the expense of the others. The absurdity of this is perhaps best illustrated by the fact that on November 17th, 1967, six days after the first flight of NASA's Saturn V, the Central Committee issued a decree giving the go-ahead to the Chelomey design bureau to develop the UR-700 booster and LK-700 lunar spacecraft to land two cosmonauts on the Moon, notwithstanding the millions of rubles already spent on Korolev's N1-L3 project, which had not yet performed its first test flight. Thus, while NASA was checking off items in its Apollo schedule, developed years before, the Soviet Union, spending less than half of NASA's budget, found itself committed to two completely independent and incompatible lunar landing programs, with a piloted circumlunar project based on still different hardware simultaneously under development (p. 645).

The catastrophes which ensued from this chaotic situation are well documented, as well as how effective the Soviets were in concealing all of this from analysts in the West. Numerous “out there” proposed projects are described, including Chelomey's monster UR-700M booster (45 million pounds of liftoff thrust, compared to 7.5 million for the Saturn V), which would send a crew of two cosmonauts on a two-year flyby of Mars in an MK-700 spacecraft with a single launch. The little-known Soviet spaceplane projects are documented in detail.

This book is written in the same style as NASA's own institutional histories, which is to say that much of it is heroically boring and dry as the lunar regolith. Unless you're really into reorganisations, priority shifts, power grabs, and other manifestations of gigantic bureaucracies doing what they do best, you may find this tedious. This is not the fault of the author, but of the material he so assiduously presents. Regrettably, the text is set in a light sans-serif font in which (at least to my eyes) the letter “l” and the digit “1” are indistinguishable, and differ from the letter “I” in a detail I can spot only with a magnifier. This, in a book bristling with near-meaningless Soviet institutional names such as the Ministry of General Machine Building and impenetrable acronyms such as NII-1, TsKBEM (not to be confused with TsKBM) and 11F615, only compounds the reader's confusion. There are a few typographical errors, but none are serious.

This NASA publication was never assigned an ISBN, so looking it up on online booksellers will generally only find used copies. You can order new copies from the NASA Information Center at US$79 each. As with all NASA publications, the work is in the public domain, and a scanned online edition (PDF) is available. This is a 64 megabyte download, so unless you have a fast Internet connection, you'll need to be patient. Be sure to download it to a local file as opposed to viewing it in your browser, because otherwise you'll have to download the whole thing each time you open the document.

April 2008 Permalink

Sinclair, Upton. Dragon's Teeth. Vol. 1. Safety Harbor, FL: Simon Publications, [1942] 2001. ISBN 1-931313-03-2.
Between 1940 and 1953, Upton Sinclair published a massive narrative of current events, spanning eleven lengthy novels, in which real-world events between 1913 and 1949 were seen through the eyes of Lanny Budd, scion of a family of U.S. munitions manufacturers who becomes an art dealer and the playboy husband of an heiress whose fortune dwarfs his own. His extended family and contacts in the art and business worlds provide a window into the disasters and convulsive changes which beset Europe and America in two world wars and the period between them and afterward.

These books were huge bestsellers in their time, and this one won the Pulitzer Prize, but today they are largely forgotten. Simon Publications have made them available in facsimile reprint editions, with each original novel published in two volumes of approximately 300 pages each. This is the third novel in the saga, covering the years 1929–1934; this volume, comprising the first three books of the novel, begins shortly after the Wall Street crash of 1929 and ends with the Nazi consolidation of power in Germany after the Reichstag fire in 1933.

It's easy to understand both why these books were such a popular and critical success at the time and why they have since been largely forgotten. In each book, we see events of a few years before the publication date from the perspective of socialites and people in positions of power (in this book Lanny Budd meets “Adi” Hitler and sees first-hand both his appeal and his irrationality), but the story is necessarily written without knowing how events are going to come out, which makes it “current events fiction”, not historical fiction in the usual sense. That means it is bound to seem dated not long after the books scroll off the bestseller list. Also, the viewpoint characters are mostly rather dissipated and shallow idlers, wealthy dabblers in “pink” or “red” politics, who, with hindsight, seem not so dissimilar to the feckless politicians in France and Britain who did nothing as Europe drifted toward another sanguinary catastrophe.

Still, I enjoyed this book. You get the sense that this is how the epoch felt to the upper-class people who lived through it, and it was written so shortly after the events it chronicles that it avoids the simplifications that retrospection engenders. I will certainly read the second half of this reprint, which currently sits on my bookshelf, but I doubt if I'll read any of the others in the epic.

November 2007 Permalink

Sinclair, Upton. Dragon's Teeth. Vol. 2. Safety Harbor, FL: Simon Publications, [1942] 2001. ISBN 978-1-931313-15-5.
This is the second half of the third volume in Upton Sinclair's grand-scale historical novel covering the years from 1913 through 1949. Please see my notes on the first half for details on the series and this novel. The second half, comprising books four through six of the original novel (this is a print on demand facsimile edition, in which each of the original novels is split into two parts due to constraints of the publisher), covers the years 1933 and 1934, as Hitler tightens his grip on Germany and persecution of the Jews begins in earnest.

The playboy hero Lanny Budd finds himself in Germany trying to arrange the escape of Jewish relatives from the grasp of the Nazi tyranny, meets Goebbels, Göring, and eventually Hitler, and discovers the depth of the corruption and depravity of the Nazi regime, and then comes to experience it directly when he becomes caught up in the Night of the Long Knives.

This book was published in January 1942, less than a month after Pearl Harbor. It is remarkable to read a book, written at a time when the U.S. and Nazi Germany were still at peace and the swastika flag flew from the German embassy in Washington, which got the essence of the Nazis so absolutely correct (especially the corruption of the regime, which was overlooked by so many until Albert Speer's books decades later). This is very much a period piece, and enjoyable in giving a sense of how people saw the events of the 1930s not long after they happened. I'm not, however, inclined to slog on through the other novels in the saga—one suffices for me.

January 2009 Permalink

Skousen, W. Cleon. The Naked Communist. Salt Lake City: Izzard Ink, [1958, 1964, 1966, 1979, 1986, 2007, 2014] 2017. ISBN 978-1-5454-0215-3.
In 1935 the author joined the FBI in a clerical position while attending law school at night. In 1940, after receiving his law degree, he was promoted to Special Agent and continued in that capacity for the rest of his 16-year career at the Bureau. During the postwar years, one of the FBI's top priorities was investigating and responding to communist infiltration and subversion of the United States, which was a high priority of the Soviet Union. During his time at the FBI, Skousen made the acquaintance of several of the Bureau's experts on communist espionage and subversion, but he perceived a lack of information, especially information available to the general public, explaining communism: where did it come from, what are its philosophical underpinnings, what do communists believe, what are their goals, and how do they intend to achieve them?

In 1951, Skousen left the FBI to take a teaching position at Brigham Young University in Provo, Utah. In 1957, he accepted an offer to become Chief of Police in Salt Lake City, a job he held for the next three and a half years before being fired after raiding an illegal poker game in which newly-elected mayor J. Bracken Lee was a participant. During these years, Skousen continued his research on communism, mostly consulting original sources. By 1958, his book was ready for publication. After struggling to find a title, he settled on “The Naked Communist”, suggested by film producer and ardent anti-communist Cecil B. DeMille.

Spurned by the major publishers, Skousen paid for printing the first edition of 5000 copies out of his own pocket. Sales were initially slow, but quickly took off. Within two years of the book's launch, press runs were 10,000 to 20,000 copies with one run of 50,000. In 1962, the book passed the milestone of one million copies in print. As the 1960s progressed and it became increasingly unfashionable to oppose communist tyranny and enslavement, sales tapered off, but picked up again after the publication of a 50th anniversary edition in 2008 (a particularly appropriate year for such a book).

This 60th anniversary edition, edited and with additional material by the author's son, Paul B. Skousen, contains most of the original text with a description of the history of the work and additions bringing events up to date. It is sometimes jarring when you transition from text written in 1958 to that from the standpoint of more than a half century hence, but for the most part it works. One of the most valuable parts of the book is its examination of the intellectual foundations of communism in the work of Marx and Engels. Like the dogma of many other cults, these ideas don't stand up well to critical scrutiny, especially in light of what we've learned about the universe since they were proclaimed. Did you know that Engels proposed a specific theory of the origin of life based upon his concepts of Dialectical Materialism? It was nonsense then and it's nonsense now, but it's still in there. What's more, this poppycock is at the centre of the communist theories of economics, politics, and social movements, where it makes no more sense than in the realm of biology and has been disastrous every time some society was foolish enough to try it.

All of this would be a historical curiosity were it not for the fact that communists, notwithstanding their running up a body count of around a hundred million in the countries where they managed to come to power, and having impoverished people around the world, have managed to burrow deep into the institutions of the West: academia, media, politics, judiciary, and the administrative state. They may not call themselves communists (it's “social democrats”, “progressives”, “liberals”, and other terms, moving on after each one becomes discredited due to the results of its policies and the borderline insanity of those who so identify), but they have been patiently putting the communist agenda into practice year after year, decade after decade. What is that agenda? Let's see.

In the 8th edition of this book, published in 1961, the following “forty-five goals of Communism” were included. Derived by the author from the writings of current and former communists and from testimony before Congress, many of them seemed absurd or fantastically overblown to readers at the time. The complete list was read into the Congressional Record in 1963, placing it in the public domain. Here it is.

Goals of Communism

  1. U.S. acceptance of coexistence as the only alternative to atomic war.
  2. U.S. willingness to capitulate in preference to engaging in atomic war.
  3. Develop the illusion that total disarmament by the United States would be a demonstration of moral strength.
  4. Permit free trade between all nations regardless of Communist affiliation and regardless of whether or not items could be used for war.
  5. Extension of long-term loans to Russia and Soviet satellites.
  6. Provide American aid to all nations regardless of Communist domination.
  7. Grant recognition of Red China. Admission of Red China to the U.N.
  8. Set up East and West Germany as separate states in spite of Khrushchev's promise in 1955 to settle the German question by free elections under supervision of the U.N.
  9. Prolong the conferences to ban atomic tests because the United States has agreed to suspend tests as long as negotiations are in progress.
  10. Allow all Soviet satellites individual representation in the U.N.
  11. Promote the U.N. as the only hope for mankind. If its charter is rewritten, demand that it be set up as a one-world government with its own independent armed forces. (Some Communist leaders believe the world can be taken over as easily by the U.N. as by Moscow. Sometimes these two centers compete with each other as they are now doing in the Congo.)
  12. Resist any attempt to outlaw the Communist Party.
  13. Do away with all loyalty oaths.
  14. Continue giving Russia access to the U.S. Patent Office.
  15. Capture one or both of the political parties in the United States.
  16. Use technical decisions of the courts to weaken basic American institutions by claiming their activities violate civil rights.
  17. Get control of the schools. Use them as transmission belts for socialism and current Communist propaganda. Soften the curriculum. Get control of teachers' associations. Put the party line in textbooks.
  18. Gain control of all student newspapers.
  19. Use student riots to foment public protests against programs or organizations which are under Communist attack.
  20. Infiltrate the press. Get control of book-review assignments, editorial writing, policymaking positions.
  21. Gain control of key positions in radio, TV, and motion pictures.
  22. Continue discrediting American culture by degrading all forms of artistic expression. An American Communist cell was told to “eliminate all good sculpture from parks and buildings, substitute shapeless, awkward and meaningless forms.”
  23. Control art critics and directors of art museums. “Our plan is to promote ugliness, repulsive, meaningless art.”
  24. Eliminate all laws governing obscenity by calling them “censorship” and a violation of free speech and free press.
  25. Break down cultural standards of morality by promoting pornography and obscenity in books, magazines, motion pictures, radio, and TV.
  26. Present homosexuality, degeneracy and promiscuity as “normal, natural, healthy.”
  27. Infiltrate the churches and replace revealed religion with “social” religion. Discredit the Bible and emphasize the need for intellectual maturity which does not need a “religious crutch.”
  28. Eliminate prayer or any phase of religious expression in the schools on the ground that it violates the principle of “separation of church and state.”
  29. Discredit the American Constitution by calling it inadequate, old-fashioned, out of step with modern needs, a hindrance to cooperation between nations on a worldwide basis.
  30. Discredit the American Founding Fathers. Present them as selfish aristocrats who had no concern for the “common man.”
  31. Belittle all forms of American culture and discourage the teaching of American history on the ground that it was only a minor part of the “big picture.” Give more emphasis to Russian history since the Communists took over.
  32. Support any socialist movement to give centralized control over any part of the culture—education, social agencies, welfare programs, mental health clinics, etc.
  33. Eliminate all laws or procedures which interfere with the operation of the Communist apparatus.
  34. Eliminate the House Committee on Un-American Activities.
  35. Discredit and eventually dismantle the FBI.
  36. Infiltrate and gain control of more unions.
  37. Infiltrate and gain control of big business.
  38. Transfer some of the powers of arrest from the police to social agencies. Treat all behavioral problems as psychiatric disorders which no one but psychiatrists can understand or treat.
  39. Dominate the psychiatric profession and use mental health laws as a means of gaining coercive control over those who oppose Communist goals.
  40. Discredit the family as an institution. Encourage promiscuity and easy divorce.
  41. Emphasize the need to raise children away from the negative influence of parents. Attribute prejudices, mental blocks and retarding of children to suppressive influence of parents.
  42. Create the impression that violence and insurrection are legitimate aspects of the American tradition; that students and special-interest groups should rise up and use “united force” to solve economic, political or social problems.
  43. Overthrow all colonial governments before native populations are ready for self-government.
  44. Internationalize the Panama Canal.
  45. Repeal the Connally Reservation so the US cannot prevent the World Court from seizing jurisdiction over domestic problems. Give the World Court jurisdiction over nations and individuals alike.

In chapter 13 of the present edition, a copy of this list is reproduced with commentary on the extent to which these goals have been accomplished as of 2017. What's your scorecard? How many of these seem extreme or unachievable from today's perspective?

When Skousen was writing his book, the world seemed divided into two camps: one communist and the other committed (more or less) to personal and economic liberty. In the free world, there were those advancing the cause of the collectivist slavers, but mostly covertly. What is astonishing today is that, despite more than a century of failure and tragedy resulting from communism, there are more and more who openly advocate for it or its equivalents (or an even more benighted medieval ideology masquerading as a religion which shares communism's disregard for human life and liberty, and willingness to lie, cheat, discard treaties, and murder to achieve domination).

When advocates of this deadly cult of slavery and death are treated with respect while those who defend the Enlightenment values of life, liberty, and property are silenced, this book is needed more than ever.

May 2018 Permalink

Sledge, E[ugene] B[ondurant]. With the Old Breed. New York: Presidio Press, [1981] 2007. ISBN 978-0-89141-906-8.
When the United States entered World War II after the attack on Pearl Harbor, the author was enrolled at the Marion Military Institute in Alabama preparing for an officer's commission in the U.S. Army. Worried that the war might end before he was able to do his part, in December 1942, while still a freshman at Marion, he enrolled in a Marine Corps officer training program. The following May, after the end of his freshman year, he was ordered to report for Marine training at Georgia Tech on July 1, 1943. The 180-man detachment was scheduled to take courses year-round and then, after two years, report to Quantico to complete officer training prior to commissioning.

This still didn't seem fast enough (and, indeed, had he stayed with the program as envisioned, he would have missed the war), so he and around half of his fellow trainees neglected their studies, flunked out, and immediately joined the Marine Corps as enlisted men. Following boot camp at a base near San Diego, he was assigned to infantry and sent to nearby Camp Elliott for advanced infantry training. Although all Marines are riflemen (Sledge had qualified at the sharpshooter level during basic training), newly-minted Marine infantrymen were, after introduction to all of the infantry weapons, allowed to choose the one in which they would specialise. In most cases, they'd get their first or second choice. Sledge got his first: the 60 mm M2 mortar which he, as part of a crew of three, would operate in combat in the Pacific. Mortarmen carried the M1 carbine, and this weapon, which fired a less powerful round than the M1 Garand main battle rifle used by riflemen, would be his personal weapon throughout the war.

With the Pacific island-hopping war raging, everything was accelerated, and on February 28th, 1944, Sledge's 46th Replacement Battalion (the name didn't inspire confidence—they would replace Marines killed or injured in combat, or the lucky few rotated back to the U.S. after surviving multiple campaigns) shipped out, landing first at New Caledonia, where they received additional training, including practice amphibious landings and instruction in Japanese weapons and tactics. At the start of June, Sledge's battalion was sent to Pavuvu island, base of the 1st Marine Division, which had just concluded the bloody battle of Cape Gloucester.

On arrival, Sledge was assigned as a replacement to the 1st Marine Division, 5th Regiment, 3rd Battalion. This unit had a distinguished combat record dating back to the First World War, and would have been his first choice if he'd been given one, which he hadn't. He says, “I felt as though I had rolled the dice and won.” This was his first contact with what he calls the “Old Breed”: Marines, some of whom had been in the Corps before Pearl Harbor, who had imbibed the traditions of the “Old Corps” and survived some of the most intense combat of the present conflict, including Guadalcanal. Many of these veterans had, in the argot of the time, “gone Asiatic”: developed the eccentricities of men who had seen and lived through things those just arriving in theatre never imagined, and become marinated in deep hatred for the enemy based upon personal experience. A glance was all it took to tell the veterans from the replacements.

After additional training, in late August the Marines embarked for the assault on the island of Peleliu in the Palau Islands. The tiny island, just 13 square kilometres, was held by a Japanese garrison of 10,900, and was home to an airfield. Capturing the island was considered essential to protect the right flank of MacArthur's forces during the upcoming invasion of the Philippines, and to secure the airfield which could support the invasion. The attack on Peleliu was fixed for 15 September 1944, and it would be Sledge's first combat experience.

From the moment of landing, resistance was fierce. Despite an extended naval bombardment, well-dug-in Japanese defenders engaged the Marines as they hit the beaches, and continued as they progressed into the interior. In previous engagements, the Japanese had adopted foolhardy and suicidal tactics such as mass frontal “banzai” charges into well-defended Marine positions. By Peleliu, however, they had learned that this did not work, and shifted their strategy to defence in depth, turning the entire island into a network of mutually covering defensive positions linked by tunnels for resupply and the redeployment of forces. They were prepared to defend every square metre of territory to the death, even after their supplies were cut off and there was no hope of relief. Further, Marines were impressed by the excellent fire discipline of the Japanese—they did not waste ammunition firing blindly but chose their shots carefully, and would expend scarce supplies such as mortar rounds only on concentrations of troops or high-value targets such as tanks and artillery.

This, combined with the oppressive heat and humidity, lack of water and food, and terror from incessant shelling by artillery by day and attacks by Japanese infiltrators by night, made the life of the infantry a living Hell. Sledge chronicles this from the viewpoint of a Private First Class, not an officer or historian after the fact. He and his comrades rarely knew precisely where they were, where the enemy was located, how other U.S. forces on the island were faring, or what the overall objectives of the campaign were. There was simply a job to be done, day by day, with their best hope being to somehow survive it. Prior to the invasion, Marine commanders estimated the island could be taken in four days. Rarely in the Pacific war was a forecast so wrong. In fact, it was not until November 27th that the island was declared secured. The Japanese demonstrated their willingness to defend to the last man. Of the initial force of 10,900 defending the island, 10,695 were killed. Of the 220 taken prisoner, 183 were foreign labourers, and only 19 were Japanese soldiers and sailors. Of the Marine and Army attackers, 2,336 were killed and 8,450 wounded. The rate of U.S. casualties exceeded those of all other amphibious landings in the Pacific, and the Battle of Peleliu is considered among the most difficult ever fought by the Marine Corps.

Despite this, the engagement is little known. In retrospect, it was probably unnecessary. The garrison could have done little to threaten MacArthur's forces, and the airfield was not required to support the Philippine campaign. There were doubts about the necessity and wisdom of the attack before it was launched, but momentum carried it forward. None of these matters concerned Sledge and the other Marines in the line—they had their orders, and they did their job, at enormous cost. Sledge's Company K landed on Peleliu with 235 men. It left with only 85 unhurt—a 64% casualty rate. Only two of its original seven officers survived the campaign. Sledge was now a combat veteran. He may not have considered himself one of the “Old Breed”, but to the replacements who arrived to fill the gaps in his unit, he was well on the way to becoming one.

But for the survivors of Peleliu, the war was far from over. While some old-timers for whom Peleliu was their third campaign were being rotated Stateside, for the rest it was recuperation, refitting, and preparation for the next amphibious assault: the Japanese island of Okinawa. Unlike Peleliu, which was a tiny dot on the map, Okinawa was a large island with an area of 1207 square kilometres and a pre-war population of around 300,000. The island was defended by 76,000 Japanese troops and 20,000 Okinawan conscripts fighting under their orders. The invasion of Okinawa on April 1, 1945, was the largest amphibious landing in the Pacific war.

As before, Sledge does not present the big picture, but an infantryman's eye view. To the astonishment of all involved, including commanders who expected 80–85% casualties on the beaches, the landing was essentially unopposed. The Japanese were dug in awaiting the attack from prepared defensive positions inland, ready to repeat the strategy at Peleliu on a much grander scale.

After the tropical heat and horrors of Peleliu, temperate Okinawa at first seemed a pastoral paradise afflicted with the disease of war, but as combat was joined and the weather worsened, troops found themselves confronted with the infantryman's implacable, unsleeping enemy: mud. Once again, the Japanese defended every position to the last man. Almost all of the Japanese defenders were killed; the 7,000 prisoners taken were mostly Okinawan conscripts. Estimates of U.S. casualties range from 14,000 to 20,000 killed and 38,000 to 55,000 wounded. Civilian casualties were heavy: out of an original population of around 300,000, estimates of deaths range from 40,000 to 150,000.

The Battle of Okinawa was declared won on June 22, 1945. What was envisioned as the jumping-off point for the conquest of the Japanese home islands became, in retrospect, almost an afterthought, as Japan surrendered less than two months after the conclusion of the battle. The impact of the Okinawa campaign on the war is debated to this day. Viewed as a preview of what an invasion of the home islands would have been, it strengthened the argument for using the atomic bomb against Japan (or, if it didn't work, burning Japan to the ground with round the clock raids from Okinawa airbases by B-17s transferred from the European theatre). But none of these strategic considerations were on the mind of Sledge and his fellow Marines. They were glad to have survived Okinawa and elated when, not long thereafter, the war ended and they could look forward to going home.

This is a uniquely authentic first-hand narrative of World War II combat by somebody who lived it. After the war, E. B. Sledge pursued his education, eventually earning a doctorate in biology and becoming a professor at the University of Montevallo in Alabama, where he taught zoology, ornithology, and comparative anatomy until his retirement in 1990. He began the memoir which became this book in 1944. He continued to work on it after the war and, at the urging of family, finally prepared it for publication in 1981. The present edition includes an introduction by Victor Davis Hanson.

September 2018 Permalink

Sloane, Eric. Diary of an Early American Boy. Mineola, NY: Dover, [1962] 2004. ISBN 0-486-43666-7.
In 1805, fifteen year old Noah Blake kept a diary of his life on a farm in New England. More than a century and a half later, artist, author, and collector of early American tools Eric Sloane discovered the diary and used it as the point of departure for this look at frontier life when the frontier was still in Connecticut. Young Noah was clearly maturing into a fine specimen of the taciturn Yankee farmer—much of the diary reads like:
21: A sour, foggy Sunday.
22: Heavy downpour, but good for the crops.
23: Second day of rain. Father went to work under cover at the mill.
24: Clear day. Worked in the fields. Some of the corn has washed away.
The laconic diary entries are spun into a fictionalised but plausible story of farm life focusing on the self-reliant lifestyle and the tools and techniques upon which it was founded. Noah Blake was atypical in being an only child at a time when large families were the norm; Sloane takes advantage of this in showing Noah learning all aspects of farm life directly from his father. The numerous detailed illustrations provide a delightful glimpse into the world of two centuries ago and an appreciation for the hard work and multitude of skills it took to make a living from the land in those days.

July 2005 Permalink

Sloane, Eric. The Cracker Barrel. Mineola, NY: Dover, [1967] 2005. ISBN 0-486-44101-6.
In the 1960s, artist and antiquarian Eric Sloane wrote a syndicated column, many of the best installments of which are collected in this volume. This is an excellent book for browsing in random order in the odd moment, but like the contents of the eponymous barrel, it's hard to stop after just one, so you may devour the whole thing at one sitting. Hey, at least it isn't fattening!

The column format allowed Sloane to address a variety of topics which didn't permit book-length treatment. There are gems here about word origins, what was good and not so good about “the good old days”, tools and techniques (the “variable wrench” is pure genius), art and the business of being an artist, and much more. Each column is illustrated with one of Sloane's marvelous line drawings. Praise be to Dover for putting this classic back into print where it belongs.

October 2005 Permalink

Smith, Lee. The Strong Horse. New York: Doubleday, 2010. ISBN 978-0-385-51611-2.
After the attacks upon the U.S. in September 2001, the author, who had been working as an editor in New York City, decided to find out for himself what in the Arab world could provoke such indiscriminate atrocities. Rather than turn to the works of establishment Middle East hands or radical apologists for Islamist terror, he pulled up stakes and moved to Cairo and later Beirut, spending years there living in the community and meeting people from all walks of life: doormen, cab drivers, students, intellectuals, clerics, politicians, artists, celebrities, and more. This book presents his conclusions in a somewhat unusual form: it is hard to categorise—it's part travelogue, part collection of interviews, part survey of history, part exploration of Arab culture, art, and literature, and part geopolitical analysis. What is clear is that this book is a direct assault upon the consensus view of the Middle East among Western policymakers, one which, if correct (and the author is very persuasive indeed), condemns to failure many of the projects of “democratisation”, “peace processes”, and integration of the nations of the region into a globalised economy; it calls for an entirely different approach to the Arab world, one from which many Western feel-good diplomats and politically correct politicians will wilt in horror.

In short, Smith concludes that the fundamental assumption of the program whose roots can be traced from Woodrow Wilson to George W. Bush—that all people, and Arabs in particular, strive for individual liberty, self-determination, and a civil society with democratically elected leaders—is simply false: those are conditions which Western societies purchased over centuries, at the cost of great bloodshed and suffering, through the actions of heroes. This experience has never occurred in the Arab world, and consequently its culture is entirely different. One can attempt to graft the trappings of Western institutions onto an Arab state, but without a fundamental change in the culture, the graft will not take and before long things will be just as before.

Let me make clear a point the author stresses. There is not the slightest intimation in this book that there is some kind of racial or genetic difference (which are the same thing) between Arabs and Westerners. Indeed, such a claim can be immediately falsified by the large community of Arabs who have settled in the West, assimilated themselves to Western culture, and become successful in all fields of endeavour. But those are Arabs, often educated in the West, who have rejected the culture in which they were born, choosing consciously to migrate to a very different culture they find more congenial to the way they choose to live their lives. What about those who stay (whether by preference, or due to lack of opportunity to emigrate)?

No, Arabs are not genetically different in behaviour, but culture is just as heritable as any physical trait, and it is here the author says we must look to understand the region. The essential dynamic of Arab political culture and history, as described by the 14th century Islamic polymath Ibn Khaldun, is that of a strong leader establishing a dynasty or power structure to which subjects submit, but which becomes effete and feckless over time, only to eventually be overthrown violently by a stronger force (often issuing from desert nomads in the Arab experience), which begins the cycle again. The author (paraphrasing Osama bin Laden) calls this the “strong horse” theory: Arab populations express allegiance to the strongest perceived power, and expect changes in governance to come through violent displacement of a weaker existing order.

When you look at things this way, many puzzles regarding the Middle East begin to make more sense. First of all, the great success which imperial powers over the millennia, including the Persian, Ottoman, French, and British empires, have had in subduing and ruling Arabs without substantial internal resistance is explained: the empire was seen as the strong horse and Arab groups accepted subordination to it. Similarly, the ability of sectarian minorities to rule on a long-term basis in modern states such as Lebanon, Syria, and Iraq is explained, as is the great stability of authoritarian regimes in the region—they usually fall only when deposed by an external force or by a military coup, not due to popular uprisings.

Rather than presenting a lengthy recapitulation of the arguments in the book filtered through my own comprehension and prejudices, this time I invite you to read a comprehensive exposition of the author's arguments in his own words, in a transcript of a three hour interview by Hugh Hewitt. If you're interested in the topics raised so far, please read the interview and return here for some closing comments.

Is the author's analysis correct? I don't know—certainly it is at variance with that of a mass of heavy-hitting intellectuals who have studied the region for their entire careers and, if correct, means that much of Western policy toward the Middle East since the fall of the Ottoman Empire has been at best ill-informed and at worst tragically destructive. All of the debate about Islam, fundamentalist Islam, militant Islam, Islamism, Islamofascism, etc., in Smith's view, misses the entire point. He contends that Islam has nothing, or next to nothing, to do with the present conflict. Islam, born in the Arabian desert, simply canonised, with a few minor changes, a political and social regime already extant in Arabia for millennia before the Prophet, based squarely on rule by the strong horse. Islam, then, is not the source of Arab culture, but a consequence of it, and its global significance is as a vector which inoculates Arab governance by the strong horse into other cultures where Islam takes root. The extent to which the Arab culture is adopted depends upon the strength and nature of the preexisting local culture into which Islam is introduced: certainly the culture and politics of Islamic Turkey, Iran, and Indonesia are something very different from that of Arab nations, and from each other.

The author describes democracy as “a flower, not a root”. An external strong horse can displace an Arab autocracy and impose elections, a legislature, and other trappings of democracy, but without the foundations of the doctrine of natural rights, the rule of law, civil society, free speech and the tolerance of dissent, freedom of conscience, and the separation of the domain of the state from the life of the individual, the result is likely to be “one person, one vote, one time” and a return to strong horse government as has been seen so many times in the post-colonial era. Democracy in the West was the flowering of institutions and traditions a thousand years in the making, none of which have ever existed in the Arab world. Those who expect democracy to create those institutions, the author would argue, suffer from an acute case of inverting causes and effects.

It's tempting to dismiss Arab culture as described here as “dysfunctional”, but (if the analysis be correct), I don't think that's a fair characterisation. Arab governance looks dysfunctional through the eyes of Westerners who judge it based on the values their own cultures cherish, but then turnabout's fair play, and Arabs have many criticisms of the West which are equally well founded based upon their own values. I'm not going all multicultural here—there's no question that by almost any objective measure (per capita income; industrial and agricultural output; literacy and education; treatment of women and minorities; public health and welfare; achievements in science, technology, and the arts) the West has drastically outperformed Arab nations, which would be entirely insignificant in the world economy absent their geological good fortune to be sitting on top of an ocean of petroleum. But again, that's applying Western metrics to Arab societies. When Nasser seized power in Egypt, he burned with a desire to do the will of the Egyptian people. And like so many people over the millennia who tried to get something done in Egypt, he quickly discovered that the will of the people was to be left alone, and the will of the bureaucracy was to go on shuffling paper as before, counting down to their retirement as they'd done for centuries. In other words, by their lights, the system was working and they valued stability over the risks of change. There is also what might be described as a cultural natural selection effect in action here. In a largely static authoritarian society, the ambitious, the risk-takers, and the innovators are disproportionately prone to emigrate to places which value those attributes, namely the West. This deprives those who remain of the élite which might improve the general welfare, resulting in a population even more content with the status quo.

The deeply pessimistic message of this book is that neither wishful thinking, soaring rhetoric, global connectivity, precision guided munitions, nor armies of occupation can do very much to change a culture whose general way of doing things hasn't changed fundamentally in more than two millennia. While change may be possible, it certainly isn't going to happen on anything less than the scale of several generations, and then only if the cultural transmission belt from generation to generation can be interrupted. Is this depressing? Absolutely, but if this is the case, better to come to terms with it and act accordingly than live in a fantasy world where one's actions may lead to catastrophe for both the West and the Arab world.

March 2010 Permalink

Smith, Michael. Station X. New York: TV Books, 1999. ISBN 1-57500-094-6.

July 2001 Permalink

Smith, Michael. The Emperor's Codes. New York: Arcade Publishing, 2000. ISBN 1-55970-568-X.

August 2001 Permalink

Solé, Robert. Le grand voyage de l'obélisque. Paris: Seuil, 2004. ISBN 2-02-039279-8.
No, this is not an Astérix book—it's “obélisque”, not “Obélix”! This is the story of how an obelisk of Ramses II happened to end up in the middle of la Place de la Concorde in Paris. Moving a 22 metre, 220 metric ton chunk of granite from the banks of the Nile to the banks of the Seine in the 1830s was not a simple task—it involved a purpose-built ship, an expedition of more than two and a half years with a crew of 121, twelve of whom died in Egypt from cholera and dysentery, and the combined muscle power of 350 artillerymen in Paris to erect the obelisk where it stands today. One has to be impressed with the ancient Egyptians, who managed much the same more than thirty centuries earlier. The book includes a complete transcription and translation of the hieroglyphic inscriptions—Ramses II must have set the all-time record for effort expended in publishing banal text.

May 2004 Permalink

Sowell, Thomas. Black Rednecks and White Liberals. San Francisco: Encounter Books, 2005. ISBN 1-59403-086-3.
One of the most pernicious calumnies directed at black intellectuals in the United States is that they are “not authentic”—that by speaking standard English, assimilating into the predominant culture, and seeing learning and hard work as the way to get ahead, they have somehow abandoned their roots in the ghetto culture. In the title essay in this collection, Thomas Sowell demonstrates persuasively that this so-called “black culture” owes its origins, in fact, not to anything blacks brought with them from Africa or developed in times of slavery, but rather to a white culture which immigrants to the American South from marginal rural regions of Britain imported and perpetuated long after it had died out in the mother country. Members of this culture were called “rednecks” and “crackers” in Britain long before they arrived in America, and they proceeded to install this dysfunctional culture in much of the rural South. Blacks arriving from Africa, stripped of their own culture, were immersed into this milieu, and predictably absorbed the central values and characteristics of the white redneck culture, right down to patterns of speech which can be traced back to the Scotland, Wales, and Ulster of the 17th century. Interestingly, free blacks in the North never adopted this culture, and were often well integrated into the community until the massive northward migration of redneck blacks (and whites) from the South spawned racial prejudice against all blacks. While only 1/3 of U.S. whites lived in the South, 90% of blacks did, and hence the redneck culture, which was strongly diluted as southern whites came to the northern cities, was transplanted whole as blacks arrived in the north and were concentrated in ghetto communities.

What makes this more than an anthropological and historical footnote is that, as Sowell describes, the redneck culture does not work very well—travellers in the areas of Britain it once dominated and in the early American South described the gratuitous violence, indolence, disdain for learning, and a host of other characteristics still manifest in the ghetto culture today. This culture is alien to the blacks whom it mostly now afflicts, and is nothing to be proud of. Scotland, for example, largely eradicated the redneck culture, and became known for learning and enterprise; it is this example, Sowell suggests, that blacks could profitably follow, rather than clinging to a bogus culture which was in fact brought to the U.S. by those who enslaved their ancestors.

Although the title essay is the most controversial and will doubtless generate the bulk of commentary, it is in fact only 62 pages in this book of 372 pages. The other essays discuss the experience of “middleman minorities” such as the Jews, Armenians in the Ottoman Empire, Lebanese in Africa, overseas Chinese, etc.; the actual global history of slavery, as a phenomenon in which people of all races, continents, and cultures have been both slaves and slaveowners; the history of ethnic German communities around the globe and whether the Nazi era was rooted in the German culture or an aberration; and forgotten success stories in black education in the century prior to the civil rights struggles of the mid 20th century. The book concludes with a chapter on how contemporary “visions” and agendas can warp the perception of history, discarding facts which don't fit and obscuring lessons from the past which can be vital in deciding what works and what doesn't in the real world. As with much of Sowell's work, there are extensive end notes (more than 60 pages, with 289 notes on the title essay alone) which contain substantial “meat” along with source citations; they're well worth reading over after the essays.

July 2005 Permalink

Spencer, Robert. The Politically Incorrect Guide to Islam (and the Crusades). Washington: Regnery Publishing, 2005. ISBN 0-89526-013-1.
This book has the worthy goal of providing a brief, accessible antidote to the airbrushed version of Islam dispensed by its apologists and echoed by the mass media, and the relentlessly anti-Western account of the Crusades indoctrinated in the history curricula of government schools. Regrettably, the attempt falls short of the mark. The tone throughout is polemical—you don't feel like you're reading about history, religion, and culture so much as that the author is trying to persuade you to adopt his negative view of Islam, with historical facts and citations from original sources trotted out as debating points. This runs the risk of the reader suspecting the author of having cherry-picked source material, omitting that which argues the other way. I didn't find the author guilty of this, but the result is that this book is only likely to persuade those who already agree with its thesis before picking it up, which makes one wonder what's the point.

Spencer writes from an overtly Christian perspective, with parallel “Muhammad vs. Jesus” quotes in each chapter, and statements like, “If Godfrey of Bouillon, Richard the Lionhearted, and countless others hadn't risked their lives to uphold the honor of Christ and His Church thousands of miles from home, the jihadists would almost certainly have swept across Europe much sooner” (p. 160). Now, there's nothing wrong with comparing aspects of Islam to other religions to counter “moral equivalence” arguments which claim that every religion is equally guilty of intolerance, oppression, and incitement to violence, but the near-exclusive focus on Christianity is likely to be off-putting to secular readers and adherents of other religions who are just as threatened by militant, expansionist Islamic fundamentalism as Christians.

The text is poorly proofread; in several block quotations, words are run together without spaces, three times in as many lines on page 110. In the quote from John Wesley on p. 188, the whole meaning is lost when the phrase “cities razed from the foundation” is written with “raised” instead of “razed”.

The author's earlier Islam Unveiled (February 2003) is similarly flawed in tone and perspective. Had I noticed that this book was by the same author, I wouldn't have read it. It's more reading, but the combination of Ibn Warraq's Why I Am Not a Muslim (February 2002) and Paul Fregosi's Jihad in the West (July 2002) will leave you with a much better understanding of the issues than this disappointing effort.

November 2005 Permalink

Spencer, Robert. Did Muhammad Exist? Wilmington, DE: ISI Books, 2012. ISBN 978-1-61017-061-1.
In 1851, Ernest Renan wrote that Islam “was born in the full light of history…”. But is this the case? What do we actually know of the origins of Islam, the life of its prophet, and the provenance of its holy book? In this thoroughly researched and documented investigation the author argues that the answer to these questions is very little indeed, and that contemporary evidence for the existence of a prophet in Arabia who proclaimed a scripture, led the believers into battle and prevailed, unifying the peninsula, and lived the life documented in the Muslim tradition is entirely nonexistent during the time of Muhammad's supposed life, and did not emerge until decades, and in many cases, more than a century later. Further, the historical record shows clear signs, acknowledged by contemporary historians, of having been fabricated by rival factions contending for power in the emerging Arab empire.

What is beyond dispute is that in the century and a quarter between A.D. 622 and 750, Arab armies erupted from the Arabian peninsula and conquered an empire spanning three continents, propagating a change in culture, governance, and religion which remains in effect in much of that region today. The conventional story is that these warriors were the armies of Islam, following their prophet's command to spread the word of their God and bearing his holy writ, the Qur'an, before them as they imposed it upon those they subdued by the sword. But what is the evidence for this?

When you look for it, it's remarkably scanty. As the peoples conquered by the Arab armies were, in many cases, literate, they have left records of their defeat. And in every case, they speak of the invaders as “Hagarians”, “Ishmaelites”, “Muhajirun”, or “Saracens”, and in none of these records is there a mention of an Arab prophet, much less one named “Muhammad”, or of “Islam”, or of a holy book called the “Qur'an”.

Now, for those who study the historical foundations of Christianity or Judaism, these results will be familiar—when you trace the origins of a great religious tradition back to its roots, you often discover that they disappear into a fog of legend which believers must ultimately accept on faith since historical confirmation, at this remove, is impossible. This has been the implicit assumption of those exploring the historical foundations of the Bible for at least two centuries, but it is considered extremely “edgy” to pose these questions about Islam, even today. This is because when you do, the believers are prone to use edgy weapons to cut your head off. Jews and Christians have gotten beyond this, and just shake their heads and chuckle. So some say it takes courage to raise these questions about Islam. I'd say “some” are the kind of cowards who opposed the translation of the Bible into the vernacular, freeing it from the priesthood and placing it in the hands of anybody who could read. And if any throat-slitter should be irritated by these remarks and be inclined to act upon them, be advised that I not only shoot back but, circumstances depending, first.

I find the author's conclusion very plausible. After the Arab conquest, its inheritors found themselves in command of a multicontinental empire encompassing a large number of subject peoples and a multitude of cultures and religious traditions. If you were the ruler of such a newly-cobbled-together empire, wouldn't you be motivated, based upon the experience of those you have subdued, to promulgate a state religion, proclaimed in the language of the conqueror, which demanded submission? Would you not base that religion upon the revelation of a prophet, speaking to the conquerors in their own language, which came directly from God?

It is often observed that Islam, unlike the other Abrahamic religions, is uniquely both a religious and political system, leading inevitably to theocracy (which I've always believed misnamed—I'd have no problem with theocracy: rule by God; it's rule by people claiming to act in His name that always seems to end badly). But what if Islam is so intensely political precisely because it was invented to support a political agenda—that of the Arabic Empire of the Umayyad Caliphate? It's not that Islam is political because its doctrine encompasses politics as well as religion; it's that it's political because it was designed that way by the political rulers who directed the compilation of its sacred books and traditions, and spread it by the sword to further their imperial ambitions.

July 2012 Permalink

Spira, S. F., Eaton S. Lothrop, Jr., and Jonathan B. Spira. The History of Photography As Seen Through the Spira Collection. Danville, NJ: Aperture, 2001. ISBN 978-0-89381-953-8.
If you perused the back pages of photographic magazines in the 1960s and 1970s, you'll almost certainly recall the pages of advertising from Spiratone, which offered a panoply of accessories and gadgets, many tremendously clever and useful, and some distinctly eccentric and bizarre, for popular cameras of the epoch. Spiratone was the creation of Fred Spira, a refugee from Austria after the Nazi Anschluss who arrived in New York almost penniless; his ingenuity, work ethic, and sense for the needs of the burgeoning market of amateur photographers built what started as a one-man shop into a flourishing enterprise, creating standards such as the “T mount” lenses which persist to the present day. His company was a pioneer in importing high quality photographic gear from Japan and instrumental in changing the reputation of Japan from a purveyor of junk to a top end manufacturer.

Like so many businessmen who succeed to such an extent they redefine the industries in which they participate, Spira was passionate about the endeavour pursued by his customers: in his case photography. As his fortune grew, he began to amass a collection of memorabilia from the early days of photography, and this Spira Collection finally grew to more than 20,000 items, covering the entire history of photography from its precursors to the present day.

This magnificent coffee table book draws upon items from the Spira collection to trace the history of photography from the camera obscura in the 16th century to the dawn of digital photography in the 21st. While the pictures of items from the collection dominate the pages, there is abundant well-researched text sketching the development of photography, including the many blind alleys along the way to a consensus of how images should be made. You can see the fascinating process by which a design, which initially varies all over the map as individual inventors try different approaches, converges upon a standard based on customer consensus and market forces. There is probably a lesson for biological evolution somewhere in this. With inventions which appear, in retrospect, as simple as photography, it's intriguing to wonder how much earlier they might have been discovered: could a Greek artificer have stumbled on the trick and left us, in some undiscovered cache, an image of Pericles making the declamation recorded by Thucydides? Well, probably not—the simplest photographic process, the daguerreotype, requires a plate of copper, silver, and mercury sensitised with iodine. While the metals were all known in antiquity (along with glass production sufficient to make a crude lens or, failing that, a pinhole), elemental iodine was not isolated until 1811, just 28 years before Daguerre applied it to photography. But still, you never know….

This book is out of print, but used copies are generally available for less than the cover price at its publication in 2001.

June 2010 Permalink

Spotts, Frederic. Hitler and the Power of Aesthetics. Woodstock, NY: Overlook Press, 2002. ISBN 1-58567-345-5.
A paperback edition is scheduled to be published in February 2004.

October 2003 Permalink

Spotts, Frederic. The Shameful Peace. New Haven, CT: Yale University Press, 2008. ISBN 978-0-300-13290-8.
Paris between the World Wars was an international capital of the arts such as the world had never seen. Artists from around the globe flocked to this cosmopolitan environment which was organised more around artistic movements than nationalities. Artists drawn to this cultural magnet included the Americans Ezra Pound, Ernest Hemingway, F. Scott Fitzgerald, Gertrude Stein, Henry Miller, e.e. cummings, Virgil Thomson, and John Dos Passos; Belgians René Magritte and Georges Simenon; the Irish James Joyce and Samuel Beckett; Russians Igor Stravinsky, Sergei Prokofiev, Vladimir Nabokov, and Marc Chagall; and Spaniards Pablo Picasso, Joan Miró, and Salvador Dali, only to mention some of the nationalities and luminaries.

The collapse of the French army and British Expeditionary Force following the German invasion in the spring of 1940, leading to the armistice between Germany and France on June 22nd, turned this world upside down. Paris found itself inside the Occupied Zone, administered directly by the Germans. Artists in the “Zone libre” found themselves subject to the Vichy government's cultural decrees, intended to purge the “decadence” of the interwar years.

The defeat and occupation changed the circumstances of Paris as an artistic capital overnight. Most of the foreign expatriates left (but not all: Picasso, among others, opted to stay), so the scene became much more exclusively French. But remarkably, or maybe not, within a month of the armistice, the cultural scene was back up and running pretty much as before. The theatres, cinemas, concert and music halls were open, the usual hostesses continued their regular soirées with the customary attendees, and the cafés continued to be filled with artists debating the same esoterica. There were changes, to be sure: the performing arts played to audiences with a large fraction of Wehrmacht officers, known Jews were excluded everywhere, and anti-German works were withdrawn by publishers and self-censored thereafter by both authors and publishers in the interest of getting their other work into print.

The artistic milieu, which had been overwhelmingly disdainful of the Third Republic, transferred their scorn to Vichy, but for the most part got along surprisingly well with the occupier. Many attended glittering affairs at the German Institute and Embassy, and fell right in with the plans of Nazi ambassador Otto Abetz to co-opt the cultural élite and render them, if not pro-German, at least neutral to the prospects of France being integrated into a unified Nazi Europe.

The writer and journalist Alfred Fabre-Luce was not alone in waxing with optimism over the promise of the new era, “This will not sanctify our defeat, but on the contrary overcome it. Rivalries between countries, that were such a feature of nineteenth-century Europe, have become passé. The future Europe will be a great economic zone where people, weary of incessant quarrels, will live in security”. Drop the “National” and keep the “Socialist”, and that's pretty much the same sentiment you hear today from similarly-placed intellectuals about the odious, anti-democratic European Union.

The reaction of intellectuals to the occupation varied from enthusiastic collaboration to apathetic self-censorship and an apolitical stance, but rarely did it cross the line into active resistance. There were some underground cultural publications, and some well-known figures did contribute to them (anonymously or under a pseudonym, bien sûr), but for the most part artists of all kinds got along, and adjusted their work to the changed circumstances so that they could continue to be published, shown, or performed. A number of prominent figures emigrated, mostly to the United States, and formed an expatriate French avant garde colony which would play a major part in the shift of the centre of the arts world toward New York after the war, but they were largely politically disengaged while the war was underway.

After the Liberation, the purge (épuration) of collaborators in the arts was haphazard and inconsistent. Artists found themselves defending their work and actions during the occupation before tribunals presided over by judges who had, after the armistice, sworn allegiance to Pétain. Some writers received heavy sentences, up to and including death, while their publishers, who had voluntarily drawn up lists of books to be banned, confiscated, and destroyed got off scot-free and kept right on running. A few years later, as the Trente Glorieuses began to pick up steam, most of those who had not been executed found their sentences commuted and went back to work, although the most egregious collaborators saw their reputations sullied for the rest of their lives. What could not be restored was the position of Paris as the world's artistic capital: the spotlight had moved on to the New World, and New York in particular.

This excellent book stirs much deeper thoughts than just those of how a number of artists came to terms with the occupation of their country. It raises fundamental questions as to how creative people behave, and should behave, when the institutions of the society in which they live are grossly at odds with the beliefs that inform their work. It's easy to say that one should rebel, resist, and throw one's body onto the gears to bring the evil machine to a halt, but it's entirely another thing to act in such a manner when you're living in a city where the Gestapo is monitoring every action of prominent people and you never know who may be an informer. Lovers of individual liberty who live in the ever-expanding welfare/warfare/nanny states which rule most “developed” countries today will find much to ponder in observing the actions of those in this narrative, and may think twice the next time they're advised to “be reasonable; go along: it can't get that bad”.

September 2009 Permalink

Spufford, Francis. Backroom Boys: The Secret Return of the British Boffin. London: Faber and Faber, 2003. ISBN 0-571-21496-7.
It is rare to encounter a book about technology and technologists which even attempts to delve into the messy real-world arena where science, engineering, entrepreneurship, finance, marketing, and government policy intersect, yet it is there, not solely in the technological domain, that the roots of both great successes and calamitous failures lie. Backroom Boys does just this and pulls it off splendidly, covering projects as disparate as the Black Arrow rocket, Concorde, mid 1980s computer games, mobile telephony, and sequencing the human genome. The discussion on pages 99 and 100 of the dynamics of new product development in the software business is as clear and concise a statement as I've seen of the philosophy that's guided my own activities for the past 25 years. While celebrating the technological renaissance of post-industrial Britain, the author retains the characteristic British intellectual's disdain for private enterprise and economic liberty. In chapter 4, he describes Vodafone's development of the mobile phone market: “It produced a blind, unplanned, self-interested search strategy, capitalism's classic method for exploring a new space in the market where profit may be found.” Well…yes…indeed, but that isn't just “capitalism's” classic method, but the very one employed with great success by life on Earth lo these four and a half billion years (see The Genius Within, April 2003). The wheels fall off in chapter 5. Whatever your position may have been in the battle between Celera and the public Human Genome Project, Spufford's collectivist bias and ignorance of economics (simply correcting the noncontroversial errors in basic economics in this chapter would require more pages than it fills) gets in the way of telling the story of how the human genome came to be sequenced five years before the original estimated date. A truly repugnant passage on page 173 describes “how science should be done”. Taxpayer-funded researchers, on a fine summer evening, “floated back downstream carousing, with stubs of candle stuck to the prows, … and the voices calling to and fro across the water as the punts drifted home under the overhanging trees in the green, green, night.” Back to the taxpayer-funded lab early next morning, to be sure, collecting their taxpayer-funded salaries doing the work they love to advance their careers. Nary a word here of the cab drivers, sales clerks, construction workers and, yes, managers of biotech start-ups, all taxed to fund this scientific utopia, who lack the money and free time to pass their own summer evenings so sublimely. And on the previous page, the number of cells in the adult body of C. elegans is twice given as 550. Gimme a break—everybody knows there are 959 somatic cells in the adult hermaphrodite, 1031 in the male; he's confusing adults with 558-cell newly-hatched L1 larvæ.

May 2004 Permalink

Standage, Tom. The Victorian Internet. New York: Berkley, 1998. ISBN 0-425-17169-8.

September 2003 Permalink

Steil, Benn. The Battle of Bretton Woods. Princeton: Princeton University Press, 2013. ISBN 978-0-691-14909-7.
As the Allies advanced toward victory against the Axis powers on all fronts in 1944, in Allied capitals thoughts increasingly turned to the postwar world and the institutions which would define it. Plans were already underway to expand the “United Nations” (at the time used as a synonym for the Allied powers) into a postwar collective security organisation which would culminate in the April 1945 conference to draft the charter of that regrettable institution. Equally clamant was the need to define monetary mechanisms which would facilitate free trade.

The classical gold standard, which was not designed but evolved organically in the 19th century as international trade burgeoned, had been destroyed by World War I. Attempts by some countries to reestablish the gold standard after the end of the war led to economic dislocation (particularly in Great Britain), currency wars (competitive devaluations in an attempt to gain a competitive advantage in international trade), and trade wars (erecting tariff or other barriers to trade to protect domestic or imperial markets against foreign competition).

World War II left all of the major industrial nations with the sole exception of the United States devastated and effectively bankrupt. Despite there being respected and influential advocates for re-establishing the classical gold standard (in which national currencies were defined as a quantity of gold, with central banks issuing them willing to buy gold with their currency or exchange their currency for gold at the pegged rate), this was widely believed impossible. Although the gold standard had worked well when in effect prior to World War I, and provided negative feedback which tended to bring the balance of payments among trading partners back into equilibrium and provided a mechanism for countries in economic hard times to face reality and recover by devaluing their currencies against gold, there was one overwhelming practical difficulty in re-instituting the gold standard: the United States had almost all of the gold. In fact, by 1944 it was estimated that the U.S. Treasury held around 78% of all of the world's central bank reserve gold. It is essentially impossible to operate under a gold standard when a single creditor nation, especially one with its industry and agriculture untouched by the war and consequently sure to be the predominant exporter in the years after it ended, has almost all of the world's gold in its vaults already. Proposals to somehow reset the system by having the U.S. transfer its gold to other nations in exchange for their currencies were a non-starter in Washington, especially since many of those nations already owed large dollar-denominated debts to the U.S.

The hybrid gold-exchange standard put into place after World War I had largely collapsed by 1934, with Britain forced off the standard by 1931, followed quickly by 25 other nations. The 1930s were a period of economic depression, collapsing international trade, competitive currency devaluations, and protectionism, hardly a model for a postwar monetary system.

Also in contention as the war drew to its close was the location of the world's financial centre and which currency would dominate international trade. Before World War I, the vast majority of trade cleared through London and was denominated in sterling. In the interwar period, London and New York vied for preeminence, but while Wall Street prospered financing the booming domestic market in the 1920s, London remained dominant for trade between other nations and maintained a monopoly within the British Empire. Within the U.S., while all factions within the financial community wished for the U.S. to displace Britain as the world's financial hub, many New Dealers in Roosevelt's administration were deeply sceptical of Wall Street and “New York bankers” and wished to move decision making to Washington and keep it firmly under government control.

While ambitious plans were being drafted for a global monetary system, in reality there were effectively only two nations at the negotiating table when it came time to create one: Britain and the U.S. John Maynard Keynes, leader of the British delegation, referred to U.S. plans for a broad-based international conference on postwar monetary policy as “a major monkey-house”, with non-Anglo-Saxon delegations as the monkeys. On the U.S. side, there was a three way power struggle among the Treasury Department, the State Department, and the nominally independent Federal Reserve to take the lead in international currency policy.

All of this came to a head when delegates from 44 countries arrived at a New Hampshire resort hotel in July 1944 for the Bretton Woods Conference. The run-up to the conference had seen intensive back-and-forth negotiation between the U.S. and British delegations, both of whom arrived with their own plans, each drafted to give their side the maximum advantage.

For the U.S., Treasury secretary Henry Morgenthau, Jr. was the nominal head of the delegation, but having no head for nor interest in details, deferred almost entirely to his energetic and outspoken subordinate Harry Dexter White. The conference became a battle of wits between Keynes and White. While White was dwarfed by Keynes's intellect and reputation (even those who disagreed with his unorthodox economic theories were impressed with his wizardry in financing the British war efforts in both world wars), it was White who held all the good cards. Not only did the U.S. have most of the gold, Britain was entirely dependent upon Lend-Lease aid from the U.S., which might come to an abrupt end when the war was won, and owed huge debts which it could never repay without some concessions from the U.S. or further loans on attractive terms.

Morgenthau and White, with Roosevelt's enthusiastic backing, pressed their case relentlessly. Not only did Roosevelt concur that the world's financial centre should be Washington, he saw an opportunity to break the British Empire, which he detested. Roosevelt remarked to Morgenthau after a briefing, “I had no idea that England was broke. I will go over there and make a couple of talks and take over the British Empire.”

Keynes described an early U.S. negotiating position as a desire by the U.S. to make Britain “lose face altogether and appear to capitulate completely to dollar diplomacy.” And in the end, this is essentially what happened. Morgenthau remarked, “Now the advantage is ours here, and I personally think we should take it,” then later expanded, “If the advantage was theirs, they would take it.”

The system crafted at the conference was formidably complex: only a few delegates completely understood it, and, foreshadowing present-day politics in the U.S., most of the delegations which signed it at the conclusion of the conference had not read the final draft which was thrown together at the last minute. The Bretton Woods system which emerged prescribed fixed exchange rates, not against gold, but rather the U.S. dollar, which was, in turn, fixed to gold. Central banks would hold their reserves primarily in dollars, and could exchange excess dollars for gold upon demand. A new International Monetary Fund (IMF) would provide short-term financing to countries with trade imbalances to allow them to maintain their currency's exchange rate against the dollar, and a World Bank was created to provide loans to support reconstruction after the war and development in poor countries. Finally a General Agreement on Tariffs and Trade was adopted to reduce trade barriers and promote free trade.

The Bretton Woods system was adopted at a time when the reputation of experts and technocrats was near its peak. Keynes believed that central banking should “be regarded as a kind of beneficent technique of scientific control such as electricity and other branches of science are.” Decades of experience with the ever more centralised and intrusive administrative state has given people today a more realistic view of the capabilities of experts and intellectuals of all kinds. Thus it should be no surprise that the Bretton Woods system began to fall apart almost as soon as it was put in place. The IMF began operations in 1947, and within months a crisis broke out in the peg of sterling to the dollar. In 1949, Britain was forced to devalue the pound 30% against the dollar, and in short order thirty other countries also devalued. The Economist observed:

Not many people in this country believe the Communist thesis that it is the deliberate and conscious aim of American policy to ruin Britain and everything Britain stands for in the world. But the evidence can certainly be read that way. And if every time aid is extended, conditions are attached which make it impossible for Britain to ever escape the necessity of going back for still more aid, to be obtained with still more self-abasement and on still more crippling terms, then the result will certainly be what the Communists predict.

Dollar diplomacy had triumphed completely.

The Bretton Woods system lurched from crisis to crisis and began to unravel in the 1960s when the U.S., exploiting its position of issuing the world's reserve currency, began to flood the world with dollars to fund its budget and trade deficits. Central banks, increasingly nervous about their large dollar positions, began to exchange their dollars for gold, causing large gold outflows from the U.S. Treasury which were clearly unsustainable. In 1971, Nixon “closed the gold window”. Dollars could no longer be redeemed in gold, and the central underpinning of Bretton Woods was swept away. The U.S. dollar was soon devalued against gold (although it hardly mattered, since it was no longer convertible), and before long all of the major currencies were floating against one another, introducing uncertainty in trade and spawning the enormous global casino which is the foreign exchange markets.

A bizarre back-story to the creation of the postwar monetary system is that its principal architect, Harry Dexter White, was, during the entire period of its construction, a Soviet agent working undercover in his U.S. government positions, placing and promoting other agents in positions of influence, and providing a steady stream of confidential government documents to Soviet spies who forwarded them to Moscow. This was suspected since the 1930s, and White was identified by Communist Party USA defectors Whittaker Chambers and Elizabeth Bentley as a spy and agent of influence. While White was defended by the usual apologists, and many historical accounts try to blur the issue, mentions of White in the now-declassified Venona decrypts prove the issue beyond a shadow of a doubt. Still, it must be said that White was a fierce and effective advocate at Bretton Woods for the U.S. position as articulated by Morgenthau and Roosevelt. Whatever other damage his espionage may have done, his pro-Soviet sympathies did not detract from his forcefulness in advancing the U.S. cause.

This book provides an in-depth view of the protracted negotiations between Britain and the U.S., Lend-Lease and other war financing, and the competing visions for the postwar world which were decided at Bretton Woods. There is a tremendous amount of detail, and while some readers may find it difficult to assimilate, the economic concepts which underlie them are explained clearly and are accessible to the non-specialist. The demise of the Bretton Woods system is described, and a brief sketch of monetary history after its ultimate collapse is given.

Whenever a currency crisis erupts into the news, you can count on one or more pundits or politicians to proclaim that what we need is a “new Bretton Woods”. Before prescribing that medicine, they would be well advised to learn just how the original Bretton Woods came to be, and how the seeds of its collapse were built in from the start. U.S. advocates of such an approach might ponder the parallels between the U.S. debt situation today and Britain's in 1944, and consider that, should a new conference be held, they may find themselves sitting in the seats occupied by the British the last time around, with the Chinese across the table.

In the Kindle edition the table of contents, end notes, and index are all properly cross-linked to the text.

October 2013 Permalink

Stephenson, Neal. Cryptonomicon. New York: Perennial, 1999. ISBN 0-380-78862-4.
I've found that I rarely enjoy, and consequently am disinclined to pick up, these huge, fat, square works of fiction cranked out by contemporary super scribblers such as Tom Clancy, Stephen King, and J.K. Rowling. In each case, the author started out and made their name crafting intricately constructed, tightly plotted page-turners, but later on succumbed to a kind of mid-career spread which yields flabby doorstop novels that give you hand cramps if you read them in bed and contain more filler than thriller. My hypothesis is that when a talented author is getting started, their initial books receive the close attention of a professional editor and benefit from the discipline imposed by an individual whose job is to flense the flab from a manuscript. But when an author becomes highly successful—a “property” who can be relied upon to crank out best-seller after best-seller, it becomes harder for an editor to restrain an author's proclivity to bloat and bloviation. (This is not to say that all authors are so prone, but some certainly are.) I mean, how would you feel giving Tom Clancy advice on the art of crafting thrillers, even though Executive Orders could easily have been cut by a third and would probably have been a better novel at half the size.

This is why, despite my having tremendously enjoyed his earlier Snow Crash and The Diamond Age, Neal Stephenson's Cryptonomicon sat on my shelf for almost four years before I decided to take it with me on a trip and give it a try. Hey, even later Tom Clancy can be enjoyed as “airplane” books as long as they fit in your carry-on bag! While ageing on the shelf, this book was one of the most frequently recommended by visitors to this page, and friends to whom I mentioned my hesitation to dive into the book unanimously said, “You really ought to read it.” Well, I've finished it, so now I'm in a position to tell you, “You really ought to read it.” This is simply one of the best modern novels I have read in years.

The book is thick, but that's because the story is deep and sprawling and requires a large canvas. Stretching over six decades and three generations, and melding genres as disparate as military history, cryptography, mathematics and computing, business and economics, international finance, privacy and individualism versus the snooper state and intrusive taxation, personal eccentricity and humour, telecommunications policy and technology, civil and military engineering, computers and programming, the hacker and cypherpunk culture, and personal empowerment as a way of avoiding repetition of the tragedies of the twentieth century, the story defies classification into any neat category. It is not science fiction, because all of the technologies exist (or plausibly could have existed—well, maybe not the Galvanick Lucipher [p. 234; all page citations are to the trade paperback edition linked above. I'd usually cite by chapter, but they aren't numbered and there is no table of contents]—in the epoch in which they appear). Some call it a “techno thriller”, but it isn't really a compelling page-turner in that sense; this is a book you want to savour over a period of time, watching the story lines evolve and weave together over the decades, and thinking about the ideas which underlie the plot line.

The breadth of the topics which figure in this story requires encyclopedic knowledge, which the author demonstrates while making it look effortless, never like he's showing off. Stephenson writes with the kind of universal expertise for which Isaac Asimov was famed, but he's a better writer than the Good Doctor, and that's saying something. Every few pages you come across a gem such as the following (p. 207), which is the funniest paragraph I've read in many a year.

He was born Graf Heinrich Karl Wilhelm Otto Friedrich von Übersetzenseehafenstadt, but changed his name to Nigel St. John Gloamthorpby, a.k.a. Lord Woadmire, in 1914. In his photograph, he looks every inch a von Übersetzenseehafenstadt, and he is free of the cranial geometry problem so evident in the older portraits. Lord Woadmire is not related to the original ducal line of Qwghlm, the Moore family (Anglicized from the Qwghlmian clan name Mnyhrrgh) which had been terminated in 1888 by a spectacularly improbable combination of schistosomiasis, suicide, long-festering Crimean war wounds, ball lightning, flawed cannon, falls from horses, improperly canned oysters, and rogue waves.
On p. 352 we find one of the most lucid and concise explanations I've ever read of why it is far more difficult to escape the grasp of now-obsolete technologies than most technologists may wish.
(This is simply because the old technology is universally understood by those who need to understand it, and it works well, and all kinds of electronic and software technology has been built and tested to work within that framework, and why mess with success, especially when your profit margins are so small that they can only be detected by using techniques from quantum mechanics, and any glitches vis-à-vis compatibility with old stuff will send your company straight into the toilet.)
In two sentences on p. 564, he lays out the essentials of the original concept for Autodesk, which I failed to convey (providentially, in retrospect) to almost every venture capitalist in Silicon Valley in thousands more words and endless, tedious meetings.
“ … But whenever a business plan first makes contact with the actual market—the real world—suddenly all kinds of stuff becomes clear. You may have envisioned half a dozen potential markets for your product, but as soon as you open your doors, one just explodes from the pack and becomes so instantly important that good business sense dictates that you abandon the others and concentrate all your efforts.”
And how many New York Times Best-Sellers contain working source code (p. 480) for a Perl program?

A 1168 page mass market paperback edition is now available, but given the unwieldiness of such an edition, how much you're likely to thumb through it to refresh your memory on little details as you read it, the likelihood you'll end up reading it more than once, and the relatively small difference in price, the trade paperback cited at the top may be the better buy. Readers interested in the cryptographic technology and culture which figure in the book will find additional information in the author's Cryptonomicon cypher-FAQ.

May 2006 Permalink

Stevenson, David. 1914–1918: The History of the First World War. London: Allen Lane, 2004. ISBN 0-14-026817-0.
I have long believed that World War I was the absolutely pivotal event of the twentieth century, and that understanding its causes and consequences is essential to comprehending subsequent history. Here is an excellent single-volume history of the war for those interested in this tragic and often-neglected epoch of modern history. The author, a professor of International History at the London School of Economics, attempts to balance all aspects of the war: politics, economics, culture, ideology, demographics, and technology, as well as the actual military history of the conflict. This results in a thick (727 page) book which is somewhat heavy going and best read and digested over a period of time rather than in one frontal assault (I read the book over a period of about four months). Those looking for a detailed military history won't find it here; while there is a thorough discussion of grand strategy and evolving war aims, and of the horrific conditions of the largely static trench warfare which characterised most of the war, there is little or no tactical description of individual battles.

The high-level integrated view of the war (and subsequent peacemaking and its undoing) is excellent for understanding the place of the war in modern history. It was World War I which, more than any other event, brought the leviathan modern nation state to its malign maturity: mass conscription, direct taxation, fiat currency, massive public debt, propaganda aimed at citizens, manipulation of the news, rationing, wage and price controls, political intrusion into the economy, and attacks on noncombatant civilians. All of these horrors, which were to characterise the balance of the last century and continue to poison the present, appeared in full force in all the powers involved in World War I. Further, the redrawing of borders which occurred following the liquidation of the German, Austro-Hungarian, and Ottoman empires sowed the seeds of subsequent conflicts, some still underway almost a century later, to name a few: Yugoslavia, Rwanda, Palestine, and Iraq.

The U.S edition, titled Cataclysm: The First World War as Political Tragedy, is now available in paperback.

September 2005 Permalink

[Audiobook] Suetonius [Gaius Suetonius Tranquillus]. The Twelve Cæsars. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [A.D. 121, 1957] 2004. ISBN 978-1-929718-39-9.
Anybody who thinks the classics are dull, or that the cult of celebrity is a recent innovation, evidently must never have encountered this book. Suetonius was a member of the Roman equestrian order who became director of the Imperial archives under the emperor Trajan and then personal secretary to his successor, Hadrian. He took advantage of his access to the palace archives and other records to recount the history of Julius Cæsar and the 11 emperors who succeeded him, through Domitian, who was assassinated in A.D. 96, by which time Suetonius was an adult.

Not far into this book, I exclaimed to myself, “Good grief—this is like People magazine!” A bit further on, it became apparent that this Roman bureaucrat had penned an account of his employer's predecessors which was way too racy even for that down-market venue. Suetonius was a prolific writer (most of his work has not survived), and his style and target audience may be inferred from the titles of some of his other books: Lives of Famous Whores, Greek Terms of Abuse, and Physical Defects of Mankind.

Each of the twelve Cæsars is sketched in a quintessentially Roman systematic fashion: according to a template as consistent as a PowerPoint presentation (abbreviated for those whose reigns were short and inconsequential). Unlike his friend and fellow historian of the epoch Tacitus, whose style is, well, taciturn, Suetonius dives right into the juicy gossip and describes it in the most explicit and sensational language imaginable. If you thought the portrayal of Julius and Augustus Cæsar in the television series “Rome” was over the top, if Suetonius is to be believed, it was, if anything, airbrushed.

Whether Suetonius can be believed is a matter of some dispute. From his choice of topics and style, he clearly savoured scandal and intrigue, and may have embroidered upon the historical record in the interest of titillation. He certainly took omens, portents, prophecies, and dreams as seriously as battles and relates them, even those as dubious as marble statues speaking, as if they were documented historical events. (Well, maybe they were—perhaps back then the people running the simulation we're living in intervened more often, before they became bored and left it to run unattended. But I'm not going there, at least here and now….) Since this is the only extant complete history of the reigns of Caligula and Claudius, the books of Tacitus covering that period having been lost, some historians have argued that the picture of the decadence of those emperors may have been exaggerated due to Suetonius's proclivity for purple prose.

This audiobook is distributed in two parts, totalling 13 hours and 16 minutes. The 1957 Robert Graves translation is used, read by Charlton Griffin, whose narration of Julius Cæsar's Commentaries (August 2007) I so enjoyed. The Graves translation gives dates in B.C. and A.D. along with the dates by consulships used in the original Latin text. Audio CD and print editions of the same translation are available. The Latin text and a public domain English translation dating from 1913–1914 are available online.

February 2008 Permalink

Sullivan, Robert. Rats. New York: Bloomsbury, [2004] 2005. ISBN 1-58234-477-9.
Here we have one of the rarest phenomena in publishing: a thoroughly delightful best-seller about a totally disgusting topic: rats. (Before legions of rat fanciers write to berate me for bad-mouthing their pets, let me state at the outset that this book is about wild rats, not pet and laboratory rats which have been bred for docility for a century and a half. The new afterword to this paperback edition relates the story of a Brooklyn couple who caught a juvenile Bedford-Stuyvesant street rat to fill the empty cage of their recently deceased pet and, as it matured, came to regard it with such fear that they were afraid even to release it in a park lest it turn and attack them when the cage was opened—the author suggested they might consider the strategy of “open the cage and run like hell” [p. 225–226]. One of the pioneers in the use of rats in medical research in the early years of the 20th century tried to use wild rats and concluded “they proved too savage to maintain in the laboratory” [p. 231].)

In these pages are more than enough gritty rat facts to get yourself ejected from any polite company should you introduce them into a conversation. Many misconceptions about rats are debunked, including the oft-cited estimate that rat and human populations are about equal, which would lead to an estimate of about eight million rats in New York City—in fact, the most authoritative estimate (p. 20) puts the number at about 250,000, which is still a lot of rats, especially once you begin to appreciate what a single rat can do. (But rat exaggeration gets folks' attention: here is a politician claiming there are fifty-six million rats in New York!) “Rat stories are war stories” (p. 34), and this book teems with them, including The Rat that Came Up the Toilet, which is not an urban legend but a well-documented urban nightmare. (I'd be willing to bet that the incidence of people keeping the toilet lid closed with a brick on the top is significantly greater among readers of this book.)

It's common for naturalists who study an animal to develop sympathy for it and defend it against popular aversion: snakes and spiders, for example, have many apologists. But not rats: the author sums up by stating that he finds them “disgusting”, and he isn't alone. The great naturalist and wildlife artist John James Audubon, one of the rare painters ever to depict rats, amused himself during the last years of his life in New York City by prowling the waterfront hunting rats, having received permission from the mayor “to shoot Rats in the Battery” (p. 4).

If you want to really get to know an animal species, you have to immerse yourself in its natural habitat, and for the Brooklyn-based author, this involved no more than a subway ride to Edens Alley in downtown Manhattan, just a few blocks from the site of the World Trade Center, which was destroyed during the year he spent observing rats there. Along with rat stories and observations, he sketches the history of New York City from a ratty perspective, with tales of the arrival of the brown rat (possibly on ships carrying Hessian mercenaries to fight for the British during the War of American Independence), the rise and fall of rat fighting as popular entertainment in the city, the great garbage strike of 1968 which transformed the city into something close to heaven if you happened to be a rat, and the 1964 Harlem rent strike in which rats were presented to politicians by the strikers to acquaint them with the living conditions in their tenements.

People involved with rats tend to be outliers on the scale of human oddness, and the reader meets a variety of memorable characters, present-day and historical: rat fight impresarios, celebrity exterminators, Queen Victoria's rat-catcher, and many more. Among numerous fascinating items in this rat-fact-packed narrative is just how recent the arrival of the mis-named brown rat, Rattus norvegicus, is. (The species was named in England in 1769, in the belief that it had stowed away on ships carrying lumber from Norway. In fact, it appears to have arrived in Britain before it reached Norway.) There were no brown rats in Europe at all until the 18th century (the rats which caused the Black Death were Rattus rattus, the black rat, which followed Crusaders returning from the Holy Land). First arriving in America around the time of the Revolution, the brown rat took until 1926 to spread to every state in the United States, displacing the black rat except for some remaining in the South and West. The Canadian province of Alberta remains essentially rat-free to this day, thanks to a vigorous and vigilant rat control programme.

The number of rats in an area depends almost entirely upon the food supply available to them. A single breeding pair of rats, with an unlimited food supply and no predation or other causes of mortality, can produce on the order of fifteen thousand descendants in a single year. That makes it pretty clear that a rat population will grow until all available food is being consumed by rats (and that natural selection will favour the most aggressive individuals in a food-constrained environment). Poison or trapping can knock down the rat population in the case of a severe infestation, but without limiting the availability of food, will produce only a temporary reduction in their numbers (while driving evolution to select for rats which are immune to the poison and/or more wary of the bait stations and traps).
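
The dynamics described above lend themselves to a toy model. Here is a minimal sketch in Python (my own illustration, nothing from the book): a simple logistic growth model in which the carrying capacity stands in for the available food supply. The growth rate, capacity, and the timing and severity of a hypothetical poisoning campaign are all assumed parameters, chosen only to show the qualitative point that knocking the population down without touching the food supply buys only a temporary respite.

# Toy logistic model of a food-limited rat population (illustrative assumptions only).
# K is the carrying capacity set by the food supply; r is an assumed monthly
# intrinsic growth rate.  A one-time poisoning campaign in month 18 removes 90%
# of the rats, but since the food supply (K) is unchanged the population climbs
# back to roughly its former level within about six months.

r = 0.8              # assumed monthly growth rate
K = 250_000          # carrying capacity set by available food (assumed)
population = 1_000.0

for month in range(1, 37):
    if month == 18:
        population *= 0.10                      # poisoning kills 90% of the rats
    population += r * population * (1 - population / K)
    if month % 6 == 0 or month in (18, 19):
        print(f"month {month:2d}: {population:9,.0f} rats")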

Given this fact, which is completely noncontroversial among pest control professionals, it is startling that in New York City, which frets over and regulates public health threats like second-hand tobacco smoke while its denizens suffer more than 150 rat bites a year, many to children, smoke-free restaurants dump their offal into rat-infested alleys in thin plastic garbage bags, which are instantly penetrated by rats. How much could it cost to mandate, or even provide, rat-proof steel containers for organic waste, compared to the budget for rodent control and the damages and health hazards of a large rat population? Rats will always be around—in 1936, the president of the professional society for exterminators persuaded the organisation to change the name of the occupation from “exterminator” to “pest control operator”, not because the word “exterminator” was distasteful, but because he felt it over-promised what could actually be achieved for the client (p. 98). But why not take some simple, obvious steps to constrain the rat population?

The book contains more than twenty pages of notes in narrative form, which contain a great deal of additional information you don't want to miss, including the origin of giant inflatable rats for labour rallies, and even a poem by exterminator guru Bobby Corrigan. There is no index.

August 2006 Permalink

Taheri, Amir. The Persian Night. New York: Encounter Books, 2009. ISBN 978-1-59403-240-0.
With Iran continuing its march toward nuclear weapons and long range missiles unimpeded by an increasingly feckless West, while simultaneously domestic discontent over the tyranny of the mullahs, economic stagnation, and stolen elections are erupting into bloody violence on the streets of major cities, this book provides a timely look at the history, institutions, personalities, and strategy of what the author dubs the “triple oxymoron”: the Islamic Republic of Iran which, he argues, espouses a bizarre flavour of Islam which is not only a heretical anathema to the Sunni majority, but also at variance with the mainstream Shiite beliefs which predominated in Iran prior to Khomeini's takeover; anything but a republic in any usual sense of the word; and motivated by a global messianic vision decoupled from the traditional interests of Iran as a nation state.

Khomeini's success in wresting control away from the ailing Shah without a protracted revolutionary struggle was made possible by support from “useful idiots” mostly on the political left, who saw Khomeini's appeal to the rural population as essential to gaining power and planned to shove him aside afterward. Khomeini, however, once in power, proved far more ruthless than his coalition partners, summarily putting to death all who opposed him, including many mullahs who dissented from his eccentric version of Islam.

Iran is often described as a theocracy, but apart from the fact that the all-powerful Supreme Guide is nominally a religious figure, the organisation of the government and distribution of power are very much along the lines of a fascist state. In fact, there is almost a perfect parallel between the institutions of Nazi Germany and those of Iran. In Germany, Hitler created duplicate party and state centres of power throughout the government and economy and arranged them in such a way as to ensure that decisions could not be made without his personal adjudication of turf battles between the two. In Iran, there are the revolutionary institutions and those of the state, operating side by side, often with conflicting agendas, with only the Supreme Guide empowered to resolve disputes. Just as Hitler set up the SS as an armed counterpoise to the Wehrmacht, Khomeini created the Islamic Revolutionary Guard Corps as the revolution's independent armed branch to parallel the state's armed forces.

Thus, the author stresses, in dealing with Iran, it is essential to be sure whether you're engaging the revolution or the nation state: over the history of the Islamic Republic, power has shifted back and forth between the two sets of institutions, and with it Iran's interaction with other players on the world stage. Iran as a nation state generally strives to become a regional superpower: in effect, re-establishing the Persian Empire from the Mediterranean to the Caspian Sea through vassal regimes. To that end it seeks weapons, allies, and economic influence in a fairly conventional manner. Iran the Islamic revolutionary movement, on the other hand, works to establish global Islamic rule and the return of the Twelfth Imam: an Islamic Second Coming which Khomeini's acolytes fervently believe is imminent. Because they brook no deviation from their creed, they consider Sunni Moslems, even the strict Wahabi sect of Saudi Arabia, as enemies which must be compelled to submit to Khomeini's brand of Islam.

Iran's troubled relationship with the United States cannot be understood without grasping the distinction between state and revolution. To the revolution, the U.S. is the Great Satan spewing foul corruption around the world, which good Muslims should curse, chanting “death to America” before every sura of the Koran. Iran the nation state, on the other hand, only wants Washington to stay out of its way as it becomes a regional power, which, after all, was pretty much the state of affairs under the Shah, with the U.S. his predominant arms supplier. But the U.S. could never adopt such a strategy as long as the revolution has a hand in policy, nor will Iran's neighbours, terrified of its regional ambitions, encourage the U.S. to keep its hands off.

There is a great deal of conventional wisdom about Iran which is dead wrong, and this book dispels much of it. The supposed “CIA coup” against Mosaddegh in 1953, for which two U.S. presidents have since apologised, proves to have been nothing of the sort (although the CIA did, on occasion, claim credit for it as an example of a rare success amidst decades of blundering), with the U.S. largely supporting the nationalisation of the Iranian oil fields against fierce opposition from Britain. But cluelessness about Iran has never been in short supply among U.S. politicians. Speaking at the World Economic Forum, Bill Clinton said:

Iran today is, in a sense, the only country where progressive ideas enjoy a vast constituency. It is there that the ideas I subscribe to are defended by a majority.

Lest this be deemed a slip of the tongue due to intoxication by the heady Alpine air of Davos, a few days later on U.S. television he doubled down with:

[Iran is] the only one with elections, including the United States, including Israel, including you name it, where the liberals, or the progressives, have won two-thirds to 70 percent of the vote in six elections…. In every single election, the guys I identify with got two-thirds to 70 percent of the vote. There is no other country in the world I can say that about, certainly not my own.

I suppose if the U.S. had such an overwhelming “progressive” majority, it too would adopt “liberal” policies such as hanging homosexuals from cranes until they suffocate and stoning rape victims to death. But perhaps Clinton was thinking of Iran's customs of polygamy and “temporary marriage”.

Iran is a great nation which has been a major force on the world stage since antiquity, with a deep cultural heritage and vigorous population who, in exile from poor governance in the homeland, have risen to the top of demanding professions all around the world. Today (as well as much of the last century) Iran is saddled with a regime which squanders its patrimony on a messianic dream which runs the very real risk of igniting a catastrophic conflict in the Middle East. The author argues that the only viable option is regime change, and that all actions taken by other powers should have this as the ultimate goal. Does that mean going to war with Iran? Of course not—the very fact that the people of Iran are already pushing back against the mullahs is evidence they perceive how illegitimate and destructive the present regime is. It may even make sense to engage with institutions of the Iranian state, which will be the enduring foundation of the nation after the mullahs are sent packing, but it is essential that the Iranian people be sent the message that the forces of civilisation are on their side against those who oppress them, and that the communication tools of this new century (Which country has the most bloggers? The U.S. Number two? Iran.) be used to bypass the repressive regime and directly address the people who are its victims.

Hey, I spent two weeks in Iran a decade ago and didn't pick up more than a tiny fraction of the insight available here. Events in Iran are soon to become a focus of world attention to an extent they haven't been for the last three decades. Read this book to understand how Iran figures in the contemporary Great Game, and how revolutionary change may soon confront the Islamic Republic.

January 2010 Permalink

Tarnoff, Ben. Moneymakers. New York: Penguin, 2011. ISBN 978-1-101-46732-9.
Many people think of early America as a time of virtuous people, hard work, and sound money, all of which have been debased in our decadent age. Well, there may have been plenty of the first two, but the fact is that from the colonial era through the War of Secession, the American economy was built upon a foundation of dodgy paper money issued by a bewildering variety of institutions. There were advocates of hard money during the epoch, but their voices went largely unheeded because there simply wasn't enough precious metal on the continent to coin or back currency in the quantity required by the burgeoning economy. Not until the discovery of gold in California and silver in Nevada and other western states in the middle of the 19th century did a metal-backed monetary system become feasible in America.

Now, whenever authorities, be they colonies, banks, states, or federal institutions, undertake the economic transubstantiation of paper into gold by printing something on it, there will always be enterprising individuals motivated to get into the business for themselves. This book tells the story of three of these “moneymakers” (as counterfeiters were called in early America).

Owen Sullivan was an Irish immigrant who, in the 1740s and '50s, set up shop in a well-appointed cave on the border between New York and Connecticut and orchestrated a network of printers, distributors, and passers of bogus notes of the surrounding colonies. Sullivan was the quintessential golden-tongued confidence man, talking himself out of jam after jam, and even persuading his captors, when he was caught and sentenced to be branded with an “R” for “Rogue”, to brand him above the hairline where he could comb over the mark of shame.

So painful had the colonial experience with paper money been that the U.S. Constitution forbade states to “emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts”. But as the long and sordid history of “limited government” demonstrates, wherever there is a constitutional constraint, there is always a clever way for politicians to evade it, and nothing in the Constitution prevented states from chartering banks which would then proceed to print their own paper money. When the charter of Alexander Hamilton's First Bank of the United States was allowed to expire, that's exactly what the states proceeded to do. In Pennsylvania alone, in the single year of 1814, the state legislature chartered forty-one new banks in addition to the six already existing. With each of these banks entitled to print its own paper money (backed, in theory, by gold and silver coin in their vaults, with the emphasis on in theory), and each of these notes having its own unique design, this created a veritable paradise for counterfeiters, and into this paradise stepped counterfeiting entrepreneur David Lewis and master engraver Philander Noble, who set up a distributed and decentralised gang to pass their wares which could only be brought to justice by the kind of patient, bottom-up detective work which was rare in an age where law enforcement was largely the work of amateurs.

Samuel Upham, a successful Philadelphia shopkeeper in the 1860s, saw counterfeiting as a new product line for his shop, along with stationery and Upham's Hair Dye. When the Philadelphia Inquirer printed a replica of the Confederate five dollar note, the edition was much in demand at Upham's shop, and he immediately got in touch with the newspaper, arranged to purchase the printing plate for the crude replica of the note, and printed three thousand copies with a strip at the bottom identifying them as replicas and giving the name and address of his store. At a penny apiece they sold briskly, and Upham decided to upgrade and expand his product line. Before long he offered Confederate currency “curios” in all denominations, printed from high quality plates on banknote paper, advertised widely as available in retail and wholesale quantities for those seeking a souvenir of the war (or several thousand of them, if you like). These “facsimiles” were indistinguishable from the real thing to anybody but an expert, and Union troops heading South and merchants trading across the border found Upham's counterfeits easy to pass. Allegations were made that the Union encouraged, aided, and abetted Upham's business in the interest of economic warfare against the South, but no evidence of this was ever produced. Nonetheless, Upham and his inevitable competitors were allowed to operate with impunity, and the flood of bogus money they sent to the South certainly made a major contribution to the rampant inflation experienced there and made it more difficult for the Confederacy to finance its war effort.

This is an illuminating and entertaining exploration of banking, finance, and monetary history in what may seem a simpler age but was, in its own way, breathtakingly complicated—at the peak there were more than ten thousand different kinds of paper money circulating in North America. Readers with a sense of justice may find themselves wondering why small-scale operators such as Sullivan and Lewis were tracked down so assiduously and punished so harshly while contemporary manufacturers of funny money on the terabuck scale such as Ben Bernanke, Tim Geithner, and Mario Draghi are treated with respect and deference instead of being dispatched to the pillory and branding iron they so richly deserve for plundering the savings and future of those from whom their salaries are extorted under threat of force. To whom I say, just wait….

A Kindle edition is available, in which the table of contents is linked to the text, but the index is simply a list of terms, not linked to their occurrences in the text. The extensive end notes are keyed to page numbers in the print edition, which are preserved in the Kindle edition, making navigation possible, albeit clumsy.

December 2011 Permalink

[Audiobook] Thucydides. The Peloponnesian War. Vol. 1. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [c. 400 B.C.] 2005.
Not only is The Peloponnesian War the first true work of history to have come down to us from antiquity, but in writing it Thucydides essentially invented the historical narrative as it is presently understood. Although he served as a general (στρατηγός) on the Athenian side in the war, he adopts a scrupulously objective viewpoint and presents the motivations, arguments, and actions of all sides in the conflict in an even-handed manner. Perhaps his having been exiled from Athens for arriving too late to save Amphipolis from falling to the Spartans contributed both to his dispassionate recounting of the war and to the leisure in which to write it. Thucydides himself wrote:
It was also my fate to be an exile from my country for twenty years after my command at Amphipolis; and being present with both parties, and more especially with the Peloponnesians by reason of my exile, I had leisure to observe affairs somewhat particularly.

Unlike earlier war narratives in epic poetry, Thucydides based his account purely upon the actions of the human participants involved. While he includes the prophecies of oracles and auguries, he considers them important only to the extent they influenced decisions made by those who gave them credence. Divine intervention plays no part whatsoever in his description of events, and in his account of the Athenian Plague he even mocks how prophecies are interpreted to fit subsequent events. In addition to military and political affairs, Thucydides was a keen observer of natural phenomena: his account of the Athenian Plague reads like that of a modern epidemiologist, including his identifying overcrowding and poor sanitation as contributing factors and the observation that surviving the disease (as he did himself) conferred immunity. He further observes that solar eclipses appear to occur only at the new Moon, and may have been the first to identify earthquakes as the cause of tsunamis.

In the text, Thucydides includes lengthy speeches made by figures on all sides of the conflict, both in political assemblies and those of generals exhorting their troops to battle. He admits in the introduction that in many cases no contemporary account of these speeches exists and that he simply made up what he believed the speaker would likely have said given the circumstances. While this is not a technique modern historians would employ, Greeks, from their theatre and poetry, were accustomed to narratives presented in this form and Thucydides, inventing the concept of history as he wrote it, saw nothing wrong with inventing words in the absence of eyewitness accounts. What is striking is how modern everything seems. There are descriptions of the strategy of a sea power (Athens) confronted by a land power (Sparta), the dangers of alliances which invite weaker allies to take risks that involve their guarantors in unwanted and costly conflicts, the difficulties in mounting an amphibious assault on a defended shore, the challenge a democratic society has in remaining focused on a long-term conflict with an authoritarian opponent, and the utility of economic warfare (or, as Thucydides puts it [over and over again], “ravaging the countryside”) in sapping the adversary's capacity and will to resist. Readers with stereotyped views of Athens and Sparta may be surprised that many at the time of the war viewed Sparta as a liberator of independent cities from the yoke of the Athenian empire, and that Thucydides, an Athenian, often seems sympathetic to this view. Many of the speeches could have been given by present-day politicians and generals, except they would be unlikely to be as eloquent or argue their case so cogently. One understands why Thucydides was not only read over the centuries (at least prior to the present Dark Time, when the priceless patrimony of Western culture has been jettisoned and largely forgotten) for its literary excellence, but is still studied in military academies for its timeless insights into the art of war and the dynamics of societies at war. While modern readers may find the actual campaigns sporadic and the battles on a small scale by present day standards, from the Hellenic perspective, which saw their culture of city-states as “civilisation” surrounded by a sea of barbarians, this was a world war, and Thucydides records it as such a momentous event.

This is Volume 1 of the audiobook, which includes the first four of the eight books into which Thucydides's text is conventionally divided, covering the prior history of Greece and the first nine years of the war, through the Thracian campaigns of the Spartan Brasidas in 423 B.C. (Here is Volume 2, with the balance.) The audiobook is distributed in two parts, totalling 14 hours and 50 minutes with more than an hour of introductory essays including a biography of Thucydides and an overview of the work. The Benjamin Jowett translation is used, read by the versatile Charlton Griffin. A print edition of this translation is available.

May 2008 Permalink

[Audiobook] Thucydides. The Peloponnesian War. Vol. 2. (Audiobook, Unabridged). Thomasville, GA: Audio Connoisseur, [c. 400 B.C.] 2005.
This is the second volume of the audiobook edition of Thucydides's epic history of what was, for Hellenic civilisation, a generation-long world war, describing which the author essentially invented historical narrative as it has been understood ever since. For general comments about the work, see my notes for Volume I.

Although a work of history (albeit with the invented speeches Thucydides acknowledges as a narrative device), this is as much a Greek tragedy as any of the Athenian plays. The war, which began, like so many, over a peripheral conflict between two regional hegemonies, transformed both Athens and Sparta into “warfare states”, where every summer was occupied in military campaigns, and every winter in planning for the next season's conflict. The Melian dialogue, which appears in Book V of the history, is one of the most chilling exemplars of raw power politics ever expressed—even more than two millennia later, it makes the soul shiver and, considering its consequences, makes one sympathetic to those, then and now, who decry the excesses of direct democracy.

Perhaps the massacre of the Melians offended the gods (although Thucydides would never suggest divine influence in the affairs of men), or maybe it was just a symptom of imperial overreach heading directly for the abyss, but not long afterward Athens launched the disastrous Sicilian Expedition, which ultimately resulted in a defeat which, on the scale of classical conflict, was on the order of Stalingrad and resulted in the end of democracy in Athens and its ultimate subjugation by Sparta.

Weapons, technologies, and political institutions change, but the humans who invent them are invariant under time translation. There is wisdom in this narrative of a war fought so very long ago which contemporary decision makers on the global stage ignore only at the peril of the lives and fortune entrusted to them by their constituents. If I could put up a shill at the “town hall” meetings of aspiring politicians, I'd like to ask them “Have you read Thucydides?”, and when they predictably said they had, then “Do you approve of the Athenian democracy's judgement as regards the citizens of Melos?”

This recording includes the second four of the eight books into which Thucydides's text is conventionally divided. The audiobook is distributed in two parts, totalling 11 hours and 29 minutes with an epilogue describing the events which occurred after the extant text of Thucydides ends in mid-paragraph whilst describing events of 410 B.C., six years before the end of the war. The Benjamin Jowett translation is used, read by Charlton Griffin. A print edition of this translation is available.

August 2008 Permalink

Trevor-Roper, Hugh. Hitler's War Directives. Edinburgh: Birlinn, [1964] 2004. ISBN 978-1-84341-014-0.
This book, originally published in 1964, contains all of Adolf Hitler's official decrees on the prosecution of the European war, from preparations for the invasion of Poland in 1939 to his final exhortation to troops on the Eastern Front of 15th April 1945 to stand in place or die. The author introduces each of the translated orders with an explanation of the situation at the time, and describes subsequent events. A fifteen page introduction explains the context of these documents and the structure of the organisations to which they were directed.

For those familiar with the history of the period, there are few revelations to be gained from these documents. It is interesting to observe the extent to which Hitler was concerned with creating and substantiating the pretexts for his aggression in both the East and West, and also how when the tide turned and the Wehrmacht was rolled back from Stalingrad to Berlin, he focused purely upon tactical details, never seeming to appreciate (at least in these orders to the military, state, and party) the inexorable disaster consuming them all.

As these are decrees at the highest level, they are largely composed of administrative matters and only occasionally discuss operational items; as such one's eyes may glaze over reading too much in one sitting. The bizarre parallel structure of state and party created by Hitler is evident in a series of decrees issued during the defensive phase of the war in which essentially the same orders were independently issued to state and party leaders, subordinating each to military commanders in battle areas. As the Third Reich approached collapse, the formal numbering of orders was abandoned, and senior military commanders issued orders in Hitler's name. These are included here using a system of numbering devised by the author. Appendices include lists of code names for operations, abbreviations, and people whose names appear in the orders.

If you aren't well-acquainted with the history of World War II in Europe, you'll take away little from this work. While the author sketches the history of each order, you really need to know the big picture to understand the situation the Germans faced and what they knew at the time to comprehend the extent to which Hitler's orders evidenced cunning or denial. Still, one rarely gets the opportunity to read the actual operational orders issued during a major conflict which ended in annihilation for the person giving them and the nation which followed him, and this book provides a way to understand how ambition, delusion, and blind obedience can lead to tragic catastrophe.

January 2009 Permalink

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988] 2004. ISBN 978-0-345-47609-8.
In 1871 Helmuth von Moltke the Elder, chief of the Prussian General Staff and architect of modern German military strategy, wrote “no plan of operations extends with any certainty beyond the first contact with the main hostile force”, an observation which is often paraphrased as “No plan survives contact with the enemy”. This is doubtless the case, but as this classic history of the diplomatic run-up to World War I and the initial hostilities from the outbreak of the war through the First Battle of the Marne demonstrates, plans, treaties, and military and political structures put into place long before open conflict erupts can tie the hands of decision makers long after events have proven them obsolete.

I first read this book in the 1980s, and I found upon rereading it now with the benefit of having since read a number of other accounts of the period, both contemporary and historical, that I'd missed or failed to fully appreciate some important points on the first traverse.

The first is how crunchy and rigid the system of alliances among the Great Powers was in the years before the War, as were the mobilisation plans of the land powers: France, Germany, Austria-Hungary, and Russia. Viewed from a prewar perspective, many thought these arrangements were guarantors of security: creating a balance of power in which the ultimate harm to any aggressor was easily calculated to be far greater than any potential gain, especially as their economies became increasingly interlinked and dependent upon international trade. For economic reasons alone, any war was expected to be short—no power was believed to have the resources to sustain a protracted conflict once its trade was disrupted by war. And yet this system, while metastable near the local minimum it had occupied since the 1890s, proved highly unstable to perturbations which dislodged it from that perch. The mobilisation plans of the land powers (Britain, characteristically, had no such plan and expected to muddle through based upon events, but as the preeminent sea power with global obligations it was, in a sense, perpetually mobilised for naval conflicts) were carefully choreographed at the level of detail of railroad schedules. Once the “execute” button was pushed, events would begin to occur on a nationwide scale: call-ups of troops, distribution of supplies from armories, movement of men and munitions to assembly points, rationing of key supplies, etc. Once one nation had begun to mobilise, its potential opponents ran an enormous risk if they did not also mobilise—every day they delayed was a day the enemy, once assembled in battle order, could attack them before their own preparations were complete.

This interlocking set of alliances and scripted mobilisation plans finally proved lethal in 1914. On July 28, Austria-Hungary declared war on Serbia and began mobilisation. Russia, as an ally of Serbia and seeing its position in the Balkans threatened, declared a partial mobilisation on July 29. Germany, allied to Austria-Hungary and threatened by the Russian mobilisation, decreed its own mobilisation on July 30. France, allied with Russia and threatened by Germany, began mobilisation on August 1st. Finally, Britain, allied with France and Russia, declared war on Germany on August 4th. Europe, at peace the morning of Tuesday, July 28th, was, by the evening of Tuesday, August 4th, at war with itself, almost entirely due to treaties and mobilisation plans concluded in peacetime with the best of intentions, and not overt hostilities between any of the main powers involved.

It is a commonplace that World War I surpassed all historical experience and expectations at its outbreak for the scale of destruction and the brutality of the conflict (a few prescient observers who had studied the second American war of secession and developments in weaponry since then were not surprised, but they were in the minority), but this is often thought to have emerged in the period of static trench warfare which predominated from 1915 until the very end of the war. But this account makes clear that even the initial “war of maneuver” in August and September 1914 was characterised by the same callous squandering of life by commanders who adhered to their pre-war plans despite overwhelming evidence from the field that the assumptions upon which they were based were completely invalid. Both French and German commanders sent wave after wave of troops armed only with bolt-action rifles and bayonets against fortified positions with artillery and machine guns, suffering tens of thousands of casualties (some units were almost completely wiped out) with no effect whatsoever. Many accounts of World War I portray the mindless brutality of the conflict as a product of the trenches, but it was there from the very start, inherent in the prevailing view that the citizen was the property of the state to expend as it wished at the will of the ruling class (with the exception of the British, all armies in the conflict were composed largely of conscripts).

Although originally published almost half a century ago, this book remains one of the definitive accounts of the origins of World War I and the first month of the conflict, and one of outstanding literary merit (it is a Pulitzer prize winner). John F. Kennedy read the book shortly after its publication, and it is said to have made such an impression upon him that it influenced his strategy during the Cuban Missile Crisis, seeking to avoid actions which could trigger the kind of reciprocal automatic responses which occurred in the summer of 1914. Those who bewail the soggy international institutions and arrangements of the present day, where nothing is precisely as it seems and every commitment is balanced with a dozen ways to wiggle out of it, may find this book a cautionary tale of the alternative, and how a crunchy system of alliances may be far more dangerous. While reading the narrative, however, I found myself thinking not so much about diplomacy and military matters but rather how much today's globalised economic and financial system resembles the structure of the European great powers in 1914. Once again we hear that conflict is impossible because the damage to both parties would be unacceptable; that the system can be stabilised by “interventions” crafted by wise “experts”; that entities which are “too big to fail”, simply by being so designated, will not; and that the system is ultimately stable against an unanticipated perturbation which brings down one part of the vast interlocking structure. These beliefs seem to me, like those of the political class in 1914, to be based upon hope rather than evidence, and anybody interested in protecting their assets should think at some length about the consequences should one or more of them prove wrong.

October 2011 Permalink

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988, 1994] 2004. ISBN 978-0-345-47609-8.
One hundred years ago the world was on the brink of a cataclysmic confrontation which would cause casualties numbered in the tens of millions, destroy the pre-existing international order, depose royalty and dissolve empires, and plant the seeds for tyrannical regimes and future conflicts with an even more horrific toll in human suffering. It is no exaggeration to speak of World War I as the pivotal event of the 20th century, since so much that followed can be viewed as sequelæ which can be traced directly to that conflict.

It is thus important to understand how that war came to be, and how in the first month after its outbreak the expectations of all parties to the conflict, arrived at through the most exhaustive study by military and political élites, were proven completely wrong, and what was expected to be a short, conclusive war turned instead into a protracted blood-letting which would continue for more than four years of largely static warfare. This magnificent book, which covers the events leading to the war and the first month after its outbreak, provides a highly readable narrative history of the period, with insight both into the grand folly of war plans drawn up in isolation and mechanically followed even after abundant evidence of their faults had caused tragedy, and into how contingency (chance, and the decisions of fallible human beings in positions of authority) can tilt the balance of history.

The author is not an academic historian, and she writes for a popular audience. This has caused some to sniff at her work, but as she noted, Herodotus, Thucydides, Gibbon, and Macaulay did not have Ph.D.s. She immerses the reader in the world before the war, beginning with the 1910 funeral in London of Edward VII where nine monarchs rode in the cortège, most of whose nations would be at war four years hence. The system of alliances is described in detail, as are the mobilisation plans of the future combatants, all of which would contribute to the system's fatal instability to a small perturbation.

Germany, France, Russia, and Austria-Hungary had all drawn up detailed mobilisation plans for assembling, deploying, and operating their conscript armies in the event of war. (Britain, with an all-volunteer regular army which was tiny by continental standards, had no pre-defined mobilisation plan.) As you might expect, Germany's plan was the most detailed, specifying railroad schedules and the composition of individual trains. Now, the important thing to keep in mind about these plans is that, together, they created a powerful first-mover advantage. If Russia began to mobilise, and Germany hesitated in its own mobilisation in the hope of defusing the conflict, it might find itself at a grave disadvantage if Russia gained even a few days' head start in assembling its forces. This means that there was a powerful incentive to issue the mobilisation order first, and a compelling reason for an adversary to begin its own mobilisation once news of it became known.

Compounding this instability were alliances which compelled parties to them to come to the assistance of others. France had no direct interest in the Balkan conflict pitting Germany and Austria-Hungary against Russia, but it had an alliance with Russia, and was pulled into the conflict. When France began to mobilise, Germany activated its own mobilisation and the Schlieffen plan to invade France through Belgium. Once the Germans violated the neutrality of Belgium, Britain's guarantee of that neutrality required (after the customary ambiguity and dithering) a declaration of war against Germany, and the stage was set for a general war in Europe.

The focus here is on the initial phase of the war: where Germany, France, and Russia were all following their pre-war plans, all initially expecting a swift conquest of their opponents—the Battle of the Frontiers, which occupied most of the month of August 1914. An afterword covers the First Battle of the Marne where the German offensive on the Western front was halted and the stage set for the static trench warfare which was to ensue. At the conclusion of that battle, all of the shining pre-war plans were in tatters, many commanders were disgraced or cashiered, and lessons learned through the tragedy “by which God teaches the law to kings” (p. 275).

A century later, the lessons of the outbreak of World War I could not be more relevant. On the eve of the war, many believed that the interconnection of the soon-to-be belligerents through trade was such that war was unthinkable, as it would quickly impoverish them. Today, the world is even more connected and yet there are conflicts all around the margins, with alliances spanning the globe. Unlike 1914, when the world was largely dominated by great powers, now there are rogue states, non-state actors, movements dominated by religion, and neo-barbarism and piracy loose upon the stage, and some of these may lay their hands on weapons whose destructive power dwarf those of 1914–1918. This book, published more than fifty years ago, about a conflict a century old, could not be more timely.

July 2014 Permalink

Vallee, Jacques. Forbidden Science. Vol. 2. San Francisco: Documatica Research, 2008. ISBN 978-0-615-24974-2.
This, the second volume of Jacques Vallee's journals, chronicles the years from 1970 through 1979. (I read the first volume, covering 1957–1969, before I began this list.) Early in the narrative (p. 153), Vallee becomes a U.S. citizen, but although surrendering his French passport, he never gives up his Gallic rationalism and scepticism, both of which serve him well in the increasingly weird Northern California scene in the Seventies. It was in those locust years that the seeds for the personal computing and Internet revolutions matured, and Vallee was at the nexus of this technological ferment, working on databases, Doug Engelbart's Augmentation project, and later systems for conferencing and collaborative work across networks. By the end of the decade he, like many in Silicon Valley of the epoch, has become an entrepreneur, running a company based upon the conferencing technology he developed. (One amusing anecdote which indicates how far we've come since the 70s in mindset is when he pitches his conferencing system to General Electric who, at the time, had the largest commercial data network to support their timesharing service. They said they were afraid to implement anything which looked too much like a messaging system for fear of running afoul of the Post Office.)

If this were purely a personal narrative of the formative years of the Internet and personal computing, it would be a valuable book—I was there, then, and Vallee gets it absolutely right. A journal is, in many ways, better than a history because you experience the groping for solutions amidst confusion and ignorance which is the stuff of real life, not the narrative of an historian who knows how it all came out. But in addition to being a computer scientist, entrepreneur, and (later) venture capitalist, Vallee is also one of the preeminent researchers into the UFO and related paranormal phenomena (the character Claude Lacombe, played by François Truffaut in Steven Spielberg's 1977 movie Close Encounters of the Third Kind was based upon Vallee). As the 1970s progress, the author becomes increasingly convinced that the UFO phenomenon cannot be explained by extraterrestrials and spaceships, and that it is rooted in the same stratum of the human mind and the universe we inhabit which has given rise to folklore about little people and various occult and esoteric traditions. Later in the decade, he begins to suspect that at least some UFO activity is the work of deliberate manipulators bent on creating an irrational, anti-science worldview in the general populace, a hypothesis expounded in his 1979 book, Messengers of Deception, which remains controversial three decades after its publication.

The Bay Area in the Seventies was a kind of cosmic vortex of the weird, and along with Vallee we encounter many of the prominent figures of the time, including Uri Geller (who Vallee immediately dismisses as a charlatan), Doug Engelbart, J. Allen Hynek, Anton LaVey, Russell Targ, Hal Puthoff, Ingo Swann, Ira Einhorn, Tim Leary, Tom Bearden, Jack Sarfatti, Melvin Belli, and many more. Always on a relentlessly rational even keel, he observes with dismay as many of his colleagues disappear into drugs, cults, gullibility, pseudoscience, and fads as that dark decade takes its toll. In May 1979 he feels himself to be at “the end of an age that defied all conventions but failed miserably to set new standards” (p. 463). While this is certainly spot on in the social and cultural context in which he meant it, it is ironic that so many of the standards upon which the subsequent explosion of computer and networking technology are based were created in those years by engineers patiently toiling away in Silicon Valley amidst all the madness.

An introduction and retrospective at the end puts the work into perspective from the present day, and 25 pages of end notes expand upon items in the journals which may be obscure at this remove and provide source citations for events and works mentioned. You might wonder what possesses somebody to read more than five hundred pages of journal entries by somebody else which date from thirty to forty years ago. Well, I took the time, and I'm glad I did: it perfectly recreated the sense of the times and of the intellectual and technological challenges of the age. Trust me: if you're too young to remember the Seventies, it's far better to experience those years here than to have actually lived through them.

October 2009 Permalink

Van Buren, Peter. We Meant Well. New York: Henry Holt, 2011. ISBN 978-0-8050-9436-7.
The author is a career Foreign Service Officer in the U.S. State Department. In 2009–2010 he spent a year in Iraq as leader of two embedded Provincial Reconstruction Teams (ePRT) operating out of Forward Operating Bases (FOB) which were basically crusader forts in a hostile Iraqi wilderness: America inside, trouble outside. Unlike “fobbits” who rarely ventured off base, the author and his team were charged with engaging the local population to carry out “Lines of Effort” dreamed up by pointy-heads back at the palatial embassy in Baghdad or in Washington to the end of winning the “hearts and minds” of the population and “nation building”. The Iraqis were so appreciative of these efforts that they regularly attacked the FOB with mortar fire and mounted improvised explosive device (IED) and sniper attacks on those who ventured out beyond the wire.

If the whole thing were not so tawdry and tragic, the recounting of the author's experiences would be hilariously funny. If you imagine it to be a Waugh novel and read it with a dark sense of humour, it is wickedly amusing, but then one remembers that real people are dying and suffering grievous injuries, the Iraqi population are being treated as props in public relation stunts by the occupiers and deprived of any hope of bettering themselves, and all of this vast fraudulent squandering of resources is being paid for by long-suffering U.S. taxpayers or money borrowed from China and Japan, further steering the imperial power toward a debt end.

The story is told in brief chapters, each recounting a specific incident or aspect of life in Iraq. The common thread, which stretches back over millennia, is that imperial powers attempting to do good by those they subjugate will always find themselves outwitted by wily oriental gentlemen whose ancestors have spent millennia learning how to game the systems imposed by the despotisms under which they have lived. As a result, the millions poured down the rathole of “Provincial Reconstruction” predictably flows into the pockets of the bosses in the communities who set up front organisations for whatever harebrained schemes the occupiers dream up. As long as the “project” results in a ribbon-cutting ceremony covered by the press (who may, of course, be given an incentive to show up by being paid) and an impressive PowerPoint presentation for the FOB commander to help him toward his next promotion, it's deemed a success and, hey, there's a new Line of Effort from the embassy that demands another project: let's teach widows beekeeping (p. 137)—it'll only cost US$1600 per person, and each widow can expect to make US$200 a year from the honey—what a deal!

The author is clearly a creature of the Foreign Service and scarcely conceals his scorn for the military who are tasked with keeping him alive in a war zone and the politicians who define the tasks he is charged with carrying out. Still, the raw folly of “nation building” and the obdurate somnambulant stupidity of those who believe that building milk processing plants or putting on art exhibitions in a war zone will quickly convert people none of whom have a single ancestor who has ever lived in a consensually-governed society with the rule of law to model citizens in a year or two is stunningly evident.

Why are empires always so dumb? When they attain a certain stage of overreach, they seem to always assume they can instill their own unique culture in those they conquer. And yet, as Kipling wrote in 1899:

Fill full the mouth of Famine
And bid the sickness cease;
And when your goal is nearest
The end for others sought,
Watch Sloth and heathen Folly
Bring all your hope to nought.

When will policy makers become as wise as the mindless mechanisms of biology? When an irritant invades an organism and it can't be eliminated, the usual reaction is to surround it with an inert barrier which keeps it from causing further harm. “Nation building” is folly; far better to bomb them if they misbehave, then build a wall around the whole godforsaken place and bomb them again if any of them get out and cause any further mischief. Call it “biomimetic foreign policy”—encyst upon it!

March 2012 Permalink

van Creveld, Martin. Hitler in Hell. Kouvola, Finland: Castalia House, 2017. ASIN B0738YPW2M.
Martin van Creveld is an Israeli military theorist and historian, professor emeritus at Hebrew University in Jerusalem, and author of seventeen books of military history and strategy, including The Transformation of War, which has been hailed as one of the most significant recent works on strategy. In this volume he turns to fiction, penning the memoirs of the late, unlamented Adolf Hitler from his current domicile in Hell, “the place to which the victors assign their dead opponents.” In the interest of concision, in the following discussion I will use “Hitler” to mean the fictional Hitler in this work.

Hitler finds Hell more boring than hellish—“in some ways it reminds me of Landsberg Prison”. There is no torture or torment, just a never-changing artificial light and routine in which nothing ever happens. A great disappointment is that neither Eva Braun nor Blondi is there to accompany him. As to the latter, apparently all dogs go to heaven. Rudolf Hess is there, however, and with that 1941 contretemps over the flight to Scotland put behind them, has resumed helping Hitler with his research and writing as he did during the former's 1924 imprisonment. Hell has broadband!—Hitler is even able to access the “Black Internetz” and read, listen to, and watch everything up to the present day. (That sounds pretty good—my own personal idea of Hell would be an Internet connection which only allows you to read Wikipedia.)

Hitler tells the story of his life: from childhood, his days as a struggling artist in Vienna and Munich, the experience of the Great War, his political awakening in the postwar years, rise to power, implementation of his domestic and foreign policies, and the war and final collapse of Nazi Germany. These events, and the people involved in them, are often described from the viewpoint of the present day, with parallels drawn to more recent history and figures.

What makes this book work so well is that van Creveld's Hitler makes plausible arguments supporting decisions which many historians argue were irrational or destructive: going to war over Poland, allowing the British evacuation from Dunkirk, attacking the Soviet Union while Britain remained undefeated in the West, declaring war on the U.S. after Pearl Harbor, forbidding an orderly retreat from Stalingrad, failing to commit armour to counter the Normandy landings, and fighting to the bitter end, regardless of the consequences to Germany and the German people. Each decision is justified with arguments which are plausible when viewed from what is known of Hitler's world view, the information available to him at the time, and the constraints under which he was operating.

Much is made of those constraints. Although embracing totalitarianism (“My only regret is that, not having enough time, we did not make it more totalitarian still”), he sees himself surrounded by timid and tradition-bound military commanders and largely corrupt and self-serving senior political officials, yet compelled to try to act through them, as even a dictator can only dictate, then hope others implement his wishes. “Since then, I have often wondered whether, far from being too ruthless, I had been too soft and easygoing.” Many apparent blunders are attributed to lack of contemporary information, sometimes due to poor intelligence, but often simply by not having the historians' advantage of omniscient hindsight.

This could have been a parody, but in the hands of a distinguished historian like the author, who has been thinking about Hitler for many years (he wrote his 1971 Ph.D. thesis on Hitler's Balkan strategy in World War II), it provides a serious look at how Hitler's policies and actions, far from being irrational or a madman's delusions, may make perfect sense when one starts from the witches' brew of bad ideas and ignorance which the real Hitler's actual written and spoken words abundantly demonstrate. The fictional Hitler illustrates this in many passages, including this particularly chilling one where, after dismissing those who claim he was unaware of the extermination camps, says “I particularly needed to prevent the resurgence of Jewry by exterminating every last Jewish man, woman, and child I could. Do you say they were innocent? Bedbugs are innocent! They do what nature has destined them to, no more, no less. But is that any reason to spare them?” Looking backward, he observes that notwithstanding the utter defeat of the Third Reich, the liberal democracies that vanquished it have implemented many of his policies in the areas of government supervision of the economy, consumer protection, public health (including anti-smoking policies), environmentalism, shaping the public discourse (then, propaganda, now political correctness), and implementing a ubiquitous surveillance state of which the Gestapo never dreamed.

In an afterword, van Creveld explains that, after having on several occasions started to write a biography of Hitler and then set the project aside, concluding he had nothing to add to existing works, in 2015 it occurred to him that the one perspective which did not exist was Hitler's own, and that the fictional device of a memoir from Hell, drawing parallels between historical and contemporary events, would provide a vehicle to explore the reasoning which led to the decisions Hitler made. The author concludes, “…my goal was not to set forth my own ideas. Instead, I tried to understand Hitler's actions, views, and thoughts as I think he, observing the past and the present from Hell, would have explained them. So let the reader judge whether I have succeeded in this objective.” In the opinion of this reader, he has succeeded, and brilliantly.

This book is presently available only in a Kindle edition; it is free for Kindle Unlimited subscribers.

July 2017 Permalink

Wade, Nicholas. Before The Dawn. New York: Penguin Press, 2006. ISBN 1-59420-079-3.
Modern human beings, physically very similar to people alive today, with spoken language and social institutions including religion, trade, and warfare, had evolved by 50,000 years ago, yet written historical records go back only about 5,000 years. Ninety percent of human history, then, is “prehistory” which paleoanthropologists have attempted to decipher from meagre artefacts and rare discoveries of human remains. The degree of inference and the latitude for interpretation of this material has rendered conclusions drawn from it highly speculative and tentative. But in the last decade this has begun to change.

While humans only began to write the history of their species in the last 10% of their presence on the planet, the DNA that makes them human has been patiently recording their history in a robust molecular medium which humans have learnt to read only recently, with the ability to determine the sequence of the genome. This has provided a new, largely objective, window on human history and origins, and has both confirmed results teased out of the archæological record over the centuries and yielded a series of stunning surprises which are probably only the first of many to come.

Each individual's genome is a mix of genes inherited from their father and mother, plus a few random changes (mutations) due to errors made when the DNA is copied (replication). The separate genome of the mitochondria (energy-producing organelles) in their cells is inherited exclusively from the mother, and in males, the Y chromosome (except for the very tips) is inherited directly from the father, unmodified except for mutations. In an isolated population whose members breed mostly with one another, members of the group will come to share a genetic signature which reflects natural selection for reproductive success in the environment they inhabit (climate, sources of food, endemic diseases, competition with other populations, etc.) and the effects of random “genetic drift” which acts to reduce genetic diversity, particularly in small, isolated populations. Random mutations appear in certain parts of the genome at a reasonably constant rate, which allows them to be used as a “molecular clock” to estimate the time elapsed since two related populations diverged from their last common ancestor. (This is biology, so naturally the details are fantastically complicated, messy, subtle, and difficult to apply in practice, but the general outline is as described above.)
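
To make the “molecular clock” idea concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption chosen for illustration only (the mutation rate, sequence length, and count of differences are hypothetical, not figures from the book), and, as the parenthetical above warns, real analyses are far messier.

    # Toy molecular-clock estimate: two lineages accumulate mutations
    # independently after they split, so the expected number of differences
    # between them is 2 * rate * sites * time.
    MUTATION_RATE = 2.0e-8    # assumed substitutions per site per year (illustrative)
    SITES_COMPARED = 16_500   # roughly the length of the human mitochondrial genome
    DIFFERENCES = 30          # assumed differences observed between the two lineages

    years_since_divergence = DIFFERENCES / (2 * MUTATION_RATE * SITES_COMPARED)
    print(f"Estimated divergence: about {years_since_divergence:,.0f} years ago")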

Even without access to the genomes of long-dead ancestors (which are difficult in the extreme to obtain and fraught with potential sources of error), the genomes of current populations provide a record of their ancestry, geographical origin, migrations, conquests and subjugations, isolation or intermarriage, diseases and disasters, population booms and busts, sources of food, and, by inference, language, social structure, and technologies. This book provides a look at the current state of research in the rapidly expanding field of genetic anthropology, and it makes for an absolutely compelling narrative of the human adventure. Obviously, in a work where the overwhelming majority of source citations are to work published in the last decade, this is a description of work in progress and most of the deductions made should be considered tentative pending further results.

Genomic investigation has shed light on puzzles as varied as the size of the initial population of modern humans who left Africa (almost certainly less than 1000, and possibly a single hunter-gatherer band of about 150), the date when wolves were domesticated into dogs and where it happened, the origin of wheat and rice farming, the domestication of cattle, the origin of surnames in England, and the genetic heritage of the randiest conqueror in human history, Genghis Khan, who, based on Y chromosome analysis, appears to have about 16 million living male descendants today.

Some of the results from molecular anthropology run the risk of being so at variance with the politically correct ideology of academic soft science that the author, a New York Times reporter, tiptoes around them with the mastery of prose which on other topics he deploys toward their elucidation. Chief among these is the discussion of the microcephalin and ASPM genes on pp. 97–99. (Note that genes are often named based on syndromes which result from deleterious mutations within them, and hence bear names opposite to their function in the normal organism. For example, the gene which triggers the cascade of eye formation in Drosophila is named eyeless.) Both of these genes appear to regulate brain size and, in particular, the development of the cerebral cortex, which is the site of higher intelligence in mammals. Specific alleles of these genes are of recent origin, and are unequally distributed geographically among the human population. Haplogroup D of microcephalin appeared in the human population around 37,000 years ago (all of these estimates have a large margin of error), which is just about the time when quintessentially modern human behaviour such as cave painting appeared in Europe. Today, about 70% of the population of Europe and East Asia carry this allele, but its incidence in populations in sub-Saharan Africa ranges from 0 to 25%. The ASPM gene exists in two forms: a “new” allele which arose only about 5800 years ago (coincidentally[?] just about the time when cities, agriculture, and written language appeared), and an “old” form which predates this period. Today, the new allele occurs in about 50% of the population of the Middle East and Europe, but hardly at all in sub-Saharan Africa. Draw your own conclusions from this about the potential impact on human history when germline gene therapy becomes possible, and why opposition to it may not be the obvious ethical choice.

January 2007 Permalink

Wade, Nicholas. A Troublesome Inheritance. New York: Penguin Press, 2014. ISBN 978-1-59420-446-3.
Geographically isolated populations of a species (unable to interbreed with others of their kind) will be subject to natural selection based upon their environment. If that environment differs from that of other members of the species, the isolated population will begin to diverge genetically, as genetic endowments which favour survival and more offspring are selected for. If the isolated population is sufficiently small, the mechanism of genetic drift may cause a specific genetic variant to become almost universal or absent in that population. If this process is repeated for a sufficiently long time, isolated populations may diverge to such a degree they can no longer interbreed, and therefore become distinct species.

None of this is controversial when discussing other species, but in some circles to suggest that these mechanisms apply to humans is the deepest heresy. This well-researched book examines the evidence, much from molecular biology which has become available only in recent years, for the diversification of the human species into distinct populations, or “races” if you like, after its emergence from its birthplace in Africa. In this book the author argues that human evolution has been “recent, copious, and regional” and presents the genetic evidence to support this view.

A few basic facts should be noted at the outset. All humans are members of a single species, and all can interbreed. Humans, as a species, have an extremely low genetic diversity compared to most other animal species: this suggests that our ancestors went through a genetic “bottleneck” where the population was reduced to a very small number, causing the variation observed in other species to be lost through genetic drift. You might expect different human populations to carry different genes, but this is not the case—all humans have essentially the same set of genes. Variation among humans is mostly a result of individuals carrying different alleles (variants) of a gene. For example, eye colour in humans is entirely inherited: a baby's eye colour is determined completely by the alleles of various genes inherited from the mother and father. You might think that variation among human populations is then a question of their carrying different alleles of genes, but that too is an oversimplification. Human genetic variation is, in most cases, a matter of the frequency of alleles among the population.

This means that almost any generalisation about the characteristics of individual members of human populations with different evolutionary histories is ungrounded in fact. The variation among individuals within populations is generally much greater than that of populations as a whole. Discrimination based upon an individual's genetic heritage is not just abhorrent morally but scientifically unjustified.

Based upon these now well-established facts, some have argued that “race does not exist” or is a “social construct”. While this view may be motivated by a well-intentioned desire to eliminate discrimination, it is increasingly at variance with genetic evidence documenting the history of human populations.

Around 200,000 years ago, modern humans emerged in Africa. They spent more than three quarters of their history in that continent, spreading to different niches within it and developing a genetic diversity which today is greater than that of all humans in the rest of the world. Around 50,000 years before the present, by the genetic evidence, a small band of hunter-gatherers left Africa for the lands to the north. Then, some 30,000 years ago, the descendants of these bands who migrated to the east and west largely ceased to interbreed and separated into what we now call the Caucasian and East Asian populations. Together with the ancestral African population, these have remained the three main groups within the human species. Subsequent migrations and isolations have created other populations such as Australian and American aborigines, but their differentiation from the three main races is less distinct. Later migrations, conquest, and intermarriage have blurred the distinctions between these groups, but the fact is that almost any child, shown a picture of a person of European, African, or East Asian ancestry, can almost always effortlessly and correctly identify their area of origin. University professors, not so much: it takes an intellectual to deny the evidence of one's own eyes.

As these largely separated populations adapted to their new homes, selection operated upon their genomes. In the ancestral human population children lost the ability to digest lactose, the sugar in milk, after being weaned from their mothers' milk. But in populations which domesticated cattle and developed dairy farming, parents who passed on an allele which would allow their children to drink cow's milk their entire life would have more surviving offspring and, in a remarkably short time on the evolutionary scale, lifetime lactose tolerance became the norm in these areas. Among populations which never raised cattle or used them only for meat, lifetime lactose tolerance remains rare today.

Humans in Africa originally lived close to the equator and had dark skin to protect them from the ultraviolet radiation of the Sun. As human bands occupied northern latitudes in Europe and Asia, dark skin would prevent them from being able to synthesise sufficient Vitamin D from the wan, oblique sunlight of northern winters. These populations were under selection pressure for alleles of genes which gave them lighter skin, but interestingly Europeans and East Asians developed completely different genetic means to lighten their skin. The selection pressure was the same, but evolution blundered into two distinct pathways to meet the need.

Can genetic heritage affect behaviour? There's evidence it can. Humans carry a gene called MAO-A, whose product, the enzyme monoamine oxidase A, breaks down neurotransmitters that affect the transmission of signals within the brain. Experiments in animals have provided evidence that under-production of MAO-A increases aggression, and humans with lower levels of MAO-A are found to be more likely to commit violent crime. MAO-A production is regulated by a short sequence of DNA adjacent to the gene: humans may have anywhere from two to five copies of the promoter; the more copies you have, the more MAO-A you produce, and hence the mellower you're likely to be. Well, actually, people with three to five copies are indistinguishable, but those with only two (2R) show higher rates of delinquency. Among men of African ancestry, 5.5% carry the 2R variant, while 0.1% of Caucasian males and 0.00067% of East Asian men do. Make of this what you will.

The author argues that just as the introduction of dairy farming tilted the evolutionary landscape in favour of those bearing the allele which allowed them to digest milk into adulthood, the transition of tribal societies to cities, states, and empires in Asia and Europe exerted a selection pressure upon the population which favoured behavioural traits suited to living in such societies. While a tribal society might benefit from producing a substantial population of aggressive warriors, an empire has little need of them: its armies are composed of soldiers, courageous to be sure, who follow orders rather than charging independently into battle. In such a society, the genetic traits which are advantageous in a hunter-gatherer or tribal society will be selected out, as those carrying them will, if not expelled or put to death for misbehaviour, be unable to raise as large a family in these settled societies.

Perhaps, what has been happening over the last five millennia or so is a domestication of the human species. Precisely as humans have bred animals to live with them in close proximity, human societies have selected for humans who are adapted to prosper within them. Those who conform to the social hierarchy, work hard, come up with new ideas but don't disrupt the social structure will have more children and, over time, whatever genetic predispositions there may be for these characteristics (which we don't know today) will become increasingly common in the population. It is intriguing that as humans settled into fixed communities, their skeletons became less robust. This same process of gracilisation is seen in domesticated animals compared to their wild congeners. Certainly there have been as many human generations since the emergence of these complex societies as have sufficed to produce major adaptation in animal species under selective breeding.

Far more speculative and controversial is whether this selection process has been influenced by the nature of the cultures and societies which create the selection pressure. East Asian societies tend to be hierarchical, obedient to authority, and organised on a large scale. European societies, by contrast, are fractious, fissiparous, and prone to bottom-up insurgencies. Is this in part the result of genetic predispositions which have been selected for over millennia in societies which work that way?

It is assumed by many right-thinking people that all that is needed to bring liberty and prosperity to those regions of the world which haven't yet benefited from them is to create the proper institutions, educate the people, and bootstrap the infrastructure, then stand back and watch them take off. Well, maybe—but the history of colonialism, the mission civilisatrice, and various democracy projects and attempts at nation building over the last two centuries may suggest it isn't that simple. The population of the colonial, conquering, or development-aid-giving power has the benefit of millennia of domestication and adaptation to living in a settled society with division of labour. Its adaptations for tribalism have been largely bred out. Not so in many cases for the people they're there to “help”. Withdraw the colonial administration or occupation troops and before long tribalism will re-assert itself because that's the society for which the people are adapted.

Suggesting things like this is anathema in academia or political discourse. But look at the plain evidence of post-colonial Africa and more recent attempts of nation-building, and couple that with the emerging genetic evidence of variation in human populations and connections to behaviour and you may find yourself thinking forbidden thoughts. This book is an excellent starting point to explore these difficult issues, with numerous citations of recent scientific publications.

December 2014 Permalink

Wade, Wyn Craig. The Titanic: End of a Dream. New York: Penguin, 1986. ISBN 0-14-016691-2.

September 2001 Permalink

Weightman, Gavin. The Frozen-Water Trade. New York: Hyperion, 2003. ISBN 0-7868-8640-4.
Those who scoff at the prospect of mining lunar Helium-3 as fuel for Earth-based fusion power plants might ponder the fact that, starting in 1833, British colonists in India beat the sweltering heat of the subcontinent with a steady, year-round supply of ice cut in the winter from ponds and rivers in Massachusetts and Maine and shipped in the holds of wooden sailing ships—a voyage of some 25,000 kilometres and 130 days. In 1870 alone, 17,000 tons of ice were imported by India in ships sailing from Boston. Frederic Tudor, who first conceived the idea of shipping winter ice, previously considered worthless, to the tropics, was essentially single-handedly responsible for ice and refrigeration becoming a fixture of daily life in Western communities around the world. Tudor found fortune and fame in creating an industry based on a commodity which beforehand simply melted away every spring. No technological breakthrough was required or responsible—this is a classic case of creating a market by filling a need of which customers were previously unaware. In the process, Tudor suffered just about every adversity one can imagine and never gave up, an excellent illustration that the one essential ingredient of entrepreneurial success is the ability to “take a whacking and keep on hacking”.

April 2004 Permalink

Weightman, Gavin. The Frozen Water Trade. New York: Hyperion, [2003] 2004. ISBN 978-0-7868-8640-1.
In the summer of 1805, two brothers, Frederic and William Tudor, both living in the Boston area, came up with an idea for a new business which would surely make their fortune. Every winter, fresh water ponds in Massachusetts froze solid, often to a depth of a foot or more. Come spring, the ice would melt.

This cycle had repeated endlessly since before humans came to North America, unremarked upon by anybody. But the Tudor brothers, in the best spirit of Yankee ingenuity, looked upon the ice as an untapped and endlessly renewable natural resource. What if this commodity, considered worthless, could be cut from the ponds and rivers, stored in a way that would preserve it over the summer, and shipped to southern states and the West Indies, where plantation owners and prosperous city dwellers would pay a premium for this luxury in times of sweltering heat?

In an age when artificial refrigeration did not exist, that “what if” would have seemed so daunting as to deter most people from entertaining the notion for more than a moment. Indeed, the principles of thermodynamics, which underlie both the preservation of ice in warm climates and artificial refrigeration, would not be worked out until decades later. In 1805, Frederic Tudor started his “Ice House Diary” to record the progress of the venture, inscribing it on the cover, “He who gives back at the first repulse and without striking the second blow, despairs of success, has never been, is not, and never will be, a hero in love, war or business.” It was in this spirit that he carried on in the years to come, confronting a multitude of challenges unimagined at the outset.

First was the question of preserving the ice through the summer, while in transit, and upon arrival in the tropics until it was sold. Some farmers in New England already harvested ice from their ponds and stored it in ice houses, often built of stone and underground. This was sufficient to preserve a modest quantity of ice through the summer, but Frederic would need something on a much larger scale and less expensive for the trade he envisioned, and then there was the problem of keeping the ice from melting in transit. Whenever ice is kept in an environment with an ambient temperature above freezing, it will melt, but the rate at which it melts depends upon how it is stored. It is essential that the meltwater be drained away, because if the ice is allowed to stand in it the rate of melting is accelerated: water conducts heat more readily than air. Melting ice must absorb its latent heat of fusion from its surroundings, and warm, humid air trapped in a sealed ice house hastens melting; it is imperative that the ice house be well ventilated to allow this warm air to escape. Insulation which slows the flow of heat from the outside helps to reduce the rate of melting, but care must be taken to prevent the insulation from becoming damp from the meltwater, as that would destroy its insulating properties.
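
To get a feel for the quantities involved, here is a small illustrative sketch in Python using the textbook latent heat of fusion of water; the one-ton and heat-leak figures are assumptions for illustration, not numbers from the book.

    # Rough heat budget for stored ice (illustrative figures only).
    LATENT_HEAT = 334e3    # J absorbed per kg of ice melted
    TON_KG = 907.0         # kilograms in a short ton

    heat_to_melt_one_ton = LATENT_HEAT * TON_KG
    print(f"Melting one ton of ice absorbs about {heat_to_melt_one_ton / 1e6:.0f} MJ")

    # If insulation holds the heat leaking into the ice house to, say, 500 W
    # (an assumed figure), that leak melts only:
    LEAK_WATTS = 500.0
    kg_melted_per_day = LEAK_WATTS * 86_400 / LATENT_HEAT
    print(f"A {LEAK_WATTS:.0f} W heat leak melts about {kg_melted_per_day:.0f} kg of ice per day")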

Based upon what was understood about the preservation of ice at the time and his own experiments, Tudor designed an ice house for Havana, Cuba, one of the primary markets he was targeting, which would become the prototype for ice houses around the world. The structure was built of timber, with double walls, the cavity between the walls filled with insulation of sawdust and peat. The walls and roof kept the insulation dry, and the entire structure was elevated to allow meltwater to drain away. The roof was ventilated to allow the hot air from the melting ice to dissipate. Tightly packing blocks of uniform size and shape allowed the outer blocks of ice to cool those inside, and melting would be primarily confined to blocks on the surface of the ice stored.

During shipping, ice was packed in the hold of ships, insulated by sawdust, and crews were charged with regularly pumping out meltwater, which could be used as an on-board source of fresh water or disposed of overboard. Sawdust was produced in great abundance by the sawmills of Maine, and was considered a waste product, often disposed of by dumping it in rivers. Frederic Tudor had invented a luxury trade whose product was available for the price of harvesting it, and protected in shipping by a material considered to be waste.

The economics of the ice business exploited an imbalance in Boston's shipping business. Massachusetts produced few products for export, so ships trading with the West Indies would often leave port with nearly empty holds, requiring rock ballast to keep the ship stable at sea. Carrying ice to the islands served as ballast, and was a cargo which could be sold upon arrival. After initial scepticism was overcome (would the ice all melt and sink the ship?), the ice trade outbound from Boston was an attractive proposition to ship owners.

In February 1806, the first cargo of ice sailed for the island of Martinique. The Boston Gazette reported the event as follows.

No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so ice had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought ice and used it to make ice cream, which was a sensation noted in the local newspaper.

The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, pursuing him for unpaid debts.

On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high, but as the design was refined they dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved.
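
The quoted figures can be checked with a couple of lines of arithmetic. The sketch below assumes short tons of 2,000 pounds, which may not be exactly the measure the book uses, but the result lands close to the fifteen months claimed.

    # At a melting loss of 18 pounds per hour, how long does 100 tons of ice last?
    LOSS_LB_PER_HOUR = 18
    CARGO_LB = 100 * 2_000          # assuming 2,000-pound short tons

    hours = CARGO_LB / LOSS_LB_PER_HOUR
    months = hours / 24 / 30.44     # average days per month
    print(f"{hours:,.0f} hours, or roughly {months:.1f} months")  # a bit over 15 months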

Regular shipments of ice to Cuba and Martinique began and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston.

With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry.

Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered: in 1870 alone, 17,000 tons of ice were imported by India.

While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished by depots along the rail lines.

In the 1880s the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, which was at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory.

Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality.

Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out.

When people speak of humans expanding from their home planet into the solar system and technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. In 1833, people in Calcutta were beating the heat with ice shipped half way around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic.

I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.

July 2016 Permalink

Weinberg, Steven. Facing Up. Cambridge, MA: Harvard University Press, 2001. ISBN 0-674-01120-1.
This is a collection of non-technical essays written between 1985 and 2000 by Nobel Prize winning physicist Steven Weinberg. Many discuss the “science wars”—the assault by postmodern academics on the claim that modern science is discovering objective truth (well, duh), but many other topics are explored, including string theory, Zionism, Alan Sokal's hoax at the expense of the unwitting (and witless) editors of Social Text, Thomas Kuhn's views on scientific revolutions, science and religion, and the comparative analysis of utopias. Weinberg applies a few basic principles to most things he discusses—I counted six separate defences of reductionism in modern science, most couched in precisely the same terms. You may find this book more enjoyable a chapter at a time over an extended period rather than in one big cover-to-cover gulp.

January 2005 Permalink

Weiner, Tim. Legacy of Ashes. New York: Doubleday, 2007. ISBN 978-0-385-51445-3.
I've always been amused by those overwrought conspiracy theories which paint the CIA as the spider at the centre of a web of intrigue, subversion, skullduggery, and ungentlemanly conduct stretching from infringements of the rights of U.S. citizens at home to covert intrusion into internal affairs in capitals around the globe. What this outlook, however entertaining, seemed to overlook in my opinion is that the CIA is a government agency, and millennia of experience demonstrate that long-established instruments of government (the CIA having begun operations in 1947) rapidly converge upon the intimidating, machine-like, and ruthless efficiency of the Post Office or the Department of Motor Vehicles. How probable was it that a massive bureaucracy, especially one which operated with little Congressional oversight and able to bury its blunders by classifying documents for decades, was actually able to implement its cloak and dagger agenda, as opposed to the usual choke and stagger one expects from other government agencies of similar staffing and budget? Defenders of the CIA and those who feared its menacing, malign competence would argue that while we find out about the CIA's blunders when operations are blown, stings end up getting stung, and moles and double agents are discovered, we never know about the successes, because they remain secret forever, lest the CIA's sources and methods be disclosed.

This book sets the record straight. The Pulitzer prize-winning author has covered U.S. intelligence for twenty years, most recently for the New York Times. Drawing on a wealth of material declassified since the end of the Cold War, most from the latter half of the 1990s and afterward, and extensive interviews with every living Director of Central Intelligence and numerous other agency figures, this is the first comprehensive history of the CIA based on the near-complete historical record. It is not a pretty picture.

Chartered to collect and integrate information, both from its own sources and those of other intelligence agencies, thence to present senior decision-makers with the data they need to formulate policy, from inception the CIA neglected its primary mission in favour of ill-conceived and mostly disastrous paramilitary and psychological warfare operations deemed “covert”, but which all too often became painfully overt when they blew up in the faces of those who ordered them. The OSS heritage of many of the founders of the CIA, combined with the proclivity of U.S. presidents to order covert operations which stretched the CIA's charter to its limits and occasionally beyond, created a litany of blunders and catastrophe which would be funny were it not so tragic for those involved, and did it not in many cases cast long shadows upon the present-day world.

While the clandestine service was tripping over its cloaks and impaling itself upon its daggers, the primary intelligence gathering mission was neglected and bungled to such an extent that the agency provided no warning whatsoever of Stalin's atomic bomb, the Korean War, the Chinese entry into that conflict, the Suez crisis, the Hungarian uprising, the building of the Berlin Wall, the Yom Kippur war of 1973, the Iranian revolution, the Soviet invasion of Afghanistan, the Iran/Iraq War, the fall of the Berlin Wall, the collapse of the Soviet Union, Iraq's invasion of Kuwait, the nuclear tests by India and Pakistan in 1998, and more. The spider at the centre of the web appears to have been wearing a blindfold and earplugs. (Oh, they did predict both the outbreak and outcome of the Six Day War—well, that's one!)

Not only have the recently-declassified documents shone a light onto the operations of the CIA, they provide a new perspective on the information from which decision-makers were proceeding in many of the pivotal events of the latter half of the twentieth century including Korea, the Cuban missile crisis, Vietnam, and the past and present conflicts in Iraq. This book completely obsoletes everything written about the CIA before 1995; the source material which has become available since then provides the first clear look into what was previously shrouded in secrecy. There are 154 pages of end notes in smaller type—almost a book in itself—which expand, often at great length, upon topics in the main text; don't pass them up. Given the nature of the notes, I found it more convenient to read them as an appendix rather than as annotations.

February 2008 Permalink

Wellum, Geoffrey. First Light. London: Penguin Books, 2002. ISBN 0-14-100814-8.
A U.S. edition is available, but as of this date only in hardcover.

January 2004 Permalink

White, Andrew Dickson. Fiat Money Inflation in France. Bayonne, NJ: Blackbird Books, [1876, 1896, 1912, 1914] 2011. ISBN 978-1-61053-004-0.
One of the most sure ways to destroy the economy, wealth, and morals of a society is monetary inflation: an inexorable and accelerating increase in the supply of money, which inevitably (if not always immediately) leads to ever-rising prices, collapse in saving and productive investment, and pauperisation of the working classes in favour of speculators and those with connections to the regime issuing the money.

In ancient times, debasement of the currency was accomplished by clipping coins or reducing their content of precious metal. Ever since Marco Polo returned from China with news of the tremendous innovation of paper money, unbacked paper currency (or fiat money) has been the vehicle of choice for states to loot their productive and thrifty citizens.

Between 1789 and 1796, a period encompassing the French Revolution, the French National Assembly issued assignats, paper putatively backed by the value of public lands seized from the Roman Catholic Church in the revolution. Assignats could theoretically be used to purchase these lands, and initially paid interest—they were thus a hybrid between a currency and a bond. The initial issue revived the French economy and rescued the state from bankruptcy but, as always happens, was followed by a second, third, and then a multitude of subsequent issues totally decoupled from the value of the land which was supposed to back them. This sparked an inflationary and eventually hyperinflationary spiral: savers were wiped out; manufacturing and commerce ground to a halt (owing to uncertainty, inability to invest, and supply shortages), so wages stagnated even as prices ran away to the upside; wealth was transferred wholesale from the general citizenry to speculators and well-connected bankers; and corruption ran rampant within the political class. The sequelæ of monetary debasement all played out as they always have and always will: wage and price controls, shortages, rationing, a rush to convert paper money into tangible assets as quickly as possible, capital and foreign exchange controls, prohibition on the ownership of precious metals and their confiscation, and a one-off “wealth tax” (until the second, and the third, and so on). Then there was the inevitable replacement of the discredited assignats with a new paper currency, the mandats, which rapidly blew up. Then came Napoleon, who restored precious metal currency; hyperinflation so often ends up with a dictator in power.

What is remarkable about this episode is that it happened in a country which had experienced the disastrous John Law paper money bubble of 1716–1720, within the living memory of some in the assignat era and certainly in the minds of the geniuses who decided to try paper money again because “this time is different”. When it comes to paper money, this time is never different.

This short book (or long pamphlet—the 1896 edition is just 92 pages) was originally written in 1876 by the author, a president of Cornell University, as a cautionary tale against advocates of paper money and free silver in the United States. It was subsequently revised and republished on each occasion the U.S. veered further toward unbacked or “elastic” paper money. It remains one of the most straightforward accounts of a hyperinflationary episode ever written, with extensive citations of original sources. For a more detailed account of the Weimar Republic inflation in 1920s Germany, see When Money Dies (May 2011); although the circumstances were very different, the similarities will be apparent, confirming that the laws of economics manifest here are natural laws just as much as gravitation and electromagnetism, and ignoring them never ends well.

If you are looking for a Kindle edition of this book, be sure to download a free sample of the book before purchasing. As the original editions of this work are in the public domain, anybody is free to produce an electronic edition, and there are some hideous ones available; look before you buy.

April 2013 Permalink

White, Rowland. Vulcan 607. London: Corgi Books, 2006. ISBN 978-0-552-15229-7.
The Avro Vulcan bomber was the backbone of Britain's nuclear deterrent from the 1950s until the end of the 1960s, when ballistic missile submarines assumed the primary deterrent mission. Vulcans remained in service thereafter as tactical nuclear weapons delivery platforms in support of NATO forces. In 1982, the aging Vulcan force was months from retirement when Argentina occupied the Falkland Islands, and Britain summoned all of its armed services to mount a response. The Royal Navy launched a strike force, but given the distance (about 8000 miles from Britain to the Falklands) it would take about two weeks to arrive. The Royal Air Force surveyed their assets and concluded that only the Vulcan, supported by the Handley Page Victor, a bomber converted to an aerial refueling tanker, would permit it to project power to such a distant theatre.

But there were difficulties—lots of them. First of all, the Vulcan had been dedicated to the nuclear mission for decades: none of the crews had experience dropping conventional bombs, and the bomb bay racks to dispense them had to be hunted down in scrap yards. No Vulcan had performed aerial refueling since 1971, because its missions were assumed to be short-range tactical sorties, and the refueling hardware had been stoppered. Crews were sent out to find and remove refueling probes from museum specimens to install on the bombers chosen for the mission. Simply navigating to a tiny island in the southern hemisphere in this pre-GPS era was a challenge—Vulcan crews had been trained to navigate by radar returns from the terrain, and there was no terrain whatsoever between their launch point on Ascension Island and landfall in the Falklands, so boffins figured out how to adapt navigation gear from obsolete VC10 airliners to the Vulcan and make it work. The Vulcan had no modern electronic countermeasures (ECM), rendering it vulnerable to Argentinian anti-aircraft defences, so an ECM pod from another aircraft was grafted onto its wing, fastening to a hardpoint which had never been used by a Vulcan. Finding it, and thereby knowing where to drill the holes, required dismantling the wing of another Vulcan.

If the preparations were remarkable, especially since they were thrown together in just a few weeks, the mission plan was audacious—so much so that one expects it would have been rejected as absurd if proposed as the plot of a James Bond film. Executing the mission to bomb the airfield on the Falkland Islands would involve two Vulcan bombers, one Nimrod marine patrol aircraft, thirteen Victor tankers, nineteen refuelings (including Victor to Victor and Victor to Vulcan), 1.5 million pounds of fuel, and ninety aircrew. And all of these resources, assembled and deployed in a single mission, managed to put just one crater in the airstrip in the Falkland Islands, denying it to Argentine fast jets, but allowing C-130 transports to continue to operate from it.

From a training, armament, improvisation, and logistics standpoint this was a remarkable achievement, and the author argues that its consequences, direct and indirect, effectively took the Argentine fast air fighter force and navy out of the conflict, and hence paved the way for the British reconquista of the islands. Today it seems quaint; you'd just launch a few cruise missiles at the airfield, cratering it and spreading area denial munitions, and that would be that, without risking a single airman. But they didn't have that option then, and so they did their best with what was available, and this epic story recounts how they pulled it off with hardware on the edge of retirement, re-purposed for a mission its designers never imagined, executed to a plan with no margin for error, on a schedule nobody would have thought possible absent wartime exigency. This is a tale of the Vulcan mission; if you're looking for a comprehensive account of the Falklands War, you'll have to look elsewhere. The Vulcan raid on the Falklands was one of those extraordinary grand gestures, like the Doolittle Raid on Japan, which cast a longer shadow in history than their direct consequences implied. After the Vulcan raid, nobody doubted the resolve of Britain, and the resulting pullback of the Argentine forces almost certainly reduced the cost of retaking the islands from the invader.

May 2010 Permalink

White, Rowland. Into the Black. New York: Touchstone, 2016. ISBN 978-1-5011-2362-7.
On April 12, 1981, coincidentally exactly twenty years after Yuri Gagarin became the first man to orbit the Earth in Vostok 1, the United States launched one of the most ambitious and risky manned space flights ever attempted. The flight of Space Shuttle Orbiter Columbia on its first mission, STS-1, would be the first time a manned spacecraft was launched with a crew on its first flight. (All earlier spacecraft were tested in unmanned flights before putting a crew at risk.) It would also be the first manned spacecraft to be powered by solid rocket boosters which, once lit, could not be shut down but had to be allowed to burn out. In addition, it would be the first flight test of the new Space Shuttle Main Engines, the most advanced and high performance rocket engines ever built, which had a record of exploding when tested on the ground. The shuttle would be the first space vehicle to fly back from space using wings and control surfaces to steer to a pinpoint landing. Instead of a one-shot ablative heat shield, the shuttle was covered by fragile silica tiles and reinforced carbon-carbon composite to protect its aluminium structure from reentry heating which, without thermal protection, would melt it in seconds. When returning to Earth, the shuttle would have to maneuver in a hypersonic flight regime in which no vehicle had ever flown before, then transition to supersonic and finally subsonic flight before landing. The crew would not control the shuttle directly, but fly it through redundant flight control computers which had never been tested in flight. Although the orbiter was equipped with ejection seats for the first four test flights, they could only be used in a small part of the flight envelope: for most of the mission everything simply had to work correctly for the ship and crew to return safely. Main engine start—ignition of the solid rocket boosters—and liftoff!

Even before the goal of landing on the Moon had been accomplished, it was apparent to NASA management that no national consensus existed to continue funding a manned space program at the level of Apollo. Indeed, in 1966, NASA's budget reached a peak which, as a fraction of the federal budget, has never been equalled. The Saturn V rocket was ideal for lunar landing missions but, expended with each flight, was so expensive to build and operate as to be unaffordable for suggested follow-on missions. After building fifteen Saturn V flight vehicles, only thirteen of which ever flew, Saturn V production was curtailed. With the realisation that the “cost is no object” days of Apollo were at an end, NASA turned its priorities to reducing the cost of space flight, and returned to a concept envisioned by Wernher von Braun in the 1950s: a reusable space ship.

You don't have to be a rocket scientist or rocket engineer to appreciate the advantages of reusability. How much would an airline ticket cost if they threw away the airliner at the end of every flight? If space flight could move to an airline model, where after each mission one simply refueled the ship, performed routine maintenance, and flew again, it might be possible to reduce the cost of delivering payload into space by a factor of ten or more. But flying into space is much more difficult than atmospheric flight. With the technologies and fuels available in the 1960s (and today), it appeared next to impossible to build a launcher which could get to orbit with just a single stage (and even if one managed to accomplish it, its payload would be negligible). That meant any practical design would require a large booster stage and a smaller second stage which would go into orbit, perform the mission, then return.
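
The difficulty of reaching orbit with a single stage follows directly from the rocket equation. The sketch below uses round, assumed numbers (a delta-v requirement of roughly 9.3 km/s to low Earth orbit including losses, and a hydrogen/oxygen specific impulse of about 450 seconds); these are textbook-style illustrative figures, not values from the book.

    import math

    # Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m_initial / m_final)
    G0 = 9.81          # m/s^2
    ISP = 450.0        # s, assumed vacuum Isp for a LOX/LH2 engine
    DELTA_V = 9300.0   # m/s, assumed requirement for low Earth orbit with losses

    mass_ratio = math.exp(DELTA_V / (ISP * G0))
    propellant_fraction = 1.0 - 1.0 / mass_ratio
    print(f"Required mass ratio: {mass_ratio:.1f}")
    print(f"Propellant fraction at liftoff: {propellant_fraction:.0%}")
    # Roughly 88% of the vehicle must be propellant, leaving about 12% for
    # tanks, engines, structure, wings, thermal protection, and payload;
    # that is why a single-stage, fully reusable launcher looked so hard.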

Initial design concepts envisioned a very large (comparable to a Boeing 747) winged booster to which the orbiter would be attached. At launch, the booster would lift itself and the orbiter from the pad and accelerate to a high velocity and altitude where the orbiter would separate and use its own engines and fuel to continue to orbit. After separation, the booster would fire its engines to boost back toward the launch site, where it would glide to a landing on a runway. At the end of its mission, the orbiter would fire its engines to de-orbit, then reenter the atmosphere and glide to a landing. Everything would be reusable. For the next mission, the booster and orbiter would be re-mated, refuelled, and readied for launch.

Such a design had the promise of dramatically reducing costs and increasing flight rate. But it was evident from the start that such a concept would be very expensive to develop. Two separate manned spacecraft would be required, one (the booster) much larger than any built before, and the second (the orbiter) having to operate in space and survive reentry without discarding components. The orbiter's fuel tanks would be bulky, and make it difficult to find room for the payload and, when empty during reentry, hard to reinforce against the stresses they would encounter. Engineers believed all these challenges could be met with an Apollo era budget, but with no prospect of such funds becoming available, a more modest design was the only alternative.

Over a multitude of design iterations, the now-familiar architecture of the space shuttle emerged as the only one which could meet the mission requirements and fit within the schedule and budget constraints. Gone was the flyback booster, and with it full reusability. Two solid rocket boosters would be used instead, jettisoned when they burned out, to parachute into the ocean and be fished out by boats for refurbishment and reuse. The orbiter would not carry the fuel for its main engines. Instead, it was mounted on the side of a large external fuel tank which, upon reaching orbit, would be discarded and burn up in the atmosphere. Only the orbiter, with its crew and payload, would return to Earth for a runway landing. Each mission would require either new or refurbished solid rocket boosters, a new external fuel tank, and the orbiter.

The mission requirements which drove the design were not those NASA would have chosen for the shuttle were the choice theirs alone. The only way NASA could “sell” the shuttle to the president and congress was to present it as a replacement for all existing expendable launch vehicles. That would assure a flight rate sufficient to achieve the economies of scale required to drive down costs and reduce the cost of launch for military and commercial satellite payloads as well as NASA missions. But that meant the shuttle had to accommodate the large and heavy reconnaissance satellites which had been launched on Titan rockets. This required a huge payload bay (15 feet wide by 59 feet long) and a payload to low Earth orbit of 60,000 pounds. Further Air Force requirements dictated a large cross-range (ability to land at destinations far from the orbital ground track), which in turn required a hot and fast reentry very demanding on the thermal protection system.

The shuttle represented, in a way, the unification of NASA with the Air Force's own manned space ambitions. Ever since the start of the space age, the Air Force had sought a way to develop its own manned military space capability. Every time it managed to get a program approved (first Dyna-Soar and then the Manned Orbiting Laboratory), budget considerations and Pentagon politics resulted in its cancellation, orphaning a corps of highly qualified military astronauts with nothing to fly. Many of these pilots would join the NASA astronaut corps in 1969 and become the backbone of the early shuttle program when they finally began to fly more than a decade later.

All seemed well on board. The main engines shut down. The external fuel tank was jettisoned. Columbia was in orbit. Now weightless, commander John Young and pilot Bob Crippen immediately turned to the flight plan, filled with tasks and tests of the orbiter's systems. One of their first jobs was to open the payload bay doors. The shuttle carried no payload on this first flight, but only when the doors were open could the radiators that cooled the shuttle's systems be deployed. Without the radiators, an emergency return to Earth would be required lest electronics be damaged by overheating. The doors and radiators functioned flawlessly, but with the doors open Young and Crippen saw a disturbing sight. Several of the thermal protection tiles on the pods containing the shuttle's maneuvering engines were missing, apparently lost during the ascent to orbit. Those tiles were there for a reason: without them the heat of reentry could melt the aluminium structure they protected, leading to disaster. They reported the missing tiles to mission control, adding that none of the other tiles they could see from windows in the crew compartment appeared to be missing.

The tiles had been a major headache during development of the shuttle. They had to be custom fabricated, carefully applied by hand, and were prone to falling off for no discernible reason. They were extremely fragile, and could even be damaged by raindrops. Over the years, NASA struggled with these problems, patiently finding and testing solutions to each of them. When STS-1 launched, they were confident the tile problems were behind them. What the crew saw when those payload bay doors opened was the last thing NASA wanted to see. A team was set to analysing the consequences of the missing tiles on the engine pods, and quickly reported back that they should pose no problem to a safe return. The pods were protected from the most severe heating during reentry by the belly of the orbiter, and the small number of missing tiles would not affect the aerodynamics of the orbiter in flight.

But if those tiles were missing, mightn't other tiles also have been lost? In particular, what about those tiles on the underside of the orbiter which bore the brunt of the heating? If some of them were missing, the structure of the shuttle might burn through and the vehicle and crew would be lost. There was no way for the crew to inspect the underside of the orbiter. It couldn't be seen from the crew cabin, and there was no way to conduct an EVA to examine it. Might there be other, shall we say, national technical means of inspecting the shuttle in orbit? Now STS-1 truly ventured into the black, a story never told until many years after the mission and documented thoroughly for a popular audience here for the first time.

In 1981, ground-based surveillance of satellites in orbit was rudimentary. Two Department of Defense facilities, in Hawaii and Florida, normally used to image Soviet and Chinese satellites, were now tasked to try to image Columbia in orbit. This was a daunting task: the shuttle was in a low orbit, which meant waiting until an orbital pass would cause it to pass above one of the telescopes. It would be moving rapidly so there would be only seconds to lock on and track the target. The shuttle would have to be oriented so its belly was aimed toward the telescope. Complicating the problem, the belly tiles were black, so there was little contrast against the black of space. And finally, the weather had to cooperate: without a perfectly clear sky, there was no hope of obtaining a usable image. Several attempts were made, all unsuccessful.

But there were even deeper black assets. The National Reconnaissance Office (whose very existence was a secret at the time) had begun to operate the KH-11 KENNEN digital imaging satellites in the 1970s. Unlike earlier spysats, which exposed film and returned it to the Earth for processing and interpretation, the KH-11 had a digital camera and the ability to transmit imagery to ground stations shortly after it was captured. There were few things so secret in 1981 as the existence and capabilities of the KH-11. Among the people briefed in on this above top secret program were the NASA astronauts who had previously been assigned to the Manned Orbiting Laboratory program which was, in fact, a manned reconnaissance satellite with capabilities comparable to the KH-11.

Dancing around classification, compartmentalisation, bureaucratic silos, need to know, and other barriers, people who understood what was at stake made it happen. The flight plan was rewritten so that Columbia was pointed in the right direction at the right time; the KH-11 was programmed for the extraordinarily difficult task of taking a photo of one satellite from another when their closing velocities are kilometres per second; and the imagery was relayed to the ground and delivered to the NASA people who needed it without the months of security clearance processing that would normally be required. The shuttle was a key national security asset. It was to launch all reconnaissance satellites in the future. Reagan was in the White House. They made it happen. When the time came for Columbia to come home, the very few people who mattered in NASA knew that, however many other things they had to worry about, the tiles on the belly were not among them.

(How different it was in 2003 when the same Columbia suffered a strike on its left wing from foam shed from the external fuel tank. A thoroughly feckless and bureaucratised NASA rejected requests to ask for reconnaissance satellite imagery which, with two decades of technological improvement, would have almost certainly revealed the damage to the leading edge which doomed the orbiter and crew. Their reason: “We can't do anything about it anyway.” This is incorrect. For a fictional account of a rescue, based upon the report [PDF, scroll to page 173] of the Columbia Accident Investigation Board, see Launch on Need [February 2012].)

This is a masterful telling of a gripping story by one of the most accomplished of aerospace journalists. Rowland White is the author of Vulcan 607 (May 2010), the definitive account of the Royal Air Force raid on the airport in the Falkland Islands in 1982. Incorporating extensive interviews with people who were there, then, and sources which remained classified until long after the completion of the mission, this is a detailed account of one of the most consequential and least appreciated missions in U.S. manned space history, and it reads like a techno-thriller.

September 2016 Permalink

Williams, Andrew. The Battle of the Atlantic. New York: Basic Books, 2003. ISBN 0-465-09153-9.

May 2003 Permalink

Winchester, Simon. The Map that Changed the World. New York: HarperCollins, 2001. ISBN 0-06-093180-9.
This is the story of William Smith, the son of an Oxfordshire blacksmith, who, with almost no formal education but keen powers of observation and deduction, essentially single-handedly created the modern science of geology in the last years of the 18th and the beginning of the 19th century, culminating in the 1815 publication of Smith's masterwork: a large scale map of the stratigraphy of England, Wales, and part of Scotland, which is virtually identical to the most modern geological maps. Although fossil collecting was a passion of the aristocracy in his time, Smith was the first to observe that particular fossil species were always associated with the same stratum of rock and hence, conversely, that rock containing the same population of fossils was the same stratum, wherever it was found. This permitted him to decode the layering of strata and their relative ages, and predict where coal and other minerals were likely to be found, which was a matter of great importance at the dawn of the industrial revolution. In his long life, in addition to inventing modern geology (he coined the word “stratigraphical”), he surveyed mines, built canals, operated a quarry, was the victim of plagiarism, designed a museum, served time in debtor's prison, was denied membership in the newly-formed Geological Society of London due to his humble origins, yet years later was the first recipient of its highest award, the Wollaston Medal, presented to him as the “Father of English Geology”. Smith's work transformed geology from a pastime for fossil collectors and spinners of fanciful theories to a rigorous empirical science and laid the bedrock (if you'll excuse the term) for Darwin and the modern picture of the history of the Earth. The author is very fond of superlatives. While Smith's discoveries, adventures, and misadventures certainly merit them, they get a little tedious after a hundred pages or so. Winchester seems to have been traumatised by his childhood experiences in a convent boarding-school (chapter 11), and he avails himself of every possible opportunity to express his disdain for religion, the religious, and those (the overwhelming majority of learned people in Smith's time) who believed in the Biblical account of creation and the flood. This is irrelevant to and a distraction from the story. Smith's career marked the very beginning of scientific investigation of natural history; when Smith's great geological map was published in 1815, Charles Darwin was six years old. Smith never suffered any kind of religious persecution or opposition to his work, and several of his colleagues in the dawning days of earth science were clergymen. Simon Winchester is also the author of The Professor and the Madman, the story of the Oxford English Dictionary.

August 2004 Permalink

Wolfe, Tom. The Kingdom of Speech. New York: Little, Brown, 2016. ISBN 978-0-316-40462-4.
In this short (192-page) book, Tom Wolfe returns to his roots in the “new journalism”, of which he was a pioneer in the 1960s. Here the topic is the theory of evolution; the challenge posed to it by human speech (because no obvious precursor to speech occurs in other animals); attempts, from Darwin to Noam Chomsky, to explain this apparent discrepancy and preserve the status of evolution as a “theory of everything”; and the evidence collected by linguist and anthropologist Daniel Everett among the Pirahã people of the Amazon basin in Brazil, which appears to falsify Chomsky's lifetime of work on the origin of human language and the universality of its structure. A second theme is the contrast between theorists and intellectuals such as Darwin and Chomsky and “flycatchers” such as Alfred Russel Wallace, Darwin's rival for priority in publishing the theory of evolution, and Daniel Everett, who work in the field—often in remote, unpleasant, and dangerous conditions—to collect the data upon which the grand thinkers erect their castles of hypothesis.

Doubtless fearful of the reaction if he suggested the theory of evolution applied to the origin of humans, in his 1859 book On the Origin of Species, Darwin only tiptoed close to the question two pages from the end, writing, “In the distant future, I see open fields for far more important researches. Psychology will be securely based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history.” He needn't have been so cautious: he fooled nobody. The very first review, five days before publication, asked, “If a monkey has become a man—…?”, and the tempest was soon at full force.

Darwin's critics, among them Max Müller, German-born professor of languages at Oxford, and Darwin's rival Alfred Wallace, seized upon human characteristics which had no obvious precursors in the animals from which man was supposed to have descended: a hairless body, the capacity for abstract thought, and, Müller's emphasis, speech. As Müller said, “Language is our Rubicon, and no brute will dare cross it.” How could Darwin's theory, which claimed to describe evolution from existing characteristics in ancestor species, explain completely novel properties which animals lacked?

Darwin responded with his 1871 The Descent of Man, and Selection in Relation to Sex, which explicitly argued that there were precursors to these supposedly novel human characteristics among animals, and that, for example, human speech was foreshadowed by the mating songs of birds. Sexual selection was suggested as the mechanism by which humans lost their hair, and the roots of a number of human emotions and even religious devotion could be found in the behaviour of dogs. Many found these arguments, presented without any concrete evidence, unpersuasive. The question of the origin of language had become so controversial and toxic that a year later, the Philological Society of London announced it would no longer accept papers on the subject.

With the rediscovery of Gregor Mendel's work on genetics and subsequent research in the field, a mechanism which could explain Darwin's evolution was in hand, and the theory became widely accepted, with the few discrepancies set aside (as the Philological Society had done with language) as things we weren't yet ready to figure out.

In the years after World War II, the social sciences became afflicted by a case of “physics envy”. The contributions to the war effort by their colleagues in the hard sciences in areas such as radar, atomic energy, and aeronautics had been handsomely rewarded with prestige and funding, while the squishier sciences remained in a prewar languor along with the departments of Latin, Medieval History, and Drama. Clearly, what was needed was for these fields to adopt a theoretical approach grounded in mathematics, which had served so well for chemists, physicists, and engineers, and appeared to be working for the new breed of economists.

It was into this environment that, in the late 1950s, a young linguist named Noam Chomsky burst onto the scene. Over its century and a half of history, much of the work of linguistics had been cataloguing and studying the thousands of languages spoken by people around the world, much as entomologists and botanists (or, in the pejorative term of Darwin's age, flycatchers) travelled to distant lands to discover the diversity of nature and try to make sense of how it was all interrelated. In his 1957 book, Syntactic Structures, Chomsky, then just twenty-eight years old and working in the building at MIT where radar had been developed during the war, said all of this tedious and messy field work was unnecessary. Humans had evolved (note, “evolved”) a “language organ”, an actual physical structure within the brain—the “language acquisition device”—which children used to learn and speak the language they heard from their parents. All human languages shared a “universal grammar”, on top of which all the details of specific languages so carefully catalogued in the field were just fluff, like the specific shape and colour of butterflies' wings. Chomsky invented the “Martian linguist”, a figure who came to feature in his lectures and who, he claimed, on arriving on Earth would quickly discover the unity underlying all human languages. No longer need the linguist leave his air-conditioned office. As Wolfe writes in chapter 4, “Now, all the new, Higher Things in a linguist's life were to be found indoors, at a desk…looking at learned journals filled with cramped type instead of at a bunch of hambone faces in a cloud of gnats.”

Given the alternatives, most linguists opted for the office, and for the prestige that a theory-based approach to their field conferred, and by the 1960s Chomsky's views had taken over linguistics, with only a few dissenters, at whom Chomsky hurled thunderbolts from his perch on academic Olympus. He transmuted into a general-purpose intellectual, pronouncing on politics, economics, philosophy, history, and whatever occupied his fancy, all with the confidence and certainty he brought to linguistics. Those who dissented he denounced as “frauds”, “liars”, or “charlatans”, including B. F. Skinner, Alan Dershowitz, Jacques Lacan, Elie Wiesel, Christopher Hitchens, and Jacques Derrida. (Well, maybe I agree when it comes to Derrida and Lacan.) In 2002, with two colleagues, he published a new theory claiming that recursion—embedding one thought within another—was a universal property of human language and a component of the universal grammar hard-wired into the brain.

Since 1977, Daniel Everett had been living with and studying the Pirahã in Brazil, originally as a missionary and later as an academic linguist trained and working in the Chomsky tradition. He was the first person to successfully learn the Pirahã language, and documented it in publications. In 2005 he published a paper in which he concluded that the language, one of the simplest ever described, contained no recursion whatsoever. It also lacked past and future tenses, words for relations more distant than parents and siblings, gender, numbers, and many other features found in other languages. But the absence of recursion falsified Chomsky's theory, which pronounced it a fundamental part of all human languages. Here was a field worker, a flycatcher, braving not only gnats but anacondas, caimans, and just about every tropical disease in the catalogue, knocking the foundation from beneath the great man's fairy castle of theory. Naturally, Chomsky and his acolytes responded with their customary vituperation (this time, the epithet of choice for Everett was “charlatan”). Just as they were preparing the academic paper which would drive a stake through this nonsense, Everett published Don't Sleep, There Are Snakes, a combined account of his thirty years with the Pirahã and an analysis of their language. The book became a popular hit and won numerous awards. In 2012, Everett followed up with Language: The Cultural Tool, which rejects Chomsky's view of language as an innate and universal human property in favour of the view that it is one among a multitude of artifacts created by human societies as a tool, and necessarily reflects the characteristics of those societies. Chomsky now refuses to discuss Everett's work.

In the conclusion, Wolfe comes down on the side of Everett, and argues that the solution to the mystery of how speech evolved is that it didn't evolve at all. Speech is simply a tool which humans used their big brains to invent to help them accomplish their goals, just as they invented bows and arrows, canoes, and microprocessors. It doesn't make any more sense to ask how evolution produced speech than it does to suggest it produced any of those other artifacts not made by animals. He further suggests that the invention of speech proceeded from initial use of sounds as mnemonics for objects and concepts, then progressed to more complex grammatical structure, but I found little evidence in his argument to back the supposition, nor is this a necessary part of viewing speech as an invented artifact. Chomsky's grand theory, like most theories made up without grounding in empirical evidence, is failing both by being falsified on its fundamentals by the work of Everett and others, and also by the failure, despite half a century of progress in neurophysiology, to identify the “language organ” upon which it is based.

It's somewhat amusing to see soft science academics rush to Chomsky's defence, when he's arguing that language is biologically determined as opposed to being, as Everett contends, a social construct whose details depend upon the cultural context which created it. A hunter-gatherer society such as the Pirahã, living in an environment where food is abundant and little changes over time scales from days to generations, doesn't need a language as complicated as that of an agricultural society with division of labour, and it shouldn't be a surprise to find their language is more rudimentary. Chomsky assumed that all human languages were universal (able to express any concept), in the sense David Deutsch defined universality in The Beginning of Infinity, but why should every people have a universal language when some cultures get along just fine without universal number systems or alphabets? Doesn't it make a lot more sense to conclude that people settle on a language, like any other tool, which gets the job done? Wolfe then argues that the capacity for speech is the defining characteristic of human beings, and enables all of the other human capabilities and accomplishments which animals lack. I'd consider this not proved. Why isn't the definitive human characteristic the ability to make tools, and language simply one among a multitude of tools humans have invented?

This book strikes me as one or two interesting blog posts struggling to escape from a snarknado of Wolfe's 1960s-style verbal fireworks, including Bango!, riiippp, OOOF!, and “a regular crotch crusher!”. At age 85, he's still got it, but I wonder whether he, or his editor, questioned whether this style of journalism is as effective when discussing evolutionary biology and linguistics as it is in mocking sixties radicals, hippies, or pretentious artists and architects. There is some odd typography as well. Grave accents are used in words like “learnèd”, presumably to indicate it's to be pronounced as two syllables, but then occasionally we get an acute accent instead—what's that supposed to mean? Chapter endnotes are given as superscript letters while source citations are superscript numbers, neither of which is easy to select on a touch-screen Kindle edition. There is no index.

January 2017 Permalink

Woodbury, David O. The Glass Giant of Palomar. New York: Dodd, Mead, [1939, 1948] 1953. LCCN 53000393.
I originally read this book when I was in junior high school—it was one of the few astronomy titles in the school's library. It's one of the grains of sand dropping on the pile which eventually provoked the avalanche that persuaded me I was living in the golden age of engineering and that I'd best spend my life making the most of it.

Seventy years after it was originally published (the 1948 and 1953 updates added only minor information on the final commissioning of the telescope and a collection of photos taken through it), this book still inspires respect for those who created the 200-inch Hale Telescope on Mount Palomar, and for the engineering challenges they faced and overcame in achieving that milestone in astronomical instrumentation. The book is as much a biography of George Ellery Hale as it is a story of the giant telescope he brought into being. Hale was a world-class scientist: he invented the spectroheliograph, discovered the magnetic fields of sunspots, and founded the Astrophysical Journal and, to a large extent, the field of astrophysics itself, but he also excelled as a promoter and fund-raiser for grand-scale scientific instrumentation. The Yerkes, Mount Wilson, and Palomar observatories would, in all likelihood, not have existed were it not for Hale's indefatigable salesmanship. And this was an age when persuasiveness was all. With the exception of the road to the top of Palomar, all of the observatories and their equipment promoted by Hale were funded without a single penny of taxpayer money. For the Palomar 200-inch, he raised US$6 million in gold-backed 1930 dollars, which in present-day paper funny-money amounts to US$78 million.

It was a very different America which built the Palomar telescope. Not only was it never even thought that money coercively taken from taxpayers would be diverted to pure science; anybody who wanted to contribute to the project, regardless of their academic credentials, was judged solely on their merits and given a position based upon their achievements. The chief optician who ground, polished, and figured the main mirror of the Palomar telescope (so perfectly that its potential would not be realised until recently, thanks to adaptive optics) had a sixth-grade education and was first employed at Mount Wilson as a truck driver. You can make of yourself what you have within yourself in America, so they say—so it was for Marcus Brown (p. 279). Milton Humason, who, with Edwin Hubble, discovered the expansion of the universe, dropped out of school at the age of 14 and began his astronomical career driving supplies up Mount Wilson on mule trains. You can make of yourself what you have within yourself in America, or at least you could then. Now we go elsewhere.

Is there anything Russell W. Porter didn't do? Arctic explorer, founder of the hobby of amateur telescope making, engineer, architect…his footprints and brushstrokes are all over technological creativity in the first half of the twentieth century. And he is much in evidence here: recruited in 1927, he did the conceptual design for most of the buildings of the observatory, and his cutaway drawings of the mechanisms of the telescope demonstrate to those endowed with contemporary computer graphics tools that the eye of the artist is far more important than the technology of the moment.

This book has been out of print for decades, but used copies (often, sadly, de-accessioned by public libraries) are generally available at prices (unless you're worried about cosmetics and collectability) comparable to present-day hardbacks. It's as good a read today as it was in 1962.

October 2009 Permalink

Young, Anthony. The Saturn V F-1 Engine. Chichester, UK: Springer Praxis, 2009. ISBN 978-0-387-09629-2.
The F-1 rocket engine powered the first (S-IC) stage of the Saturn V booster, which launched all of the Apollo missions to the Moon and, as a two-stage variant, the Skylab space station. It was one of the singular engineering achievements of the twentieth century, which this magnificent book chronicles in exquisite detail. When the U.S. Air Force contracted with Rocketdyne in 1958 for the preliminary design of a single-chamber engine with between 1 and 1.5 million pounds of thrust, the largest existing U.S. rocket engine had less than a quarter the maximum thrust of the proposed new powerplant, and there was no experience base to provide confidence that problems such as ignition transients and combustion instability, which bedevil liquid rockets, would not prove insuperable when scaling an engine to such a size. (The Soviets were known to have heavy-lift boosters, but at the time nobody knew their engine configuration. In fact, when their details came to be known in the West, they were discovered to use multiple combustion chambers and/or clustering of engines precisely to avoid the challenges of very large engines.)

When F-1 development began, there was no rocket on the drawing board intended to use it, nor any mission defined which would require it. The Air Force had simply established that such an engine would be adequate to accomplish any military mission in the foreseeable future. When NASA took over responsibility for heavy launchers from the Air Force, the F-1 engine became central to the evolving heavy lifters envisioned for missions beyond Earth orbit. After Kennedy's decision to mount a manned lunar landing mission, NASA embarked on a furious effort to define how such a mission could be accomplished and what hardware would be required to perform it. The only alternative to heavy lift would have been a large number of launches to assemble the Moon ship in Earth orbit, a daunting prospect at a time when rockets were famously unreliable and difficult to launch on time, and nobody had ever so much as attempted rendezvous in space, let alone orbital assembly or refuelling operations.

With the eventual choice of lunar orbit rendezvous as the mission mode, it became apparent that it would be possible to perform the lunar landing mission with a single launch of a booster with 7.5 million pounds of sea-level thrust, which could be obtained from a cluster of five F-1 engines (whose thrust NASA had by that time specified as 1.5 million pounds each). From the moment the preliminary design of the Saturn V was defined until Apollo 11 landed on the Moon, the definition, design, testing, and manufacturing of the F-1 engine was squarely on the critical path of the Apollo project. If the F-1 did not work, or was insufficiently reliable to perform in a cluster of five and launch on time in tight lunar launch windows, or could not be manufactured in the quantities required, there would be no lunar landing. If the schedule of the F-1 slipped, the Apollo project would slip day-for-day along with its prime mover.

This book recounts the history, rationale, design, development, testing, refinement, transition to serial production, integration into test articles and flight hardware, and service history of this magnificent machine. Sadly, at this remove, some of the key individuals involved in this project are no longer with us, but the author tracked down those who remain and discovered interviews done earlier by other researchers with the departed, and he stands back and lets them speak, in lengthy quotations, not just about the engineering and management challenges they faced and how they were resolved, but what it felt like to be there, then. You get the palpable sense from these accounts that despite the tension, schedule and budget pressure, long hours, and frustration as problem after problem had to be diagnosed and resolved, these people were having the time of their lives, and that they knew it at the time and cherish it even at a half century's remove. The author has collected more than a hundred contemporary photographs, many in colour, which complement the text.

A total of sixty-five F-1 engines powered thirteen Saturn V flight vehicles. They performed with 100% reliability.

January 2012 Permalink

Zabel, Bryce. Surrounded by Enemies. Minneapolis: Mill City Press, 2013. ISBN 978-1-62652-431-6.
What if John F. Kennedy had survived the assassination attempt in Dallas? That is the point of departure for this gripping alternative history novel by reporter, author, and screenwriter Bryce Zabel. Spared an assassin's bullet by a heroic Secret Service agent, a shaken Kennedy returns to Washington and convenes a small group from his most trusted inner circle, led by his brother Robert, the attorney general, to investigate who might have launched such an attack and what steps could be taken both to prevent a second attempt and to bring the perpetrators to justice.

Surveying the landscape, they conclude it might be easier to make a list of powerful forces who might not wish to kill the president. Kennedy's actions in office had given actors ranging from Cuba and anti-Castro groups in the U.S. to the Mafia, the FBI, the CIA, senior military commanders, the Secret Service, Texas oil interests, and even Vice President Johnson potential motivations to launch or condone an attack. At the same time, while pursuing their own quiet inquiry, they must try to avert a Congressional investigation which might turn into a partisan circus, diverting attention from their strategy for Kennedy's 1964 re-election campaign.

But in the snake pit which is Washington, there is more than one way to assassinate a man, and Kennedy's almost grotesque womanising and drug use (both he and his wife were regular patients of Max Jacobson, “Dr. Feelgood”, whose “tissue regenerator” injections were laced with amphetamines) provided the ammunition his enemies needed to try to bring him down by assassinating his character in the court of public opinion.

A shadowy figure begins passing FBI files to two reporters at Top Story, a recently launched news magazine struggling in the shadow of Time and Newsweek. After investigating the allegations and obtaining independent corroboration for some of them, Top Story runs a cover story on “The Secret Life of the President”, creating a firestorm of scrutiny of the president's private life by media which had never before considered such matters worthy of investigation or reporting.

The political implications quickly assume the dimensions of a constitutional crisis, in which the parties involved are forced to weigh appropriate sanctions for a president whose behaviour may have put national security at risk against actions which might give those who plotted to kill the president what they tried to achieve in Dallas with a bullet.

The plot deftly weaves historical events from the epoch with twists and turns which all follow logically from the point of departure, and the result is a very different history of the 1960s and 1970s which, to this reader who lived through those decades, seems entirely plausible. The author, who identifies himself in the introduction as “a lifelong Democrat”, brings no perceptible ideological or political agenda to the story—the characters are as complicated as the real people were, and behave in ways which are believable given the changed circumstances.

The story is told in a clever way: as a special issue of Top Story commemorating the 50th anniversary of the assassination attempt. Written in weekly news magazine style, the account can cite memoirs, recollections set down by those involved years after the events described, and documents which became available much later. There are a few goofs regarding historical events in the sixties which shouldn't have been affected by the alternative timeline, but readers who notice them can just chuckle and get on with the story. The book is almost entirely free of copy-editing errors.

This is a superb exemplar of alternative history, and Harry Turtledove, the cosmic grand master of the genre, contributes a foreword to the novel.

November 2013 Permalink