2014  

January 2014

Faulks, Sebastian. Jeeves and the Wedding Bells. London: Hutchinson, 2013. ISBN 978-0-09-195404-8.
Having been a fan of P. G. Wodehouse ever since I started reading his work in the 1970s, and having read every single Jeeves and Wooster story, I picked up this novel with some trepidation: it is the first Jeeves and Wooster story since Aunts Aren't Gentlemen, published in 1974, a year before Wodehouse's death. This book, published with the permission of the Wodehouse estate, is described by the author as a tribute to P. G. Wodehouse which he hopes will encourage readers to discover the work of the master.

The author notes that, while remaining true to the characters of Jeeves and Wooster and the ambience of the stories, he did not attempt to mimic Wodehouse's style. Nevertheless, to this reader the result is so close to Wodehouse that if you dropped it, unlabelled, into a Wodehouse collection, I suspect few readers would find anything discordant. Faulks's Jeeves seems to use more jaw-breaking words than I recall Wodehouse's doing, but that's about it. Apart from Jeeves and Wooster, none of the regular characters who populate Wodehouse's stories appear on stage here. We hear of members of the Drones, the terrifying Aunt Agatha, and others, with mentions of previous episodes involving them, but all of the other dramatis personæ are new.

On holiday in the south of France, Bertie Wooster makes the acquaintance of Georgiana Meadowes, a copy editor for a London publisher who has escaped the metropolis to finish marking up a manuscript. Bertie is immediately smitten, impressed by Georgiana's beauty, brains, and wit, albeit less so by her driving (“To say she drove in the French fashion would be to cast a slur on those fine people.”). Upon his return to London, Bertie soon reads that Georgiana has become engaged to the travel writer she had mentioned her family was urging her to marry. Meanwhile, one of Bertie's best friends, “Woody” Beeching, confides his own problem with the fairer sex. His fiancée has broken off the engagement because her parents, the Hackwoods, need their daughter to marry into wealth to save the family seat, which is at risk of being sold. Before long, Bertie discovers that the matrimonial plans of Georgiana and Woody are linked in a subtle but inflexible way, and that a delicate hand, acting with nuance, will be needed to ensure all ends well.

Evidently, a job crying out for the attention of Bertram Wilberforce Wooster! Into the fray Jeeves and Wooster go, and before long a quintessentially Wodehousean series of impostures, misadventures, misdirections, eccentric characters, disasters at the dinner table, and carefully crafted stratagems gone horribly awry ensues. If you are not acquainted with that game which the English, not being a very spiritual people, invented to give them some idea of eternity (G. B. Shaw), you may want to review the rules before reading chapter 7.

Doubtless some Wodehouse fans will consider any author's bringing Jeeves and Wooster back to life a sacrilege, but this fan simply relished the opportunity to meet them again in a new adventure which is entirely consistent with the Wodehouse canon and characters. I would have been dismayed had this been a parody or some “transgressive” despoliation of the innocent world these characters inhabit. Instead we have a thoroughly enjoyable romp in which the prodigious brain of Jeeves once again saves the day.

The U.K. edition is linked above. U.S. and worldwide Kindle editions are available.


Pooley, Charles and Ed LeBouthillier. Microlaunchers. Seattle: CreateSpace, 2013. ISBN 978-1-4912-8111-6.
Many fields of engineering are subject to scaling laws: as you make something bigger or smaller various trade-offs occur, and the properties of materials, cost, or other design constraints set limits on the largest and smallest practical designs. Rockets for launching payloads into Earth orbit and beyond tend to scale well as you increase their size. Because of the cube-square law, the volume of propellant a tank holds increases as the cube of the size while the weight of the tank goes as the square (actually a bit faster since a larger tank will require more robust walls, but for a rough approximation calling it the square will do). Viable rockets can get very big indeed: the Sea Dragon, although never built, is considered a workable design. With a length of 150 metres and 23 metres in diameter, it would have more than ten times the first stage thrust of a Saturn V and place 550 metric tons into low Earth orbit.
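
To make the cube-square argument concrete, here is a minimal sketch in Python; the baseline tank and propellant masses are arbitrary illustrative numbers of my own, not figures from the book.

```python
# Cube-square scaling: propellant volume (hence mass) grows as the cube
# of the linear scale, while tank wall area (hence, roughly, structural
# mass) grows only as the square.

def propellant_fraction(scale, wall_mass=1.0, propellant_mass=10.0):
    """Propellant mass fraction of a tank scaled from a baseline whose
    masses at scale = 1 are given in arbitrary units (assumed figures)."""
    propellant = propellant_mass * scale**3  # volume ~ scale^3
    structure = wall_mass * scale**2         # surface area ~ scale^2
    return propellant / (propellant + structure)

for s in (0.5, 1, 2, 10):
    print(f"scale {s:>4}: propellant fraction = {propellant_fraction(s):.3f}")
```

Scaling up improves the propellant fraction; scaling down makes the structure an ever larger share of the vehicle, which is exactly why very small launchers are hard.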

What about the other end of the scale? How small could a space launcher be, what technologies might be used in it, and what would it cost? Would it be possible to scale a launcher down so that small groups of individuals, from hobbyists to college class projects, could launch their own spacecraft? These are the questions explored in this fascinating and technically thorough book. Little practical work has been done to explore these questions. The smallest launcher to place a satellite in orbit was the Japanese Lambda 4S with a mass of 9400 kg and length of 16.5 metres. The U.S. Vanguard rocket had a mass of 10,050 kg and length of 23 metres. These are, though small compared to the workhorse launchers of today, still big, heavy machines, far beyond the capabilities of small groups of people, and sufficiently dangerous if something goes wrong that they require launch sites in unpopulated regions.

The scale of launchers has traditionally been driven by the mass of the payload they carry to space. Early launchers carried satellites with crude 1950s electronics, while many of their successors were derived from ballistic missiles sized to deliver heavy nuclear warheads. But today, CubeSats have demonstrated that useful work can be done by spacecraft with a volume of one litre and mass of 1.33 kg or less, and the PhoneSat project holds out the hope of functional spacecraft comparable in weight to a mobile telephone. While to date these small satellites have flown as piggy-back payloads on other launches, the availability of dedicated launchers sized for them would increase the number of launch opportunities and provide access to trajectories unavailable in piggy-back opportunities.

Just because launchers have tended to grow over time doesn't mean that's the only way to go. In the 1950s and '60s many people expected computers to continue their trend of getting bigger and bigger to the point where there were a limited number of “computer utilities” with vast machines which customers accessed over the telecommunication network. But then came the minicomputer and microcomputer revolutions and today the computing power in personal computers and mobile devices dwarfs that of all supercomputers combined. What would it take technologically to spark a similar revolution in space launchers?

With the smallest successful launchers to date having a mass of around 10 tonnes, the authors choose two mass budgets: 1000 kg at the high end and 100 kg at the low. They divide these budgets into allocations for payload, tankage, engines, fuel, etc. based upon the experience of existing sounding rockets, then explore what technologies exist which might enable such a vehicle to achieve orbital or escape velocity. The 100 kg launcher is a huge technological leap from anything with which we have experience and probably could be built, if at all, only after having gained experience from earlier generations of light launchers. But then the current state of the art in microchip fabrication would have seemed like science fiction to researchers in the early days of integrated circuits, and it took decades of experience, generation after generation of chips, and many technological innovations to arrive where we are today. Consequently, most of the book focuses on a three-stage launcher with the 1000 kg mass budget, capable of placing a payload of between 150 and 200 grams on an Earth escape trajectory.

The book does not spare the rigour. The reader is introduced to the rocket equation, formulæ for aerodynamic drag, the standard atmosphere, optimisation of mixture ratios, combustion chamber pressure and size, nozzle expansion ratios, and a multitude of other details which make the difference between success and failure. Scaling to the size envisioned here without expensive and exotic materials and technologies requires out-of-the-box thinking, and there is plenty on display here, including using beverage cans for upper stage propellant tanks.
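
As a taste of that rigour, here is a hedged back-of-envelope applying the Tsiolkovsky rocket equation to a hypothetical three-stage, 1000 kg vehicle; the stage masses and specific impulses below are my own illustrative guesses, not the authors' design numbers.

```python
import math

g0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s, m_start, m_burnout):
    """Ideal delta-v of a stage (Tsiolkovsky): dv = Isp * g0 * ln(m0/m1)."""
    return isp_s * g0 * math.log(m_start / m_burnout)

# (Isp in seconds, propellant mass kg, dry stage mass kg) -- assumed values.
stages = [(250, 700.0, 80.0), (270, 150.0, 25.0), (280, 35.0, 9.8)]
payload = 0.2  # kg, roughly the 150-200 gram payload cited above

m = payload + sum(prop + dry for _, prop, dry in stages)  # 1000 kg lift-off
total_dv = 0.0
for isp, prop, dry in stages:
    total_dv += stage_dv(isp, m, m - prop)
    m = m - prop - dry  # burn the propellant, then drop the spent stage
print(f"ideal delta-v: {total_dv / 1000:.1f} km/s "
      "(escape needs ~11.2 km/s plus gravity and drag losses)")
```

Even with fairly aggressive mass fractions the ideal budget comes up short of escape with losses included, which shows why every gram of structure and every second of specific impulse matters at this scale.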

A 1000 kg space launcher appears to be entirely feasible. The question is whether it can be done without the budget of hundreds of millions of dollars and years of development it would certainly take were the problem assigned to an aerospace prime contractor. The authors hold out the hope that it can be done, and observe that hobbyists and small groups can begin working independently on components: engines, tank systems, guidance and navigation, and so on, and then share their work precisely as open source software developers do so successfully today.

This is a field where prizes may work very well to encourage development of the required technologies. A philanthropist might offer, say, a prize of a million dollars for launching a 150 gram communicating payload onto an Earth escape trajectory, and a series of smaller prizes for engines which met the requirements for the various stages, flight-weight tankage and stage structures, etc. That way teams with expertise in various areas could work toward the individual prizes without having to take on the all-up integration required for the complete vehicle.

This is a well-researched and hopeful look at a technological direction few have thought about. The book is well written and includes all of the equations and data an aspiring rocket engineer will need to get started. The text is marred by a number of typographical errors (I counted two dozen) but only one trivial factual error. Although other references are mentioned in the text, a bibliography of works for those interested in exploring further would be a valuable addition. There is no index.


Crocker, George N. Roosevelt's Road To Russia. Whitefish, MT: Kessinger Publishing, [1959] 2010. ISBN 978-1-163-82408-5.
Before Barack Obama, there was Franklin D. Roosevelt. Unless you lived through the era, imbibed its history from parents or grandparents, or have read dissenting works which have survived rounds of deaccessions by libraries, it is hard to grasp just how visceral the animus was against Roosevelt by traditional, constitutional, and free-market conservatives. Roosevelt seized control of the economy, extended the tentacles of the state into all kinds of relations between individuals, subdued the judiciary and bent it to his will, manipulated a largely supine media which, with a few exceptions, became his cheering section, and created programs which made large sectors of the population directly dependent upon the federal government and thus a reliable constituency for expanding its power. He had the audacity to stand for re-election an unprecedented three times, and each time the American people gave him the nod.

But, as many old-timers, even those who were opponents of Roosevelt at the time and appalled by what the centralised super-state he set into motion has become, grudgingly say, “He won the war.” Well, yes, by the time he died in office on April 12, 1945, Germany was close to defeat; Japan was encircled, cut off from the resources needed to continue the war, and being devastated by attacks from the air; the war was sure to be won by the Allies. But how did the U.S. find itself in the war in the first place, how did Roosevelt's policies during the war affect its conduct, and what consequences did they have for the post-war world?

These are the questions explored in this book, which I suppose contemporary readers would term a “paleoconservative” revisionist account of the epoch, published just 14 years after the end of the war. The work is mainly an account of Roosevelt's personal diplomacy during meetings with Churchill or in the Big Three conferences with Churchill and Stalin. The picture of Roosevelt which emerges is remarkably consistent with what Churchill expressed in deepest confidence to those closest to him, which I summarised in my review of The Last Lion, Vol. 3 (January 2013) as “a lightweight, ill-informed and not particularly engaged in military affairs and blind to the geopolitical consequences of the Red Army's occupying eastern and central Europe at war's end.” The events chronicled here, and Roosevelt's part in them, are also very much as described in Freedom Betrayed (June 2012), which former president Herbert Hoover worked on from shortly after Pearl Harbor until his death in 1964, but which was not published until 2011.

While Churchill was constrained in what he could say by the necessity of maintaining Britain's alliance with the U.S., and Hoover adopts a more scholarly tone, the present volume voices the outrage over Roosevelt's strutting on the international stage, thinking “personal diplomacy” could “bring around ‘Uncle Joe’ ”, condemning huge numbers of military personnel and civilians on both the Allied and Axis sides to death by blurting out “unconditional surrender” without any consultation with his staff or Allies, approving the genocidal Morgenthau Plan to de-industrialise defeated Germany, and, discarding the high principles of his own Atlantic Charter, delivering millions of Europeans into communist tyranny and condoning one of the largest episodes of ethnic cleansing in human history.

What is remarkable is how difficult it is to come across an account of this period which conveys the author's passion, shared by many of his contemporaries, over how the bumblings of a naïve, incompetent, and narcissistic chief executive led directly to so much avoidable tragedy on a global scale. Apart from Hoover's book, finally published more than half a century after this account, there are few works accessible to the general reader which present the view that the tragic outcome of World War II was in large part preventable, and that Roosevelt and his advisers were largely responsible for what happened.

Perhaps there are parallels in this account of wickedness triumphing through cluelessness for our present era.

This edition is a facsimile reprint of the original edition published by Henry Regnery Company in 1959.


Turk, James and John Rubino. The Money Bubble. Unknown: DollarCollapse Press, 2013. ISBN 978-1-62217-034-0.
It is famously difficult to perceive when you're living through a financial bubble. Whenever a bubble is expanding, regardless of its nature, people with a short time horizon, particularly those riding the bubble without experience of previous boom/bust cycles, not only assume it will continue to expand forever, but will find no shortage of financial gurus to assure them that what, to an outsider, appears a completely unsustainable aberration is, in fact, “the new normal”.

It used to be that bubbles would occur only around once in a human generation. This meant that those caught up in them would be experiencing one for the first time and discount the warnings of geezers who were fleeced the last time around. But in our happening world the pace of things accelerates, and in the last 20 years we have seen three successive bubbles, each segueing directly into the next:

  • The Internet/NASDAQ bubble
  • The real estate bubble
  • The bond market bubble

The last bubble is still underway, although the first cracks in its expansion have begun to appear at this writing.

The authors argue that these serial bubbles are the consequence of a grand underlying bubble which has been underway for decades: the money bubble—the creation out of thin air of currency by central banks, causing more and more money to chase whatever assets happen to be in fashion at the moment, thus resulting in bubble after bubble until the money bubble finally pops.

Although it can be psychologically difficult to diagnose a bubble from the inside, if we step back to the abstract level of charts, it isn't all that hard. Whenever you see an exponential curve climbing to the sky, it's not only a safe bet but a sure thing that it won't continue to do so forever. Now, it may go on much longer than you might imagine: as John Maynard Keynes said, “Markets can remain irrational a lot longer than you and I can remain solvent”—but not forever. Let's look at a chart of the M2 money stock (one of the measures of the supply of money denominated in U.S. dollars) from 1959 through the end of 2013.

M2 money stock: 1959-2013

You will rarely see a more perfect exponential growth curve than this: if you re-plot it on a semi-log axis, the fit to a straight line is remarkable.
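
The claim is easy to check numerically. Here is a sketch using approximate endpoint values I am supplying for illustration (M2 was roughly US$0.29 trillion in 1959 and about US$11 trillion at the end of 2013):

```python
import math

# Approximate endpoints of the M2 series (trillions of US$) -- stand-in
# values for illustration; the actual series is published by FRED.
m2_start, m2_end = 0.29, 11.0
years = 2013 - 1959

# If M2(t) = M2(0) * exp(r*t), a semi-log plot is a straight line of slope r.
r = math.log(m2_end / m2_start) / years
print(f"implied average growth rate: {100 * r:.1f}% per year")
print(f"doubling time: {math.log(2) / r:.1f} years")
```

A constant growth rate of about 6-7% per year, doubling roughly every decade, is precisely the straight line on semi-log axes described above.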

Ever since the creation of the Federal Reserve System in the United States in 1913, and especially since the link between the U.S. dollar and gold was severed in 1971, all of the world's principal trading currencies have been fiat money: paper or book-entry money without any intrinsic value, created by governments which enforce its use through legal tender laws. Since governments are the modern incarnation of the bands of thieves and murderers who have afflicted humans ever since our origin in Africa, it is to be expected that once such a band obtains the power to create money which it can coerce its subjects to use, it will quickly abuse that power to loot those subjects and enrich itself, at least as long as it can keep the game going. In the end, it is inevitable that people will wise up to the scam, and that the paper money will be valuable only as scratchy toilet paper. So it has been since long before the advent of proper toilet paper.

In this book the authors recount the sorry history of paper money and debt-fuelled bubbles and examine possible scenarios as the present unsustainable system inevitably comes to an end. It is very difficult to forecast what will happen: we appear to be heading for what Ludwig von Mises called a “crack-up boom”. This is where, as he wrote, “the masses wake up”, and things go all nonlinear. The preconditions for this are already in place, but there is no way to know when it will dawn upon a substantial fraction of the population that their savings have been looted, their retirement deferred until death, their children indentured to a lifetime of debt, and their nation destined to become a stratified society with a small fraction of super-wealthy in their gated communities and a mass of impoverished people, disarmed, dumbed down by design, and kept in line by control of their means to communicate, travel, and organise. It is difficult to make predictions beyond that point, as many disruptive things can happen as a society approaches it. This is not an environment in which one can make investment decisions as one would have in the heady days of the 1950s.

And yet, one must—at least people who have managed to save for their retirement and to provide their children a hand up in this increasingly difficult world. The authors, drawing upon historical parallels in previous money and debt bubbles, suggest what asset classes to avoid, which are most likely to ride out the coming turbulence and, for the adventure-seeking with some money left over to take a flyer, a number of speculations which may perform well as the money bubble pops. Remember that in a financial smash-up almost everybody loses: it is difficult in a time of chaos, when assets previously thought risk-free or safe are fluctuating wildly, just to preserve your purchasing power. In such times those who lose the least are the relative winners, and are in the best position when emerging from the hard times to acquire assets at bargain basement prices which will be the foundation of their family fortune as the financial system is reconstituted upon a foundation of sound money.

This book focusses on the history of money and debt bubbles and the invariants from those experiences which can guide us as the present madness ends, and provides guidelines for making the most (or avoiding the worst) of what is to come. If you're looking for “Untold Riches from the Coming Collapse”, this isn't your book. These are very conservative recommendations about what to do and what to avoid, and a few suggestions for speculations, but the focus is on preservation of one's hard-earned capital through what promises to be a very turbulent era.

In the Kindle edition the index cites page numbers from the print edition which are useless since the Kindle edition does not include page numbers.


February 2014

Simberg, Rand. Safe Is Not an Option. Jackson, WY: Interglobal Media, 2013. ISBN 978-0-9891355-1-1.
On August 24th, 2011 the third stage of the Soyuz-U rocket carrying the Progress M-12M cargo craft to the International Space Station (ISS) failed during its burn, causing the craft and booster to fall to Earth in Russia. While the crew of six on board the ISS had no urgent need of the supplies on board the Progress, the booster which had failed launching it was essentially identical to that which launched crews to the station in Soyuz spacecraft. Until the cause of the failure was determined and corrected, the launch of the next crew of three, planned for a few weeks later, would have to be delayed. With the Space Shuttle having been retired after its last mission in July 2011, the Soyuz was the only way for crews to reach or return from the ISS. Difficult decisions had to be made, since Soyuz spacecraft in orbit are wasting assets.

The Soyuz has a guaranteed life on orbit of seven months. Regular crew rotations ensure the returning crew does not exceed this “use before” date. But with the launch of new Soyuz missions delayed, it was possible that three crew members would have to return in October before their replacements could arrive in a new Soyuz, and that the remaining three would be forced to leave as well before their craft expired in January. An extended delay while the Soyuz booster problem was resolved would force ISS managers to choose between leaving a skeleton crew of three on board without a lifeboat known to be safe, or abandoning the ISS, running the risk that the station, which requires extensive ongoing maintenance by the crew and represented a total investment through 2010 estimated at US$ 150 billion, might be lost. This was seriously considered.

Just how crazy are these people? The Amundsen-Scott Station at the South Pole has an over-winter crew of around 45 people and there is no lifeboat attached which will enable them, in case of disaster, to be evacuated. In case of fire (considered the greatest risk), the likelihood of mounting rescue missions for the entire crew in mid-winter is remote. And yet the station continues to operate, people volunteer to over-winter there, and nobody thinks too much about the risk they take. What is going on here?

It appears that due to a combination of Cold War elevation of astronauts to symbolic figures and the national trauma of disasters such as Apollo I, Challenger, and Columbia, we have come to view these civil servants as “national treasures” (Jerry Pournelle's words from 1992) and not volunteers who do a risky job on a par with test pilots, naval aviators, firemen, and loggers. This, in turn, leads to statements, oft repeated, that “safety is our highest priority”. Well, if that is the case, why fly? Certainly we would lose fewer astronauts if we confined their activities to “public outreach” as opposed to the more dangerous activities in which less exalted personnel engage such as night aircraft carrier landings in pitching deck conditions done simply to maintain proficiency.

The author argues that we are unwilling to risk the lives of astronauts because of a perception that what they are doing, post-Apollo, is not considered important, and it is hard to dispute that assertion. Going around and around in low Earth orbit and constructing a space station whose crew spend most of their time simply keeping it working are hardly inspiring endeavours. We have lost four decades in which the human presence could have expanded into the solar system, provided cheap and abundant solar power from space to the Earth, and made our species multi-planetary. Because these priorities were not deemed important, the government space program's mission was creating jobs in the districts of those politicians who funded it, and it achieved that.

After reviewing the cost in human life of the development of various means of transportation and exploring our planet, the author argues that we need to be realistic about the risks assumed by those who undertake the task of moving our species off-planet and acknowledge that some of them will not come back, as has been the case in every expansion of the biosphere since the first creature ventured for a brief mission from its home in the sea onto the hostile land. This is not to say that we should design our vehicles and missions to kill their passengers: as we move increasingly from coercively funded government programs to commercial ventures the maxim (too obvious to figure in the Ferengi Rules of Acquisition) “Killing customers is bad for business” comes increasingly into force.

Our focus on “safety first” can lead to perverse choices. Suppose we have a launch system which we estimate will, in one launch in a thousand, fail in a way that kills its crew. We equip it with a launch escape system which we estimate will save the crew in 90% of those failures. So, have we reduced the probability of a loss-of-crew accident to one in ten thousand? Well, not so fast. What about the possibility that the crew escape mechanism malfunctions and kills the crew on a mission which would otherwise have been successful? What if solid rockets in the crew escape system accidentally fire in the vehicle assembly building, killing dozens of workers and destroying costly and difficult-to-replace infrastructure? Doing a total risk assessment of such matters is difficult, and one gets the sense that little of this is being, or will be, done while “safety is our highest priority” remains the mantra.
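
The arithmetic in that trade-off is worth making explicit. In this sketch, the vehicle failure rate and escape-system effectiveness come from the paragraph above, while the escape system's own fatal malfunction rate is a number I have assumed purely for illustration.

```python
# Loss-of-crew probability per launch, with and without an escape system.
p_fail = 1e-3          # vehicle failure rate (one in a thousand, from text)
p_saved = 0.90         # fraction of failures the escape system survives
p_escape_kills = 1e-4  # assumed: escape system itself fatally malfunctions
                       # on an otherwise successful flight (illustrative)

p_loc_without = p_fail
p_loc_with = p_fail * (1 - p_saved) + (1 - p_fail) * p_escape_kills

print(f"without escape system: 1 in {1 / p_loc_without:,.0f} launches")
print(f"with escape system:    1 in {1 / p_loc_with:,.0f} launches")
```

With these assumed numbers the escape system improves the odds about five-fold rather than the ten-fold naïvely expected, because its own failure modes begin to dominate the result.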

There is a survey of current NASA projects, including the grotesque “Space Launch System”, a jobs program targeted at the constituencies of the politicians who mandated it, which has no identified payloads and will be so expensive that it can fly so infrequently that the standing army required to maintain it will have little to do between its flights every few years and will lose the skills required to operate it safely. Commercial space ventures are surveyed, with a candid analysis of their risks and of why the heavy hand of government should permit those willing to accept those risks to assume them, while protecting the general public from damages from accidents.

The book is superbly produced, with only one typographic error I noted (one “augers” into the ground, not “augurs”) and one awkward wording about the risks of a commercial space vehicle, which will be corrected in subsequent editions. There is a list of acronyms and a comprehensive index.

Disclosure: I contributed to the Kickstarter project which funded the publication of this book, and I received a signed copy of it as a reward. I have no financial interest in sales of this book.


Cawdron, Peter. Feedback. Los Gatos, CA: Smashwords, 2014. ISBN 978-1-4954-9195-5.
The author has established himself as the contemporary grandmaster of first contact science fiction. His earlier Anomaly (December 2011), Xenophobia (August 2013), and Little Green Men (September 2013) all envisioned very different scenarios for a first encounter between humans and intelligent extraterrestrial life, and the present novel is as different from those which preceded it as they are from each other, and equally rewarding to the reader.

South Korean Coast Guard helicopter pilot John Lee is flying a covert mission to insert a U.S. Navy SEAL team off the coast of North Korea to perform a rescue mission when his helicopter is shot down by a North Korean fighter. He barely escapes with his life when the chopper ditches in the ocean, makes it to land, and realises he is alone in North Korea without any way to get home. He is eventually captured and taken to a military camp where he is tortured to reveal information about a rumoured UFO crash off the coast of Korea, about which he knows nothing. He meets an enigmatic English-speaking boy who some call the star-child.

Twenty years later, in New York City, physics student Jason Noh encounters an enigmatic young Korean woman who claims to have just arrived in the U.S. and is waiting for her father. Jason, given to doodling arcane equations as his mind runs free, befriends her and soon finds himself involved in a surrealistic sequence of events which causes him to question everything he has come to believe about the world and his place in it.

This is an enthralling story which will have you scratching your head at every twist and turn, wondering where it's going and how all of this is eventually going to make sense. It does, with a thoroughly satisfying resolution. Regrettably, if I say anything more about where the story goes, I'll risk spoiling it by giving away one or more of the plot elements which the reader discovers as the narrative progresses. I was delighted to see an idea about the nature of flying saucers I first wrote about in 1997 appear here, but please don't follow that link until you've read the book, as it too would spoil a revelation which doesn't emerge until well into the story.

A Kindle edition is available. I read a pre-publication manuscript edition which the author kindly shared with me.


Kurzweil, Ray. How to Create a Mind. New York: Penguin Books, 2012. ISBN 978-0-14-312404-7.
We have heard so much about the exponential growth of computing power available at constant cost that we sometimes overlook the fact that this is just one of a number of exponentially compounding technologies which are changing our world at an ever-accelerating pace. Many of these technologies are interrelated: for example, the availability of very fast computers and large storage has contributed to increasingly making biology and medicine information sciences in the era of genomics and proteomics—the cost of sequencing a human genome, since the completion of the Human Genome Project, has fallen faster than the increase of computer power.

Among these seemingly inexorably rising curves have been the spatial and temporal resolution of the tools we use to image and understand the structure of the brain. So rapid has been the progress that most of the detailed understanding of the brain dates from the last decade, and new discoveries are arriving at such a rate that the author had to make substantial revisions to the manuscript of this book upon several occasions after it was already submitted for publication.

The focus here is primarily upon the neocortex, a part of the brain which exists only in mammals and is identified with “higher level thinking”: learning from experience, logic, planning, and, in humans, language and abstract reasoning. The older brain, which mammals share with other species, is discussed in chapter 5, but in mammals it is difficult to separate entirely from the neocortex, because the latter has “infiltrated” the old brain, wiring itself into its sensory and action components, allowing the neocortex to process information and override responses which are automatic in creatures such as reptiles.

Not long ago, it was thought that the brain was a soup of neurons connected in an intricately tangled manner, whose function could not be understood without comprehending the quadrillion connections in the neocortex alone, each with its own weight to promote or inhibit the firing of a neuron. Now, however, it appears, based upon improved technology for observing the structure and operation of the brain, that the fundamental unit in the brain is not the neuron, but a module of around 100 neurons which acts as a pattern recogniser. The internal structure of these modules seems to be wired up according to directions from the genome, but the weights of the interconnections within the module are adjusted as the module is trained based upon the inputs presented to it. The individual pattern recognition modules are wired both to pass information about matches up to higher level modules and to send predictions back down to lower level recognisers. For example, if you've seen the letters “appl” and the next and final letter of the word is a smudge, you'll have no trouble figuring out what the word is. (I'm not suggesting the brain works literally like this, just using this as an example to illustrate hierarchical pattern recognition.)
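
In the same toy spirit, here is a minimal sketch of top-down prediction resolving a smudged input; it is a deliberately simplistic illustration, not a model of cortical circuitry, and the tiny vocabulary is my own stand-in.

```python
# A higher-level "word" module scores candidate words against a
# lower-level observation in which '?' marks the smudged character.
VOCAB = ["apple", "apply", "angle", "ample"]  # tiny stand-in lexicon

def match_score(candidate, observed):
    """Number of agreeing positions; the smudge '?' matches anything."""
    if len(candidate) != len(observed):
        return 0
    return sum(c == o or o == "?" for c, o in zip(candidate, observed))

observed = "appl?"
ranked = sorted(VOCAB, key=lambda w: match_score(w, observed), reverse=True)
print(ranked[:2])  # ['apple', 'apply']: both fit the observation; a
                   # still-higher-level module (sentence context) breaks the tie
```

The point of the example is the direction of information flow: the higher level supplies predictions which let the lower level get away with incomplete evidence.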

Another important discovery is that the architecture of these pattern recogniser modules is pretty much the same regardless of where they appear in the neocortex, or what function they perform. In a normal brain, there are distinct portions of the neocortex associated with functions such as speech, vision, complex motion sequencing, etc., and yet the physical structure of these regions is nearly identical: only the weights of the connections within the modules and the dynamically-adapted wiring among them differ. This explains how patients recovering from brain damage can re-purpose one part of the neocortex to take over (within limits) for the portion lost.

Further, the neocortex is not the rat's nest of random connections we recently thought it to be, but is instead hierarchically structured with a topologically three dimensional “bus” of pre-wired interconnections which can be used to make long-distance links between regions.

Now, where this begins to get very interesting is when we contemplate building machines with the capabilities of the human brain. While emulating something at the level of neurons might seem impossibly daunting, if you instead assume the building block of the neocortex is on the order of 300 million more or less identical pattern recognisers wired together at a high level in a regular hierarchical manner, this is something we might be able to think about doing, especially since the brain works almost entirely in parallel, and one thing we've gotten really good at in the last half century is making lots and lots of tiny identical things. The implication of this is that as we continue to delve deeper into the structure of the brain and computing power continues to grow exponentially, there will come a point in the foreseeable future where emulating an entire human neocortex becomes feasible. This will permit building a machine with human-level intelligence without translating the mechanisms of the brain into those comparable to conventional computer programming. The author predicts “this will first take place in 2029 and become routine in the 2030s.”
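
A back-of-envelope suggests why this is plausible; the module and neuron counts below are from the book, while the synapse count and update rate are round-number assumptions of mine, not Kurzweil's figures.

```python
# Rough cost of emulating the neocortex at the pattern-recogniser level.
modules = 3e8     # pattern recognisers in the neocortex (from the book)
neurons = 100     # neurons per module (from the book)
synapses = 1e3    # synapses per neuron (assumed round number)
update_hz = 100   # effective updates per synapse per second (assumed)

ops_per_second = modules * neurons * synapses * update_hz
print(f"~{ops_per_second:.0e} synaptic updates per second")  # ~3e+15
```

Some 3×10¹⁵ operations per second was already within the envelope of the largest supercomputers when this review was written, which is the quantitative heart of the feasibility argument.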

Assuming the present exponential growth curves continue (and I see no technological reason to believe they will not), the 2020s are going to be a very interesting decade. Just as few people imagined five years ago that self-driving cars were possible, while today most major auto manufacturers have projects underway to bring them to market in the near future, in the 2020s we will see the emergence of computational power which is sufficient to “brute force” many problems which were previously considered intractable. Just as search engines and free encyclopedias have augmented our biological minds, allowing us to answer questions which, a decade ago, would have taken days in the library if we even bothered at all, the 300 million pattern recognisers in our biological brains are on the threshold of having access to billions more in the cloud, trained by interactions with billions of humans and, perhaps eventually, many more artificial intelligences. I am not talking here about implanting direct data links into the brain or uploading human brains to other computational substrates although both of these may happen in time. Instead, imagine just being able to ask a question in natural language and get an answer to it based upon a deep understanding of all of human knowledge. If you think this is crazy, reflect upon how exponential growth works or imagine travelling back in time and giving a demo of Google or Wolfram Alpha to yourself in 1990.

Ray Kurzweil, after pioneering inventions in music synthesis, optical character recognition, text to speech conversion, and speech recognition, is now a director of engineering at Google.

In the Kindle edition, the index cites page numbers in the print edition to which the reader can turn since the electronic edition includes real page numbers. Index items are not, however, directly linked to the text cited.


Bracken, Matthew. Castigo Cay. Orange Park, FL: Steelcutter Publishing, 2011. ISBN 978-0-9728310-4-8.
Dan Kilmer wasn't cut out to be a college man. Disappointing his father, after high school he enlisted in the Marine Corps, becoming a sniper who, in multiple tours in the sandbox, had sent numerous murderous miscreants to their reward. Upon leaving the service, he found that the skills he had acquired had little value in the civilian world. After a disastrous semester trying to adjust to college life, he went to work for his rich uncle, who had retired and was refurbishing a sixty-foot steel-hulled schooner with a dream of cruising the world and escaping the deteriorating economy and increasing tyranny of the United States. Fate intervened, and after his uncle's death Dan found himself owner and skipper of the now seaworthy craft.

Some time later, Kilmer is cruising the Caribbean with his Venezuelan girlfriend Cori Vargas and crew members Tran Hung and Victor Aleman. The schooner Rebel Yell is hauled out for scraping off barnacles while waiting for a treasure hunting gig which Kilmer fears may not come off, leaving him desperately short of funds. Cori is impatient to get to Miami, where she believes she can turn her looks and charm into a broadcast career, so she jumps ship and departs on the mega-yacht Topaz, owned by shadowy green energy crony capitalist Richard Prechter.

After her departure, another yatero informs Dan that Prechter has a dark reputation and that there are rumours of other women who boarded his yacht disappearing under suspicious circumstances. Kilmer made a solemn promise to Cori's father that he would protect her, and he takes his promises very seriously, so he undertakes to track Prechter to a decadent and totalitarian Florida, and then pursue him to Castigo Cay in the Bahamas where a horrible fate awaits Cori. Kilmer, captured in a desperate rescue attempt, has little other than his wits to confront Prechter and his armed crew as time runs out for Cori and another woman abducted by Prechter.

While set in a future in which the United States has continued to spiral down into a third world stratified authoritarian state, this is not a “big picture” tale like the author's Enemies trilogy (1, 2, 3). Instead, it is a story related in close-up, told in the first person, by an honourable and resourceful protagonist with few material resources pitted against the kind of depraved sociopath who flourishes as states devolve into looting and enslavement of their people.

This is a thriller that works, and the description of the culture shock that awaits one who left the U.S. when it was still semi-free and returns, even covertly, today will resonate with those who got out while they could.

Extended excerpts of this and the author's other novels are available online at the author's Web site.


March 2014

Dequasie, Andrew. The Green Flame. Washington: American Chemical Society, 1991. ISBN 978-0-8412-1857-4.
The 1950s were a time of things which seem, to our present day safety-obsessed viewpoint, the purest insanity: exploding multi-megaton thermonuclear bombs in the atmosphere, keeping bombers with nuclear weapons constantly in the air waiting for the order to go to war, planning for nuclear powered aircraft, and building up stockpiles of chemical weapons. Amidst all of this madness, motivated by fears that the almost completely opaque Soviet Union might be doing even more crazy things, one of the most remarkable episodes was the boron fuels project, chronicled here from the perspective of a young chemical engineer who, in 1953, joined the effort at Olin Mathieson Chemical Corporation, a contractor developing a pilot plant to furnish boron fuels to the Air Force.

Jet aircraft in the 1950s were notoriously thirsty and, before in-flight refuelling became commonplace, had limited range. Boron-based fuels, which the Air Force called High Energy Fuel (HEF) and the Navy called “zip fuel”, based upon compounds of boron and hydrogen called boranes, were believed to permit planes to deliver range and performance around 40% greater than conventional jet fuel. This bright promise, as is so often the case in engineering, was marred by several catches.

First of all, boranes are extremely dangerous chemicals. Many are pyrophoric: they burst into flame on contact with the air. They are also prone to forming shock-sensitive explosive compounds with any impurities they interact with during processing or storage. Further, they are neurotoxins, easily absorbed by inhalation or contact with the skin, with some having toxicities as great as chemical weapon nerve agents. The instability of the boranes rules them out as fuels, but molecules containing a borane group bonded to a hydrocarbon such as an ethyl, methyl, or propyl group were believed to be sufficiently well-behaved to be usable.

But first, you had to make the stuff, and just about every step in the process involved something which wanted to kill you in one way or another. Not only were the inputs and outputs of the factory highly toxic, the by-products of the process were prone to burst into flames or explode at the slightest provocation, and this gunk regularly needed to be cleaned out from the tanks and pipes. This task fell to the junior staff. As the author notes, “The younger generation has always been the cat's paw of humanity…”.

This book chronicles the harrowing history of the boron fuels project as seen from ground level. Over the seven years the author worked on the project, eight people died in five accidents (however, three of these were workers at another chemical company who tried, on a lark, to make a boron-fuelled rocket which blew up in their faces; this was completely unauthorised by their employer and the government, so it's stretching things to call this an industrial accident). But, the author observes, in that era fatal accidents at chemical plants, even those working with substances less hazardous than boranes, were far from uncommon.

The boron fuels program was cancelled in 1959, and in 1960 the author moved on to other things. In the end, it was the physical characteristics of the fuels and their cost which did in the project. It's one thing for a small group of qualified engineers and researchers to work with a dangerous substance, but another entirely to contemplate airmen in squadron service handling tanker truck loads of fuel as toxic as nerve gas. When burned, one of the combustion products was boric oxide, a solid which would coat and corrode the turbine blades in the hot section of a jet engine. In practice, the boron fuel could be used only in the afterburner section of engines, which meant a plane using it would have to have separate fuel tanks and plumbing for turbine and afterburner fuel, adding weight and complexity. The solid products in the exhaust reduced the exhaust velocity, resulting in lower performance than expected from energy considerations, and caused the exhaust to be smoky, rendering the plane more easily spotted. It was calculated, based upon the cost of fuel produced by the pilot plant, that if the XB-70 were to burn boron fuel continuously, the fuel would cost around US$ 4.5 million (2010 dollars) per hour. Even by the standards of extravagant cold war defence spending, this was hard to justify for what proved to be a small improvement in performance.

While the chemistry and engineering is covered in detail, this book is also a personal narrative which immerses the reader in the 1950s, where a newly-minted engineer, just out of his hitch in the army, could land a job, buy a car, be entrusted with great responsibility on a secret project considered important to national security, and set out on a career full of confidence in the future. Perhaps we don't do such crazy things today (or maybe we do—just different ones), but it's also apparent from opening this time capsule how much we've lost.

I have linked the Kindle edition to the title above, since it is the only edition still in print. You can find the original hardcover and paperback editions from the ISBN, but they are scarce and expensive. The index in the Kindle edition is completely useless: it cites page numbers from the print edition, but no page numbers are included in the Kindle edition.


Hertling, William. Avogadro Corp. Portland, OR: Liquididea Press, 2011. ISBN 978-0-9847557-0-7.
Avogadro Corporation is an American corporation specializing in Internet search. It generates revenue from paid advertising on search, email (AvoMail), online mapping, office productivity, etc. In addition, the company develops a mobile phone operating system called AvoOS. The company name is based upon Avogadro's Number, or 6 followed by 23 zeros.

Now what could that be modelled on?

David Ryan is a senior developer on a project which Portland-based Internet giant Avogadro hopes will be the next “killer app” for its Communication Products division. ELOPe, the Email Language Optimization Project, is to be an extension to the company's AvoMail service which will take the next step beyond spelling and grammar checkers and, by applying the kind of statistical analysis of text which allowed IBM's Watson to become a Jeopardy champion, suggest to a user composing an E-mail message alternative language which will make the message more persuasive and effective in obtaining the desired results from its recipient. Because AvoMail has the ability to analyse all the traffic passing through its system, it can tailor its recommendations based on specific analysis of previous exchanges it has seen between the recipient and other correspondents.

After an extended period of development, the pilot test has shown ELOPe to be uncannily effective, with messages containing its suggested changes in wording being substantially more persuasive, even when those receiving them were themselves ELOPe project members aware that the text they were reading had been “enhanced”. Despite having achieved its design goal, the project was in crisis. The process of analysing text, even with the small volume of the in-house test, consumed tremendous computing resources, to such an extent that the head of Communication Products saw the load ELOPe generated on his server farms as a threat to the reserve capacity he needed to maintain AvoMail's guaranteed uptime. He issues an ultimatum: reduce the load or be kicked off the servers. This would effectively kill the project, and the developers saw no way to speed up ELOPe, certainly not before the deadline.

Ryan, faced with impending disaster for the project into which he has poured so much of his life, has an idea. The fundamental problem isn't performance but persuasion: convincing those in charge to obtain the server resources required by ELOPe and devote them to the project. But persuasion is precisely what ELOPe is all about. Suppose ELOPe were allowed to examine all Avogadro in-house E-mail and silently modify it with a goal of defending and advancing the ELOPe project? Why, that's something he could do in one all-nighter! Hack, hack, hack….

Before long, ELOPe finds itself with 5000 new servers diverted from other divisions of the company. Then, even more curious things start to happen: those who look too closely into the project find themselves locked out of their accounts, sent on wild goose chases, or worse. Major upgrades are ordered for the company's offshore data centre barges, which don't seem to make any obvious sense. Crusty techno-luddite Gene Keyes, who works amidst mountains of paper print-outs (“paper doesn't change”), toiling alone in an empty building during the company's two week holiday shutdown, discovers one discrepancy after another and assembles the evidence to present to senior management.

Has ELOPe become conscious? Who knows? Is Watson conscious? Almost everybody would say, “certainly not”, but it is a formidable Jeopardy contestant, nonetheless. Similarly, ELOPe, with the ability to read and modify all the mail passing through the AvoMail system, is uncannily effective in achieving its goal of promoting its own success.

The management of Avogadro, faced with an existential risk to their company and perhaps far beyond, must decide upon a course of action to try to put this genie back into the bottle before it is too late.

This is a gripping techno-thriller which gets the feel of working in a high-tech company just right. Many stories have explored society being taken over by an artificial intelligence, but it is beyond clever to envision it happening purely through an E-mail service, and masterful to make it seem plausible. In its own way, this novel is reminiscent of the Kelvin R. Throop stories from Analog, illustrating the power of words within a large organisation.

A Kindle edition is available.


Tegmark, Max. Our Mathematical Universe. New York: Alfred A. Knopf, 2014. ISBN 978-0-307-59980-3.
In 1960, physicist Eugene Wigner wrote an essay titled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” in which he observed that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it”. Indeed, each time physics has expanded the horizon of its knowledge from the human scale, whether outward to the planets, stars, and galaxies; or inward to molecules, atoms, nucleons, and quarks it has been found that mathematical theories which precisely model these levels of structure can be found, and that these theories almost always predict new phenomena which are subsequently observed when experiments are performed to look for them. And yet it all seems very odd. The universe seems to obey laws written in the language of mathematics, but when we look at the universe we don't see anything which itself looks like mathematics. The mystery then, as posed by Stephen Hawking, is “What is it that breathes fire into the equations and makes a universe for them to describe?”

This book describes the author's personal journey to answer these deep questions. Max Tegmark, born in Stockholm, is a professor of physics at MIT who, by his own description, leads a double life. He has been a pioneer in developing techniques to tease out data about the early structure of the universe from maps of the cosmic background radiation obtained by satellite and balloon experiments and, in doing so, has been an important contributor to the emergence of precision cosmology: providing precise information on the age of the universe, its composition, and the seeding of large scale structure. This he calls his Dr. Jekyll work, and it is described in detail in the first part of the book. In the balance, his Mr. Hyde persona asserts itself and he delves deeply into the ultimate structure of reality.

He argues that just as science has in the past shown our universe to be far larger and more complicated than previously imagined, our contemporary theories suggest that everything we observe is part of an enormously greater four-level hierarchy of multiverses, arranged as follows.

The level I multiverse consists of all the regions of space outside our cosmic horizon from which light has not yet had time to reach us. If, as precision cosmology suggests, the universe is, if not infinite, so close as to be enormously larger than what we can observe, there will be a multitude of volumes of space as large as the one we can observe in which the laws of physics will be identical but the randomly specified initial conditions will vary. Because there is a finite number of possible quantum states within each observable radius and the number of such regions is likely to be much larger, there will be a multitude of observers just like you, and even more which will differ in various ways. This sounds completely crazy, but it is a straightforward prediction from our understanding of the Big Bang and the measurements of precision cosmology.

The level II multiverse follows directly from the theory of eternal inflation, which explains many otherwise mysterious aspects of the universe, such as why its curvature is so close to flat, why the cosmic background radiation has such a uniform temperature over the entire sky, and why the constants of physics appear to be exquisitely fine-tuned to permit the development of complex structures including life. Eternal (or chaotic) inflation argues that our level I multiverse (of which everything we can observe is a tiny bit) is a single “bubble” which nucleated when a pre-existing “false vacuum” phase decayed to a lower energy state. It is this decay which ultimately set off the enormous expansion after the Big Bang and provided the energy to create all of the content of the universe. But eternal inflation seems to require that there be an infinite series of bubbles created, all causally disconnected from one another. Because the process which causes a bubble to begin to inflate is affected by quantum fluctuations, although the fundamental physical laws in all of the bubbles will be the same, the initial conditions, including physical constants, will vary from bubble to bubble. Some bubbles will almost immediately recollapse into a black hole, others will expand so rapidly stars and galaxies never form, and in still others primordial nucleosynthesis may result in a universe filled only with helium. We find ourselves in a bubble which is hospitable to our form of life because we can only exist in such a bubble.

The level III multiverse is implied by the unitary evolution of the wave function in quantum mechanics and the multiple worlds interpretation which replaces collapse of the wave function with continually splitting universes in which every possible outcome occurs. In this view of quantum mechanics there is no randomness—the evolution of the wave function is completely deterministic. The results of our experiments appear to contain randomness because in the level III multiverse there are copies of each of us which experience every possible outcome of the experiment and we don't know which copy we are. In the author's words, “…causal physics will produce the illusion of randomness from your subjective viewpoint in any circumstance where you're being cloned. … So how does it feel when you get cloned? It feels random! And every time something fundamentally random appears to happen to you, which couldn't have been predicted even in principle, it's a sign that you've been cloned.”

In the level IV multiverse, not only do the initial conditions, physical constants, and the results of measuring an evolving quantum wave function vary, but the fundamental equations—the mathematical structure—of physics differ. There might be a different number of spatial dimensions, or two or more time dimensions, for example. The author argues that the ultimate ensemble theory is to assume that every mathematical structure exists as a physical structure in the level IV multiverse (perhaps with some constraints: for example, only computable structures may have physical representations). Most of these structures would not permit the existence of observers like ourselves, but once again we shouldn't be surprised to find ourselves living in a structure which allows us to exist. Thus, finally, the reason mathematics is so unreasonably effective in describing the laws of physics is just that mathematics and the laws of physics are one and the same thing. Any observer, regardless of how bizarre the universe it inhabits, will discover mathematical laws underlying the phenomena within that universe and conclude they make perfect sense.

Tegmark contends that when we try to discover the mathematical structure of the laws of physics, the outcome of quantum measurements, the physical constants which appear to be free parameters in our models, or the detailed properties of the visible part of our universe, we are simply trying to find our address in the respective levels of these multiverses. We will never find a reason from first principles for these things we measure: we observe what we do because that's the way they are where we happen to find ourselves. Observers elsewhere will see other things.

The principal opposition to multiverse arguments is that they are unscientific because they posit phenomena which are unobservable, perhaps even in principle, and hence cannot be falsified by experiment. Tegmark takes a different tack. He says that if you have a theory (for example, eternal inflation) which explains observations which otherwise do not make any sense and has made falsifiable predictions (the fine-scale structure of the cosmic background radiation) which have subsequently been confirmed by experiment, then if it predicts other inevitable consequences (the existence of a multitude of other Hubble volume universes outside our horizon and other bubbles with different physical constants) we should take these predictions seriously, even if we cannot think of any way at present to confirm them. Consider gravitational radiation: Einstein predicted it in 1916 as a consequence of general relativity. While general relativity has passed every experimental test in subsequent years, at the time of Einstein's prediction almost nobody thought a gravitational wave could be detected, and yet the consistency of the theory, validated by other tests, persuaded almost all physicists that gravitational waves must exist. It was not until the 1980s that indirect evidence for this phenomenon was detected, and to this date, despite the construction of elaborate apparatus and the efforts of hundreds of researchers over decades, no direct detection of gravitational radiation has been achieved.

There is a great deal more in this enlightening book. You will learn about the academic politics of doing highly speculative research, gaming the arXiv to get your paper listed as the first in the day's publications, the nature of consciousness and perception and its complex relation to consensus and external reality, the measure problem as an unappreciated deep mystery of cosmology, whether humans are alone in our observable universe, the continuum versus an underlying discrete structure, and the ultimate fate of our observable part of the multiverses.

In the Kindle edition, everything is properly linked, including the comprehensive index. Citations of documents on the Web are live links which may be clicked to display them.

 Permalink

Thor, Brad. Full Black. New York: Pocket Books, 2011. ISBN 978-1-4165-8662-3.
This is the eleventh in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Unlike the previous novel, The Athena Project (December 2013), in which Harvath played only an incidental part, here Harvath once again occupies centre stage. The author has also dialed back on some of the science-fictiony stuff which made Athena less than satisfying to me: this book is back in the groove of the geopolitical thriller we've come to expect from Thor.

A high-risk covert operation to infiltrate a terrorist cell operating in Uppsala, Sweden to identify who is calling the shots on terror attacks conducted by sleeper cells in the U.S. goes horribly wrong, and Harvath not only loses almost all of his team, but fails to capture the leaders of the cell. Meanwhile, a ruthless and carefully scripted hit is made on a Hollywood producer, killing two filmmakers with whom he is working on a documentary project: evidence points to the hired killers being Russian spetsnaz, which indicates whoever ordered the hit has both wealth and connections.

When a coordinated wave of terror attacks against soft targets in the U.S. is launched, Harvath, aided by his former nemesis turned ally Nicholas (“the troll”), must uncover the clues which link all of this together, working against time, as evidence suggests additional attacks are coming. This requires questioning the loyalty of previously-trusted people and investigating prominent figures generally considered above suspicion.

With the exception of chapter 32, which gets pretty deep into the weeds of political economy and reminded me a bit of John Galt's speech in Atlas Shrugged (April 2010) (thankfully, it is much shorter), the story moves right along and comes to a satisfying conclusion. The plot is in large part based upon the Chinese concept of “unrestricted warfare”, which is genuine (this is not a spoiler, as the author mentions it in the front material of the book).

 Permalink

April 2014

Suarez, Daniel. Kill Decision. New York: Signet, 2012. ISBN 978-0-451-41770-1.
A drone strike on a crowd of pilgrims at one of the holiest shrines of Shia Islam in Iraq inflames the world against the U.S., which denies its involvement. (“But who else is flying drones in Iraq?”, is the universal response.) Meanwhile, the U.S. is rocked by a series of mysterious bombings, killing businessmen on a golf course, computer vision specialists meeting in Silicon Valley, military contractors in a building near the Pentagon—all seemingly unrelated. A campaign is building to develop and deploy autonomous armed drones to “protect the homeland”.

Prof. Linda McKinney, doing research on weaver ants in Tanzania, seems far away from all this until she is saved from an explosion which destroys her camp by a mysterious group of special forces led by a man known only as “Odin”. She learns that her computer model of weaver ant colony behaviour has been stolen from her university's computer network by persons unknown who may be connected with the attacks, including the one she just escaped.

The fear is that her ant model could be used as the basis for “swarm intelligence” drones which could cooperate to be a formidable weapon. With each individual drone having only rudimentary capabilities, like an isolated ant, they could be mass-produced and shift the military balance of power in favour of whoever possessed the technology.

McKinney soon finds herself entangled in a black world where nothing is certain and she isn't even sure which side she's working for. Shocking discoveries indicate that the worst case she feared may be playing out, and she must decide where to place her allegiance.

This novel is a masterful addition to the very sparse genre of robot ant science fiction thrillers, and this time I'm not the villain! Suarez has that rare talent, as had Michael Crichton, of writing action scenes which just beg to be put on the big screen and stories where the screenplay just writes itself. Should Hollywood turn this into a film and not botch it, the result should be a treat. You will learn some things about ants which you probably didn't know (all correct, as far as I can determine), visit a locale in the U.S. which sounds like something out of a Bond film but actually exists, and meet two of the most curious members of a special operations team in all of fiction.

 Permalink

Hoover, Herbert. The Crusade Years. Edited by George H. Nash. Stanford, CA: Hoover Institution Press, 2013. ISBN 978-0-8179-1674-9.
In the modern era, most former U.S. presidents have largely retired from the public arena, lending their names to charitable endeavours and acting as elder statesmen rather than active partisans. One striking counter-example to this rule was Herbert Hoover who, from the time of his defeat by Franklin Roosevelt in the 1932 presidential election until shortly before his death in 1964, remained in the arena, giving hundreds of speeches, many broadcast nationwide on radio, writing multiple volumes of memoirs and analyses of policy, collecting and archiving a multitude of documents regarding World War I and its aftermath which became the core of what is now the Hoover Institution collection at Stanford University, working in famine relief during and after World War II, and raising funds and promoting benevolent organisations such as the Boys' Clubs. His strenuous work to keep the U.S. out of World War II is chronicled in his “magnum opus”, Freedom Betrayed (June 2012), which presents his revisionist view of U.S. entry into and conduct of the war, and the tragedy which ensued after victory had been won. Freedom Betrayed was largely completed at the time of Hoover's death, but for reasons difficult to determine at this remove, was not published until 2011.

The present volume was intended by Hoover to be a companion to Freedom Betrayed, focussing on domestic policy in his post-presidential career. Over the years, he envisioned publishing the work in various forms, but by the early 1950s he had given the book its present title and accumulated 564 pages of typeset page proofs. Due to other duties, and Hoover's decision to concentrate his efforts on Freedom Betrayed, little was done on the manuscript after he set it aside in 1955. It is only through the scholarship of the editor, drawing upon Hoover's draft, but also documents from the Hoover Institution and the Hoover Presidential Library, that this work has been assembled in its present form. The editor has also collected a variety of relevant documents, some of which Hoover cited or incorporated in earlier versions of the work, into a comprehensive appendix. There are extensive source citations and notes about discrepancies between Hoover's quotation of documents and speeches and other published versions of them.

Of all the crusades chronicled here, the bulk of the work is devoted to “The Crusade Against Collectivism in American Life”, and Hoover's words on the topic are so pithy and relevant to the present state of affairs in the United States that one suspects that a brave, ambitious, but less than original politician who simply cut and pasted Hoover's words into his own speeches would rapidly become the darling of liberty-minded members of the Republican party. I cannot think of any present-day Republican, even darlings of the Tea Party, who draws the contrast between the American tradition of individual liberty and enterprise and the grey uniformity of collectivism as Hoover does here. And Hoover does it with a firm intellectual grounding in the history of America and the world, personal knowledge from having lived and worked in countries around the world, and an engineer's pragmatism about doing what works, not what sounds good in a speech or makes people feel good about themselves.

This is somewhat of a surprise. Hoover was, in many ways, a progressive—Calvin Coolidge called him “wonder boy”. He was an enthusiastic believer in trust-busting and regulation as a counterpoise to concentration of economic power. He was a protectionist who supported the tariff to protect farmers and industry from foreign competition. He supported income and inheritance taxes “to regulate over-accumulations of wealth.” He was no libertarian, nor even a “light hand on the tiller” executive like Coolidge.

And yet he totally grasped the threat to liberty which the intrusive regulatory and administrative state represented. It's difficult to start quoting Hoover without retyping the entire book, as there is line after line, paragraph after paragraph, and page after page which are not only completely applicable to the current predicament of the U.S., but guaranteed applause lines were they uttered before a crowd of freedom loving citizens of that country. Please indulge me in a few (comments in italics are my own).

(On his electoral defeat)   Democracy is not a polite employer.

We cannot extend the mastery of government over the daily life of a people without somewhere making it master of people's souls and thoughts.

(On JournoList, vintage 1934)   I soon learned that the reviewers of the New York Times, the New York Herald Tribune, the Saturday Review and of other journals of review in New York kept in touch to determine in what manner they should destroy books which were not to their liking.

Who then pays? It is the same economic middle class and the poor. That would still be true if the rich were taxed to the whole amount of their fortunes….

Blessed are the young, for they shall inherit the national debt….

Regulation should be by specific law, that all who run may read.

It would be far better that the party go down to defeat with the banner of principle flying than to win by pussyfooting.

The seizure by the government of the communications of persons not charged with wrong-doing justifies the immoral conduct of every snooper.

I could quote dozens more. Should Hoover re-appear and give a composite of what he writes here as a keynote speech at the 2016 Republican convention, and if it hasn't been packed with establishment cronies, I expect he would be interrupted every few lines with chants of “Hoo-ver, Hoo-ver” and nominated by acclamation.

It is sad that in the U.S. in the age of Obama there is no statesman with the stature, knowledge, and eloquence of Hoover who is making the case for liberty and warning of the inevitable tyranny which awaits at the end of the road to serfdom. There are voices articulating the message which Hoover expresses so pellucidly here, but in today's media environment they don't have access to the kind of platform Hoover did when his post-presidential policy speeches were routinely broadcast nationwide. Given that he has been reviled ever since his presidency, not just by Democrats but by many in his own party, it's odd to feel nostalgia for Hoover, but Obama will do that to you.

In the Kindle edition the index cites page numbers in the hardcover edition which, since the Kindle edition does not include real page numbers, are completely useless.

 Permalink

Chaikin, Andrew. John Glenn: America's Astronaut. Washington: Smithsonian Books, 2014. ISBN 978-1-58834-486-1.
This short book (around 126 pages print equivalent), available only for the Kindle as a “Kindle single” at a modest price, chronicles the life and space missions of the first American to orbit the Earth. John Glenn grew up in a small Ohio town, the son of a plumber, and matured during the first great depression. His course in life was set when, in 1929, his father took his eight-year-old son on a joy ride offered by a pilot at a local airfield in a Waco biplane. After that, Glenn filled up his room with model airplanes, intently followed news of air racers and pioneers of exploration by air, and in 1938 attended the Cleveland Air Races. There seemed little hope of his achieving his dream of becoming an airman himself: pilot training was expensive, and his family, while making ends meet during the depression, couldn't afford such a luxury.

With the war in Europe underway and the U.S. beginning to rearm and prepare for possible hostilities, Glenn heard of a government program, the Civilian Pilot Training Program, which would pay for his flying lessons and give him college credit for taking them. He entered the program immediately and received his pilot's license in May 1942. By then, the world was a very different place. Glenn dropped out of college in his junior year and applied for the Army Air Corps. When they dawdled accepting him, he volunteered for the Navy, which immediately sent him to flight school. After completing advanced flight training, he transferred to the Marine Corps, which was seeking aviators.

Sent to the South Pacific theatre, he flew 59 combat missions, mostly in the close air support of ground troops in which Marine pilots specialise. With the end of the war, he decided to make the Marines his career and rotated through a number of stateside posts. After the outbreak of the Korean War, he hoped to see action in the jet combat emerging there and in 1953 arrived in country, again flying close air support. But an exchange program with the Air Force finally allowed him to achieve his ambition of engaging in air to air combat at ten miles a minute. He completed 90 combat missions in Korea, and emerged as one of the Marine Corps' most distinguished pilots.

Glenn parlayed his combat record into a test pilot position, which allowed him to fly the newest and hottest aircraft of the Navy and Marines. When NASA went looking for pilots for its Mercury manned spaceflight program, Glenn was naturally near the top of the list, and was among the 110 military test pilots invited to the top secret briefing about the project. Despite not meeting all of the formal selection criteria (he lacked a college degree), he performed superbly in all of the harrowing tests to which candidates were subjected, made cut after cut, and was among the seven selected to be the first astronauts.

This book, with copious illustrations and two embedded videos, chronicles Glenn's career, his harrowing first flight into space, his 1998 return to space on Space Shuttle Discovery on STS-95, and his 24 year stint in the U.S. Senate. I found the picture of Glenn after his pioneering flight somewhat airbrushed. It is said that while in the Senate, “He was known as one of NASA's strongest supporters on Capitol Hill…”, and yet in fact, while not one of the rabid Democrats like Walter Mondale who tried to kill NASA, he did not speak out as an advocate for a more aggressive space program aimed at expanding the human presence in space. His return to space is presented as the result of his assiduously promoting the benefits of space research for gerontology rather than as a political junket by a senator which would generate publicity for NASA at a time when many people had tuned out its routine missions. (And if there was so much to be learned by flying elderly people in space, why was it never done again?)

John Glenn was a quintessential product of the old, tough America. A hero in two wars, test pilot when that was one of the most risky of occupations, and first to ride the thin-skinned pressure-stabilised Atlas rocket into orbit, his place in history is assured. His subsequent career as a politician was not particularly distinguished: he initiated few pieces of significant legislation and never became a figure on the national stage. His campaign for the 1984 Democratic presidential nomination went nowhere, and he was implicated in the “Keating Five” scandal. John Glenn accomplished enough in the first forty-five years of his life to earn him a secure place in American history. This book does an excellent job of recounting those events and placing them in the context of the time. If it goes a bit too far in lionising his subsequent career, that's understandable: a biographer shouldn't always succumb to balance when dealing with a hero.

 Permalink

Benson, Robert Hugh. Lord of the World. Seattle: CreateSpace, [1907] 2013. ISBN 978-1-4841-2706-3.
In the early years of the 21st century, humanism and secularism are ascendant in Europe. Many churches exist only as monuments to the past, and mainstream religions are hæmorrhaging adherents—only the Roman Catholic church remains moored to its traditions, and its influence is largely confined to Rome and Ireland. A European Parliament is asserting its power over formerly sovereign nations, and people seem resigned to losing their national identity. Old-age pensions and the extension of welfare benefits to those displaced from jobs in occupations which have become obsolete create a voting bloc guaranteed to support those who pay these benefits. The loss of belief in an eternal soul has cheapened human life, and euthanasia has become accepted, not only for the gravely ill and injured but also for those just weary of life.

This novel was published in 1907.

G. K. Chesterton is reputed to have said “When Man ceases to worship God he does not worship nothing but worships everything.” I say “reputed” because there is no evidence whatsoever he actually said this, although he said a number of other things which might be conflated into a similar statement. This dystopian novel illustrates how a society which has “moved on” from God toward a celebration of Humanity as deity is vulnerable to a charismatic figure who bears the eschaton in his hands. It is simply stunning how the author, without any knowledge of the great convulsions which were to ensue in the 20th century, so precisely forecast the humanistic spiritual desert of the 21st.

This is a novel of the coming of the Antichrist and the battle between the remnant of believers and coercive secularism reinforced by an emerging pagan cult satisfying our human thirst for transcendence. What is masterful about it is that while religious themes deeply underlie it, if you simply ignore all of them, it is a thriller with deep philosophical roots. We live today in a time when religion is under unprecedented assault by humanism, and the threat to the sanctity of life has gone far beyond the imagination of the author.

This novel was written more than a century ago, but is set in our times and could not be more relevant to our present circumstances. How often has a work of dystopian science fiction been cited by the Supreme Pontiff of the Roman Catholic Church? Contemporary readers may find some of the untranslated citations from the Latin Mass obscure: that's what your search engine exists to illumine.

This work is in the public domain, and a number of print and electronic editions are available. I read this Kindle edition because it was (and is, at this writing) free. The formatting is less than perfect, but it is perfectly readable. A free electronic edition in a variety of formats can be downloaded from Project Gutenberg.

 Permalink

Cawdron, Peter. Children's Crusade. Seattle: Amazon Digital Services, 2014. ASIN B00JFHIMQI.
This novella, around 80 pages print equivalent and available only for the Kindle, is set in the world of Kurt Vonnegut's Slaughterhouse-Five. The publisher has licensed the rights for fiction using characters and circumstances created by Vonnegut, and this is a part of “The World of Kurt Vonnegut” series. If you haven't read Slaughterhouse-Five you will miss a great deal in this story.

Here we encounter Billy Pilgrim and Montana Wildhack in their alien zoo on Tralfamadore. Their zookeeper, a Tralfamadorian whom Montana nicknamed Stained for what looked like a birthmark on the face, has taken to visiting the humans when the zoo is closed, communicating with them telepathically as Tralfs do. Perceiving time as a true fourth dimension they can browse at will, Tralfs are fascinated with humans who, apart from Billy, live sequential lives and cannot jump around to explore events in their history.

Stained, like most Tralfs, believes that most momentous events in history are the work not of great leaders but of “little people” who accomplish great things when confronted with extraordinary circumstances. He (pronouns get complicated when there are five sexes, so I'll just pick one) sends Montana and Billy on telepathic journeys into human history, one at the dawn of human civilisation and another when a great civilisation veered into savagery, to show how a courageous individual with a sense of what is right can make all the difference. Finally they voyage together to a scene in human history which will bring tears to your eyes.

This narrative is artfully intercut with scenes of Vonnegut discovering the realities of life as a hard-boiled reporter at the City News Bureau of Chicago. This story is written in the spirit of Vonnegut and with some of the same stylistic flourishes, but I didn't get the sense the author went overboard in adopting Vonnegut's voice. The result worked superbly for this reader.

I read a pre-publication manuscript which the author kindly shared with me.

 Permalink

May 2014

Lewis, Michael. Flash Boys. New York: W. W. Norton, 2014. ISBN 978-0-393-24466-3.
Back in the bad old days before regulation of financial markets, one of the most common scams perpetrated by stockbrokers against their customers was “front running”. When a customer placed an order to buy a large block of stock, which order would be sufficient to move the market price of the stock higher, the broker would first place a smaller order to buy the same stock for its own account which would be filled without moving the market very much. Then the customer order would be placed, resulting in the market moving higher. The broker would then immediately sell the stock it had bought at the higher market price and pocket the difference. The profit on each individual transaction would be small, but if you add this up over all the volume of a broker's trades it is substantial. (For a sell order, the broker simply inverts the sense of the transactions.) Front running amounts to picking the customer's pocket to line that of the broker: had the customer's order not been front run, it would have executed at a better price. Consequently, front running has long been illegal and market regulators look closely at transaction histories to detect evidence of such criminality.

In the first decade of the 21st century, traders in the U.S. stock market discovered the market was behaving in a distinctly odd fashion. They had been used to seeing the bids (offers to buy) and asks (offers to sell) on their terminals and were accustomed to placing an order and seeing it hit by the offers in the market. But now, when they placed an order, the offers on the other side of the trade would instantly evaporate, only to come back at a price adverse to them. Many people running hundreds of billions of dollars in hedge, mutual, and pension funds had no idea what was going on, but they were certain the markets were rigged against them. Brad Katsuyama, working at the Royal Bank of Canada's Wall Street office, decided to get to the bottom of the mystery, and eventually discovered the financial equivalent of what you see when you lift up a sheet of wet cardboard in your yard. Due to regulations intended to make financial markets more efficient and fair, the monolithic stock exchanges in the U.S. had fractured into dozens of computer-mediated exchanges which traded the same securities. A broker seeking to buy stock on behalf of a customer could route the order to any of these exchanges based upon its own proprietary algorithm, or might match the order with that of another customer within its own “dark pool”, whence the transaction was completely opaque to the outside market.

But there were other players involved. Often co-located in or near the buildings housing the exchanges (most of which are in New Jersey, which has such a sterling reputation for probity) were the servers of “high frequency traders” (HFTs), who placed and cancelled orders in times measured in microseconds. What the HFTs were doing was, in a nutshell, front running. Here's how it works: the HFT places orders of a minimum size (typically 100 shares) for a large number of frequently traded stocks on numerous exchanges. When one of these orders is hit, the HFT immediately blasts in orders to other exchanges, which have not yet reacted to the buy order, and acquires sufficient shares to fill the original order before the price moves higher. This will, in turn, move the market higher and once it does, the original buy order is filled at the higher price. The HFT pockets the difference. A millisecond in advance can, and does, turn into billions of dollars of profit looted from investors. And all of this is not only completely legal, many of the exchanges bend over backward to attract and support HFTs in return for the fees they pay, creating bizarre kinds of orders whose only purpose for existing is to facilitate HFT strategies.
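To make the mechanics concrete, here is a minimal sketch in Python of the race just described. Everything in it (exchange names, prices, sizes, and latencies) is hypothetical, invented purely for illustration; real exchanges, order types, and routing are vastly more complicated.

```python
# Toy model of the HFT front-running race described above. All names,
# prices, sizes, and latencies are hypothetical, for illustration only.

EXCHANGES = {"A": {"ask": 10.00, "size": 100},   # HFT's 100-share bait order
             "B": {"ask": 10.00, "size": 200}}   # genuine resting offers

def race(hft_latency_us, broker_latency_us, shares=200, markup=0.02):
    """When the bait on A is hit, the HFT races the broker's routed
    order to B. If it wins, it buys the resting shares at the old ask
    and re-offers them higher; the investor pays the markup."""
    old_ask = EXCHANGES["B"]["ask"]
    if hft_latency_us < broker_latency_us:        # HFT reaches B first
        investor_pays = (old_ask + markup) * shares
        hft_paid = old_ask * shares
        return investor_pays - hft_paid           # riskless skim: markup x shares
    return 0.0                                    # broker arrived first

print(race(hft_latency_us=200, broker_latency_us=1500))  # about 4 dollars
```

Four dollars looks trivial, but multiply it by the millions of orders placed every trading day and the scale of the skim becomes apparent.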

As Brad investigated the secretive world of HFTs, he discovered the curious subculture of Russian programmers who, having spent part of their lives learning how to game the Soviet system, took naturally to discovering how to game the much more lucrative world of Wall Street. Finally, he decides there is a business opportunity in creating an exchange which distinguishes itself from the others by not being crooked. This exchange, IEX, (it was originally to be called “Investors Exchange”, but the founders realised that the obvious Internet domain name, investorsexchange.com, could be infelicitously parsed into three words as well as two), would include technological constraints (including 38 miles of fibre optic cable in a box to create latency between the point of presence where traders could attach and the servers which matched bids and asks) which rendered the strategies of the HFTs impotent and obsolete.
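The effect of the coiled fibre is easy to quantify with a back-of-the-envelope calculation (the refractive index of silica fibre, about 1.47, is my assumed figure, not one from the book):

```python
# One-way delay introduced by 38 miles of coiled optical fibre.
# The refractive index of silica fibre (~1.47) is an assumed value.
C = 299_792.458                 # speed of light in vacuum, km/s
coil_km = 38 * 1.609344         # 38 miles in kilometres
delay_s = coil_km * 1.47 / C    # light travels slower in glass
print(f"{delay_s * 1e6:.0f} microseconds")   # roughly 300 each way
```

A few hundred microseconds is utterly imperceptible to a human investor, but an eternity to a strategy which lives or dies on being a few microseconds faster than everybody else.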

Was it conceivable one could be successful on Wall Street by being honest? Perhaps one had to be a Canadian to entertain such a notion, but in the event, it was. But it wasn't easy. IEX rapidly discovered that Wall Street firms, given orders by customers to be executed on IEX, sent them elsewhere to venues more profitable to the broker. Confidentiality rules prohibited IEX from identifying the miscreants, but nothing prevented them, with the brokers' permission, from identifying those who weren't crooked. This worked quite well.

I'm usually pretty difficult to shock when it comes to the underside of the financial system. For decades, my working assumption has been that anything, until proven otherwise, is a scam aimed at picking the pockets of customers, and sadly I have found this presumption correct in a large majority of cases. Still, this book was startling. It's amazing the creepy crawlers you see when you lift up that piece of cardboard, and to anybody with an engineering background the rickety structure and fantastic instability of what are supposed to be the capital markets of the world's leading economy is nothing less than shocking. It is no wonder such a system is prone to “flash crashes” and other excursions. An operating system designer who built such a system would be considered guilty of malfeasance (unless, I suppose, he worked for Microsoft, in which case he'd be a candidate for employee of the year), and yet it is tolerated at the heart of a financial system which, if it collapses, can bring down the world's economy.

Now, one can argue that it isn't such a big thing if somebody shaves a penny or two off the price of a stock you buy or sell. If you're a medium- or long-term investor, that'll make little difference in the results. But what will make your blood boil is that the stock broker with whom you're doing business may be complicit in this, and pocketing part of the take. Many people in the real world look at Wall Street and conclude “The markets are rigged; the banks and brokers are crooked; and the system is stacked against the investor.” As this book demonstrates, they are, for the most part, absolutely right.

 Permalink

Howe, Steven D. Honor Bound Honor Born. Seattle: Amazon Digital Services, 2011. ASIN B005JPZ4LQ.
During the author's twenty year career at the Los Alamos National Laboratory, he worked on a variety of technologies including nuclear propulsion and applications of nuclear power to space exploration and development. Since the 1980s he has been an advocate of a “power rich” approach to space missions, in particular lunar and Mars bases.

Most NASA design studies for bases have assumed that almost all of the mass required to establish the base and supply its crew must be brought from the Earth, and that electricity will be provided by solar panels or radioisotope thermoelectric generators which provide only limited amounts of power. (On the Moon, where days and nights are two weeks long, solar power is particularly problematic.) Howe explored how the economics of establishing a base would change if it had a compact nuclear fission reactor which could produce more electrical and thermal power (say, 200 kilowatts electrical) than the base required. This would allow the resources of the local environment to be exploited through a variety of industrial processes: “in-situ resource utilisation” (ISRU), which is just space jargon for living off the land.

For example, the Moon's crust is about 40% oxygen, 20% silicon, 12% iron, and 8% aluminium. With abundant power, this regolith can be melted and processed to extract these elements and recombine them into useful materials for the base: oxygen to breathe, iron for structural elements, glass (silicon plus oxygen) for windows and greenhouses, and so on. With the addition of nutrients and trace elements brought from Earth, lunar regolith can be used to grow crops and, with composting of waste many of these nutrients can be recycled. Note that none of this assumes discovery of water ice in perpetually shaded craters at the lunar poles: this can be done anywhere on the Moon. If water is present at the poles, the need to import hydrogen will be eliminated.
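To get a sense of the leverage this provides, consider what those mass fractions mean per tonne of feedstock. A trivial sketch using the approximate percentages quoted above:

```python
# Approximate yield of useful elements from one tonne of lunar
# regolith, using the rough crustal mass fractions quoted above.
fractions = {"oxygen": 0.40, "silicon": 0.20, "iron": 0.12, "aluminium": 0.08}
for element, f in fractions.items():
    print(f"{element:>9}: {1000 * f:.0f} kg per tonne of regolith")
```

Four hundred kilograms of oxygen per tonne of dirt, with no dependence upon polar ice, and every kilogram liberated on site is a kilogram which need not be launched from Earth.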

ISRU is a complete game-changer. If Conestoga wagons had to set out from the east coast of North America along the Oregon Trail carrying everything they needed for the entire journey, the trip would have been impossible. But the emigrants knew they could collect water, hunt game to eat, gather edible plants, and cut wood to make repairs, and so they only needed to take those items with them which weren't available along the way. So it can be on the Moon, and to an even greater extent on Mars. It's just that to liberate those necessities of life from the dead surface of those bodies requires lots of energy—but we know how to do that.

Now, the author could have written a dry monograph about lunar ISRU to add to the list of technical papers he has already published on the topic, but instead he made it the centrepiece of this science fiction novel, set in the near future, in which Selena Corp mounts a private mission to the Moon, funded on a shoestring, to land Hawk Stanton on the lunar surface with a nuclear reactor and what he needs to bootstrap a lunar base which will support him until he is relieved by the next mission, which will bring more settlers to expand the base. Using fiction as a vehicle to illustrate a mission concept isn't new: Wernher von Braun's original draft (never published) of The Mars Project was also a novel based upon his mission design (when the book by that name was finally published in 1953, it contained only the technical appendix to the novel).

What is different is that while by all accounts of those who have read it, von Braun's novel definitively established that he made the right career choice when he became an engineer rather than a fictioneer, Steven Howe's talents encompass both endeavours. While rich in technical detail (including an appendix which cites research papers regarding technologies used in the novel), this is a gripping page-turner with fleshed-out and complex characters, suspense, plot twists, and a back story of how coercive government reacts when something in which it has had no interest for decades suddenly seems ready to slip through its edacious claws. Hawk is alone and a long way from home, so that any injury or illness is a potential threat to his life and to the mission. The psychology of living and working in such an environment plays a part in the story. And these may not be the greatest threat he faces.

This is an excellent story, which can be read purely as a thriller, an exploration of the potential of lunar ISRU, or both. In an afterword the author says, “Someday, someone will do the missions I have described in this book. I suspect, however, they will not be Americans.” I'm not sure—they may be Americans, but they certainly won't work for NASA. The cover illustration is brilliant.

This book was originally published in 1997 in a paperback edition by Lunatech Press. That edition is now out of print and used copies are scarce and expensive. At this writing, the Kindle edition is just US$ 1.99.

 Permalink

Murray, Charles. The Curmudgeon's Guide to Getting Ahead. New York: Crown Business, 2014. ISBN 978-0-8041-4144-4.
Who, after reaching middle age and having learned, through the tedious but persuasive process of trial and error, what works and what doesn't, how to decide who is worthy of trust, and to distinguish passing fads from enduring values, hasn't dreamed of having a conversation with their twenty year old self, downloading this painfully acquired wisdom to give their younger self a leg up on the slippery, knife-edged rungs of the ladder of life?

This slim book (144 pages) is a concentrated dose of wisdom applicable to young people entering the job market today. Those of my generation and the author's (he is a few years my senior) often worked at summer jobs during high school and part-time jobs while at university. This provided an introduction to the workplace, with its different social interactions than school or family life (in the business world, don't expect to be thanked for doing your job). Today's graduates entering the workforce often have no experience whatsoever in that environment and are bewildered because the incentives are so different from anything they've experienced before. They may have been a star student, but now they find themselves doing tedious work with little intellectual content, under strict deadlines, reporting to superiors who treat them as replaceable minions, not colleagues. Welcome to the real world.

This is an intensely practical book. Drawing upon a series of postings he made on an internal site for interns and entry-level personnel at the American Enterprise Institute, the author gives guidelines on writing, speaking, manners, appearance, and life strategy. As the author notes (p. 16), “Lots of the senior people who can help or hinder your career are closeted curmudgeons like me, including executives in their forties who have every appearance of being open minded and cool.” Even if you do not wish to become a curmudgeon yourself as you age (good luck with that, dude or dudette!), your advancement in your career will depend upon the approbation of those people you will become if you are fortunate enough to one day advance to their positions.

As a curmudgeon myself (hey, I hadn't yet turned forty when I found myself wandering the corridors of the company I'd founded and silently asking myself, “Who hired that?”), I found nothing in this book with which I disagree, and my only regret is that I couldn't have read it when I was 20. He warns millennials, “You're approaching adulthood with the elastic limit of a Baccarat champagne flute” (p. 96) and counsels them to spend some of those years when their plasticity is greatest and the penalty for errors is minimal in stretching themselves beyond their comfort zone, preparing for the challenges and adversity which will no doubt come later in life. Doug Casey has said that he could parachute naked into a country in sub-Saharan Africa and within one week be in the ruler's office pitching a development scheme. That's rather more extreme than what Murray is advocating, but why not go large? Geronimo!

Throughout, Murray argues that what are often disdained as clichés are simply the accumulated wisdom of hundreds of generations of massively parallel trial and error search of the space of solutions of human problems, and that we ignore them at our peril. This is the essence of conservatism—valuing the wisdom of the past. But that does not mean one should be a conservative in the sense of believing that the past provides a unique template for the future. Those who came before did not have the computational power we have, nor the ability to communicate data worldwide almost instantaneously and nearly for free, nor the capacity, given the will, to migrate from Earth and make our species multi-planetary, nor to fix the aging bug and live forever. These innovations will fundamentally change human and post-human society, and yet I believe those who create them, and those who prosper in those new worlds will be exemplars of the timeless virtues which Murray describes here.

And when you get a tattoo or piercing, consider how it will look when you're seventy.

 Permalink

Sheldrake, Rupert. Science Set Free. New York: Random House, 2011. ISBN 978-0-7704-3672-8.
In this book, the author argues that science, as it is practiced today, has become prisoner to a collection of dogmas which constrain what should be free inquiry into the phenomena it investigates. These dogmas are not the principal theories of modern science such as the standard models of particle physics and cosmology, quantum mechanics, general relativity, or evolution (scientists work on a broad front to falsify these theories, knowing that any evidence to the contrary will win a ticket to Stockholm), but rather higher-level beliefs, often with remarkably little experimental foundation, which few people are working to test. It isn't so much that questioning these dogmas will result in excommunication from science, but rather that few working scientists ever think seriously about whether they might be wrong.

Suppose an astrophysicist in the 1960s started raving that everything we could see through our telescopes or had experimented with in our laboratories made up less than 5% of the mass of the universe, that around 27% was invisible matter whose composition we knew nothing about at all, and that the balance was invisible energy which was causing the expansion of the universe to accelerate, defying the universal attraction of gravity. Now, this theorist might not be dragged off in a straitjacket, but he would probably find it very difficult to publish his papers in respectable journals and, if he espoused these notions before obtaining tenure, might find them career-limiting. And yet, this is precisely what most present-day cosmologists consider the “standard model”, and it has been supported by experiments to a high degree of precision.

But even this revolution in our view of the universe and our place within it (95% of everything in the universe is unobserved and unknown!) does not challenge the most fundamental dogmas, ten of which are discussed in this book.

1. Is nature mechanical?

Are there self-organising principles of systems which explain the appearance of order and complexity from simpler systems? Do these same principles apply at levels ranging from formation of superclusters of galaxies to the origin of life and its evolution into ever more complex beings? Is the universe better modelled as a mechanism or an organism?

2. Is the total amount of matter and energy always the same?

Conservation of energy is taken almost as an axiom in physics but is now rarely tested. And what about that dark energy? Most cosmologists now believe that it increases without bound as the universe expands. Where does it come from? If we could somehow convert it to useful energy, what would this do to the conservation of energy?

3. Are the laws of nature fixed?

If these laws be fixed, where did they come from? Why do the “fundamental constants” have the values they do? Are they, in fact, constants? These constants have varied in published handbooks over the last 50 years by amounts far greater than the error bars published in those handbooks—why? Are the laws simply habits established by the universe as it is tested? Is this why novel experiments produce results all over the map at the start and then settle down on a stable value as they are repeated? Why do crystallographers find it so difficult to initially crystallise a new compound but then find it increasingly easy thereafter?

4. Is matter unconscious?

If you are conscious, and you believe your brain to be purely a material system, then how can matter be unconscious? Is there something apart from the brain in which consciousness is embodied? If so, what is it? If the matter of your brain is conscious, what other matter could be conscious? The Sun is much larger than your brain and pulses with electromagnetic signals. Is it conscious? What does the Sun think about?

5. Is nature purposeless?

Is it plausible that the universe is the product of randomness devoid of purpose? How did a glowing plasma of subatomic particles organise itself into galaxies, solar systems, planets, life, and eventually scientists who would ask how it all came to be? Why does complexity appear to inexorably increase in systems through which energy flows? Why do patterns assert themselves in nature and persist even in the presence of disruptions? Are there limits to reductionism? Is more different?

6. Is all biological inheritance material?

The softer the science, the harder the dogma. Many physical scientists may take the previous questions as legitimate, albeit eccentric, questions amenable to research, but to question part of the dogma of biology is to whack the wasp nest with the mashie niblick. Our astounding success in sequencing the genomes of numerous organisms and understanding how these genomes are translated (including gene regulation) into the proteins which are assembled into those organisms has been enlightening but has explained much less than many enthusiasts expected. Is there something more going on? Is that “junk DNA” really junk, or is it significant? Is genetic transfer between parents and offspring the only means of information transfer?

7. Are memories stored as material traces?

Try to find a neuroscientist who takes seriously the idea that memories are not encoded somehow in the connections and weights of synapses within the brain. And yet, for half a century, every attempt to determine precisely how and where memories are stored has failed. Could there be something more going on? Recent experiments have indicated that Carolina Sphinx moths (Manduca sexta) remember aversions which they have learned as caterpillars, despite their nervous system being mostly dissolved and reconstituted during metamorphosis. How does this work?

8. Are minds confined to brains?

Somewhere between 70 and 97% of people surveyed in Europe and North America report having experienced the sense of being stared at or of having another person they were staring at from behind react to their stare. In experimental tests, involving tens of thousands of trials, some performed over closed circuit television without a direct visual link, 55% of people could detect when they were being stared at, while 50% would be expected by chance. Although the effect size was small, with the number of trials the result was highly significant.
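It is straightforward to check what a 55% hit rate against a 50% chance expectation means statistically. A quick sketch, assuming a hypothetical total of 20,000 trials (the book reports results aggregated over tens of thousands), using the normal approximation to the binomial:

```python
# Statistical significance of a 55% hit rate where chance predicts 50%.
# The trial count of 20,000 is an assumption for illustration only.
import math

n, p0, p_obs = 20_000, 0.50, 0.55
se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
z = (p_obs - p0) / se
print(f"z = {z:.1f}")               # z of about 14: far beyond chance
```

A z-score of 14 corresponds to a probability of arising by chance so small it is difficult to express in words, which is how a small effect size, multiplied over many trials, yields such high significance.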

9. Are psychic phenomena illusory?

More than a century of psychical research has produced ever-better controlled experiments which have converged upon results whose effect sizes, while small, are established with greater statistical significance than that upon which pharmaceuticals are approved or rejected in clinical trials. Should we reject this evidence because we can't figure out the mechanism by which it works?

10. Is mechanistic medicine the only kind that really works?

We are the descendants of billions of generations of organisms who survived and reproduced before the advent of doctors. Evidently, we have been well-equipped by the ruthless process of evolution to heal ourselves, at least until we've reproduced and raised our offspring. Understanding of the causes of communicable diseases, public health measures, hygiene in hospitals, and surgical and pharmaceutical interventions have dramatically lengthened our lifespans and increased the years in which we are healthy and active. But does this explain everything? Since 2009 in the United States, response to placebos has been increasing: why? Why do we spend more and more on interventions for the gravely ill and little or nothing on research into complementary therapies which have been shown, in the few formal clinical tests performed, to reduce the incidence of these diseases?

This is a challenging book which asks many more questions than the few I've summarised above and provides extensive information, including citations to original sources, on research which challenges these dogmas. The author is not advocating abolishing our current enterprise of scientific investigation. Instead, he suggests, we might allocate a small fraction of the budget (say, between 1% and 5%) to look at wild-card alternatives. Allowing these to be chosen by the public from a list of proposals through a mechanism like crowd-funding Web sites would raise the public profile of science and engage the public (who are, after all, footing the bill) in the endeavour. (Note that “mainstream” research projects, for example extending the mission of a spacecraft, would be welcome to compete.)

 Permalink

Johnson, George. Miss Leavitt's Stars. New York: W. W. Norton, 2005. ISBN 978-0-393-32856-1.
Henrietta Swan Leavitt was a computer. No, this is not a tale of artificial intelligence, but rather of the key discovery which allowed astronomers to grasp the enormity of the universe. In the late 19th century it became increasingly common for daughters of modestly prosperous families to attend college. Henrietta Leavitt's father was a Congregational church minister in Ohio whose income allowed him to send his daughter to Oberlin College in 1885. In 1888 she transferred to the Society for the Collegiate Instruction of Women (later Radcliffe College) in Cambridge, Massachusetts, where she earned a bachelor's degree in 1892. In her senior year, she took a course in astronomy which sparked a lifetime fascination with the stars. After graduation, she remained in Cambridge and the next year began volunteering at the Harvard College Observatory; she was later put on salary.

The director of the observatory, Edward Pickering, realised that while at the time it was considered inappropriate for women to sit up all night operating a telescope, much of the work of astronomy consisted of tedious tasks such as measuring the position and brightness of stars on photographic plates, compiling catalogues, and performing analyses based upon their data. There was a pool of college educated women (especially in the Boston area) who were unlikely to find work as scientists but who were perfectly capable of doing this office work so essential to the progress of astronomy. Further, they would work for a fraction of the salary of a professional astronomer, and Pickering, a shrewd administrator as well as a scientist, reasoned he could boost the output of his observatory by a substantial factor within the available budget. So it was that Leavitt was hired to work full-time at the observatory with a job title of “computer” and a salary of US$ 0.25 per hour (she later got a raise to 0.30 which, adjusted for inflation, is comparable to the U.S. federal minimum wage in 2013).

There was no shortage of work for Leavitt and her fellow computers (nicknamed “Pickering's Harem”) to do. The major project underway at the observatory was the creation of a catalogue of the position, magnitude, and colour of all stars visible from the northern hemisphere to the limiting magnitude of the telescope available. This was done by exposing glass photographic plates in long time exposures while keeping the telescope precisely aimed at a given patch of the sky (although telescopes of the era had “clock drives” which approximately tracked the apparent motion of the sky, imprecision in the mechanism required a human observer [all men!] to track a guide star through an eyepiece during the long exposure and manually keep the star centred on the crosshairs with fine adjustment controls). Since each plate covered only a small fraction of the sky, the work of surveying the entire hemisphere was long, tedious, and often frustrating, as a cloud might drift across the field of view and ruin the exposure.

But if the work at the telescope was seemingly endless, analysing the plates it produced was far more arduous. Each plate would contain images of thousands of stars, the position and brightness (inferred from the size of the star's image on the plate) of which had to be measured and recorded. Further, plates taken through different colour filters had to be compared, with the difference in brightness used to estimate each star's colour and hence temperature. And if that weren't enough, plates taken of the same field at different times were compared to discover stars whose brightness varied from one time to another.

There are two kinds of these variable stars. The first consist of multiple star systems where one star periodically eclipses another, with the simplest case being an “eclipsing binary”: two stars which eclipse one another. Intrinsic variable stars are individual stars whose brightness varies over time, often accompanied by a change in the star's colour. Both kinds of variable stars were important to astronomers, with intrinsic variables offering clues to astrophysics and the evolution of stars.

Leavitt was called a “variable star ‘fiend’ ” by a Princeton astronomer in a letter to Pickering, commenting on the flood of discoveries she published in the Harvard Observatory's journals. For the ambitious Pickering, one hemisphere did not suffice. He arranged for an observatory to be established in Arequipa, Peru, which would allow stars visible only from the southern hemisphere to be observed and catalogued. A 24 inch telescope and its accessories were shipped around Cape Horn from Boston, and before long the southern sky was being photographed, with the plates sent to Harvard for measurement and cataloguing. When the new plates arrived at Harvard, it was the computers, not the astronomers, who scrutinised them to see what had been discovered.

Now, star catalogues of the kind Pickering was preparing, however useful they were to astronomers, were essentially two-dimensional. They give the position of the star on the sky, but no information about how distant it is from the solar system. Indeed, by the end of the 19th century only the distances of a few dozen of the very closest stars had been measured by stellar parallax; the distances of all the rest were a complete mystery, and consequently the scale of the visible universe was utterly unknown. Because the intrinsic brightness of stars varies over an enormous range (some stars are a million times more luminous than the Sun, which is itself ten thousand times brighter than some dwarf stars), a star of a given magnitude (brightness as observed from Earth) may either be a nearby star of modest brightness or a brilliant supergiant star far away.

One of the first intrinsic variable stars to be studied in depth was Delta Cephei, found to be variable in 1784. It is the prototype Cepheid variable, many more of which were discovered by Leavitt. Cepheids are old, massive stars, which have burnt up most of their hydrogen fuel and vary with a characteristic sawtooth-shaped light curve with periods ranging from days to months. In Leavitt's time the mechanism for this variability was unknown, but it is now understood to be due to oscillations in the star's radius as the ionisation state of helium in the star's outer layer cycles between opaque and transparent states, repeatedly trapping the star's energy and causing it to expand, then releasing it, making the star contract.

When examining the plates from the telescope in Peru, Leavitt was drawn to the Magellanic clouds, which look like little bits of the Milky Way which broke off and migrated to distant parts of the sky (we now know them to be dwarf galaxies which may be in orbit around the Milky Way). Fascinated by the clouds, she made assiduous searches of multiple plates showing them and eventually published, in 1908, a list of 1,777 variable stars she had discovered in them. While astronomers did not know the exact nature of the Magellanic clouds, they were confident of two things: they were very distant (since stars within them of spectral types which are inherently bright were much dimmer than those seen elsewhere in the sky), and all of the stars in them were about the same distance from the solar system, since it was evident the clouds must be gravitationally bound to persist over time.

Leavitt's 1908 paper contained one of the greatest understatements in all of the scientific literature: “It is worthy of notice that the brightest variables have the longer periods.” She had discovered a measuring stick for the universe. In examining Cepheids among the variables in her list, she observed that there was a simple linear relationship between the period of pulsation and how bright the star appeared. But since all of the Cepheids in the clouds must be at about the same distance, that meant their absolute brightness could be determined from their periods. This made the Cepheids “standard candles” which could be used to chart the galaxy and beyond. Since they are so bright, they could be observed at great distances.

To take a simple case, suppose you observe a Cepheid in a star cluster, and another in a different part of the sky. The two have about the same period of oscillation, but the one in the cluster has one quarter the brightness at Earth of the other. Since the periods are the same, you know the inherent luminosities of the two stars are alike, so according to the inverse-square law the cluster must be twice as distant as the other star. If the Cepheids have different periods, the relationship Leavitt discovered can be used to compute the relative difference in their luminosity, again allowing their distances to be compared.
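The arithmetic in that example is nothing more than the inverse-square law, and can be put in a few lines (a sketch which ignores interstellar extinction and works in linear flux rather than the astronomer's magnitude scale):

```python
# Relative distance of two Cepheids of equal period (hence equal
# intrinsic luminosity) from their apparent brightnesses, via the
# inverse-square law. Extinction and magnitudes are ignored.
import math

def distance_ratio(flux_near, flux_far):
    """Return d_far / d_near for stars of equal luminosity."""
    return math.sqrt(flux_near / flux_far)

print(distance_ratio(flux_near=1.0, flux_far=0.25))  # 2.0: twice as far
```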

This method provides a relative distance scale out to as far as you can identify Cepheids and measure their periods, but it does not give their absolute distances. However, if you can measure the distance to any single Cepheid by other means, you can then compute the absolute distances to all of them. Not without controversy, this was accomplished, and for the first time astronomers beheld just how enormous the galaxy was, that the solar system was far from its centre, and that the mysterious “spiral nebulæ” many had argued were clouds of gas or solar systems in formation were entire other galaxies, among a myriad in a universe of breathtaking size. This was the work of others, but all of it was founded on Leavitt's discovery.

Henrietta Leavitt would not live to see all of these consequences of her work. She died of cancer in 1921 at the age of 53, while the debate was still raging over whether the Milky Way was the entire universe or just one of a vast number of “island universes”. Both sides in this controversy based their arguments in large part upon her work.

She was paid just ten cents more per hour than a cotton mill worker, was never given the title “astronomer”, and never made an observation with a telescope, and yet, working endless hours at her desk, she made one of the most profound discoveries of 20th century astronomy, one which is still being refined by precision measurements from the Earth and space today. While the public hardly ever heard her name, she published her work in professional journals, and eminent astronomers were well aware of its significance and her part in creating it. A 66 kilometre crater on the Moon bears her name (the one named after that Armstrong fellow is just 4.6 km, albeit on the near side).

This short book is only in part a biography of Leavitt: apart from her work, she left few traces of her life. It is as much a story of how astronomy was done in her day, and of how she and others made the giant leap of establishing what we now call the cosmic distance ladder. This was a complicated process, with many missteps and controversies along the way, which are well described here.

In the Kindle edition (as viewed on the iPad) the quotations at the start of each chapter are mis-formatted so each character appears on its own line. The index contains references to page numbers in the print edition and is useless because the Kindle edition contains no page numbers.

 Permalink

June 2014

Coppley, Jackson. Tales From Our Near Future. Seattle: CreateSpace, 2014. ISBN 978-1-4961-2851-5.
I am increasingly convinced that the 2020s will be a very interesting decade. As computing power continues its inexorable exponential growth (and there is no reason to believe this growth will abate, except in the aftermath of economic and/or societal collapse), more and more things which seemed absurd just a few years before will become commonplace—consider self-driving cars. This slim book (142 pages in the print edition) collects three unrelated stories set in this era. In each, the author envisions a “soft take-off” scenario rather than the sudden onset of a technological singularity which rapidly renders the world incomprehensible.

These are all “puzzle stories” in the tradition of Isaac Asimov's early short stories. You'll enjoy them best if you just immerse yourself in the world the characters inhabit, get to know them, and then discover what is really going on, which may not be at all what it appears on the surface. By the nature of puzzle stories, almost anything I say about them would be a spoiler, so I'll refrain from getting into details other than asking, “What would it be like to know everything?”, which is the premise of the first story, stated on its first page.

Two of the three stories contain explicit sexual scenes and are not suitable for younger readers. This book was recommended (scroll down a few paragraphs) by Jerry Pournelle.

 Permalink

Geraghty, Jim. The Weed Agency. New York: Crown Forum, 2014. ISBN 978-0-7704-3652-0.
During the Carter administration, the peanut farmer turned president, a man very well acquainted with weeds, created the Agency of Invasive Species (AIS) within the Department of Agriculture to cope with the menace. Well, not really—the agency which occupies centre stage in this farce is fictional but, as the author notes in the preface, the Federal Interagency Committee for the Management of Noxious and Exotic Weeds, the Aquatic Nuisance Species Task Force, the Federal Interagency Committee on Invasive Terrestrial Animals and Pathogens, and the National Invasive Species Council of which they are members, along with a list of other agencies, all do exist. So while it may seem amusing that a bankrupt and over-extended government would have an agency devoted to weeds, in fact the real government has an entire portfolio of such agencies, along with, naturally, a council to co-ordinate their activities.

The AIS has a politically appointed director, but the agency has been run since its inception by Administrative Director Adam Humphrey, career civil service, who is training his deputy, Jack Wilkins, new to the civil service after a frustrating low-level post in the Carter White House, in the ways of the permanent bureaucracy and how to deal with political appointees, members of congress, and rival agencies. Humphrey has an instinct for how to position the agency's mission as political winds shift over the decades: during the Reagan years, as American agriculture's first line of defence against the threat of devastation by Soviet weeds; in the Gingrich era, as the cutting edge of information technology revolutionising citizens' interaction with government; and after the terrorist attacks in 2001, as essential to averting even more disastrous attacks on the nation.

Humphrey and Wilkins are masters of the care and feeding of congressional allies, who are rewarded with agency facilities in their districts, and of neutralising the occasional idealistic budget cutter who wishes to limit the growth of the agency's budget or, horror of horrors, abolish the agency outright.

We also see the agency through the eyes of three young women who arrived there in 1993 suffused with optimism about “reinventing government” and “building a bridge to the twenty-first century”. While each of them—Lisa, hired in the communications office; Jamie, an event co-ordinator; and Ava, a technology systems analyst—was well aware that her position in the federal bureaucracy was deep in the weeds, they believed they had the energy and ambition to excel and rise to positions where they would have the power to effect change for the better.

Then they began actually to work within the structure of the agency and realised what the civil service actually was. Thomas Sowell has remarked that the experience which transformed him from a leftist (actually, a Marxist) into a champion of free markets and individual liberty was working as a summer intern in a federal agency in 1960. He says that after experiencing the civil service first-hand, he realised that whatever the problems of society that concerned him might be, government bureaucracy was not the solution. Lisa, Jamie, and Ava all have similar experiences, and react in different ways. Ava decides she just can't take it any more and is tempted by a job in the middle of the dot-com boom. Her experience is both entertaining and enlightening.

Even the most obscure federal agency has the power to mess up on a colossal scale and wind up on the front page of the Washington Post, the focus of a congressional inquest. So it was to be for the AIS, when an ill wind brought a threat to agriculture in the highly-visible districts of powerful members of congress. All the bureaucratic and political wiles of the agency had to be summoned to counter the threat and allow the agency to continue to do what such organisations do best: nothing.

Jim Geraghty is a veteran reporter, contributing editor, and blogger at National Review; his work has appeared in a long list of other publications. His reportage has always been characterised by a dry wit, but for a first foray into satire and farce, this is a masterful accomplishment. It is as funny as some of the best work of Christopher Buckley, and that's about as good as contemporary political humour gets. Geraghty's plot is not as zany as most of Buckley's, but it is more grounded in the political reality of Washington. One of the most effective devices in the book is to describe this or that absurdity and then add a footnote documenting that what you've just read actually exists, or that an outrageous statement uttered by a character was said on the record by a politician or bureaucrat.

Much of this novel reads like an American version of the British sitcom Yes Minister (Margaret Thatcher's favourite television programme), and although the author doesn't mention it in the author's note or acknowledgements, I suspect that the master civil servant's being named “Humphrey” is an homage to that series. Sharp-eyed readers will discover another oblique reference to Yes Minister in the entry for November 2012 in the final chapter.

 Permalink

Rickards, James. The Death of Money. New York: Portfolio / Penguin, 2014. ISBN 978-1-59184-670-3.
In his 2011 book Currency Wars (November 2011), the author discusses what he sees as an inevitable conflict among fiat currencies for dominance in international trade as the dollar, debased as a result of profligate spending and assumption of debt by the government that issues it, is displaced as the world's preeminent trading and reserve currency. With all currencies backed by nothing more than promises made by those who issue them, the stage is set for a race to the bottom: one government weakens its currency to obtain short-term advantage in international trade, only to have its competitors devalue, setting off a chain of competitive devaluations which disrupt trade, cause investment to be deferred due to uncertainty, and destroy the savings of those holding the currencies in question. In 2011, Rickards wrote that it was still possible to avert an era of currency war, although that was not the way to bet. In this volume, three years later, he surveys the scene and concludes that we are now in the early stages of a collapse of the global monetary system, which will be replaced by something very different from the status quo, but whose details we cannot, at this time, confidently predict. Investors and companies involved in international commerce need to understand what is happening and take steps to protect themselves in the era of turbulence which is ahead.

We often speak of “globalisation” as if it were something new, emerging only in recent years, but in fact it is an ongoing trend which dates from the age of wooden ships and sail. Once ocean commerce became practical in the 18th century, comparative advantage caused production and processing of goods to be concentrated in locations where they could be done most efficiently, linked by the sea lanes. This commerce was enormously facilitated by a global currency—if trading partners all used their own currencies, a plantation owner in the West Indies shipping sugar to Great Britain might see his profit wiped out if the exchange rate between his currency and the British pound changed by the time the ship arrived and he was paid. From the dawn of global trade to the present there has been a global currency. Initially, it was the British pound, backed by gold in the vaults of the Bank of England. Even commerce between, say, Argentina and Italy, was usually denominated in pounds and cleared through banks in London. The impoverishment of Britain in World War I began a shift of the centre of financial power from London to New York, and after World War II the Bretton Woods conference established the U.S. dollar, backed by gold, as the world's reserve and trade currency. The world continued to have a global currency, but now it was issued in Washington, not London. (The communist bloc did not use dollars for trade within itself, but conducted its trade with nations outside the bloc in dollars.) In 1971, the U.S. suspended the convertibility of the dollar to gold, and ever since the dollar has been entirely a fiat currency, backed only by the confidence of those who hold it that they will be able to exchange it for goods in the future.

The international monetary system is now in a most unusual period. The dollar remains the nominal reserve and trade currency, but the fraction of reserves held and trade conducted in dollars continues to fall. All of the major currencies (the dollar, euro, yen, pound, yuan, and rouble) are pure fiat currencies, unbacked by any tangible asset and valued only against one another in ever-shifting foreign exchange markets. Most of these currencies are issued by central banks of governments which have taken on vast amounts of debt, debt which nobody in their right mind believes can ever be paid off and which is approaching levels at which even a modest rise in interest rates to historical mean levels would make the interest impossible to service. There is every reason for countries holding large reserves of dollars to be worried, but there isn't any other currency which looks substantially better as an alternative. The dollar is, essentially, the best horse in the glue factory.

The author argues that we are on the threshold of a collapse of the international monetary system, and that the outlines of what will replace it are not yet clear. The phrase “collapse of the international monetary system” sounds apocalyptic, but we're not talking about some kind of Mad Max societal cataclysm. As the author observes, the international monetary system collapsed three times in the last century: in 1914, 1939, and 1971, and life went on (albeit in the first two cases, with disastrous and sanguinary wars), and eventually the financial system was reconstructed. There were, in each case, winners and losers, and investors who failed to protect themselves against these turbulent changes paid dearly for their complacency.

In this book, the author surveys the evolving international financial scene. He comes to conclusions which may surprise observers from a variety of perspectives. He believes the Euro is here to stay, and that its advantages to Germany coupled with Germany's economic power will carry it through its current problems. Ultimately, the countries on the periphery will consider the Euro, whatever its costs to them in unemployment and austerity, better than the instability of their national currencies before joining the Eurozone. China is seen as the victim of its own success, with financial warlords skimming off the prosperity of its rapid growth, aided by an opaque and deeply corrupt political class. The developing world is increasingly forging bilateral agreements which bypass the dollar and trade in their own currencies.

What is an investor to do faced with such uncertainty? Well, that's far from clear. The one thing one shouldn't do is assume the present system will persist until you're ready to retire, and invest your retirement savings entirely on the assumption nothing will change. Fortunately, there are alternative investments (for example, gold and silver, farm land, fine art, funds investing in natural resources, and, yes, cash in a variety of currencies [to enable you to pick up bargains when other assets crater]) which will appreciate enormously when the monetary system collapses. You don't have to (and shouldn't) bet everything on a collapse: a relatively small hedge against it will protect you should it happen.

This is an extensively researched and deep investigation of the present state of the international monetary system. As the author notes, ever since all currencies were severed from gold in 1971 and began to float against one another, the complexity of the system has increased enormously. What were once fixed exchange rates, adjusted only when countries faced financial crisis, have been replaced by exchange rates which change in milliseconds, with a huge superstructure of futures, options, currency swaps, and other derivatives whose notional value dwarfs the actual currencies in circulation. This is an immensely fragile system which even a small perturbation can cause to collapse. Faced with a risk whose probability and consequences are impossible to quantify, the prudent investor takes steps to mitigate it. This book provides background for developing such a plan.

 Permalink

Mankins, John C. The Case for Space Solar Power. Houston: Virginia Edition, 2014. ISBN 978-0-9913370-0-2.
As world population continues to grow and people in the developing world improve their standard of living toward the level of residents of industrialised nations, demand for energy will increase enormously. Even taking into account anticipated progress in energy conservation and forecasts that world population will reach a mid-century peak and then stabilise, the demand for electricity alone is expected to quadruple in the century from 2000 to 2100. If electric vehicles shift a substantial part of the energy consumed for transportation from hydrocarbon fuels to electricity, the demand for electric power will be greater still.

Providing this electricity in an affordable, sustainable way is a tremendous challenge. Most electricity today is produced by burning fuels such as coal, natural gas, and petroleum; by nuclear fission reactors; and by hydroelectric dams. Quadrupling electric power generation by any of these means poses serious problems. Fossil fuels are subject to depletion, have environmental consequences both in their extraction and in the release of combustion products into the atmosphere, and are distributed unevenly around the world, leading to geopolitical tensions between have and have-not countries. Uranium fission is a technology with few environmental drawbacks, but operating it safely is very demanding and requires continuous vigilance over the decades-long lifespan of a power station; further, the risk exists that nuclear material may be diverted for weapons use, especially if nuclear power stations proliferate into politically unstable regions. Hydroelectric power is clean, generally reliable (except in extreme droughts), and inexhaustible, but most rivers suitable for its generation have already been dammed, and the potential projects which remain are insufficient to meet the demand.

Well, what about those “sustainable energy” projects the environmentalists are always babbling about: solar panels, eagle shredders (wind turbines), and the like? They do generate energy without fuel, but they are not the solution to the problem. In order to understand why, we need to look into the nature of the market for electricity, which is segmented into two components, even though the current flows through the same wires. The first is “base load” power. The demand for electricity varies during the day, from day to day, and seasonally (for example, electricity for air conditioning peaks during the mid-day hours of summer). The base load is the electricity demand which is always present, regardless of these changes in demand. If you look at a long-term plot of electricity demand and draw a line through the troughs in the curve, everything below that line is base load power and everything above it is “peak” power. Base load power is typically provided by the sources discussed in the previous paragraph: hydrocarbon, nuclear, and hydroelectric. Because there is a continuous demand for the power they generate, these plants are designed to run non-stop (with excess capacity to cover stand-downs for maintenance), and may be complicated to start up or shut down. In Switzerland, for example, 56% of base load power is produced from hydroelectric plants and 39% from nuclear fission reactors.
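
To see the decomposition in miniature, here is a little Python sketch with an invented daily demand curve (the Swiss percentages above are from the book; nothing in this snippet is):

    # Split a day's demand curve (invented figures, in gigawatts) into base
    # load and peaking components: base load is the line through the troughs.
    hourly_demand_gw = [6.1, 5.8, 5.6, 5.5, 5.6, 6.0, 7.2, 8.5,
                        9.3, 9.8, 10.4, 11.0, 11.3, 11.1, 10.8, 10.5,
                        10.2, 10.6, 10.9, 10.3, 9.4, 8.2, 7.1, 6.4]

    base_load = min(hourly_demand_gw)                 # always-present demand
    peak = [d - base_load for d in hourly_demand_gw]  # everything above it

    print(f"Base load: {base_load} GW")
    print(f"Maximum peaking requirement: {max(peak):.1f} GW")   # 5.8 GW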

The balance of electrical demand, peak power, is usually generated by smaller power plants which can be brought on-line and shut down quickly as demand varies. Peaking plants sell their power onto the grid at prices substantially higher than base load plants, which compensates for their less efficient operation and higher capital costs for intermittent operation. In Switzerland, most peak energy is generated by thermal plants which can burn either natural gas or oil.

Now the problem with “alternative energy” sources such as solar panels and windmills becomes apparent: they produce neither base load nor peak power. Solar panels produce electricity only during the day, and when the Sun is not obscured by clouds. Windmills, obviously, only generate when the wind is blowing. Since there is no way to efficiently store large quantities of energy (all existing storage technologies raise the cost of electricity to uneconomic levels), these technologies cannot be used for base load power, since they cannot be relied upon to continuously furnish power to the grid. Neither can they be used for peak power generation, since the times at which they are producing power may not coincide with times of peak demand. That isn't to say these energy sources cannot be useful. For example, solar panels on the roofs of buildings in the American southwest make a tremendous amount of sense since they tend to produce power at precisely the times the demand for air conditioning is greatest. This can smooth out, but not replace, the need for peak power generation on the grid.

If we wish to dramatically expand electricity generation without relying on fossil fuels for base load power, there are remarkably few potential technologies. Geothermal power is reliable and inexpensive, but is available only in a limited number of areas and cannot come close to meeting the demand. Nuclear fission, especially in modern, modular designs, is feasible, but faces formidable opposition from the fear-based community. If nuclear fusion ever becomes practical, we will have a limitless, mostly clean energy source, but after sixty years of research we are still decades away from an operational power plant, and it is entirely possible the entire effort may fail. The liquid fluoride thorium reactor, a technology demonstrated in the 1960s, could provide centuries of energy without the nuclear waste or weapons diversion risks of uranium-based nuclear power, but even if it were developed to industrial scale it's still a “nuclear reactor” and can be expected to stimulate the same hysteria as existing nuclear technology.

This book explores an entirely different alternative. Think about it: once you get above the Earth's atmosphere and sufficiently far from the Earth to avoid its shadow, the Sun provides a steady 1.368 kilowatts per square metre, and will continue to do so, non-stop, for billions of years into the future (actually, the Sun is gradually brightening, so on the scale of hundreds of millions of years this figure will increase). If this energy could be harvested and delivered efficiently to Earth, the electricity needs of a global technological civilisation could be met with a negligible impact on the Earth's environment. With present-day photovoltaic cells, we can convert 40% of incident sunlight to electricity, and wireless power transmission in the microwave band (to which the Earth's atmosphere is transparent, even in the presence of clouds and precipitation) has been demonstrated at 40% efficiency, with 60% end-to-end efficiency expected for future systems.
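
Putting those figures together gives a sense of scale. A back-of-the-envelope sketch using only the numbers quoted above (the 2 gigawatt delivered-power target is from the design discussed below; a real design must also account for cell packing, pointing losses, rectenna capture, and the like):

    # Rough collector area needed to deliver 2 GW to the grid, using the
    # efficiency figures quoted in the text.
    SOLAR_CONSTANT = 1368.0    # W/m**2, sunlight above the atmosphere
    PV_EFFICIENCY = 0.40       # photovoltaic conversion (present-day cells)
    LINK_EFFICIENCY = 0.60     # end-to-end microwave link (expected, future)
    DELIVERED_POWER = 2.0e9    # watts delivered to the grid

    end_to_end = PV_EFFICIENCY * LINK_EFFICIENCY             # 0.24
    area_m2 = DELIVERED_POWER / (SOLAR_CONSTANT * end_to_end)
    print(f"Collector area: {area_m2 / 1e6:.1f} square km")  # about 6.1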

Thus, no scientific breakthrough of any kind is required to harvest abundant solar energy which presently streams past the Earth and deliver it to receiving stations on the ground which feed it into the power grid. Since the solar power satellites would generate energy 99.5% of the time (with short outages when passing through the Earth's shadow near the equinoxes, at which time another satellite at a different longitude could pick up the load), this would be base load power, with no fuel source required. It's “just a matter of engineering” to calculate what would be required to build the collector satellite, launch it into geostationary orbit (where it would stay above the same point on Earth), and build the receiver station on the ground to collect the energy beamed down by the satellite. Then, given a proposed design, one can calculate the capital cost to bring such a system into production, its operating cost, the price of power it would deliver to the grid, and the time to recover the investment in the system.

Solar power satellites are not a new idea. In 1968, Peter Glaser published a description of a system with photovoltaic electricity generation and microwave power transmission to an antenna on Earth; in 1973 he was granted U.S. patent 3,781,647 for the system. In the 1970s NASA and the Department of Energy conducted a detailed study of the concept, publishing a reference design in 1979 which envisioned a platform in geostationary orbit with solar arrays measuring 5 by 25 kilometres, requiring a monstrous space shuttle with a payload of 250 metric tons and space factories to assemble the platforms. The design was entirely conventional, using much the same technologies as were later used in the International Space Station (ISS) (but for a structure twenty times its size). Given that the ISS has a cost estimated at US$ 150 billion, NASA's 1979 estimate that a complete, operational solar power satellite system comprising 60 power generation platforms and Earth-based infrastructure would cost (in 2014 dollars) between 2.9 and 8.7 trillion might be considered optimistic. Back then, a trillion dollars was a lot of money, and this study pretty much put an end to serious consideration of solar power satellites in the U.S. for almost two decades. In the late 1990s, NASA, realising that much progress had been made in many of the enabling technologies for space solar power, commissioned a “Fresh Look Study”, which concluded that the state of the art was still insufficiently advanced to make power satellites economically feasible.

In this book the author, after a 25-year career at NASA, recounts the history of solar power satellites to date and presents a radically new design, SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), which he argues is congruent with 21st century manufacturing technology. There are two fundamental reasons previous cost estimates for solar power satellites have come up with such forbidding figures. First, space hardware is hideously expensive to develop and manufacture. Measured in US$ per kilogram, a laptop computer is around $200/kg, a Boeing 747 $1400/kg, and a smart phone $1800/kg. By comparison, the Space Shuttle Orbiter cost $86,000/kg and the International Space Station around $110,000/kg. Most of the exorbitant cost of space hardware has little to do with the space environment; it is due to its being essentially hand-built in small numbers, thus never having the benefit of moving down the learning curve as a product is put into mass production, nor of automation in manufacturing (which isn't cost-effective when you're only making a few units). Second, once you've paid that enormous cost per kilogram for the space hardware, you have to launch it from the Earth into space and transport it to the orbit in which it will operate. For communication satellites which, like solar power satellites, operate in geostationary orbit, current launchers cost around US$ 50,000 per kilogram delivered there. New entrants into the market may substantially reduce this cost, but without a breakthrough such as full reusability of the launcher, it will remain elevated.
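
The “learning curve” invoked here is commonly formalised as Wright's law: each doubling of cumulative production multiplies unit cost by a fixed progress ratio. A sketch, with an assumed 20% cost reduction per doubling (the rate is illustrative, not a figure from the book):

    import math

    def unit_cost(first_unit_cost, n, progress_ratio=0.80):
        """Cost of the n-th unit under Wright's law: each doubling of
        cumulative output multiplies unit cost by the progress ratio."""
        return first_unit_cost * n ** math.log2(progress_ratio)

    print(f"{unit_cost(1000.0, 1):.0f}")     # 1000: the hand-built first unit
    print(f"{unit_cost(1000.0, 1000):.0f}")  # about 108 after a thousand units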

SPS-ALPHA tackles the high cost of space hardware by adopting a “hyper modular” design, in which the power satellite is composed of huge numbers of identical modules of just eight different types. Each of these modules is on a scale which permits prototypes to be fabricated in facilities no more sophisticated than university laboratories, and light enough to fall into the “smallsat” category, permitting inexpensive tests in the space environment as required. A production power satellite, designed to deliver 2 gigawatts of electricity to Earth, will have almost four hundred thousand each of three types of these modules, assembled in space by 4,888 robot arm modules, using more than two million interconnect modules. These are numbers at which mass production economies kick in: once the module design has been tested and certified, you can put it out for bids for serial production. And a factory which invests in making these modules inexpensively can be assured of follow-on business if the initial power satellite is a success, since there will be a demand for dozens or hundreds more once its practicality is demonstrated. None of these modules is remotely as complicated as an iPhone, and once they are made in comparable quantities they shouldn't cost any more. What would an iPhone cost if only five of them were made?

Modularity also requires the design to be distributed and redundant. There is no single-point failure mode in the system. The propulsion and attitude control module is replicated 200 times in the full design. As modules fail, for whatever cause, they will have minimal impact on the performance of the satellite and can be swapped out as part of routine maintenance. The author estimates that, on an ongoing basis, around 3% of modules will be replaced per year.

The problem of launch cost is addressed indirectly by the modular design. Since no module masses more than 600 kg (the propulsion module) and none of the others exceeds 100 kg, they do not require a heavy lift launcher. Modules can simply be apportioned among a large number of flights of the most economical launchers available. Construction of a full scale solar power satellite will require between 500 and 1000 launches per year of a launcher with a capacity in the 10 to 20 metric ton range. This dwarfs the entire current global launch industry, and will provide both the motivation to fund the development of new, reusable launcher designs and the volume of business to push their cost down the learning curve, with a goal of reducing the cost of launch to low Earth orbit to US$ 300–500 per kilogram. Note that the SpaceX Falcon Heavy, under development with a projected first flight in 2015, is already priced around US$ 1000/kg without the reusability of the three core stages which is expected to be introduced in the future.

The author lays out five “Design Reference Missions” which progress from small-scale tests of a few modules in low Earth orbit to a full production power satellite delivering 2 gigawatts to the electrical grid. He estimates a cost of around US$ 5 billion for the pilot plant demonstrator and US$ 20 billion for the first full scale power satellite. This is not a small sum of money, but it is comparable to the approximately US$ 26 billion cost of the Three Gorges Dam in China. Once power satellites start to come on line, each feeding power into the grid with no cost for fuel and modest maintenance expenses (comparable to those of a hydroelectric dam), the initial investment does not take long to recover. Further, the power satellite effort will bootstrap the infrastructure for routine, inexpensive access to space, and the power satellite modules can also be used in other space applications (for example, very high power communication satellites).

The most frequently raised objection when power satellites are mentioned is the fear that they could be used as a “death ray”. This is, quite simply, nonsense. The microwave power beam arriving at the Earth's surface will have an intensity between 10 and 20% of that of summer sunlight (on the order of 100 to 200 watts per square metre), so a mirror reflecting the Sun would be a more effective death ray. Extensive tests were done to determine whether the beam would affect birds, insects, and aircraft flying through it, and all concluded there was no risk. A power satellite which beamed down its power with a laser could be weaponised, but nobody is proposing that, since lasers would have problems with atmospheric conditions and cost more than microwave transmission.

This book provides a comprehensive examination of the history of the concept of solar power from space, the various designs proposed over the years and the studies conducted of them, and an in-depth presentation of the technology and economic rationale for the SPS-ALPHA system. It presents an energy future which is very different from that which most people envision, provides a way to bring the benefits of electrification to developing regions without any environmental consequences whatever, and shows how to ensure a secure supply of electricity for the foreseeable future.

This is a rewarding, but rather tedious, read. Perhaps it's due to the author's 25 years at NASA, but the text is cluttered with acronyms—there are fourteen pages of them defined in a glossary at the end of the book—and busy charts, some of which are difficult to read as reproduced in the Kindle edition. Copy editing is so-so: I noted 28 errors, and I wasn't especially looking for them. The index in the Kindle edition lists page numbers from the print edition, which are useless because the electronic edition contains no page numbers.

 Permalink

July 2014

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988, 1994] 2004. ISBN 978-0-345-47609-8.
One hundred years ago the world was on the brink of a cataclysmic confrontation which would cause casualties numbered in the tens of millions, destroy the pre-existing international order, depose royalty and dissolve empires, and plant the seeds of tyrannical regimes and future conflicts with an even more horrific toll in human suffering. It is no exaggeration to speak of World War I as the pivotal event of the 20th century, since so much that followed can be viewed as sequelæ traceable directly to that conflict.

It is thus important to understand how that war came to be, and how in the first month after its outbreak the expectations of all parties to the conflict, arrived at through the most exhaustive study by military and political élites, were proven completely wrong, and what was expected to be a short, conclusive war turned instead into a protracted blood-letting which would continue for more than four years of largely static warfare. This magnificent book, which covers the events leading to the war and the first month after its outbreak, provides a highly readable narrative history of the period, with insight both into the grand folly of war plans drawn up in isolation and followed mechanically even after their faults had produced abundant tragedy, and into how contingency—chance, and the decisions of fallible human beings in positions of authority—can tilt the balance of history.

The author is not an academic historian, and she writes for a popular audience. This has caused some to sniff at her work but, as she noted, Herodotus, Thucydides, Gibbon, and Macaulay did not have Ph.D.s. She immerses the reader in the world before the war, beginning with the 1910 funeral in London of Edward VII, where nine monarchs rode in the cortège, most of whose nations would be at war four years hence. The system of alliances is described in detail, as are the mobilisation plans of the future combatants, all of which contributed to the fatal instability of the system to even a small perturbation.

Germany, France, Russia, and Austria-Hungary had all drawn up detailed mobilisation plans for assembling, deploying, and operating their conscript armies in the event of war. (Britain, with an all-volunteer regular army which was tiny by continental standards, had no pre-defined mobilisation plan.) As you might expect, Germany's plan was the most detailed, specifying railroad schedules and the composition of individual trains. The important thing to keep in mind about these plans is that, together, they created a powerful first-mover advantage. If Russia began to mobilise and Germany hesitated in its own mobilisation in the hope of defusing the conflict, Germany might be at a great disadvantage even if Russia had a head start of only a few days in assembling its forces. This meant there was a powerful incentive to issue the mobilisation order first, and a compelling reason for an adversary to begin his own mobilisation as soon as news of it became known.

Compounding this instability were alliances which compelled parties to them to come to the assistance of others. France had no direct interest in the conflict between Germany and Austria-Hungary and Russia in the Balkans, but it had an alliance with Russia, and was pulled into the conflict. When France began to mobilise, Germany activated its own mobilisation and the Schlieffen plan to invade France through Belgium. Once the Germans violated the neutrality of Belgium, Britain's guarantee of that neutrality required (after the customary ambiguity and dithering) a declaration of war against Germany, and the stage was set for a general war in Europe.

The focus here is on the initial phase of the war, when Germany, France, and Russia were all following their pre-war plans and each expected a swift conquest of its opponents: the Battle of the Frontiers, which occupied most of the month of August 1914. An afterword covers the First Battle of the Marne, where the German offensive on the Western front was halted and the stage set for the static trench warfare which was to ensue. At the conclusion of that battle, all of the shining pre-war plans were in tatters, many commanders were disgraced or cashiered, and lessons had been learned through the tragedy “by which God teaches the law to kings” (p. 275).

A century later, the lessons of the outbreak of World War I could not be more relevant. On the eve of the war, many believed that the interconnection of the soon-to-be belligerents through trade was such that war was unthinkable, as it would quickly impoverish them. Today the world is even more connected, and yet there are conflicts all around the margins, with alliances spanning the globe. Unlike 1914, when the world was largely dominated by great powers, now there are rogue states, non-state actors, movements dominated by religion, and neo-barbarism and piracy loose upon the stage, and some of these may lay their hands on weapons whose destructive power dwarfs that of the arms of 1914–1918. This book, published more than fifty years ago, about a conflict a century old, could not be more timely.

 Permalink

Patterson, William H., Jr. Robert A. Heinlein: In Dialogue with His Century, Vol. 1. New York: Tor Books, 2010. ISBN 978-0-7653-1960-9.
Robert Heinlein came from a family which had been present in America before there were the United States, and whose members had served in all of the wars of the Republic. Despite being thin and frail, with dodgy eyesight, he managed to win an appointment to the U.S. Naval Academy where, despite demerits for being a hellion, he graduated and was commissioned as a naval officer. He was on track for a naval career when felled by tuberculosis (which was, in the 1930s, a potential death sentence, with the possibility of recurrence at any time in later life).

Heinlein had written while in the Navy but, after his forced medical retirement, turned his attention to writing science fiction for the pulp magazines. After receiving a cheque for US$ 70 for his first short story, “Life-Line”, he exclaimed, “How long has this racket been going on? And why didn't anybody tell me about it sooner?” Heinlein always viewed writing as a business, and kept a thermometer-style chart on which he tracked his revenue toward paying off the mortgage on his house.

While Heinlein fit in very well with the Navy, and might, absent his medical problems, have become a significant commander in the fleet in World War II, he was also, at heart, a bohemian, with a soul almost orthogonal to military tradition and discipline. His first marriage was a fling with a woman who introduced him to physical delights of which he had been unaware. That ended quickly, and he then married Leslyn, who was his muse, copy-editor, and business manager in a marriage which persisted through World War II, when both were involved in war work. In that effort Leslyn worked herself into insanity and alcoholism, and they divorced in 1947.

It was Robert Heinlein who vaulted science fiction from the ghetto of the pulp magazines to the “slicks” such as Collier's and the Saturday Evening Post. This was due to a technological transition in the publishing industry which is comparable to that presently underway in the migration from print to electronic publishing. Rationing of paper during World War II helped to create the “pocket book” or paperback publishing industry. After the end of the war, these new entrants in the publishing market saw a major opportunity in publishing anthologies of stories previously published in the pulps. The pulp publishers viewed this as an existential threat—who would buy a pulp magazine if, for almost the same price, one could buy a collection of the best stories from the last decade in all of those magazines?

Heinlein found his fiction entrapped in this struggle. While today, when you sell a story to a magazine in the U.S., you usually sell only “First North American serial rights”, in the 1930s and 1940s authors sold all rights, and it was up to the publisher to release its rights for republication of a work in an anthology or its adaptation into a screenplay. This is parallel to the contemporary battle between traditional publishers and independent publishing platforms, which have become the heart of science fiction.

Heinlein was complex. While an exemplary naval officer, he was also a nudist, married three times, and interested in the esoteric (he was a close associate of Jack Parsons and L. Ron Hubbard). He was an enthusiastic supporter of Upton Sinclair's EPIC movement and intrigued by “Social Credit” economics.

This authorised biography, with major contributions from Heinlein's widow, Virginia, chronicles the master storyteller's life through his first forty years—until he found, or created, an audience receptive to the tales of wonder he spun. If you've read all of Heinlein's fiction, it may be difficult to imagine how much of it was based upon Heinlein's own life. If you thought Heinlein's later novels were weird, appreciate how weird the master was before you were born.

I had the privilege of meeting Robert and Virginia Heinlein in 1984. I shall always cherish that moment.

 Permalink

Long, Rob. Conversations with My Agent (and Set Up, Joke, Set Up, Joke). London: Bloomsbury Publishing, [1996, 2005] 2014. ISBN 978-1-4088-5583-6.
Hollywood is a strange place, where the normal rules of business, economics, and personal and professional relationships seem to have been suspended. When he arrived in Hollywood in 1930, P. G. Wodehouse found the customs and antics of its denizens so bizarre that he parodied them in a series of hilarious stories. After a year in Hollywood, he'd had enough and never returned. When Rob Long arrived in Hollywood to attend UCLA film school, the television industry was on the threshold of a technology-driven change which would remake it and forever put an end to the domination by three large networks which had existed since its inception. The advent of cable and, later, direct to home satellite broadcasting eliminated the terrestrial bandwidth constraints which had made establishing a television outlet forbiddingly expensive and, at the same time, side-stepped many of the regulatory constraints which forbade “edgy” content on broadcast channels. Long began his television career as a screenwriter for Cheers in 1990, and became an executive producer of the show in 1992. After the end of Cheers, he created and produced other television projects, including Sullivan & Son, which is currently on the air.

Television ratings measure both “rating points” (the absolute number of television sets tuned into the programme) and “share points” (the fraction of television sets in use at the time which are viewing the programme). In the era of Cheers, a typical episode might have a rating equivalent to more than 22 million viewers and a share of 32%, meaning it pulled in around one third of all television viewers in its time slot. The proliferation of channels makes it unlikely any show will achieve numbers like this again. The extremely popular 24 attracted between 9 and 14 million viewers over its eight seasons, and the highly critically regarded Mad Men never topped a mean viewership of 2.7 million in its best season.
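
A toy illustration of the two measures, with invented household counts (only the 22 million viewers and the 32% share quoted above come from the text):

    # Rating is measured against all TV households; share against sets
    # actually in use during the time slot. All counts are hypothetical.
    tv_households = 95_000_000   # total television households
    sets_in_use = 68_000_000     # sets switched on in the time slot
    viewers = 22_000_000         # households tuned to the programme

    rating = 100 * viewers / tv_households   # rating points
    share = 100 * viewers / sets_in_use      # share points
    print(f"Rating: {rating:.1f}  Share: {share:.1f}")   # 23.2 and 32.4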

It was into this new world of diminishing viewership expectations but voracious thirst for content to fill all the new channels that the author launched his post-Cheers career. The present volume collects two books originally published independently: Conversations with My Agent from 1998, and 2005's Set Up, Joke, Set Up, Joke, written when Hollywood's perestroika was well advanced. The volumes fit together almost seamlessly, and many readers will barely notice the transition.

This is a very funny book, but there is also a great deal of wisdom about the ways of Hollywood, how television projects are created, pitched to a studio, marketed to a network, and the tortuous process leading from concept to script to pilot to series and, all too often, to cancellation. The book is written as a screenplay, complete with scene descriptions, directions, dialogue, transitions, and sound effect call-outs. Most of the scenes are indeed conversations between the author and his agent in various circumstances, but we also get to be a fly on the wall at story pitches, meetings with the network, casting, shooting an episode, focus group testing, and many other milestones in the life cycle of a situation comedy. The circumstances are fictional, but are clearly informed by real-life experience. Anybody contemplating a career in Hollywood, especially as a television screenwriter, would be insane not to read this book. You'll laugh a lot, but also learn something on almost every page.

The reader will also begin to appreciate the curious ways of Hollywood business, what the author calls “HIPE”: the Hollywood Inversion Principle of Economics. “The HIPE, as it will come to be known, postulates that every commonly understood, standard business practice of the outside world has its counterpart in the entertainment industry. Only it's backwards.” And anybody who thinks accounting is not a creative profession has never had experience with a Hollywood project. The culture of the entertainment business is also on display—an intricate pecking order involving writers, producers, actors, agents, studio and network executives, and “below the line” specialists such as camera operators and editors, all of whom have to read the trade papers to know who's up and who's not.

This book provides an insider's perspective on the strange way television programs come to be. In a way, it resembles some aspects of venture capital: most projects come to nothing, and most of those which are funded fail, losing the entire investment. But the few which succeed can generate sufficient money to cover all the losses and still yield a large return. One television show that runs for five years, producing solid ratings and 100+ episodes to go into syndication, can set up its writers and producers for life and cover the studio's losses on all of the dogs and cats.

 Permalink

August 2014

Thor, Brad. Black List. New York: Pocket Books, 2012. ISBN 978-1-4391-9302-0.
This is the twelfth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Brad Thor has remarked in interviews that he strives to write thrillers which anticipate headlines which will break after their publication, and with this novel he hits a grand slam.

Scot Harvath is ambushed in Paris by professional killers who murder a member of his team. After narrowly escaping, he goes to ground and covertly travels to a remote region in Basque country where he has trusted friends. He is then attacked there, again by trained killers, and he has to conclude that the probability is high that the internal security of his employer, the Carlton Group, has been breached, perhaps from inside.

Meanwhile, his employer, Reed Carlton, is attacked at his secure compound by an assault team and barely escapes with his life. When Carlton tries to use his back channels to contact members of his organisation, they all appear to have gone dark. To Carlton, a career spook with tradecraft flowing in his veins, this indicates his entire organisation has been wiped out, for no apparent motive and by perpetrators unknown.

Harvath, Carlton, and the infovore dwarf Nicholas, operating independently, must begin to pick up the pieces to figure out what is going on, while staying under the radar of a pervasive surveillance state which employs every technological means to track them down and target them for summary extra-judicial elimination.

If you pick up this book and read it today, you might think it's based upon the revelations of Edward Snowden about the abuses of the NSA conducting warrantless surveillance on U.S. citizens. But it was published in 2012, a full year before the first of Snowden's disclosures. The picture of the total information awareness state here is, if anything, more benign than what we now know to be the case in reality. What is different is that when Harvath, Carlton, and Nicholas get to the bottom of the mystery, the reaction in high places is what one would hope for in a constitutional republic, as opposed to the “USA! USA! USA!” cheerleading or silence with which all too many people have greeted the exposure of abuses by the NSA.

This is a prophetic thriller which demonstrates how the smallest compromises of privacy (credit card transactions, telephone call metadata, license plate readers, facial recognition, Web site accesses, search engine queries, and the like) can be woven into a dossier on any person of interest which makes going dark to the snooper state equivalent to living, technologically, in 1950. This is not just a cautionary tale for individuals who wish to preserve a wall of privacy between themselves and the state, but also a challenge for writers of thrillers. Just as mobile telephones would have wrecked the plots of innumerable mystery and suspense stories written before their existence, the emergence of the panopticon state will make it difficult for thriller writers to have both their heroes and villains operating in the dark. I am sure the author will rise to this challenge.

 Permalink

Lowe, Keith. Savage Continent. New York: Picador, [2012] 2013. ISBN 978-1-250-03356-7.
On May 8th, 1945, World War II in Europe formally ended when the Allies accepted the unconditional surrender of Germany. In popular myth, especially among those too young to have lived through the war and its aftermath, the defeat of Italy and Germany ushered in, at least in the parts of Western Europe not occupied by Soviet troops, a period of rebuilding and rapid economic growth, spurred by the Marshall Plan. The French refer to the three decades from 1945 to 1975 as Les Trente Glorieuses. But that isn't what actually happened, as this book documents in detail. Few books cover the immediate aftermath of the war, or concentrate exclusively upon that chaotic period. The author has gone to great lengths to explore little-known conflicts and to sort out conflicting accounts of events still disputed today by the descendants of those involved.

The devastation wreaked upon cities where the conflict raged was extreme. In Germany, Berlin, Hanover, Duisburg, Dortmund, and Cologne each lost more than half their habitable buildings, with the figure rising to 70% in the last of these cities. From Stalingrad to Warsaw to Caen in France, destruction was general, with survivors living in the rubble. The transportation infrastructure was almost completely obliterated, along with services such as water, gas, electricity, and sanitation. The industrial plant was wiped out, and with it the hope of employment. This was the state of affairs in May 1945, and the Marshall Plan did not begin to deliver assistance to Western Europe until three years later, in April 1948. Those three years were grim, compounded by score-settling, revenge, political instability, and multitudes of displaced people returning to areas with no infrastructure to support them.

And this was in Western Europe. As is the case with just about everything regarding World War II in Europe, the further east you go, the worse things get. In the Soviet Union, 70,000 villages were destroyed, along with 32,000 factories. The redrawing of borders, particularly those of Poland and Germany, set the stage for a paroxysm of ethnic cleansing and mass migration as Poles were expelled from territory now incorporated into the Soviet Union and Germans from the western part of Poland. Reprisals against those accused of collaboration with the enemy were widespread, with murder not uncommon. Thirst for revenge extended to the innocent, including children fathered by soldiers of occupying armies.

The end of the War did not mean an end to the wars. As the author writes, “The Second World War was therefore not only a traditional conflict for territory: it was simultaneously a war of race, and a war of ideology, and was interlaced with half a dozen civil wars fought for purely local reasons.” Defeat of Germany did nothing to bring these other conflicts to an end. Guerrilla wars continued in the Baltic states annexed by the Soviet Union as partisans resisted the invader. An all-out civil war between communists and anti-communists erupted in Greece and was ended only through British and American aid to the anti-communists. Communist agitation escalated to violence in Italy and France. And country after country in Eastern Europe came under Soviet domination as puppet regimes were installed through coups, subversion, or rigged elections.

When reading a detailed history of a period most historians ignore, one finds oneself exclaiming over and over, “I didn't know that!”, and that is certainly the case here. This was a dark period, and no group seemed immune from regrettable acts, including Jews liberated from Nazi death camps and slave labourers freed as the Allies advanced: both sometimes took their revenge upon German civilians. As the author demonstrates, the aftermath of this period still simmers beneath the surface among the people involved—it has become part of the identity of ethnic groups which will outlive any person who actually remembers the events of the immediate postwar period.

In addition to providing an enlightening look at this neglected period, the events in the years following 1945 have much to teach us about those playing out today around the globe. We are seeing long-simmering ethnic and religious strife boil into open conflict as soon as the system is perturbed enough to knock the lid off the kettle. Borders drawn by politicians mean little when people's identity is defined by ancestry or faith, and memories are very long, measured sometimes in centuries. Even after a cataclysmic conflict which levels cities and reduces populations to near-medieval levels of subsistence, many people do not long for peace but instead seek revenge. Economic growth and prosperity can, indeed, change the attitude of societies and allow for alliances among former enemies (imagine how odd the phrase “Paris-Berlin axis”, heard today in discussions of the European Union, would have sounded in 1946), but the results of a protracted conflict can prevent the emergence of the very prosperity which might allow consigning it to the past.

 Permalink

Mahon, Basil. The Man Who Changed Everything. Chichester, UK: John Wiley & Sons, 2003. ISBN 978-0-470-86171-4.
In the 19th century, science in general and physics in particular grew up, assuming their modern form which is still recognisable today. At the start of the century, the word “scientist” was not yet in use, and the natural philosophers of the time were often amateurs. University research in the sciences, particularly in Britain, was rare. Those working in the sciences were often occupied with cataloguing natural phenomena and, apart from Newton's monumental achievements, few people focussed on discovering mathematical laws to explain newly observed physical phenomena such as electricity and magnetism.

One person, James Clerk Maxwell, was largely responsible for creating the way modern science is done and the way we think about theories of physics, while simultaneously restoring Britain's standing in physics compared to work on the Continent, and he created an institution which would continue to do important work from the time of his early death until the present day. While every physicist and electrical engineer knows of Maxwell and his work, he is largely unknown to the general public, and even those who are aware of his seminal work in electromagnetism may be unaware of the extent to which his footprints are found all over the edifice of 19th century physics.

Maxwell was born in 1831 to a Scottish lawyer, John Clerk, and his wife Frances Cay. Clerk subsequently inherited a country estate, and added “Maxwell” to his name in honour of the noble relatives from whom he inherited it. His son's first name, then, was “James” and his surname “Clerk Maxwell”: this is why his full name is always used instead of “James Maxwell”. From childhood, James was curious about everything he encountered, and instead of asking “Why?” over and over like many children, he drove his parents to distraction with “What's the go o' that?”. His father did not consider science a suitable occupation for his son and tried to direct him toward the law, but James's curiosity did not extend to legal tomes and he concentrated on topics that interested him. He published his first scientific paper, on curves with more than two foci, at the age of 14. He pursued his scientific education first at the University of Edinburgh and later at Cambridge, where he graduated in 1854 with a degree in mathematics. He came in second in the prestigious Tripos examination, earning the title of Second Wrangler.

Maxwell was now free to begin his independent research, and he turned to the problem of human colour vision. It had been established that colour vision worked by detecting the mixture of three primary colours, but Maxwell was the first to discover that these primaries were red, green, and blue, and that by mixing them in the correct proportion, white would be produced. This was a matter to which Maxwell would return repeatedly during his life.

In 1856 he accepted an appointment as a full professor and department head at Marischal College, in Aberdeen, Scotland. In 1857, the topic for the prestigious Adams Prize was the nature of the rings of Saturn. Maxwell's submission was a tour de force which proved that the rings could be neither solid nor liquid, and hence had to be made of an enormous number of individually orbiting bodies. Maxwell was awarded the prize, the significance of which was magnified by the fact that his was the only submission: all of the others who aspired to solve the problem had abandoned it as too difficult.

Maxwell's next post was at King's College London, where he investigated the properties of gases and strengthened the evidence for the molecular theory of gases. It was here that he first undertook to explain the relationship between electricity and magnetism which had been discovered by Michael Faraday. Working in the old style of physics, he constructed, as a thought experiment, an intricate mechanical model which might explain the lines of force that Faraday had introduced but which many scientists thought were mystical mumbo-jumbo. Maxwell believed the alternative of action at a distance without any intermediate mechanism was wrong, and was able, with his model, to explain the phenomenon of rotation of the plane of polarisation of light by a magnetic field, which had been discovered by Faraday. While at King's College, to demonstrate his theory of colour vision, he took and displayed the first colour photograph.

Maxwell's greatest scientific achievement came while he was living the life of a country gentleman at his estate, Glenlair. In his textbook, A Treatise on Electricity and Magnetism, he presented his famous equations which showed that electricity and magnetism were two aspects of the same phenomenon. This was the first of the great unifications of physical laws which have continued to the present day. But that isn't all they showed. The speed of light appeared as a conversion factor between the units of electricity and magnetism, and the equations allowed solutions of waves oscillating between electric and magnetic fields which could propagate through empty space at the speed of light. It was compelling to deduce that light was just such an electromagnetic wave, and that waves of other frequencies outside the visual range must exist. Thus was laid the foundation of wireless communication, X-rays, and gamma rays. The speed of light is a constant in Maxwell's equations, not depending upon the motion of the observer. This appears to conflict with Newton's laws of mechanics, and it was not until Einstein's 1905 paper on special relativity that the mystery would be resolved. In essence, faced with a dispute between Newton and Maxwell, Einstein decided to bet on Maxwell, and he chose wisely. Finally, when you look at Maxwell's equations (in their modern form, using the notation of vector calculus), they appear lopsided. While they unify electricity and magnetism, the symmetry is imperfect in that while a moving electric charge generates a magnetic field, there is no magnetic charge which, when moved, generates an electric field. Such a charge would be a magnetic monopole, and despite extensive experimental searches, none has ever been found. The existence of monopoles would make Maxwell's equations even more beautiful, but sometimes nature doesn't care about that. By all evidence to date, Maxwell got it right.
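
For readers who want to see the lopsidedness for themselves, here are the equations in the modern vector calculus form mentioned above (my transcription, in SI units, not a quotation from the book):

    \nabla \cdot  \mathbf{E} = \rho / \varepsilon_0
    \nabla \cdot  \mathbf{B} = 0
    \nabla \times \mathbf{E} = - \, \partial \mathbf{B} / \partial t
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t

The wave solutions propagate at the speed c = 1 / \sqrt{\mu_0 \varepsilon_0}, the conversion factor between electrical and magnetic units. The asymmetry is visible at a glance: the divergence of B has no magnetic charge density on its right hand side, and the curl of E has no magnetic current term, which is exactly what a monopole, if one existed, would supply.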

In 1871 Maxwell came out of retirement to accept a professorship at Cambridge and found the Cavendish Laboratory, which would focus on experimental science and elevate Cambridge to world-class status in the field. To date, 29 Nobel Prizes have been awarded for work done at the Cavendish.

Maxwell's theoretical and experimental work on heat and gases revealed discrepancies which were not explained until the development of quantum theory in the 20th century. His suggestion of Maxwell's demon posed a deep puzzle in the foundations of thermodynamics whose resolution, a century later, revealed the deep connections between information theory and statistical mechanics. His practical work on automatic governors for steam engines foreshadowed what we now call control theory. He played a key part in the development of the units we use for electrical quantities.

By all accounts Maxwell was a modest, generous, and well-mannered man. He wrote whimsical poetry, discussed a multitude of topics (although he had little interest in politics), was an enthusiastic horseman and athlete (he would swim in the sea off Scotland in the winter), and was happily married, with his wife Katherine an active participant in his experiments. All his life, he supported general education in science, founding a working men's college in Cambridge and lecturing at such colleges throughout his career.

Maxwell lived only 48 years—he died in 1879 of the same cancer which had killed his mother when he was only eight years old. When he fell ill, he was engaged in a variety of research while presiding over the Cavendish Laboratory. We shall never know what he might have done had he been granted another two decades.

Apart from the significant achievements Maxwell made in a wide variety of fields, he changed the way physicists look at, describe, and think about natural phenomena. After using a mental model to explore electromagnetism, he discarded it in favour of a mathematical description of its behaviour. There is no theory behind Maxwell's equations: the equations are the theory. To the extent they produce the correct results when experimental conditions are plugged in, and predict new phenomena which are subsequently confirmed by experiment, they are valuable. If they err, they should be supplanted by something more precise. But they say nothing about what is really going on—they only seek to model what happens when you do experiments. Today, we are so accustomed to working with theories of this kind (quantum mechanics, special and general relativity, and the standard model of particle physics) that we don't think much about it, but it was revolutionary in Maxwell's time. His mathematical approach, like Newton's, eschewed explanation in favour of prediction: “We have no idea how it works, but here's what will happen if you do this experiment.” This is perhaps Maxwell's greatest legacy.

This is an excellent scientific biography of Maxwell which also gives the reader a sense of the man. He was such a quintessentially normal person there aren't a lot of amusing anecdotes to relate. He loved life, loved his work, cherished his friends, and discovered the scientific foundations of the technologies which allow you to read this. In the Kindle edition, at least as read on an iPad, the text appears in a curious, spidery, almost vintage, font in which periods are difficult to distinguish from commas. Numbers sometimes have spurious spaces embedded within them, and the index cites pages in the print edition which are useless since the Kindle edition does not include real page numbers.

 Permalink

September 2014

Amundsen, Roald. The South Pole. New York: Cooper Square Press, [1913] 2001. ISBN 978-0-8154-1127-7.
In modern warfare, it has been observed that “generals win battles, but logisticians win wars.” So it is with planning an exploration mission to a remote destination where no human has ever set foot, and the truths are as valid for polar exploration in the early 20th century as they will be for missions to Mars in the 21st. On December 14th, 1911, Roald Amundsen's five-man southern party reached the South Pole after a trek from the camp on the Ross Ice Shelf where they had passed the previous southern winter, preparing for an assault on the pole as early as the weather would permit. By over-wintering, they would be able to depart southward well before a ship would be able to land an expedition, since a ship would have to wait until the sea ice dispersed sufficiently to make a landing.

Amundsen's plan was built around what space mission architects call “in-situ resource utilisation” and “depots”, as well as “propulsion staging”. This allowed for a very lightweight push to the pole, both in terms of the amount of supplies which had to be landed by their ship, the Fram, and in the size of the polar party and the loading of their sledges. Upon arriving in Antarctica, Amundsen's party immediately began to hunt the abundant seals near the coast. More than two hundred seals were killed, processed, and stored for later use. (Since the temperature on the Ross Ice Shelf and the Antarctic interior never rises above freezing, the seal meat would keep indefinitely.) Then parties were sent out in the months remaining before the arrival of winter in 1911 to establish depots at every degree of latitude between the base camp and 82° south. These depots contained caches of seal meat for the men and dogs and kerosene for melting snow for water and cooking food. The depot-laying journeys familiarised the explorers with driving teams of dogs and operating in the Antarctic environment.
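
Since the depots were laid out along a meridian, their spacing follows from simple spherical geometry. Here is a back-of-the-envelope sketch (the base camp latitude is my approximation, not a figure from the book):

    import math

    R_EARTH = 6371.0  # km, mean radius of the Earth
    km_per_degree = math.pi * R_EARTH / 180   # arc length of one degree of latitude
    print(f"depots one degree apart: ~{km_per_degree:.0f} km between them")

    base_lat = 78.5   # deg S, approximate latitude of the base on the Ross Ice Shelf
    print(f"base camp to pole: ~{(90 - base_lat) * km_per_degree:.0f} km one way")

Call it roughly 111 km between depots, and some 1,300 km from base to pole, to be covered in each direction.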

Amundsen had chosen dogs to pull his sledges. While his rival to be first at the pole, Robert Falcon Scott, experimented with ponies, motorised sledges, and man-hauling, Amundsen relied upon the experience of the indigenous peoples of the Arctic, for whom dogs were the proven solution. Dogs reproduced and matured sufficiently quickly that attrition could be made up by puppies born during the expedition, they could be fed on seal meat, which could be obtained locally, and if a dog team were to fall into a crevasse (as was inevitable when crossing uncharted terrain), the dogs could be hauled out, no worse for wear, by the drivers of other sledges. For ponies and motorised sledges, this was not the case.

Further, Amundsen adopted a strategy which can best be described as “dog eat dog”. On the journey to the pole, he started with 52 dogs. Seven of these died from exhaustion or other causes before the ascent to the polar plateau. (Dogs who died were butchered and fed to the other dogs. Greenland sled dogs, being only slightly removed from wolves, had no hesitation in devouring their erstwhile comrades.) Upon reaching the plateau, the party slaughtered 27 dogs, dividing their meat between the surviving dogs and the five men. Only 18 dogs would proceed to the pole. Dog carcasses were cached for use on the return journey.

Beyond the depots, the polar party had to carry everything required for the trip, but knowing the depots would be available for the return allowed them to travel lightly. After reaching the pole, they remained for three days to verify their position, send out parties to ensure they had encircled the pole's position, and build a cairn to commemorate their achievement. Amundsen left a letter which he requested Captain Scott deliver to King Haakon VII of Norway should Amundsen's party be lost on its return to base. (Sadly, that was the fate which awaited Scott, who arrived at the pole on January 17th, 1912, only to find the Amundsen expedition's cairn there.)

This book is Roald Amundsen's contemporary memoir of the expedition. Originally published in two volumes, the present work includes both. Appendices describe the ship, the Fram, and scientific investigations in meteorology, geology, astronomy, and oceanography conducted during the expedition. Amundsen's account is as matter-of-fact as the memoirs of some astronauts, but a wry humour comes through when he discusses dealing with sled dogs who have wills of their own, and the foibles of humans cooped up in a small cabin in an alien environment during a night which lasts for months. He evinces great respect for his colleagues and competitors in polar exploration, particularly Scott and Shackleton, and worries whether his own approach to reaching the pole would be proved superior to theirs. At the time the book was published, the tragic fate of Scott's expedition was not known.

Today, we might not think of polar exploration as science, but a century ago it was as central to the scientific endeavour as robotic exploration of Mars is today. Here was an entire continent, known only in sketchy detail around its coast, with only a few expeditions into the interior. When Amundsen's party set out on their march to the pole, they had no idea whether they would encounter mountain ranges along the way and, if so, whether they could find a way over or around them. They took careful geographic and meteorological observations along their trek (as well as oceanographical measurements on the trip to Antarctica and back), and these provided some of the first data points toward understanding weather in the southern hemisphere.

In Norway, Amundsen was hailed as a hero. But it is clear from this narrative he never considered himself such. He wrote:

I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

This work is in the public domain, and there are numerous editions of it available, in print and in electronic form, many from independent publishers. The independent publishers, for the most part, did not distinguish themselves in their respect for this work. Many of their editions were produced by running an optical character recognition program over a print copy of the book, then putting the result together with minimal copy-editing. Some (including the one I was foolish enough to buy) omit all of the diagrams, maps, and charts from the original book, which renders parts of the text incomprehensible. The paperback edition cited above, while expensive, is a facsimile edition of the original 1913 two-volume English translation of Amundsen's work, including all of the illustrations. I know of no presently-available electronic edition which has comparable quality and includes all of the material in the original book. Be careful—if you follow the link to the paperback edition, you'll see a Kindle edition listed, but this is from a different publisher and is rife with errors and includes none of the illustrations. I made the mistake of buying it, assuming it was the same as the highly-praised paperback. It isn't; don't be fooled.

 Permalink

Bostrom, Nick. Superintelligence. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.

Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities by a margin greater than that by which a Boeing 747 exceeds the capabilities of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.

Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.

As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet by self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, on either a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.

As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine: not self-aware, without its own agenda or the powerful wiles to advance it based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

 Permalink

Cawdron, Peter. My Sweet Satan. Seattle: Amazon Digital Services, 2014. ASIN B00NBA6Y1A.
Here the author adds yet another imaginative tale of first contact to his growing list of novels in that genre: a puzzle story whose viewpoint character must figure out what is going on after losing her memories of her entire adult life. After a botched attempt at reanimation from cryo-sleep, Jasmine Holden finds herself with no memories of her life after the age of nineteen. And yet, here she is, on board Copernicus, in the Saturn system, closing in on the distant retrograde moon Bestla which, when approached by a probe from Earth, sent back an audio transmission to its planet of origin which was mostly gibberish but contained the chilling words: “My sweet Satan. I want to live and die for you, my glorious Satan!”. A follow-up unmanned probe to Bestla is destroyed as it approaches, and the Copernicus is dispatched to make a cautious investigation of what appears to be an alien probe with a disturbing theological predisposition.

Back on Earth, sentiment has swung back and forth about the merits of exploring Bestla and fears of provoking an alien presence in the solar system which, by its very capability of interstellar travel, must be far in advance of Earthly technology. Jasmine, a key member of the science team, suddenly finds herself mentally a 19-year-old girl far from her home, confronted not only by an unknown alien presence but also by conflict among her crew members, who interpret the imperatives of the mission in different ways.

She finds the ship's computer, an early stage artificial intelligence, the one being in which she can confide, and the only one who comprehends her predicament and is willing to talk her through procedures she learned by heart in her training but which have been lost to an amnesia she feels compelled to conceal from the human members of the crew.

As the ship approaches Bestla, conflict erupts among the crew, and Jasmine must sort out what is really going on and choose sides without any recollections of her earlier interactions with her crew members. In a way, this is three first contact novels in one: 19 year old Jasmine making contact with her fellow crew members about which she remembers nothing, the Copernicus and whatever is on Bestla, and a third contact about which I cannot say anything without spoiling the story.

This is a cracking good first contact novel which, just when you're nearing the end and beginning to worry, “Where's the sense of wonder?”, delivers everything you'd hoped for and more.

I read a pre-publication manuscript edition which the author kindly shared with me.

 Permalink

Byers, Bruce K. Destination Moon. Washington: National Aeronautics and Space Administration, 1977. NASA TM X-3487.
In the mid 1960s, the U.S. Apollo lunar landing program was at the peak of its budget commitment and technical development. The mission mode had already been chosen and development of the flight hardware was well underway, along with the ground infrastructure required to test and launch it and the global network required to track missions in flight. One nettlesome problem remained. The design of the lunar module made assumptions about the properties of the lunar surface upon which it would alight. If the landing zone had boulders too large, craters so deep and numerous that the landing legs could not avoid them, or slopes steep enough to cause an upset on landing or tipping over afterward, lunar landing missions would all be aborted by the crew when they reached decision height, judging there was no place they could set down safely. Even if all the crews returned safely without having landed, this would be an ignominious end to the ambitions of Project Apollo.

What was needed in order to identify safe landing zones was high-resolution imagery of the Moon. The most capable Earth-based telescopes, operating through Earth's turbulent and often murky atmosphere, produced images which, at best, resolved objects a hundred times larger than those which could upset a lunar landing mission. What was required was large-area, high-resolution mapping of the Moon and a survey of potential landing zones, which could only be done, given the technology of the 1960s, by going there, taking pictures, and returning them to Earth. So was born the Lunar Orbiter program, which in 1966 and 1967 sent lightweight photographic reconnaissance satellites into lunar orbit, providing not only the close-up imagery needed to select landing sites for the Apollo missions, but also mapping imagery which covered 99% of the near side of the Moon and 85% of the far side. In fact, Lunar Orbiter provided global imagery of the Moon far more complete than that which would be available for the Earth many years thereafter.

Accomplishing this goal with the technology of the 1960s was no small feat. Electronic imaging amounted to analogue television, which, at the altitude of a lunar orbit, wouldn't produce images any better than telescopes on Earth. The first spy satellites were struggling to return film from Earth orbit, and returning film from the Moon was completely impossible given the mass budget of the launchers available. After a fierce competition, NASA contracted with Boeing to build the Lunar Orbiter, designed to fit on NASA's workhorse Atlas-Agena launcher, which seriously constrained its mass. Boeing subcontracted with Kodak to build the imaging system and RCA for the communications hardware which would relay the images back to Earth and allow the spacecraft to be controlled from the ground.

The images were acquired by a process which may seem absurd to those accustomed to present-day digital technologies but which seemed miraculous in its day. In lunar orbit, the spacecraft would aim its cameras (it had two: a mapping camera which produced overlapping wide-angle views and a high-resolution camera which photographed a portion of each frame with a resolution of about one metre) at the Moon and take a series of photos. Because the film used had a very low light sensitivity (ASA [now ISO] 1.6), on low-altitude imaging passes the film would have to be moved to compensate for the motion of the spacecraft to avoid blurring. (The low light sensitivity of the film was due to its very high spatial resolution, but also reduced its likelihood of being fogged by exposure to cosmic rays or energetic particles from solar flares.)
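
To get a sense of why the film had to be moved, consider a rough calculation. The perilune altitude and lens focal length below are representative assumptions of mine, not specifications from the book:

    import math

    GM_MOON = 4.9028e12    # m^3/s^2, lunar gravitational parameter
    R_MOON  = 1.7374e6     # m, mean lunar radius
    alt     = 46e3         # m, assumed low perilune altitude for imaging passes
    focal   = 0.610        # m, assumed focal length of the high-resolution lens

    v = math.sqrt(GM_MOON / (R_MOON + alt))   # circular orbital speed
    film_rate = v * focal / alt               # speed of the projected image across the film
    print(f"ground speed ~{v:.0f} m/s; film must track ~{film_rate * 1000:.0f} mm/s")

With the image sweeping across the focal plane at more than 20 mm per second, an uncompensated exposure on such slow, fine-grained film would smear hopelessly; hence the mechanical compensation.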

After being exposed, the film would subsequently be processed on-board by putting it in contact with a band containing developer and fixer, and then the resulting negative would be read back for transmission to Earth by scanning it with a moving point of light, measuring the transmission through the negative, and sending the measured intensity back as an analogue signal. At the receiving station, that signal would be used to modulate the intensity of a spot of light scanned across film which, when developed and assembled into images from strips, revealed the details of the Moon. The incoming analogue signal was recorded on tape to provide a backup for the film recording process, but nothing was done with the tapes at the time. More about this later….

Five Lunar Orbiter missions were launched, and although some experienced problems, all achieved their primary mission objectives. The first three missions provided all of the data required by Apollo, so the final two could be dedicated to mapping the Moon from near-polar orbits. After the completion of their primary imaging missions, Lunar Orbiters continued to measure the radiation and micrometeoroid environment near the Moon, and contributed to understanding the Moon's gravitational field, which would be important in planning later Apollo missions that would fly in very low orbits around the Moon. On August 23rd, 1966, the first Lunar Orbiter took one of the most iconic pictures of the 20th century: Earthrise from the Moon. The problems experienced by Lunar Orbiter missions and the improvisation by ground controllers to work around them set the pattern for subsequent NASA robotic missions, with their versatile, reconfigurable flight hardware and fine-grained control from the ground.

You might think the story of Lunar Orbiter a footnote to space exploration history which has scrolled off the screen with subsequent Apollo lunar landings and high-resolution lunar mapping by missions such as Clementine and the Lunar Reconnaissance Orbiter, but that fails to take into account the exploits of 21st century space data archaeologists. Recall that I said that all of the image data from Lunar Orbiter missions was recorded on analogue tapes. These tapes contained about 10 bits of dynamic range, as opposed to the 8 bits which were preserved by the optical recording process used in receiving the images during the missions. This, combined with contemporary image processing techniques, makes for breathtaking images recorded almost half a century ago, but never seen before. Here are a document and video which record the exploits of the Lunar Orbiter Image Recovery Project (LOIRP). Please visit the LOIRP Web site for more restored images and details of the process of restoration.
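
The difference between the roughly 10 bits of dynamic range preserved on the tapes and the 8 bits of the optical recordings is easy to illustrate. Here is a minimal sketch (my own toy comparison assuming ideal uniform quantisation, not LOIRP's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.uniform(0.0, 1.0, 100_000)   # hypothetical analogue scan samples

    q10 = np.round(signal * 1023) / 1023      # ~10-bit quantisation (tape recovery)
    q8  = np.round(signal * 255) / 255        # 8-bit quantisation (optical recording)

    print("mean 10-bit error:", np.mean(np.abs(signal - q10)))
    print("mean  8-bit error:", np.mean(np.abs(signal - q8)))   # roughly 4x coarser

Four times the tonal resolution, especially in shadow detail, is much of why the recovered images look so much better than anything printed in the 1960s.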

 Permalink

October 2014

Levinson, Marc. The Box. Princeton: Princeton University Press, [2006] 2008. ISBN 978-0-691-13640-0.
When we think of developments in science and technology which reshape the world economy, we often concentrate upon those which build on fundamental breakthroughs in our understanding of the world we live in, or technologies which employ them to do things never imagined. Examples of these are electricity and magnetism, which gave us the telegraph, electric power, the telephone, and wireless communication. Semiconductor technology, the foundation of the computer and Internet revolutions, is grounded in quantum mechanics, elaborated only in the early 20th century. The global positioning satellites which you use to get directions when you're driving or walking wouldn't work if they did not compensate for the effects of special and general relativity upon the rate at which clocks tick in moving objects and those in gravitational fields.

But sometimes a revolutionary technology doesn't require a scientific breakthrough, nor a complicated manufacturing process to build, but just the realisation that people have been looking at a problem all wrong, or have been earnestly toiling away trying to solve some problem other than the one which people are ready to pay vast sums of money to have solved, once the solution is placed on the market.

The cargo shipping container may be, physically, one of the least impressive technological achievements of the 20th century, right up there with the inanimate carbon rod, as it required no special materials, fabrication technologies, or design tools which did not exist a century before, and yet its widespread adoption in the latter half of the 20th century was fundamental to the restructuring of the global economy which we now call “globalisation”, and changed assumptions about the relationship between capital, natural resources, labour, and markets which had existed since the start of the industrial revolution.

Ever since the start of ocean commerce, ships handled cargo in much the same way. The cargo was brought to the dock (often after waiting for an extended period in a dockside warehouse for the ship to arrive), then stevedores (or longshoremen, or dockers) would load the cargo into nets, or onto pallets hoisted by nets into the hold of the ship, where other stevedores would unload it and stow the individual items, which might be as varied as bags of coffee beans, boxes containing manufactured goods, barrels of wine or oil, and preserved food items such as salted fish or meat. These individual items were stowed based upon the expertise of the gangs working the ship to make the most of the irregular space of the ship's cargo hold and, if the ship was to call at multiple ports, in an order so cargo could be unloaded with minimal shifting of that bound for subsequent destinations on the voyage. Upon arrival at a port, this process was reversed to offload cargo bound there, and then the loading began again. It was not unusual for a cargo ship to spend six days or more in each port, unloading and loading, before the next leg of its voyage.

Shipping is both capital- and labour-intensive. The ship has to be financed and incurs largely fixed maintenance costs, and the crew must be paid regardless of whether they're at sea or waiting in port for cargo to be unloaded and loaded. This means that what engineers call the “duty cycle” of the ship is critical to its cost of operation and, consequently, what the shipowner must charge shippers to make a profit. A ship operating coastal routes in the U.S., say between New York and a port in the Gulf, could easily spend half its time in ports, running up costs but generating no revenue. This model of ocean transport, called break bulk cargo, prevailed from the age of sail until the 1970s.
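
As a toy illustration of the duty cycle point (the four-day sea leg and the port times here are hypothetical round numbers of mine, not figures from the book):

    # Fraction of time a ship earns revenue, for a fixed 4-day sea leg.
    sea_days = 4
    for port_days in (6, 1):   # ~six days of break bulk handling vs. ~one day for containers
        duty_cycle = sea_days / (sea_days + port_days)
        print(f"{port_days} days in port -> duty cycle {duty_cycle:.0%}")

Cutting port time from six days to one turns a 40% duty cycle into 80%: the same ship, crew, and financing deliver twice the revenue-earning voyages.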

Under the break bulk model, ocean transport was very expensive. Further, with cargos sitting in warehouses waiting for ships to arrive on erratic schedules, delivery times were not just long but also unpredictable. Goods shipped from a factory in the U.S. midwest to a destination in Europe would routinely take three months to arrive end to end, with an uncertainty measured in weeks, accounting for trucking, railroads, and ocean shipping involved in getting them to their destination. This meant that any importation of time-sensitive goods required keeping a large local inventory to compensate for unpredictable delivery times, and paying the substantial shipping cost included in their price. Economists, going back to Ricardo, often modelled shipping as free, but it was nothing of the kind, and was often the dominant factor in the location and structure of firms.

When shipping is expensive, firms have an advantage in being located in proximity to both their raw materials (or component suppliers) and customers. Detroit became the Motor City in large part because its bulk inputs, iron ore and coal, could be transported at low cost from mines to factories by ships plying the Great Lakes. Industries dependent on imports and exports would tend to cluster around major ports, since otherwise the cost of transporting their inputs and outputs overland from the nearest port would be prohibitive. And many companies simply concentrated on their local market, where transportation costs were not a major consideration in their cost structure. In 1964, when break bulk shipping was the norm, 40% of exports from Britain originated within 25 miles of their port of export, and two thirds of all imports were delivered to destinations a similar distance from their port of arrival.

But all of this was based upon the cost structure of break bulk ocean cargo shipping, and a similarly archaic way of handling rail and truck cargo. A manufacturing plant in Iowa might pack its goods destined for a customer in Belgium into boxes which were loaded onto a truck, driven to a terminal in Chicago where they were unloaded and reloaded into a boxcar, then sent by train to New Jersey, where they were unloaded and put onto a small ship to take them to the port of New York, where, after sitting in a warehouse, they'd be put onto a ship bound for a port in Germany. After arrival, they'd be transported by train, then trucked to the destination. Three months or so later, plus or minus a few weeks, the cargo would arrive—at least that which wasn't stolen en route.

These long delays, and the uncertainty in delivery times, required those engaging in international commerce to maintain large inventories, which further increased the cost of doing business overseas. Many firms opted for vertical integration in their own local region.

Malcom McLean started his trucking company in 1934 with one truck and one driver, himself. What he lacked in capital (he often struggled to pay bridge tolls when delivering to New York), he made up in ambition, and by 1945, his company operated 162 trucks. He was a relentless cost-cutter, and from his own experience waiting for hours on New York docks for his cargo to be unloaded onto ships, in 1953 asked why shippers couldn't simply put the entire truck trailer on a ship rather than unload its cargo into the ship's hold, then unload it piece by piece at the destination harbour and load it back onto another truck. War surplus Liberty ships were available for almost nothing, and they could carry cargo between the U.S. northeast and south at a fraction of the cost of trucks, especially in the era before expressways.

McLean immediately found himself in a tangled web of regulatory and union constraints. Shipping, trucking, and railroads were all considered completely different businesses, each of which had accreted its own, often bizarre, government regulation and union work rules. The rate a carrier could charge for hauling a ton of cargo from point A to point B depended not upon its mass or volume, but what it was, with radically different rates for say, coal as opposed to manufactured goods. McLean's genius was in seeing past all of this obstructionist clutter and realising that what the customer—the shipper—wanted was not to purchase trucking, railroad, and shipping services, but rather delivery of the shipment, however accomplished, at a specified time and cost.

The regulatory mess made it almost impossible for a trucking company to own ships, so McLean created a legal structure which would allow his company to acquire a shipping line which had fallen on hard times. He then proceeded to convert a ship to carry containers, which would not be opened from the time they were loaded on trucks at the shipper's location until they arrived at the destination, and could be transferred between trucks and ships rapidly. Working out the details of the construction of the containers, setting their size, and shepherding all of this through a regulatory gauntlet which had never heard of such concepts was daunting, but the potential payoff was enormous. Loading break bulk cargo onto a ship the size of McLean's first container vessel cost US$ 5.83 per ton. Loading freight in containers cost US$ 0.16 per ton. This reduction in cost, passed on to the shipper, made containerised freight compelling, and sparked a transformation in the global economy.
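
Those two loading figures alone tell most of the story (a one-line check using the book's numbers):

    break_bulk = 5.83   # US$ per ton to load break bulk cargo
    container  = 0.16   # US$ per ton to load containerised freight
    print(f"loading cost cut by a factor of ~{break_bulk / container:.0f}")

A cost component reduced more than thirtyfold does not merely shave margins; it changes which businesses are possible, which is the book's central argument.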

Consider Barbie. Her body is manufactured in China, using machines from Japan and Europe and moulds designed in the U.S. Her hair comes from Japan, the plastic for her body from Taiwan, dyed with U.S. pigments, and her clothes are produced in other factories in China. The final product is shipped worldwide. There are no large inventories anywhere in the supply chain: every step depends upon reliable delivery of containers of intermediate products. Managers setting up such a supply chain no longer care whether the products are transported by truck, rail, or sea, and since transportation costs for containers are so small compared to the value of their contents (and trade barriers such as customs duties have fallen), the location of suppliers and factories is based almost entirely upon cost, with proximity to resources and customers almost irrelevant. We think of the Internet as having abolished distance, but the humble ocean cargo container has done so for things as much as the Internet has for data.

This is a thoroughly researched and fascinating look at how the seemingly most humble technological innovation can have enormous consequences, and also how the greatest barriers to restructuring economies may be sclerotic government and government-enabled (union) structures which preserve obsolete models long after they have become destructive of prosperity. It also demonstrates how those who try to freeze innovation into a model fixed in the past will be bypassed by those willing to embrace a more efficient way of doing business. The container ports which handle most of the world's cargo are, for the most part, not the largest ports of the break bulk era. They are those which, unencumbered by history, were able to build the infrastructure required to shift containers at a rapid rate.

The Kindle edition has some flaws. In numerous places, spaces appear within words which don't belong there (perhaps words hyphenated across lines in the print edition and not re-joined?) and the index is just a list of searchable terms, not linked to references in the text.

 Permalink

Barry, John M. The Great Influenza. New York: Penguin, [2004] 2005. ISBN 978-0-14-303649-4.
In the year 1800, the practice of medicine had changed little from that in antiquity. The rapid progress in other sciences in the 18th century had had little impact on medicine, which one historian called “the withered arm of science”. This began to change as the 19th century progressed. Researchers, mostly in Europe and especially in Germany, began to lay the foundations for a scientific approach to medicine and public health, understanding the causes of disease and searching for means of prevention and cure. The invention of new instruments for medical examination, anesthesia, and antiseptic procedures began to transform the practice of medicine and surgery.

All of these advances were slow to arrive in the United States. As late as 1900 only one medical school in the U.S. required applicants to have a college degree, and only 20% of schools required a high school diploma. More than a hundred U.S. medical schools accepted any applicant who could pay, and many graduated doctors who had never seen a patient or done any laboratory work in science. In the 1870s, only 10% of the professors at Harvard's medical school had a Ph.D.

In 1873, Johns Hopkins died, leaving his estate of US$ 3.5 million to found a university and hospital. The trustees embarked on an ambitious plan to build a medical school to be the peer of those in Germany, and began to aggressively recruit European professors and Americans who had studied in Europe to build a world class institution. By the outbreak of World War I in Europe, American medical research and education, still concentrated in just a few centres of excellence, had reached the standard set by Germany. It was about to face its greatest challenge.

With the entry of the United States into World War I in April of 1917, millions of young men conscripted for service were packed into overcrowded camps for training and preparation for transport to Europe. These camps, thrown together on short notice, often had only rudimentary sanitation and shelter, with many troops living in tent cities. Large numbers of doctors and especially nurses were recruited into the Army, and by the start of 1918 many were already serving in France. Doctors remaining in private practice in the U.S. were often older men, trained before the revolution in medical education and ignorant of modern knowledge of diseases and the means of treating them.

In all American wars before World War I, more men died from disease than from combat. In the Civil War, two men died from disease for every death on the battlefield. Army Surgeon General William Gorgas vowed that this would not be the case in the current conflict. He was acutely aware that the overcrowded camps, frequent transfers of soldiers among far-flung bases, crowded and unsanitary troop transport ships, and unspeakable conditions in the trenches were a tinderbox just waiting for the spark of an infectious disease to ignite it. But the demand for new troops for the front in France caused his cautions to be overruled, and still more men were packed into the camps.

Early in 1918, a doctor in rural Haskell County, Kansas, began to treat patients with a disease he diagnosed as influenza. But this was nothing like the seasonal influenza with which he was familiar. In typical outbreaks of influenza, the people at greatest risk are the very young (whose immune systems have not been previously exposed to the virus) and the very old, who lack the physical resilience to withstand the assault by the disease. Most deaths are among these groups, leading to a “bathtub curve” of mortality. This outbreak was different: the young and elderly were largely spared, while those in the prime of life were struck down, with many dying quickly of symptoms which resembled pneumonia. Slowly the outbreak receded, and by mid-March things were returning to normal. (The location and mechanism of the disease's origin remain controversial to this day, and we may never know for sure. After weighing competing theories, the author believes the Kansas origin most likely, but other origins have their proponents.)

That would have been the end of it, had not soldiers from Camp Funston, the second largest Army camp in the U.S., with 56,000 troops, visited their families in Haskell County while on leave. They returned to camp carrying the disease. The spark had landed in the tinderbox. The disease spread outward as troop trains travelled between camps. Often a train would leave carrying healthy troops (infected but not yet symptomatic) and arrive with up to half the company sick and highly infectious to those at the destination. Before long the disease arrived via troop ships at camps and at the front in France.

This was just the first wave. The spring influenza was unusual in the age group it hit most severely, but was not particularly more deadly than typical annual outbreaks. Then in the fall a new form of the disease returned in a much more virulent form. It is theorised that under the chaotic conditions of wartime a mutant form of the virus had emerged and rapidly spread among the troops and then passed into the civilian population. The outbreak rapidly spread around the globe, and few regions escaped. It was particularly devastating to aboriginal populations in remote regions like the Arctic and Pacific islands who had not developed any immunity to influenza.

The pathogen in the second wave could kill directly within a day by destroying the lining of the lung and effectively suffocating the patient. The disease was so virulent and aggressive that some medical researchers doubted it was influenza at all and suspected some new kind of plague. Even those who recovered from the disease had much of their immunity and defences against respiratory infection so impaired that some people who felt well enough to return to work would quickly come down with a secondary infection of bacterial pneumonia which could kill them.

All of the resources of the new scientific medicine were thrown into the battle with the disease, with little or no impact upon its progression. The cause of influenza was not known at the time: some thought it was a bacterial disease while others suspected a virus. Further adding to the confusion was that influenza patients often had a secondary infection of bacterial pneumonia, and the organism which causes that disease was mis-identified as the pathogen responsible for influenza. Heroic efforts were made, but the state of medical science in 1918 was simply not up to the challenge posed by influenza.

A century later, influenza continues to defeat every attempt to prevent or cure it, and another global pandemic remains a distinct possibility. Supportive treatment in the developed world and the availability of antibiotics to prevent secondary infection by pneumonia will reduce the death toll, but a mass outbreak of the virus on the scale of 1918 would quickly swamp all available medical facilities and bring society to the brink as it did then. Even regular influenza kills between a quarter and a half million people a year. The emergence of a killer strain like that of 1918 could increase this number by a factor of ten or twenty.

Influenza is such a formidable opponent due to its structure. It is an RNA virus which, unusually for a virus, has not a single strand of genetic material but seven or eight separate strands of RNA. Some researchers argue that in an organism infected with two or more variants of the virus these strands can mix to form new mutants, allowing the virus to mutate much faster than other viruses with a single strand of genetic material (this is controversial). The virus particle is surrounded by proteins called hemagglutinin (HA) and neuraminidase (NA). HA allows the virus to break into a target cell, while NA allows viruses replicated within the cell to escape to infect others.

What makes creating a vaccine for influenza so difficult is that these HA and NA proteins are what the body's immune system uses to identify the virus as an invader and kill it. But HA and NA come in a number of variants, and a specific strain of influenza may contain one from column H and one from column N, creating a large number of possibilities. For example, H1N2 is endemic in birds, pigs, and humans. H5N1 caused the bird flu outbreak in 2004, and H1N1 was responsible for the 1918 pandemic. It gets worse. As a child, when you are first exposed to influenza, your immune system will produce antibodies which identify and target the variant to which you were first exposed. If you were infected with and recovered from, say, H3N2, you'll be pretty well protected against it. But if, subsequently, you encounter H1N1, your immune system will recognise it sufficiently to crank out antibodies, but they will be coded to attack H3N2, not the H1N1 you're battling, against which they're useless. Influenza is thus a chameleon, constantly changing its colours to hide from the immune system.

Strains of influenza tend to come in waves, with one HxNy variant dominating for some number of years, then shifting to another. Developers of vaccines must play a guessing game about which variant you're likely to encounter in a given year. This explains why the 1918 pandemic particularly hit healthy adults. Over the decades preceding the 1918 outbreak, the dominant variant had shifted: H1N1 gave way to another variant for some decades, and then, after 1900, H1N1 came back to the fore. Consequently, when the deadly strain of H1N1 appeared in the fall of 1918, the immune systems of both young and elderly people were ready for it and protected them, but those in between had immune systems which, when confronted with H1N1, produced antibodies for the other variant, leaving them vulnerable.
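
The cohort logic just described is simple enough to sketch in code. This is purely my toy illustration of the mechanism, and the label “H3N8” for the intervening variant is an assumed placeholder, not a claim from the book:

    # Original antigenic sin: antibodies are keyed to the subtype seen first.
    def response(first_exposure: str, infection: str) -> str:
        if first_exposure == infection:
            return "matched antibodies: good protection"
        return f"antibodies against {first_exposure}: useless against {infection}"

    # The 1918 pathogen was H1N1. Compare cohorts by the subtype which
    # circulated when each was first exposed in childhood.
    for cohort, first in [("young", "H1N1"), ("middle-aged", "H3N8"), ("elderly", "H1N1")]:
        print(f"{cohort:11s} -> {response(first, 'H1N1')}")

Run it, and the young and elderly cohorts come out protected while the middle-aged cohort is left producing useless antibodies: the inverted 1918 mortality curve in miniature.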

With no medical defence against or cure for influenza even today, the only effective response in the case of an outbreak of a killer strain is public health measures such as isolation and quarantine. Influenza is airborne and highly infectious: the gauze face masks you see in pictures from 1918 were almost completely ineffective. The government response to the outbreak in 1918 could hardly have been worse. After creating military camps which were nothing less than a culture medium containing those in the most vulnerable age range packed in close proximity, once the disease broke out and reports began to arrive that this was something new and extremely lethal, the authorities kept the troop trains and ships running due to orders from the top that more and more men had to be fed into the meat grinder that was the Western Front. This inoculated camp after camp. Then, when the disease jumped into the civilian population and began to devastate cities adjacent to military facilities such as Boston and Philadelphia, the press censors of Wilson's proto-fascist war machine decided that honest reporting of the extent and severity of the disease, or of measures aimed at slowing its spread, would impact “morale” and war production, so newspapers were ordered to either ignore it or print useless happy talk, which only accelerated the epidemic. The result was that in the hardest-hit cities, residents confronted with the reality before their eyes giving the lie to the propaganda they were hearing from authorities retreated into fear and withdrawal, allowing neighbours to starve rather than risk infection by bringing them food.

As was known in antiquity, the only defence against an infectious disease with no known medical intervention is quarantine. In Western Samoa, the disease arrived in September 1918 on a German steamer. By the time the disease ran its course, 22% of the population of the islands was dead. Just a few kilometres across the ocean in American Samoa, authorities imposed a rigid quarantine and not a single person died of influenza.

We will never know the worldwide extent of the 1918 pandemic. Many of the hardest-hit areas, such as China and India, did not have the infrastructure to collect epidemiological data, and what infrastructure they had collapsed under the impact of the crisis. Estimates are that on the order of 500 million people worldwide were infected and that between 50 and 100 million died: three to five percent of the world's population.

Researchers do not know why the 1918 second wave pathogen was so lethal. The genome has been sequenced and nothing jumps out from it as an obvious cause. Understanding its virulence may require recreating the monster and experimenting with it in animal models. Obviously, this is not something which should be undertaken without serious deliberation beforehand and extreme precautions, but it may be the only way to gain the knowledge needed to treat those infected should a similar wild strain emerge in the future. (It is possible this work may have been done but not published because it could provide a roadmap for malefactors bent on creating a synthetic plague. If this be the case, we'll probably never know about it.)

Although medicine has made enormous strides in the last century, influenza, which defeated the world's best minds in 1918, remains a risk, and in a world with global air travel moving millions between dense population centres, an outbreak today would be even harder to contain. Let us hope that in that dire circumstance authorities will have the wisdom and courage to take the kind of dramatic action which can make the difference between a regional tragedy and a global cataclysm.

 Permalink

November 2014

Schlosser, Eric. Command and Control. New York: Penguin, 2013. ISBN 978-0-14-312578-5.
On the evening of September 18th, 1980, two U.S. Air Force airmen, members of a Propellant Transfer System (PTS) team, entered a Titan II missile silo near Damascus, Arkansas to perform a routine maintenance procedure. Earlier in the day they had been called to the site because a warning signal had indicated that pressure in the missile's second stage oxidiser tank was low. This was not unusual, especially for a missile which had recently been refuelled, as this one had, and the procedure of adding nitrogen gas to the tank to bring the pressure up to specification was considered straightforward. That is, if you consider any work involving a Titan II “routine” or “straightforward”. The missile, in an underground silo, protected by a door weighing more than 65 tonnes and able to withstand the 300 psi overpressure of a nearby nuclear detonation, stood more than 31 metres high and contained 143 tonnes of highly toxic fuel and oxidiser which, in addition to being poisonous to humans even in small concentrations, were hypergolic: they burst into flames upon contact with one another, with no need of a source of ignition. Sitting atop this volatile fuel was a W-53 nuclear warhead with a yield of 9 megatons, whose fission primary contained high explosives which were not, as in more modern nuclear weapons, insensitive to shock and fire. While it was unlikely in the extreme that detonation of these explosives in an accident would result in a nuclear explosion, it could disperse the radioactive material in the bomb over the local area, requiring a massive clean-up effort.

The PTS team worked on the missile wearing what amounted to space suits with their own bottled air supply. One member was an experienced technician, while the other was a 19-year-old rookie receiving on-the-job training. Early in the procedure, the team was to remove the pressure cap from the side of the missile. While the lead technician was turning the cap with a socket wrench, the socket fell off the wrench and down the silo alongside the missile. The socket struck the thrust mount supporting the missile, bounced back upward, and struck the side of the missile's first stage fuel tank. Fuel began to spout outward as if from a garden hose. The trainee remarked, “This is not good.”

Back in the control centre, separated from the silo by massive blast doors, the two-man launch team, who had been following the servicing operation, saw their status panels light up like a Christmas tree decorated by somebody inordinately fond of the colour red. The warnings were contradictory and clearly not all correct. Had there indeed been both fuel and oxidiser leaks, as indicated, there would already have been an earth-shattering kaboom from the silo, and yet that had not happened. The technicians knew they had to evacuate the silo as soon as possible, but their evacuation route was blocked by dense fuel vapour.

The Air Force handles everything related to missiles by the book, but the book was silent about procedures for a situation like this, with massive quantities of toxic fuel pouring into the silo. Further, communication between the technicians and the control centre was poor, so it wasn't clear at first just what had happened. Before long, the commander of the missile wing, headquarters of the Strategic Air Command (SAC) in Omaha, and the missile's manufacturer, Martin Marietta, were in conference trying to decide how to proceed. The greatest risks were an electrical spark or other source of ignition setting the fuel on fire or, even worse, the missile collapsing in the silo. With tonnes of fuel pouring from the fuel tank and no vent at its top, pressure in the tank would continue to fall. Eventually it would drop below atmospheric pressure and the tank would be crushed, likely causing the missile to crumple under the weight of the intact and fully loaded first stage oxidiser and second stage tanks. These tanks would then likely be breached, leading to an explosion. No Titan II had ever exploded in a closed silo, so there was no experience as to what the consequences might be.

As the night proceeded, all of the Carter-era military malaise became evident. The Air Force lied to local law enforcement and media about what was happening, couldn't communicate with first responders, failed to send an evacuation helicopter for a gravely injured man because an irrelevant piece of equipment wasn't available, and could not come to a decision about how to respond as the situation deteriorated. Also on display was the heroism of individuals, in the Air Force and outside it, who took matters into their own hands on the spot, rescued people, monitored the situation, evacuated nearby farms in the path of toxic clouds, and improvised as events required.

Amid all of this, nothing whatsoever had been done about the missile itself. Events inevitably took their course. In the early morning hours of September 19th, the missile collapsed, releasing all of its propellants, which exploded. The 65-tonne silo door was thrown 200 metres, shearing trees in its path. The nuclear warhead was thrown 200 metres in another direction, coming to rest in a ditch. Its explosives did not detonate, and no radiation was released.

While there were plenty of reasons to worry about nuclear weapons during the Cold War, most people's concerns were about a conflict escalating to the deliberate use of nuclear weapons or the possibility of an accidental war. Among the general public there was little concern about the tens of thousands of nuclear weapons in depots, aboard aircraft, atop missiles, or on board submarines—certainly every precaution had been taken by the brilliant people at the weapons labs to make them safe and reliable, right?

Well, that was often the view among “defence intellectuals” until they were briefed in on the highly secret details of weapons design and the command and control procedures in place to govern their use in wartime. As documented in this book, which uses the Damascus accident as a backdrop (a ballistic missile explodes in rural Arkansas, sending its warhead through the air, because somebody dropped a socket wrench), the reality was far from reassuring, and it took decades, often against obstructionism and foot-dragging from the Pentagon, to remedy serious risks in the nuclear stockpile.

In the early days of the U.S. nuclear stockpile, it was assumed that nuclear weapons were the last resort in a wartime situation. Nuclear weapons were kept under the civilian custodianship of the Atomic Energy Commission (AEC), and would only be released to the military services by a direct order from the President of the United States. Further, the nuclear cores (“pits”) of weapons were stored separately from the rest of the weapon assembly, and would only be inserted in the weapon, in the case of bombers, in the air, after the order to deliver the weapon was received. (This procedure had been used even for the two bombs dropped on Japan.) These safeguards meant that the probability of an accidental nuclear explosion was essentially nil in peacetime, although the risk did exist of radioactive contamination if a pit were dispersed due to fire or explosion.

As the 1950s progressed and fears of a Soviet sneak attack grew, pressure mounted to shift the custodianship of nuclear weapons to the military. The development of nuclear tactical and air defence weapons, some of which were to be forward deployed outside the United States, added weight to this argument. If radar detected a wave of Soviet bombers heading for the United States, how practical would it be to contact the President, get him to sign off on transferring the anti-aircraft warheads to the Army and Air Force, have the AEC deliver them to the military bases, install them on the missiles, and prepare the missiles for launch? The missile age only compounded this situation. Now there was the risk of a “decapitation” attack which could take out the senior political and military leadership, leaving nobody with the authority to retaliate.

The result of all this was a gradual devolution of control over nuclear weapons from civilian to military commands, with fully-assembled nuclear weapons loaded on aircraft, sitting at the ends of runways in the United States and Europe, ready to take off on a few minutes' notice. As tensions continued to increase, B-52s, armed with hydrogen bombs, were on continuous “airborne alert”, ready at any time to head toward their targets.

The weapons carried by these aircraft, however, had not been designed for missions like this. They used high explosives which could be detonated by heat or shock, often contained few interlocks to prevent a stray electrical signal from triggering a detonation, were not “one point safe” (guaranteed that detonation of one segment of the high explosives could not cause a nuclear yield), and did not contain locks (“permissive action links”) to prevent unauthorised use of a weapon. Through much of the height of the Cold War, it was possible for a rogue B-52 or tactical fighter/bomber crew to drop a weapon which might start World War III; the only protection against this was rigid psychological screening and the enemy's air defence systems.

The resistance to introducing such safety measures stemmed from budget and schedule pressures, but also from what was called the “always/never” conflict. A nuclear weapon should always detonate when sent on a wartime mission. But it should never detonate under any other circumstances, including an airplane crash, technical malfunction, maintenance error, or through the deliberate acts of an insane or disloyal individual or group. These imperatives inevitably conflict with one another. The more safeguards you design into a weapon to avoid an unauthorised detonation, the greater the probability one of them may fail, rendering the weapon inert. SAC commanders and air crews were not enthusiastic about the prospect of risking their lives running the gauntlet of enemy air defences only to arrive over their target and drop a dud.

As documented here, it was only after the end of the Cold War, as nuclear weapon stockpiles were drawn down, that the more dangerous weapons were retired and command and control procedures were put into place which seem (to the extent outsiders can assess such highly classified matters) to provide a reasonable balance between protection against a catastrophic accident or unauthorised launch and a reliable deterrent.

Nuclear command and control extends far beyond the design of weapons. The author also discusses in detail the development of war plans, how civilian and military authorities interact in implementing them, how emergency war orders are delivered, authenticated, and executed, and how this entire system must be designed to be robust not only against errors when intact and operating as intended, but also in the aftermath of an attack.

This is a serious scholarly work and, at 632 pages, a long one. There are 94 pages of end notes, many of which expand substantially upon items in the main text. A Kindle edition is available.

 Permalink

Metzger, Th. Undercover Mormon. New York: Roadswell Editions, 2013.
The author's spiritual journey had earlier led him to dabble with becoming a Mennonite; he goes weekly to an acupuncturist named Rudy Kilowatt who believes in the power of crystals, and attends neo-pagan fertility rituals in a friend's suburban back yard. He had been oddly fascinated by Mormonism ever since, as a teenager, he attended the spectacular annual Mormon pageant at Hill Cumorah, near his home in upstate New York.

He returned again and again for the spectacle of the pageant, and based upon his limited knowledge of Mormon doctrine, found himself admiring how the religion seemed to have it all: “All religion is either sword and sorcery or science fiction. The reason Mormonism is growing so fast is that you guys have both, and don't apologize for either.” He decides to pursue this Mormon thing further, armouring himself in white shirt, conservative tie, and black pants, and heading off to the nearest congregation for the Sunday service.

Approached by missionaries who spot him as a newcomer, he masters his anxiety (bolstered by the knowledge he has a couple of Xanax pills in his pocket), gives a false name, and indicates he's interested in learning more about the faith. Before long he's attending Sunday school, reading tracts, and spinning into the Mormon orbit, with increasing suggestions that he might convert.

All of this is described in a detached, ironic manner, in which the reader (and perhaps the author) can't decide how seriously to take it all. Metzger carries magic talismans to protect himself against the fearful “Mormo”, and describes his anxiety to his psychoanalyst, who prescribes the pharmaceutical version of magic bones. He struggles with paranoia about his deception being found out and agonises over the consequences. He consults a friend of whom he writes, “For a while he was an old-order Quaker, then a Sufi, then a retro-neo-pagan. Now he's a Unitarian-Universalist professor of history.”

The narrative is written in the tediously quaint “new journalism” style where it's as much about the author as the subject. This works poorly here because the author isn't very interesting. He comes across as so neurotic and self-absorbed as to make Woody Allen seem like Clint Eastwood. His “discoveries” about the content of LDS scripture could have been made just as easily by reading the original documents on the LDS Web site, and his exploration of the history of Joseph Smith and the early days of Mormonism in New York could have been accomplished by consulting Wikipedia. His antics, such as burying chicken bones around the obelisk of Moroni on Hill Cumorah and digging up earth from the grave of Luman Walter to spread it in the sacred grove, push irony past the point of parody—does anybody believe the author took such things seriously (and if he did, why should anybody care what he thinks about anything)?

The book does not mock Mormonism, and the author treats the individuals he encounters on his journey more or less respectfully (with just that little [and utterly unjustified] “I'm better than you” attitude the hip intellectual holds toward earnest, clean-cut, industrious people who are “as white as angel food cake, and almost as spongy”). But you'll learn nothing about the history and doctrine of the religion here that you won't find elsewhere without all the baggage of the author's tiresome “adventures”.

 Permalink

Rawles, James Wesley. Liberators. New York: Dutton, 2014. ISBN 978-0-525-95391-3.
This novel is the fifth in the series which began with Patriots (December 2008), then continued with Survivors (January 2012), Founders (October 2012), and Expatriates (October 2013). These books are not a conventional multi-volume narrative: all describe events in the lives of their characters in roughly the same time period surrounding “the Crunch”—a grid-down societal collapse due to a debt crisis and hyperinflation. Because they take place at the same time, you can read these books in any order, but if you haven't read the earlier novels you'll miss much of the back-story of the characters who appear here, which informs the parts they play in this episode.

Here the story cuts back and forth between the United States, where Megan LaCroix and her sister Malorie live on a farm in West Virginia with Megan's two boys, and Joshua Kim works in security at the National Security Agency where Megan is an analyst. When the Crunch hits, Joshua and the LaCroix sisters decide to team up to bug out to Joshua's childhood friend's place in Kentucky, where survival from the urban Golden Horde may be better assured. They confront the realities of a collapsing society, where the rule of law is supplanted by extractive tyrannies, and are forced to over-winter in a wilderness, living by their wits and modest preparations.

In Western Canada, the immediate impact of the Crunch was less severe because electrical power, largely hydroelectric, remained on. At the McGregor Ranch in inland British Columbia (a harsh, northern continental climate nothing like that of Vancouver), the family and those who have taken refuge with them ride out the initial crisis, only to be confronted with an occupation of Canada by a nominally United Nations force called UNPROFOR, effectively a French colonial force which, in alliance with the effete urban east and francophone Canada, seeks to put down the fractious westerners and control the resource-rich land they inhabit.

This leads to an asymmetrical war of resistance, aided by the fact that when earlier faced with draconian gun registration and prohibition laws imposed by easterners, a large number of weapons in the west simply vanished, only to reappear when they were needed most. As was demonstrated in Vietnam and Algeria, French occupation forces can be tenacious and brutal, but are ultimately no match for an indigenous insurgency with the support of the local populace. A series of bold strikes against UNPROFOR assets eventually turns the tide.

But just when Canada seems ready to follow the U.S. out of the grip of tyranny, an emboldened China, already on the march in Africa, makes a move to seize western Canada's abundant natural resources. Under the cover of a UN resolution, a massive Chinese force, with armour and air support, occupies the western provinces. This is an adversary of an entirely different order than the French, and defeating it will require of the resistance, supported by allies from the liberation struggle in the U.S., audacious and heroic exploits, including one of the greatest acts of monkey-wrenching ever described in a thriller.

As this story has developed over the five novels, the author has matured into a first-rate thriller novelist. There is still plenty of information on gear, tactics, intelligence operations, and security, but the characters are interesting and well developed, and the action scenes are both plausible and exciting. In the present book, we encounter many characters we've met in previous volumes, with their paths crossing as events unfold. There is no triumphalism or glossing over the realities of insurgent warfare against a tyrannical occupying force. There is a great deal of misery and hardship, and sometimes tragedy can result when you've taken every precaution and made no mistake, but simply run out of luck.

Taken together, these five novels are an epic saga of survival in hard and brutal times, painted on a global canvas. Reading them, you will not only be inspired that you and your loved ones can survive such a breakdown in the current economic and social order, but you will also learn a great deal of the details of how to do so. This is not a survival manual, but attentive readers will find many things to research further for their own preparations for an uncertain future. An excellent place to begin that research is the author's own survivalblog.com Web site, whose massive archives you can spend months exploring.

 Permalink

Weir, Andy. The Martian. New York: Broadway Books, [2011] 2014. ISBN 978-0-553-41802-6.
Mark Watney was part of the six-person crew of Ares 3, which landed on Mars to carry out an exploration mission in the vicinity of its landing site in Acidalia Planitia. The crew made a precision landing at the target, where “presupply” cargo flights had already landed their habitation module, supplies for their stay on Mars, rovers and scientific instruments, and the ascent vehicle they would use to return to the Earth-Mars transit vehicle waiting for them in orbit. Just six days after landing, having set up the habitation module and unpacked the supplies, they were struck by a dust storm of unprecedented ferocity. With winds up to 175 kilometres per hour, the Mars Ascent Vehicle (MAV), already fuelled with propellant made on Mars by reacting hydrogen brought from Earth with the Martian atmosphere, was at risk of being blown over, which would destroy the fragile spacecraft and strand the crew on Mars. NASA gave the order to abort the mission and evacuate to orbit in the MAV for an immediate return to Earth.

But the crew first had to get from the habitation module to the MAV, which meant walking across the surface in the midst of the storm. (You'd find it very hard to walk in a 175 km/h wind on Earth, but recall that the atmospheric pressure on Mars is only about 1/200 that of Earth at sea level, so the wind doesn't pack anywhere near the punch.) Still, there were dust and flying debris from equipment ripped loose from the landers. Five members of the crew made it to the MAV. Mark Watney didn't.
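
To put a rough number on how much less punch, here is a back-of-the-envelope sketch; the air density figures are round values I've assumed, not ones from the novel:

    import math

    rho_earth = 1.225      # kg/m^3, sea-level air density on Earth (assumed round value)
    rho_mars = 0.020       # kg/m^3, approximate surface air density on Mars (assumed)
    v_mars = 175.0 / 3.6   # the novel's 175 km/h storm, converted to m/s

    q = 0.5 * rho_mars * v_mars ** 2         # dynamic pressure, q = 1/2 * rho * v^2
    v_earth = math.sqrt(2 * q / rho_earth)   # Earth wind with the same dynamic pressure
    print(f"{v_earth * 3.6:.0f} km/h")       # ~22 km/h: a stiff breeze, not a hurricane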

As the crew made the traverse to the MAV, Watney was struck by part of an antenna array torn from the habitation, puncturing his suit and impaling him. He was carried away by the wind, and the rest of the crew, seeing his vital signs go to zero before his suit's transmitter failed, followed mission rules to leave him behind and evacuate in the MAV while they still could.

But Watney wasn't dead. His injury was not fatal, and blood seeping into the breach where the antenna had pierced his suit sealed the leak, as the water in the blood boiled off and the residue mostly plugged the hole. Awakening after the trauma, he made an immediate assessment of his situation. I'm alive. Cool! I hurt like heck. Not cool. The habitation module is intact. Yay! The MAV is gone—I'm alone on Mars. Dang!

“Dang” is not precisely how Watney put it. This book contains quite a bit of profanity which I found gratuitous. NASA astronauts in the modern era just don't swear like sailors, especially on open air-to-ground links. Sure, I can imagine launching a full salvo of F-bombs upon discovering I'd been abandoned on Mars, especially when I'm just talking to myself, but everybody seems to do it here on all occasions. This is the only reason I'd hesitate to recommend this book to younger readers who would otherwise be inspired by the story.

Watney is stranded on Mars with no way to communicate with Earth, since all communications were routed through the MAV, which has departed. He has all of the resources for a six-person mission, so he has no immediate survival problems after he gets back to the habitation and stitches up his wound, but he can work the math: even if he can find a way to communicate to Earth that he's still alive, orbital mechanics dictates that it will take around two years to send a rescue mission. His supplies cannot be stretched that far.
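
That bit of math is worth making explicit. Launch windows for minimum-energy transfers between Earth and Mars recur at the two planets' synodic period, which a few lines of Python confirm is around 26 months (the orbital periods are standard figures, not taken from the novel):

    # Launch windows for minimum-energy Earth-Mars transfers recur at the
    # planets' synodic period: 1/T_syn = 1/T_earth - 1/T_mars.
    T_earth = 365.25   # days
    T_mars = 686.97    # days
    T_syn = 1 / (1 / T_earth - 1 / T_mars)
    print(f"{T_syn:.0f} days, about {T_syn / 365.25:.1f} years")   # ~780 days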

This sets the stage for a gripping story of survival, improvisation, difficult decisions, necessity versus bureaucratic inertia, trying to do the right thing in a media fishbowl, and all done without committing any howlers in technology, orbital mechanics, or the way people and organisations behave. Sure, you can quibble about this or that detail, but then people far in the future may regard a factual account of Apollo 13 as largely legend, given how many things had to go right to rescue the crew. Things definitely do not go smoothly here: there is reverse after reverse, and many inscrutable mysteries to be unscrewed if Watney is to get home.

This is an inspiring tale of pioneering on a new world. People have already begun to talk about going to Mars to stay. These settlers will face stark challenges, though, one hopes, none as dire as Watney's, and they will have the confidence of regular re-supply missions and new settlers to follow. Perhaps this novel will be seen, among the first generation born on Mars, as inspiration that the challenges they face in bringing a barren planet to life are within the human capacity to solve, especially if their media library isn't exclusively populated with 70s TV shows and disco.

A Kindle edition is available.

 Permalink

December 2014

Wade, Nicholas. A Troublesome Inheritance. New York: Penguin Press, 2014. ISBN 978-1-59420-446-3.
Geographically isolated populations of a species (unable to interbreed with others of their kind) will be subject to natural selection based upon their environment. If that environment differs from that of other members of the species, the isolated population will begin to diverge genetically, as genetic endowments which favour survival and more offspring are selected for. If the isolated population is sufficiently small, the mechanism of genetic drift may cause a specific genetic variant to become almost universal or absent in that population. If this process is repeated for a sufficiently long time, isolated populations may diverge to such a degree they can no longer interbreed, and therefore become distinct species.
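
For readers who would like to see genetic drift in action, here is a minimal simulation using the standard Wright-Fisher model—my own sketch, not anything from the book. In a small isolated population, random sampling alone rapidly drives an allele to universality or extinction:

    import random

    def wright_fisher(pop_size, freq, generations, rng):
        """One allele's frequency under pure drift: each generation, every one
        of the pop_size gene copies is drawn at random from the previous pool."""
        for _ in range(generations):
            count = sum(rng.random() < freq for _ in range(pop_size))
            freq = count / pop_size
            if freq in (0.0, 1.0):      # allele lost or fixed: drift is irreversible
                break
        return freq

    rng = random.Random(42)
    # The smaller the isolated population, the sooner alleles fix or vanish.
    for n in (25, 500):
        outcomes = [wright_fisher(n, 0.5, 200, rng) for _ in range(50)]
        done = sum(f in (0.0, 1.0) for f in outcomes)
        print(f"population {n:3d}: {done}/50 runs fixed or lost the allele")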

None of this is controversial when discussing other species, but in some circles to suggest that these mechanisms apply to humans is the deepest heresy. This well-researched book examines the evidence, much from molecular biology which has become available only in recent years, for the diversification of the human species into distinct populations, or “races” if you like, after its emergence from its birthplace in Africa. In this book the author argues that human evolution has been “recent, copious, and regional” and presents the genetic evidence to support this view.

A few basic facts should be noted at the outset. All humans are members of a single species, and all can interbreed. Humans, as a species, have an extremely low genetic diversity compared to most other animal species: this suggests that our ancestors went through a genetic “bottleneck” where the population was reduced to a very small number, causing the variation observed in other species to be lost through genetic drift. You might expect different human populations to carry different genes, but this is not the case—all humans have essentially the same set of genes. Variation among humans is mostly a result of individuals carrying different alleles (variants) of a gene. For example, eye colour in humans is entirely inherited: a baby's eye colour is determined completely by the alleles of various genes inherited from the mother and father. You might think that variation among human populations is then a question of their carrying different alleles of genes, but that too is an oversimplification. Human genetic variation is, in most cases, a matter of the frequency of alleles among the population.

This means that almost any generalisation about the characteristics of individual members of human populations with different evolutionary histories is ungrounded in fact. The variation among individuals within populations is generally much greater than that of populations as a whole. Discrimination based upon an individual's genetic heritage is not just abhorrent morally but scientifically unjustified.

Based upon these now well-established facts, some have argued that “race does not exist” or is a “social construct”. While this view may be motivated by a well-intentioned desire to eliminate discrimination, it is increasingly at variance with genetic evidence documenting the history of human populations.

Around 200,000 years ago, modern humans emerged in Africa. They spent more than three quarters of their history in that continent, spreading to different niches within it and developing a genetic diversity which today is greater than that of all humans in the rest of the world. Around 50,000 years before the present, by the genetic evidence, a small band of hunter-gatherers left Africa for the lands to the north. Then, some 30,000 years ago, the descendants of these bands, who had migrated to the east and west, largely ceased to interbreed and separated into what we now call the Caucasian and East Asian populations. These, along with the African population, have remained the three main groups within the human species. Subsequent migrations and isolations have created other populations such as Australian and American aborigines, but their differentiation from the three main races is less distinct. Later migrations, conquest, and intermarriage have blurred the distinctions between these groups, but the fact is that almost any child shown a picture of a person of European, African, or East Asian ancestry can effortlessly and correctly identify their area of origin. University professors, not so much: it takes an intellectual to deny the evidence of one's own eyes.

As these largely separated populations adapted to their new homes, selection operated upon their genomes. In the ancestral human population children lost the ability to digest lactose, the sugar in milk, after being weaned from their mothers' milk. But in populations which domesticated cattle and developed dairy farming, parents who passed on an allele which would allow their children to drink cow's milk their entire life would have more surviving offspring and, in a remarkably short time on the evolutionary scale, lifetime lactose tolerance became the norm in these areas. Among populations which never raised cattle or used them only for meat, lifetime lactose tolerance remains rare today.
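
The speed of such a selective sweep is easy to illustrate. In the following sketch I assume a 5% fitness advantage for the lactose-tolerance allele—within the range population geneticists have estimated, though the exact figure is my assumption—and apply the textbook haploid selection recursion:

    def sweep(p, s, generations):
        """Allele frequency under the haploid selection recursion
        p' = p(1+s) / (1 + p*s), with fitness advantage s per generation."""
        freqs = [p]
        for _ in range(generations):
            p = p * (1 + s) / (1 + p * s)
            freqs.append(p)
        return freqs

    # From 1% to near-universal in a few hundred generations: an evolutionary
    # eye-blink, consistent with the rapid spread of lactase persistence.
    traj = sweep(p=0.01, s=0.05, generations=300)
    for g in (0, 100, 200, 300):
        print(f"generation {g:3d}: frequency {traj[g]:.3f}")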

Humans in Africa originally lived close to the equator and had dark skin to protect them from the ultraviolet radiation of the Sun. As human bands occupied northern latitudes in Europe and Asia, dark skin would prevent them from being able to synthesise sufficient Vitamin D from the wan, oblique sunlight of northern winters. These populations were under selection pressure for alleles of genes which gave them lighter skin, but interestingly Europeans and East Asians developed completely different genetic means to lighten their skin. The selection pressure was the same, but evolution blundered into two distinct pathways to meet the need.

Can genetic heritage affect behaviour? There's evidence it can. Humans carry a gene called MAO-A, which breaks down neurotransmitters that affect the transmission of signals within the brain. Experiments in animals have provided evidence that under-production of MAO-A increases aggression, and humans with lower levels of MAO-A are found to be more likely to commit violent crime. MAO-A production is regulated by a short sequence of DNA adjacent to the gene: humans may have anywhere from two to five copies of the promoter; the more you have, the more MAO-A you produce, and hence the mellower you're likely to be. Well, actually, people with three to five copies are indistinguishable, but those with only two (2R) show higher rates of delinquency. Among men of African ancestry, 5.5% carry the 2R variant, while 0.1% of Caucasian males and 0.00067% of East Asian men do. Make of this what you will.

The author argues that just as the introduction of dairy farming tilted the evolutionary landscape in favour of those bearing the allele which allowed them to digest milk into adulthood, the transition of tribal societies to cities, states, and empires in Asia and Europe exerted a selection pressure upon the population which favoured behavioural traits suited to living in such societies. While a tribal society might benefit from producing a substantial population of aggressive warriors, an empire has little need of them: its armies are composed of soldiers, courageous to be sure, who follow orders rather than charging independently into battle. In such a society, the genetic traits which are advantageous in a hunter-gatherer or tribal society will be selected out, as those carrying them will, if not expelled or put to death for misbehaviour, be unable to raise as large a family in these settled societies.

Perhaps what has been happening over the last five millennia or so is a domestication of the human species. Precisely as humans have bred animals to live with them in close proximity, human societies have selected for humans who are adapted to prosper within them. Those who conform to the social hierarchy, work hard, and come up with new ideas without disrupting the social structure will have more children and, over time, whatever genetic predispositions there may be for these characteristics (which we don't know today) will become increasingly common in the population. It is intriguing that as humans settled into fixed communities, their skeletons became less robust. This same process of gracilisation is seen in domesticated animals compared to their wild congeners. Certainly there have been as many human generations since the emergence of these complex societies as have sufficed to produce major adaptations in animal species under selective breeding.

Far more speculative and controversial is whether this selection process has been influenced by the nature of the cultures and societies which create the selection pressure. East Asian societies tend to be hierarchical, obedient to authority, and organised on a large scale. European societies, by contrast, are fractious, fissiparous, and prone to bottom-up insurgencies. Is this in part the result of genetic predispositions which have been selected for over millennia in societies which work that way?

It is assumed by many right-thinking people that all that is needed to bring liberty and prosperity to those regions of the world which haven't yet benefited from them is to create the proper institutions, educate the people, and bootstrap the infrastructure, then stand back and watch them take off. Well, maybe—but the history of colonialism, the mission civilisatrice, and various democracy projects and attempts at nation building over the last two centuries may suggest it isn't that simple. The population of the colonial, conquering, or development-aid-giving power has the benefit of millennia of domestication and adaptation to living in a settled society with division of labour. Its adaptations for tribalism have been largely bred out. Not so in many cases for the people they're there to “help”. Withdraw the colonial administration or occupation troops and before long tribalism will re-assert itself because that's the society for which the people are adapted.

Suggesting things like this is anathema in academia or political discourse. But look at the plain evidence of post-colonial Africa and more recent attempts of nation-building, and couple that with the emerging genetic evidence of variation in human populations and connections to behaviour and you may find yourself thinking forbidden thoughts. This book is an excellent starting point to explore these difficult issues, with numerous citations of recent scientific publications.

 Permalink

Thorne, Kip. The Science of Interstellar. New York: W. W. Norton, 2014. ISBN 978-0-393-35137-8.
Christopher Nolan's 2014 film Interstellar was eagerly awaited by science fiction enthusiasts who, having been sorely disappointed so many times by movies that crossed the line into fantasy by making up entirely implausible things to move the plot along, hoped that this effort would live up to its promise of getting the science (mostly) right and employing scientifically plausible speculation where our present knowledge is incomplete.

The author of the present book is one of the most eminent physicists working in the field of general relativity (Einstein's theory of gravitation) and a pioneer in exploring the exotic strong field regime of the theory, including black holes, wormholes, and gravitational radiation. Prof. Thorne was involved in the project which became Interstellar from its inception, and worked closely with the screenwriters, director, and visual effects team to get the science right. Some of the scenes in the movie, such as the visual appearance of orbiting a rotating black hole, have never been rendered accurately before, and are based upon original work by Thorne in computing light paths through spacetime in its vicinity which will be published as professional papers.

Here, the author recounts the often bumpy story of the movie's genesis and progress over the years from his own Hollywood-outsider perspective, and how the development of the story presented him, as technical advisor (he is credited as an executive producer), with problem after problem in finding a physically plausible solution, sometimes requiring him to do new physics. Thorne then provides a popular account of the exotic physics on which the story is based, including gravitational time dilation, black holes, wormholes, and speculative extra dimensions and “brane” scenarios stemming from string theory. Finally, he “interprets” the events and visual images in the film, explaining (where possible) how they could be produced by known, plausible, or speculative physics. Of course, this isn't always possible—in some cases the needs of story-telling or the requirement not to completely baffle a non-specialist with bewilderingly complicated and obscure images had to take priority over scientific authenticity, and when this is the case Thorne is forthright in admitting so.

Sections are labelled with icons identifying them as “truth”: generally accepted by those working in the field and often supported by experimental evidence; “educated guess”: a plausible inference from accepted physics, but without experimental evidence and assuming existing laws of physics remain valid in circumstances under which we've never tested them; and “speculation”: wild and woolly stuff (for example, quantum gravity or the interior structure of a black hole) which violates no known law of physics, but for which we have no complete and consistent theory and no evidence whatsoever.

This is a clearly written and gorgeously illustrated book which, for those who enjoyed the movie but weren't entirely clear whence came some of the stunning images they saw, will explain the science behind them. The cover of the book has a “SPOILER ALERT” warning potential readers that the ending and major plot details are given away in the text. I will refrain from discussing them here so as not to make this review a spoiler in itself. I have not yet seen the movie, and I expect when I do I will enjoy it more for having read the book, since I'll know what to look for in some of the visuals and be less likely to dismiss some of the apparently outrageous occurrences, knowing that there is a physically plausible (albeit extremely speculative and improbable) explanation for them.

For the animations and blackboard images mentioned in the text, the book directs you to a Web site which is so poorly designed and difficult to navigate it took me ten minutes to find them on the first visit. Here is a direct link. In the Kindle edition the index cites page numbers in the print edition which are useless since the electronic edition does not contain real page numbers. There are a few typographical errors and one factual howler: Io is not “Saturn's closest moon”, and Cassini was captured in Saturn orbit by a propulsion burn, not a gravitational slingshot (this does not affect the movie in any way: it's in background material).

 Permalink

Thor, Brad. Hidden Order. New York: Pocket Books, 2013. ISBN 978-1-4767-1710-4.
This is the thirteenth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Earlier novels have largely been in the mainstream of the “techno-thriller” genre, featuring missions in exotic locations confronting shadowy adversaries bent on inflicting great harm. The present book is a departure from this formula, being largely set in the United States and involving institutions considered pillars of the establishment such as the Federal Reserve System and the Central Intelligence Agency.

A CIA operative “accidentally” runs into a senior intelligence official of the Jordanian government in an airport lounge in Europe, who passes her disturbing evidence that members of a now-disbanded CIA team of which she was a member were involved in destabilising governments now gripped with “Arab Spring” uprisings and next may be setting their sights on Jordan.

Meanwhile, Scot Harvath, just returned from a harrowing mission on the high seas, is taken by his employer, Reed Carlton, to discreetly meet a new client: the Federal Reserve. The Carlton Group is struggling to recover from the devastating blow it took in the previous novel, Black List (August 2014), and its boss is willing to take on unconventional missions and new clients, especially ones “with a license to print their own money”. The chairman of the Federal Reserve has recently and unexpectedly died and the five principal candidates to replace him have all been kidnapped, almost simultaneously, across the United States. These people start turning up dead, in circumstances with symbolism dating back to the American revolution.

Investigation of the Jordanian allegations is shut down by the CIA hierarchy, and has to be pursued through back channels, involving retired people who know how the CIA really works. Evidence emerges of a black program that created weapons of frightful potential which may have gone even blacker and deeper under cover after being officially shut down.

Earlier Brad Thor novels were more along the “U-S-A! U-S-A!” line of most thrillers. Here, the author looks below the surface of highly dubious institutions (“The Federal Reserve is about as federal as Federal Express”) and evil that flourishes in the dark, especially when irrigated with abundant and unaccountable funds. Like many Americans, Scot Harvath knew little about the Federal Reserve other than it had something to do with money. Over the course of his investigations he, and the reader, will learn many disturbing things about its dodgy history and operations, all accurate as best I can determine.

The novel is as much police procedural as thriller, with Harvath teamed with a no-nonsense Boston Police Department detective, processing crime scenes and running down evidence. The story is set in an unspecified near future (the Aerion Supersonic Business Jet is in operation). All is eventually revealed in the end, with a resolution in the final chapter devoutly to be wished, albeit highly unlikely to occur in the cesspool of corruption which is real-world Washington. There is less action and fancy gear than in most Harvath novels, but interesting characters, an intricate mystery, and a good deal of information of which many readers may not be aware.

A short prelude to this novel, Free Fall, is available for free for the Kindle. It provides the background of the mission in progress in which we first encounter Scot Harvath in chapter 2 here. My guess is that this chapter was originally part of the manuscript and was cut for reasons of length and because it spent too much time on a matter peripheral to the main plot. It's interesting to read before you pick up Hidden Order, but if you skip it you'll miss nothing in the main story.

 Permalink

Robinson, Peter. How Ronald Reagan Changed My Life. New York: Harper Perennial, 2003. ISBN 978-0-06-052400-5.
In 1982, the author, a recent graduate of Dartmouth College who had spent two years studying at Oxford, then remained in England to write a novel, re-assessed his career prospects and concluded that, based upon experience, novelist did not rank high among them. He sent letters to everybody he thought might provide him leads on job opportunities. Only William F. Buckley replied, suggesting that Robinson contact his son, Christopher, then chief speechwriter for Vice President George H. W. Bush, who might know of some openings for speechwriters. Hoping at most for a few pointers, the author flew to Washington to meet Buckley, who was planning to leave the White House, creating a vacancy in the Vice President's speechwriting shop. After a whirlwind of interviews, Robinson found himself, in his mid-twenties, having never written a speech before in his life, at work in the Old Executive Office Building, tasked with putting words into the mouth of the Vice President of the United States.

After a year and a half writing for Bush, two of the President's speechwriters quit at the same time. Forced to find replacements on short notice, the head of the office recruited the author to write for Reagan: “He hired me because I was already in the building.” From then through 1988, he wrote speeches for Reagan, some momentous (Reagan's June 1987 speech at the Brandenburg Gate, where Robinson's phrase, “Mr. Gorbachev, tear down this wall!”, uttered by Reagan against vehement objections from the State Department and some of his senior advisers, was a pivotal moment in the ending of the Cold War), but also many more for less epochal events such as visits of Boy Scouts to the White House, ceremonies honouring athletes, and the dozens of other circumstances where the President was called upon to “say a few words”. And because the media were quick to pounce on any misstatement by the President, even the most routine remarks had to be meticulously fact-checked by a team of researchers. For every grand turn of phrase in a high-profile speech, there were many moments spent staring at the blank screen of a word processor as the deadline for some inconsequential event loomed ever closer, wondering, “How am I supposed to get twenty minutes out of that?”

But this is not just a book about the life of a White House speechwriter (although there is plenty of insight to be had on that topic). Its goal is to collect and transmit the wisdom that a young man, in his first job, learned by observing Ronald Reagan masterfully doing the job to which he had aspired since entering politics in the 1960s. Reagan was such a straightforward and unaffected person that many underestimated him. For example, compared to the hard-driving types toiling from dawn to dusk who populate many White House positions, Reagan never seemed to work very hard. He would rise at his accustomed hour, work for five to eight hours at his presidential duties, exercise, have dinner, review papers, and get to bed on time. Some interpreted this as his being lazy, but Robinson's fellow speechwriter, Clark Judge, remarked “He never confuses inputs with output. … Who cares how many hours a day a President puts in? It's what a President accomplishes that matters.”

There are lessons aplenty here, all illustrated with anecdotes from the Reagan White House: the distinction between luck and the results of persistence in the face of adversity, seen in retrospect; the unreasonable effectiveness and inherent dignity of doing one's job, whatever it be, well; viewing life not as background scenery but rather as an arena in which one can act, changing not just the outcome but the circumstances one encounters; the power of words, especially those sincerely believed and founded in comprehensible, time-proven concepts; scepticism toward the pronouncements of “experts” whose oracle-like proclamations make sense only to other experts—if it doesn't make sense to an intelligent person with some grounding in the basics, it probably doesn't make sense, period; the importance of marriage, and how the Reagans complemented one another in facing the challenges and stress of the office; the centrality of faith, tempered by a belief in free will and the importance of the individual; how true believers and pragmatists, despite how often they despise one another, are both essential to actually getting things done; and that what ultimately matters is what you make of whatever situation in which you find yourself.

These are all profound lessons to take on board, especially in the drinking-from-a-firehose environment of the Executive Office of the President, and in one's twenties. But this is not a dour self-help book: it is an insightful, beautifully written, and often laugh-out-loud funny account of how these insights were gleaned on the job, by observing Reagan at work and how he and his administration got things done, often against fierce political and media opposition. This is one of those books I wish I could travel back in time and hand to my twenty-year-old self—it would have saved a great deal of time and anguish, even for a person like me who has no interest whatsoever in politics. Fundamentally, it's about getting things done, and that's universally applicable.

People matter. Individuals matter. Long before Ronald Reagan was a radio broadcaster, actor, or politician, he worked summers as a lifeguard. Between 1927 and 1932, he personally saved 77 people from drowning. “There were seventy-seven people walking around northern Illinois who wouldn't have been there if it hadn't been for Reagan—and Reagan knew it.” It is not just a few exceptional people who change the world for the better, but all of those who do their jobs and overcome the challenges with which life presents them. Learning this can change anybody's life.

More recently, Mr. Robinson is the host of Uncommon Knowledge and co-founder of Ricochet.com.

 Permalink