2017  

January 2017

Brown, Brandon R. Planck. Oxford: Oxford University Press, 2015. ISBN 978-0-19-021947-5.
Theoretical physics is usually a young person's game. Many of the greatest breakthroughs have been made by researchers in their twenties, having just mastered existing theories while remaining intellectually flexible and open to new ideas. Max Planck, born in 1858, was an exception to this rule. He spent most of his twenties living with his parents and despairing of finding a paid position in academia. He was thirty-six when he took on the project of understanding heat radiation, and forty-two when he explained it in terms which would launch the quantum revolution in physics. He was in his fifties when he discovered the zero-point energy of the vacuum, and remained engaged and active in science until shortly before his death in 1947 at the age of 89. As theoretical physics editor for the then most prestigious physics journal in the world, Annalen der Physik, in 1905 he approved publication of Einstein's special theory of relativity, embraced the new ideas from a young outsider with neither a Ph.D. nor an academic position, extended the theory in his own work in subsequent years, and was instrumental in persuading Einstein to come to Berlin, where the two became close friends.

Sometimes the simplest puzzles lead to the most profound of insights. At the end of the nineteenth century, the radiation emitted by heated bodies was such a conundrum. All objects emit electromagnetic radiation due to the thermal motion of their molecules. If an object is sufficiently hot, such as the filament of an incandescent lamp or the surface of the Sun, some of the radiation will fall into the visible range and be perceived as light. Cooler objects emit in the infrared or lower frequency bands and can be detected by instruments sensitive to them. The radiation emitted by a hot object has a characteristic spectrum (the distribution of energy by frequency) whose peak depends only upon the temperature of the body. One of the simplest cases is that of a black body, an ideal object which perfectly absorbs all incident radiation. Consider an ideal closed oven which loses no heat to the outside. When heated to a given temperature, its walls will absorb and re-emit radiation, with the spectrum depending upon its temperature. But the equipartition theorem, a cornerstone of statistical mechanics, predicted that the absorption and re-emission of radiation in the closed oven would concentrate ever more energy at ever higher frequencies, with the total energy diverging to infinity: the so-called ultraviolet catastrophe. Not only did this violate the law of conservation of energy, it was an affront to common sense: closed ovens do not explode like nuclear bombs. And yet the theory which predicted this behaviour, the Rayleigh-Jeans law, made perfect sense based upon the motion of atoms and molecules, correctly predicted numerous physical phenomena, and was correct for thermal radiation at lower frequencies.

At the time Planck took up the problem of thermal radiation, experimenters in Germany were engaged in measuring the radiation emitted by hot objects with ever-increasing precision, confirming the discrepancy between theory and reality, and falsifying several attempts to explain the measurements. In December 1900, Planck presented his new theory of black body radiation and what is now called Planck's Law at a conference in Berlin. Written in modern notation, his formula for the energy emitted by a body of temperature T at frequency ν is:

$$B_\nu(T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / k_B T} - 1}$$

This equation not only correctly predicted the results measured in the laboratories, it avoided the ultraviolet catastrophe, since it predicts that emission at frequencies well above a scale set by the object's temperature is exponentially suppressed. This meant that the absorption and re-emission of radiation in the closed oven could never run away to infinity, because only a negligible amount of energy can be emitted far beyond the limit imposed by the temperature.
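Since the formula is so simple, its behaviour is easy to explore numerically. Here is a minimal sketch (my own, not from the book) which evaluates Planck's law and locates the emission peak for a room-temperature object and for the Sun's photosphere; the constants are standard CODATA values, and the overflow guard reflects the exponential suppression of high frequencies discussed above.

```python
# A minimal sketch (not from the book): evaluating Planck's law numerically
# to watch the emission peak shift with temperature.
import math

h  = 6.62607015e-34    # Planck's constant, J·s
c  = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23      # Boltzmann's constant, J/K

def planck(nu, T):
    """Spectral radiance B(nu, T), in W·sr⁻¹·m⁻²·Hz⁻¹."""
    x = h * nu / (kB * T)
    if x > 700:            # exp() would overflow; radiance is effectively zero
        return 0.0
    return (2 * h * nu**3 / c**2) / math.expm1(x)

# Locate the peak for a room-temperature object and the solar photosphere.
for T in (300.0, 5772.0):
    nus = [10 ** (i / 100) for i in range(1200, 1600)]   # 10¹² … 10¹⁶ Hz
    peak = max(nus, key=lambda nu: planck(nu, T))
    print(f"T = {T:6.0f} K: peak emission near {peak:.2e} Hz")
```

With these inputs the peak lands near 1.8×10¹³ Hz at 300 K and near 3.4×10¹⁴ Hz at 5772 K, in accord with Wien's displacement law: the hotter the body, the higher the peak frequency.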

Fine: the theory explained the measurements. But what did it mean? More than a century later, we're still trying to figure that out.

Planck modeled the walls of the oven as a series of resonators, but unlike earlier theories in which each could emit energy at any frequency, he constrained them to produce discrete chunks of energy with a value determined by the frequency emitted. This had the result of imposing a limit on the frequency due to the available energy. While this assumption yielded the correct result, Planck, deeply steeped in the nineteenth century tradition of the continuum, did not initially suggest that energy was actually emitted in discrete packets, considering this aspect of his theory “a purely formal assumption.” Planck's 1900 paper generated little reaction: it was observed to fit the data, but the theory and its implications went over the heads of most physicists.

In 1905, in his capacity as editor of Annalen der Physik, he read and approved the publication of Einstein's paper on the photoelectric effect, which explained another physics puzzle by assuming that light was actually emitted in discrete bundles with an energy determined by its frequency. But Planck, whose equation manifested the same property, wasn't ready to go that far. As late as 1913, he wrote of Einstein, “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.” Only in the 1920s did Planck fully accept the implications of his work as embodied in the emerging quantum theory.

The equation for Planck's Law contained two new fundamental physical constants: Planck's constant (h) and Boltzmann's constant (kB). (Boltzmann's constant was named in memory of Ludwig Boltzmann, the pioneer of statistical mechanics, who committed suicide in 1906. The constant was first introduced by Planck in his theory of thermal radiation.) Planck realised that these new constants, which related the worlds of the very large and very small, together with other physical constants such as the speed of light (c), the gravitational constant (G), and the Coulomb constant (ke), made it possible to define a system of units for quantities such as length, mass, time, electric charge, and temperature which were truly fundamental: derived from the properties of the universe we inhabit, and therefore comprehensible to intelligent beings anywhere in the universe. Most systems of measurement are derived from parochial anthropocentric quantities such as the temperature of somebody's armpit or the supposed distance from the north pole to the equator. Planck's natural units have no such dependencies, and when one does physics using them, equations become simpler and more comprehensible. The magnitudes of the Planck units are so far removed from the human scale that they're unlikely to find any application outside theoretical physics (imagine speed limit signs expressed in a fraction of the speed of light, or road signs giving distances in Planck lengths of 1.62×10⁻³⁵ metres), but they reflect the properties of the universe and may indicate the limits of our ability to understand it (for example, it may not be physically meaningful to speak of a distance smaller than the Planck length or an interval shorter than the Planck time [5.39×10⁻⁴⁴ seconds]).
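As a concrete illustration, here is a sketch of my own (using the conventional modern definitions based on the reduced Planck constant ħ = h/2π, a convention not spelled out in the book) showing how the Planck units follow directly from the constants:

```python
# A minimal sketch (not from the book): deriving Planck units from the
# fundamental constants, using the conventional ħ-based definitions.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J·s
G    = 6.67430e-11       # Newtonian gravitational constant, m³·kg⁻¹·s⁻²
c    = 2.99792458e8      # speed of light, m/s
kB   = 1.380649e-23      # Boltzmann's constant, J/K

l_P = math.sqrt(hbar * G / c**3)    # Planck length       ≈ 1.62e-35 m
t_P = math.sqrt(hbar * G / c**5)    # Planck time         ≈ 5.39e-44 s
m_P = math.sqrt(hbar * c / G)       # Planck mass         ≈ 2.18e-8 kg
T_P = m_P * c**2 / kB               # Planck temperature  ≈ 1.42e32 K

print(f"Planck length:      {l_P:.3e} m")
print(f"Planck time:        {t_P:.3e} s")
print(f"Planck mass:        {m_P:.3e} kg")
print(f"Planck temperature: {T_P:.3e} K")
```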

Planck's life was long and productive, and he enjoyed robust health (he continued his long hikes in the mountains into his eighties), but it was marred by tragedy. His first wife, Marie, died of tuberculosis in 1909. He outlived four of his five children. His son Karl was killed in 1916 in World War I. His two daughters, Grete and Emma, both died in childbirth, in 1917 and 1919. His son and close companion Erwin, who survived capture and imprisonment by the French during World War I, was arrested and executed by the Nazis in 1945 on suspicion of involvement in the Stauffenberg plot to assassinate Hitler. (There is no evidence Erwin was a part of the conspiracy, but he was anti-Nazi and knew some of those involved in the plot.)

Planck was repulsed by the Nazis, especially after a private meeting with Hitler in 1933, but continued in his post as the head of the Kaiser Wilhelm Society until 1937. He thought of himself as a German patriot and never considered emigrating (and doubtless his being 75 years old when Hitler came to power was a factor). He opposed and resisted the purging of Jews from German scientific institutions and the campaign against “Jewish science”, but when ordered to dismiss non-Aryan members of the Kaiser Wilhelm Society, he complied. When Heisenberg approached him for guidance, he said, “You have come to get my advice on political questions, but I am afraid I can no longer advise you. I see no hope of stopping the catastrophe that is about to engulf all our universities, indeed our whole country. … You simply cannot stop a landslide once it has started.”

Planck's house near Berlin was destroyed in an Allied bombing raid in February 1944, and with it a lifetime of his papers, photographs, and correspondence. (He and his second wife Marga had evacuated to Rogätz in 1943 to escape the raids.) As a result, historians have only limited primary sources from which to work, and the present book does an excellent job of recounting the life and science of a man whose work laid part of the foundations of twentieth century science.


Wolfe, Tom. The Kingdom of Speech. New York: Little, Brown, 2016. ISBN 978-0-316-40462-4.
In this short (192 page) book, Tom Wolfe returns to his roots in the “new journalism”, of which he was a pioneer in the 1960s. Here the topic is the theory of evolution; the challenge posed to it by human speech (because no obvious precursor to speech occurs in other animals); attempts, from Darwin to Noam Chomsky, to explain this apparent discrepancy and preserve the status of evolution as a “theory of everything”; and the evidence collected by linguist and anthropologist Daniel Everett among the Pirahã people of the Amazon basin in Brazil, which appears to falsify Chomsky's lifetime of work on the origin of human language and the universality of its structure. A second theme is contrasting theorists and intellectuals such as Darwin and Chomsky with “flycatchers” such as Alfred Russel Wallace, Darwin's rival for priority in publishing the theory of evolution, and Daniel Everett, who work in the field—often in remote, unpleasant, and dangerous conditions—to collect the data upon which the grand thinkers erect their castles of hypothesis.

Doubtless fearful of the reaction if he suggested the theory of evolution applied to the origin of humans, in his 1859 book On the Origin of Species, Darwin only tiptoed close to the question two pages from the end, writing, “In the distant future, I see open fields for far more important researches. Psychology will be securely based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history.” He needn't have been so cautious: he fooled nobody. The very first review, five days before publication, asked, “If a monkey has become a man—…?”, and the tempest was soon at full force.

Darwin's critics, among them Max Müller, German-born professor of languages at Oxford, and Darwin's rival Alfred Wallace, seized upon human characteristics which had no obvious precursors in the animals from which man was supposed to have descended: a hairless body, the capacity for abstract thought, and, Müller's emphasis, speech. As Müller said, “Language is our Rubicon, and no brute will dare cross it.” How could Darwin's theory, which claimed to describe evolution from existing characteristics in ancestor species, explain completely novel properties which animals lacked?

Darwin responded with his 1871 The Descent of Man, and Selection in Relation to Sex, which explicitly argued that there were precursors to these supposedly novel human characteristics among animals, and that, for example, human speech was foreshadowed by the mating songs of birds. Sexual selection was suggested as the mechanism by which humans lost their hair, and the roots of a number of human emotions and even religious devotion could be found in the behaviour of dogs. Many found these arguments, presented without any concrete evidence, unpersuasive. The question of the origin of language had become so controversial and toxic that a year later, the Philological Society of London announced it would no longer accept papers on the subject.

With the rediscovery of Gregor Mendel's work on genetics and subsequent research in the field, a mechanism which could explain Darwin's evolution was in hand, and the theory became widely accepted, with the few discrepancies set aside (as the Philological Society had done with the origin of language) as things we weren't yet ready to figure out.

In the years after World War II, the social sciences became afflicted by a case of “physics envy”. The contribution to the war effort by their colleagues in the hard sciences in areas such as radar, atomic energy, and aeronautics had been handsomely rewarded by prestige and funding, while the more squishy sciences remained in a prewar languor along with the departments of Latin, Medieval History, and Drama. Clearly, what was needed was for these fields to adopt a theoretical approach grounded in mathematics, which had served chemists, physicists, and engineers so well, and which appeared to be working for the new breed of economists.

It was into this environment that in the late 1950s a young linguist named Noam Chomsky burst onto the scene. Over its century and a half of history, much of the work of linguistics had been cataloguing and studying the thousands of languages spoken by people around the world, much as entomologists and botanists (or, in the pejorative term of Darwin's age, flycatchers) travelled to distant lands to discover the diversity of nature and try to make sense of how it was all interrelated. In his 1957 book, Syntactic Structures, Chomsky, then just twenty-eight years old and working in the building at MIT where radar had been developed during the war, said all of this tedious and messy field work was unnecessary. Humans had evolved (note, “evolved”) a “language organ”, an actual physical structure within the brain—the “language acquisition device”—which children used to learn and speak the language they heard from their parents. All human languages shared a “universal grammar”, on top of which all the details of specific languages so carefully catalogued in the field were just fluff, like the specific shape and colour of butterflies' wings. Chomsky invented the “Martian linguist” who would come to feature in his lectures: arriving on Earth, he claimed, such a visitor would quickly discover the unity underlying all human languages. No longer need the linguist leave his air conditioned office. As Wolfe writes in chapter 4, “Now, all the new, Higher Things in a linguist's life were to be found indoors, at a desk…looking at learned journals filled with cramped type instead of at a bunch of hambone faces in a cloud of gnats.”

Given the alternatives, most linguists opted for the office, and for the prestige that a theory-based approach to their field conferred, and by the 1960s, Chomsky's views had taken over linguistics, with only a few dissenters, at whom Chomsky hurled thunderbolts from his perch on academic Olympus. He transmuted into a general-purpose intellectual, pronouncing on politics, economics, philosophy, history, and whatever occupied his fancy, all with the confidence and certainty he brought to linguistics. Those who dissented he denounced as “frauds”, “liars”, or “charlatans”, including B. F. Skinner, Alan Dershowitz, Jacques Lacan, Elie Wiesel, Christopher Hitchens, and Jacques Derrida. (Well, maybe I agree when it comes to Derrida and Lacan.) In 2002, with two colleagues, he published a new theory claiming that recursion—embedding one thought within another—was a universal property of human language and a component of the universal grammar hard-wired into the brain.

Since 1977, Daniel Everett had been living with and studying the Pirahã in Brazil, originally as a missionary and later as an academic linguist trained and working in the Chomsky tradition. He was the first person to successfully learn the Pirahã language, and documented it in publications. In 2005 he published a paper in which he concluded that the language, one of the simplest ever described, contained no recursion whatsoever. It also had neither a past nor a future tense, and lacked terms for kinship relations beyond parents and siblings, gender, numbers, and many other features of other languages. But the absence of recursion falsified Chomsky's theory, which had pronounced it a fundamental part of all human languages. Here was a field worker, a flycatcher, braving not only gnats but anacondas, caimans, and just about every tropical disease in the catalogue, knocking the foundation from beneath the great man's fairy castle of theory. Naturally, Chomsky and his acolytes responded with their customary vituperation (this time, the epithet of choice for Everett was “charlatan”). Just as they were preparing the academic paper which would drive a stake through this nonsense, Everett published Don't Sleep, There Are Snakes, a combined account of his thirty years with the Pirahã and an analysis of their language. The book became a popular hit and won numerous awards. In 2012, Everett followed up with Language: The Cultural Tool, which rejects Chomsky's view of language as an innate and universal human property in favour of the view that it is one among a multitude of artifacts created by human societies as a tool, and necessarily reflects the characteristics of those societies. Chomsky now refuses to discuss Everett's work.

In the conclusion, Wolfe comes down on the side of Everett, and argues that the solution to the mystery of how speech evolved is that it didn't evolve at all. Speech is simply a tool which humans used their big brains to invent to help them accomplish their goals, just as they invented bows and arrows, canoes, and microprocessors. It doesn't make any more sense to ask how evolution produced speech than it does to suggest it produced any of those other artifacts not made by animals. He further suggests that the invention of speech proceeded from initial use of sounds as mnemonics for objects and concepts, then progressed to more complex grammatical structure, but I found little evidence in his argument to back the supposition, nor is this a necessary part of viewing speech as an invented artifact. Chomsky's grand theory, like most theories made up without grounding in empirical evidence, is failing both by being falsified on its fundamentals by the work of Everett and others, and also by the failure, despite half a century of progress in neurophysiology, to identify the “language organ” upon which it is based.

It's somewhat amusing to see soft science academics rush to Chomsky's defence, when he's arguing that language is biologically determined as opposed to being, as Everett contends, a social construct whose details depend upon the cultural context which created it. A hunter-gatherer society such as the Pirahã, living in an environment where food is abundant and little changes over time scales from days to generations, doesn't need a language as complicated as that of an agricultural society with division of labour, and it shouldn't be a surprise to find their language is more rudimentary. Chomsky assumed that all human languages were universal (able to express any concept), in the sense David Deutsch defined universality in The Beginning of Infinity, but why should every people have a universal language when some cultures get along just fine without universal number systems or alphabets? Doesn't it make a lot more sense to conclude that people settle on a language, like any other tool, which gets the job done? Wolfe then argues that the capacity of speech is the defining characteristic of human beings, and enables all of the other human capabilities and accomplishments which animals lack. I'd consider this not proved. Why isn't the definitive human characteristic the ability to make tools, and language simply one among a multitude of tools humans have invented?

This book strikes me as one or two interesting blog posts struggling to escape from a snarknado of Wolfe's 1960s style verbal fireworks, including Bango!, riiippp, OOOF!, and “a regular crotch crusher!”. At age 85, he's still got it, but I wonder whether he, or his editor, questioned whether this style of journalism is as effective when discussing evolutionary biology and linguistics as in mocking sixties radicals, hippies, or pretentious artists and architects. There is some odd typography, as well. Grave accents are used in words like “learnèd”, presumably to indicate it's to be pronounced as two syllables, but then occasionally we get an acute accent instead—what's that supposed to mean? Chapter endnotes are given as superscript letters while source citations are superscript numbers, neither of which is easy to select on a touch-screen Kindle edition. There is no index.


February 2017

Verne, Jules. Hector Servadac. Seattle: CreateSpace, [1877] 2014. ISBN 978-1-5058-3124-5.
Over the years, I have been reading my way through the classic science fiction novels of Jules Verne, and I have prepared public domain texts of three of them which are available on my site and Project Gutenberg. Verne not only essentially invented the modern literary genre of science fiction, he was an extraordinarily prolific author, publishing sixty-two novels in his Voyages extraordinaires between 1863 and 1905. What prompted me to pick up the present work was an interview I read in December 2016, in which Freeman Dyson recalled that it was reading this book at around the age of eight which, more than anything, set him on a course to become a mathematician and physicist. He notes that he originally didn't know it was fiction, and was disappointed to discover the events recounted hadn't actually happened. Well, that's about as good a recommendation as you can get, so I decided to put Hector Servadac on the list.

On the night of December 31–January 1, Hector Servadac, a captain in the French garrison at Mostaganem in Algeria, found it difficult to sleep, since in the morning he was to fight a duel with Wassili Timascheff, his rival for the affections of a young woman. During the night, the captain and his faithful orderly Laurent Ben-Zouf perceived an enormous shock, regained consciousness amid the ruins of their hut, and found themselves in a profoundly changed world.

Thus begins a scientific detective story much different from many of Verne's other novels. We have the resourceful and intrepid Captain Servadac and his humorous sidekick Ben-Zouf, to be sure, but instead of undertaking a perilous voyage of exploration, they are taken on a voyage by forces unknown, and must discover what has happened and explain the odd phenomena they are experiencing. And those phenomena are curious, indeed: the Sun rises in the west and sets in the east, and the day is now only twelve hours long; their weight, and that of all objects, has been dramatically reduced, and they can now easily bound high into the air; the air itself seems to have become as thin as on high mountain peaks; the Moon has vanished from the sky; the pole has shifted and there is a new north star; and their latitude now seems to be near the equator.

Exploring their environs only adds mysteries to the ever-growing list. They now seem to inhabit an island of which they are the only residents: the rest of Algeria has vanished. Eventually they make contact with Count Timascheff, whose yacht was standing offshore, and, setting aside their dispute (the duel deferred in light of greater things is a theme you'll find elsewhere in the works of Verne), they seek to explore the curiously altered world they now inhabit.

Eventually, they discover its inhabitants seem to number only thirty-six: themselves; the Russian crew of Timascheff's yacht; some Spanish workers; a young Italian girl and a Spanish boy; Isac Hakhabut, a German Jewish itinerant trader whose ship full of merchandise survived the cataclysm; the remainder of the British garrison at Gibraltar, which has been cut off and reduced to a small island; and Palmyrin Rosette, formerly Servadac's teacher (and each the other's nemesis), an eccentric and irritable astronomer. They set out on a voyage of exploration and begin to grasp what has happened and what they must do to survive.

In 1865, Verne took us De la terre à la lune. Twelve years later, he treats us to a tour of the solar system, from the orbit of Venus to that of Jupiter, with abundant details of what was known about our planetary neighbourhood in his era. As usual, his research is nearly impeccable, although the orbital mechanics are fantasy and must be attributed to literary license: a body with an orbit which crosses those of Venus and Jupiter cannot have an orbital period of two years—it will be around five—but that wouldn't work with the story. Verne has his usual fun with the national characteristics of those we encounter. Modern readers may find the descriptions of the miserly Jew Hakhabut and the happy but indolent Spaniards offensive—so be it—such is nineteenth century literature.
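The orbital-period objection is easy to check with Kepler's third law, which in units of years and astronomical units reduces to P² = a³. A quick sketch of my own (not from the review):

```python
# A minimal sketch: the shortest orbit crossing those of both Venus and
# Jupiter, via Kepler's third law (P in years, a in AU: P**2 == a**3).
r_venus   = 0.72   # radius of Venus's orbit, AU
r_jupiter = 5.20   # radius of Jupiter's orbit, AU

# Smallest such orbit: perihelion at Venus, aphelion at Jupiter.
a = (r_venus + r_jupiter) / 2    # semi-major axis, AU
P = a ** 1.5                     # orbital period, years

print(f"a ≈ {a:.2f} AU, P ≈ {P:.1f} years")   # ≈ 2.96 AU, ≈ 5.1 years
```

Any orbit reaching both planets must have a semi-major axis of about three astronomical units or more, hence a period of at least five years, just as stated above.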

This is a grand adventure: funny, enlightening, and engaging the reader in puzzling out mysteries of physics, astronomy, geology, chemistry, and, if you're like this reader, checking the author's math (which, orbital mechanics aside, is more or less right, although he doesn't make the job easy by using a multitude of different units). It's completely improbable, of course—you don't go to Jules Verne for that: he's the fellow who shot people to the Moon with a nine hundred foot cannon—but just as readers of modern science fiction are willing to accept faster than light drives to make the story work, a little suspension of disbelief here will yield a lot of entertainment.

Jules Verne is the second most translated of modern authors (Agatha Christie is the first) and the most translated of those writing in French. Regrettably, Verne and his reputation have suffered from poor translation. He is a virtuoso of the French language, using his large vocabulary to layer meanings and subtexts beneath the surface, and many translators fail to preserve these subtleties. There have been several English translations of this novel under different titles (which I shall decline to state, as they are spoilers for the first half of the book), none of which is deemed worthy of the original.

I read the Kindle edition from Arvensa, which is absolutely superb. You don't usually expect much when you buy a Kindle version of a public domain work for US$ 0.99, but in this case you'll receive a thoroughly professional edition free of typographical errors which includes all of the illustrations from the original 1877 Hetzel edition. In addition there is a comprehensive biography of Jules Verne and an account of his life and work published at the height of his career. Further, the Kindle French dictionary, a free download, is invaluable when coping with Verne's enormous vocabulary. Verne is very fond of obscure terms, and whether discussing nautical terminology, geology, astronomy, or any other specialties, peppers his prose with jargon which used to send me off to flip through the Little Bob. Now it's just a matter of highlighting the word (in the iPad Kindle app), and up pops the definition from the amazingly comprehensive dictionary. (This is a French-French dictionary; if you need a dictionary which provides English translations, you'll need to install such an application.) These Arvensa Kindle editions are the best way to enjoy Jules Verne and other classic French authors, and I will definitely seek out others to read in the future. You can obtain the complete works of Jules Verne, 160 titles, with 5400 illustrations, for US$ 2.51 at this writing.


Jenne, Mike. Pale Blue. New York: Yucca Publishing, 2016. ISBN 978-1-63158-084-0.
This is the final novel in the trilogy which began with Blue Gemini (April 2016) and continued in Blue Darker than Black (August 2016). After the harrowing rescue mission which concluded the second book, Drew Carson and Scott Ourecky, astronauts of the U.S. Air Force's covert Blue Gemini project, a manned satellite interceptor based upon NASA's Project Gemini spacecraft, hope for a long stand-down before what is slated to be the final mission of the project. Its future is uncertain due to funding issues, inter-service rivalry, damage to its Pacific island launch site from a recent tropical storm, and the upcoming 1972 presidential election.

Meanwhile, in the Soviet Union, progress continues on the Krepost project: a manned space station equipped for surveillance and armed with a nuclear warhead which can be de-orbited and dropped on any target along the station's ground track. General Rustam Abdirov, a survivor of the Nedelin disaster in 1960, is pushing the project to completion through his deputy, Gregor Yohzin, and believes it may hold the key to breaking what Abdirov sees as the stalemate of the Cold War. Yohzin is increasingly worried about Abdirov's stability and the risks posed by the project, and has been covertly passing information to U.S. intelligence.

As information from Yohzin's espionage reaches Blue Gemini headquarters, Carson and Ourecky are summoned back and plans are drawn up to intercept the orbital station before a crew can be launched to it, after which destroying it would not only be hazardous, but could provoke a superpower confrontation. On the Soviet side, nothing is proceeding as planned, and the interception mission must twist and turn based upon limited and shifting information.

About halfway through the book, and after some big surprises, the Krepost crisis is resolved. The reader might be inclined, then, to wonder “what next?” What follows is a war story, set in the final days of the Vietnam conflict, and for quite a while it seems incongruous and unrelated to all that has gone before. I have remarked in reviews of the earlier books of the trilogy that the author is keeping a large number of characters and sub-plots in the air, and wondered whether and how he was going to bring it all together. Well, in the last five chapters he does it, magnificently, and ties everything up with a bow on top, ending what has been a rewarding thriller in a moving, human conclusion.

There are a few goofs. Launch windows to inclined Earth orbits occur every day; in case of a launch delay, there is no need for a long wait before the next launch attempt (chapter 4). Attempting to solve a difficult problem, “the variables refused to remain constant”—that's why they're called variables (chapter 10)! Beaujolais is red, not white, wine (chapter 16). A character claims to have seen a hundred stars in the Pleiades from space with the unaided eye. This is impossible: while the cluster contains around 1000 stars, only 14 are bright enough to be seen with the best human vision under the darkest skies. Observing from space is slightly better than from the Earth's surface, but in this case the observer would have been looking through a spacecraft window, which would attenuate light more than the Earth's atmosphere (chapter 25). MIT's Draper Laboratory did not design the Gemini on-board computer; it was developed by the IBM Federal Systems Division (chapter 26).

The trilogy is a big, sprawling techno-thriller with interesting and complicated characters and includes space flight, derring do in remote and dangerous places, military and political intrigue in both the U.S. and Soviet Union, espionage, and a look at how the stresses of military life and participation in black programs make the lives of those involved in them difficult. Although the space program which is the centrepiece of the story is fictional, the attention to detail is exacting: had it existed, this is probably how it would have been done. I have one big quibble with a central part of the premise, which I will discuss behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
The rationale for the Blue Gemini program which caused it to be funded is largely as a defence against a feared Soviet “orbital bombardment system”: one or more satellites which, placed in orbits which regularly overfly the U.S. and allies, could be commanded to deorbit and deliver nuclear warheads to any location below. It is the development of such a weapon, its deployment, and a mission to respond to the threat which form the core of the plot of this novel.

But an orbital bombardment system isn't a very useful weapon, and doesn't make much sense, especially in the context of the late 1960s to early '70s in which this story is set. The Krepost of the novel was armed with a single high-yield weapon, and operated in a low Earth orbit at an inclination of 51°. The weapon was equipped with only a retrorocket and heat shield, and would have little cross-range (ability to hit targets lateral to its orbital path). This would mean that in order to hit a specific target, the orbital station would have to wait up to a day for the Earth to rotate so the target was aligned with the station's orbital plane. And this would allow bombardment of only a single target with one warhead. Keeping the station ready for use would require a constant series of crew ferry and freighter launches, all to maintain just one bomb on alert. By comparison, by 1972, the Soviet Union had on the order of a thousand warheads mounted on ICBMs, which required no space launch logistics to maintain, and could reach targets anywhere within half an hour of the launch order being given. Finally, a space station in low Earth orbit is pretty much a sitting duck for countermeasures. It is easy to track from the ground, and has limited maneuvering capability. Even guns in space do not much mitigate the threat from a variety of anti-satellite weapons, including Blue Gemini.
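A rough calculation with illustrative numbers (my own; the novel gives no figures) shows where the “wait up to a day” limitation comes from: the Earth rotates beneath the fixed orbital plane, so the ground track marches westward each revolution, and a target is only reachable when it lies near the track.

```python
# A minimal sketch (illustrative numbers, not from the novel): how long a
# single low-orbit station with negligible cross-range may have to wait
# before its ground track carries it over an arbitrary target.
period_min = 90.0                                  # assumed LEO period, minutes
shift_per_orbit = 360.0 * period_min / (24 * 60)   # westward track shift, degrees

orbits = 360.0 / shift_per_orbit                   # orbits to sweep all longitudes
hours  = orbits * period_min / 60

print(f"track shifts {shift_per_orbit:.1f}° per orbit; "
      f"all longitudes swept in {orbits:.0f} orbits ≈ {hours:.0f} hours")
```

With a ninety-minute orbit the track shifts 22.5° per revolution, so it sweeps all longitudes only once per day, which is what makes such a weapon so much slower to bring to bear than an ICBM.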

While the drawbacks of orbital deployment of nuclear weapons caused the U.S. and Soviet Union to eschew them in favour of more economical and secure platforms such as silo-based missiles and ballistic missile submarines, their appearance here does not make this “what if?” thriller any less effective or thrilling. This was the peak of the Cold War, and both adversaries explored many ideas which, in retrospect, appear to have made little sense. A hypothetical Soviet nuclear-armed orbital battle station is no less crazy than Project Pluto in the U.S.

Spoilers end here.  
This trilogy is one long story which spans three books. The second and third novels begin with brief summaries of prior events, but these are intended mostly for readers who have forgotten where the previous volume left off. If you don't read the three books in order, you'll miss a great deal of the character and plot development which makes the entire story so rewarding. More than 1600 pages may seem a large investment in a fictional account of a Cold War space program that never happened, but the technical authenticity; realistic portrayal of military aerospace projects and the interaction of pilots, managers, engineers, and politicians; and complicated and memorable characters made it more than worthwhile to this reader.


March 2017

Awret, Uziel, ed. The Singularity. Exeter, UK: Imprint Academic, 2016. ISBN 978-1-84540-907-4.
For more than half a century, the prospect of a technological singularity has been part of the intellectual landscape of those envisioning the future. In 1965, in a paper titled “Speculations Concerning the First Ultraintelligent Machine”, statistician I. J. Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity was largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, largely because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment.

Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn't matter what fixed level you're aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century. There seemed to be a limited number of alternative outcomes.

  1. Progress in computing comes to a halt before reaching parity with human brain power, due to technological limits, economics (inability to afford the new technologies required, or lack of applications to fund the intermediate steps), or intervention by authority (for example, regulation motivated by a desire to avoid the risks and displacement due to super-human intelligence).
  2. Computing continues to advance, but we find that the human brain is either far more complicated than we believed it to be, or that something is going on in there which cannot be modelled or simulated by a deterministic computational process. The goal of human-level artificial intelligence recedes into the distant future.
  3. Blooie! Human level machine intelligence is achieved, successive generations of machine intelligences run away to approach the physical limits of computation, and before long machine intelligence exceeds that of humans to the degree humans surpass the intelligence of mice (or maybe insects).

Now, the thing about this is that many people will dismiss such speculation as science fiction having nothing to do with the “real world” they inhabit. But there's no more conservative form of forecasting than observing a trend which has been in existence for a long time (in the case of growth in computing power, more than a century, spanning multiple generations of very different hardware and technologies), and continuing to extrapolate it into the future and then ask, “What happens then?” When you go through this exercise and an answer pops out which seems to indicate that within the lives of many people now living, an event completely unprecedented in the history of our species—the emergence of an intelligence which far surpasses that of humans—might happen, the prospects and consequences bear some serious consideration.
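The arithmetic behind those straight lines on semi-log paper is trivial. Here is a sketch of the threshold-crossing calculation; every figure in it is an illustrative assumption of mine, not a number from the book:

```python
# A minimal sketch (all figures are illustrative assumptions): if compute
# per constant cost grows exponentially, when does it cross an assumed
# "human brain equivalent" threshold?
import math

current_flops  = 1e13   # assumed compute of a present-day personal machine
brain_flops    = 1e16   # one (controversial) estimate of brain-equivalent compute
doubling_years = 2.0    # assumed doubling time at constant cost

doublings = math.log2(brain_flops / current_flops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings ≈ {years:.0f} years to the threshold")
```

Change any of the inputs by an order of magnitude and the answer shifts by only a few doublings, which is why the conclusion is so insensitive to the details.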

The present book, based upon two special issues of the Journal of Consciousness Studies, attempts to examine the probability, nature, and consequences of a singularity from a variety of intellectual disciplines and viewpoints. The volume begins with an essay by philosopher David Chalmers originally published in 2010: “The Singularity: a Philosophical Analysis”, which attempts to trace various paths to a singularity and evaluate their probability. Chalmers does not attempt to estimate the time at which a singularity may occur—he argues that if it happens any time within the next few centuries, it will be an epochal event in human history which is worth thinking about today. Chalmers contends that the argument for artificial intelligence (AI) is robust because there appear to be multiple paths by which we could get there, and hence AI does not depend upon a fragile chain of technological assumptions which might break at any point in the future. We could, for example, continue to increase the performance and storage capacity of our computers, to such an extent that the “deep learning” techniques already used in computing applications, combined with access to a vast amount of digital data on the Internet, may cross the line of human intelligence. Or, we may continue our progress in reverse-engineering the microstructure of the human brain and apply our ever-growing computing power to emulating it at a low level (this scenario is discussed in detail in Robin Hanson's The Age of Em [September 2016]). Or, since human intelligence was produced by the process of evolution, we might set our supercomputers to simulate evolution itself (which we're already doing to some extent with genetic algorithms) in order to evolve super-human artificial intelligence (not only would computer-simulated evolution run much faster than biological evolution, it would not be random, but rather directed toward desired results, much like selective breeding of plants or livestock).
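Since the last of those paths leans on genetic algorithms, here is a toy example (mine; the all-ones target and the parameters are arbitrary) of the directed, simulated evolution the paragraph alludes to:

```python
# A minimal sketch (not from the book): a toy genetic algorithm evolving a
# bit string toward an arbitrary target, with selection, crossover, and
# mutation directed toward a desired result.
import random

TARGET = [1] * 20                          # arbitrary goal: all ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))      # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]              # selection: keep the ten fittest
    population = parents + [
        mutate(crossover(*random.sample(parents, 2))) for _ in range(40)
    ]

print(f"generation {generation}: best fitness {fitness(population[0])}")
```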

Regardless of the path or paths taken, the outcome will be one of the three discussed above: in the first two cases no singularity, in the third a singularity. Assume, arguendo, that the singularity occurs, whether before 2050 as some optimists project or many decades later. What will it be like? Will it be good or bad? Chalmers writes,

I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed?) And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or in what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment?

What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren't zombies? Are you really all that sure about yourself?

Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned by analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn't arguing that there's something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry?

Now let's consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human's brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact as its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn't destructive? When it's done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave's not here”? Wait—we're both here! Now what?

Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced. After the singularity, the decedent's brain is scanned (in this case it's irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person's identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter?

Deep questions, these. The book presents Chalmers' paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised. A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers' motives and says that he “never leaps to conclusions; he oozes to conclusions”.

Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing, without any notable resolution in more than two millennia, since societies became sufficiently wealthy to support them, the issues posed by a singularity have answers. Either the singularity will occur or it won't. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won't. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won't. They will either share the values and morality of their progenitors or they won't. It will either be possible for humans to upload their brains to a digital substrate, or it won't. These uploads will either be conscious, or they'll be zombies. If they're conscious, they'll either continue the identity and life experience of the pre-upload humans, or they won't. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments—they're a risk to the job security of those who go on disputing questions their ancestors have been puzzling over at least since Athens.

Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument with evidence that simulating the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future, even if computing power continues to grow at the rate of the last century. There is even a case made that the feasibility of a singularity makes it overwhelmingly probable that we're living not in a top-level physical universe, but in a simulation run by post-singularity super-intelligences, and that they may be motivated to turn off our simulation before we reach our own singularity, which might threaten them.

This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:

If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don't burn books (it's некультурный and expensive when you read them on an iPad), but you'll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.


Hannan, Daniel. What Next. London: Head of Zeus, 2016. ISBN 978-1-78669-193-4.
On June 23rd, 2016, the people of the United Kingdom, against the advice of most politicians, big business, organised labour, corporate media, academia, and their self-styled “betters”, narrowly voted to re-assert their sovereignty and reclaim the independence of their proud nation, slowly being dissolved in an “ever closer union” with the anti-democratic, protectionist, corrupt, bankrupt, and increasingly authoritarian European Union (EU). On the day of the referendum, bookmakers gave odds which implied less than a 20% chance of a Leave vote, and yet the next morning the common sense and perception of right and wrong of the British people, which had caused them to prevail in the face of wars, economic and social crises, and a changing international environment, re-asserted themselves, and caused them to say, “No more, thank you. We prefer our thousand year tradition of self-rule to being dictated to by unelected foreign oligarchic technocrats.”

The author, Conservative Member of the European Parliament for South East England since 1999, has been one of the most vociferous and eloquent partisans of Britain's reclaiming its independence and campaigners for a Leave vote in the referendum; the vote was a personal triumph for him. In the introduction, he writes, “After forty-three years, we have pushed the door ajar. A rectangle of light dazzles us and, as our eyes adjust, we see a summer meadow. Swallows swoop against the blue sky. We hear the gurgling of a little brook. Now to stride into the sunlight.” What next, indeed?

Before presenting his vision of an independent, prosperous, and more free Britain, he recounts Britain's history in the European Union, the sordid state of the institutions of that would-be socialist superstate, and the details of the Leave campaign, including a candid and sometimes acerbic view not just of his opponents but also nominal allies. Hannan argues that Leave ultimately won because those advocating it were able to present a positive future for an independent Britain. He says that every time the Leave message veered toward negatives of the existing relationship with the EU, in particular immigration, polling in favour of Leave declined, and when the positive benefits of independence were emphasised—for example, free trade with Commonwealth nations and the rest of the world, local control of Britain's fisheries and agriculture, living under laws made in Britain by a parliament elected by the British people—Leave's polling improved. Fundamentally, you can only get so far asking people to vote against something, especially when the establishment is marching in lockstep to create fear of the unknown among the electorate. Presenting a positive vision was, Hannan believes, essential to prevailing.

Central to understanding a post-EU Britain is the distinction between a free-trade area and a customs union. The EU has done its best to confuse people about this issue, presenting its single market as a kind of free trade utopia. Nothing could be farther from the truth. A free trade area is just what the name implies: a group of states which have eliminated tariffs and other barriers such as quotas, and allow goods and services to cross borders unimpeded. A customs union such as the EU establishes standards for goods sold within its internal market which, through regulation, members are required to enforce (hence, the absurdity of unelected bureaucrats in Brussels telling the French how to make cheese). Further, while goods conforming to the regulations can be sold within the union, there are major trade barriers with parties outside, often imposed to protect industries with political pull inside the union. For example, wine produced in California or Chile is subject to a 32% tariff imposed by the EU to protect its own winemakers. British apparel manufacturers cannot import textiles from India, a country with long historical and close commercial ties, without paying EU tariffs intended to protect uncompetitive manufacturers on the Continent. Pointy-headed and economically ignorant “green” policies compound the problem: a medium-sized company in the EU pays 20% more for energy than a competitor in China and twice as much as one in the United States. In international trade disputes, Britain in the EU is represented by one twenty-eighth of a European Commissioner, while an independent Britain will have its own seat, like New Zealand, Switzerland, and the US.

Hannan believes that after leaving the EU, the UK should join the European Free Trade Association (EFTA), and demonstrates how EFTA states such as Norway and Switzerland are more prosperous than EU members and have better trade with countries outside it. (He argues against joining the European Economic Area [EEA], from which Switzerland has wisely opted out. The EEA provides too much leverage to the Brussels imperium to meddle in the policies of member states.) More important for Britain's future than its relationship to the EU is its ability, once outside, to conclude bilateral trade agreements with important trading partners such as the US (even, perhaps, joining NAFTA), Anglosphere countries such as Australia, South Africa, and New Zealand, and India, China, Russia, Brazil, and other nations: all of which it cannot do while a member of the EU.

What of Britain's domestic policy? Free of diktats from Brussels, it will be whatever Britons wish, expressed through their representatives at Westminster. Hannan quotes the psychologist Kurt Lewin, who in the 1940s described change as a three-stage process. First, old assumptions about the way things are and the way they have to be become “unfrozen”. This ushers in a period of rapid transformation, in which institutions become fluid and can adapt to changed circumstances and perceptions. Then the new situation congeals into a status quo which endures until the next moment of unfreezing. For four decades, Britain has been frozen into an inertia in which parliamentarians and governments all too often respond to popular demands by saying, “We'd like to do that, but the EU doesn't permit it.” Leaving the EU will remove this comfortable excuse, and possibly catalyse a great unfreezing of Britain's institutions. Where will this ultimately go? Wherever the people wish it to. Hannan has some suggestions for potential happy outcomes in this bright new day.

Britain has devolved substantial governance to Scotland, and yet Scottish MPs still vote in Westminster for policies which affect England but to which their constituents are not subject. Perhaps federalisation might progress to the point where the House of Commons becomes the English Parliament, with either a reformed House of Lords or a new body empowered to vote only on matters affecting the entire Union, such as national defence and foreign policy. Free of the EU, the UK can adopt competitive corporate taxation and governance policies, and attract companies from around the world to build not just headquarters but also research and development and manufacturing facilities. The national VAT could be abolished entirely and replaced with a local sales tax, paid at point of retail, set by counties or metropolitan areas in competition with one another (current payments to these authorities by the Treasury are almost exactly equal to revenue from the VAT); with competition, authorities will be forced to economise lest their residents vote with their feet. Once local authorities have their own source of revenue, decision making for a host of policies, from housing to welfare, could be pushed down from Whitehall to City Hall. Immigration can be re-focused upon the country's need for skills and labour, not thrown open to anybody who arrives.

The British vote for independence has been decried by the elitists, oligarchs, and would-be commissars as a “populist revolt”. (Do you think those words too strong? Did you know that all of those EU politicians and bureaucrats are exempt from taxation in their own countries, and pay a flat tax of around 21%, far less than the despised citizens they rule?) What is happening, first in Britain, and before long elsewhere as the corrupt foundations of the EU crumble, is that the working classes are standing up to the smirking classes and saying, “Enough.” Britain's success, which (unless the people are betrayed and their wishes subverted) is assured, since freedom and democracy always work better than slavery and bureaucratic dictatorship, will serve to demonstrate to citizens of other railroad-era continental-scale empires that smaller, agile, responsive, and free governance is essential for success in the information age.


Pratchett, Terry and Stephen Baxter. The Long War. New York: HarperCollins, 2013. ISBN 978-0-06-206869-9.
This is the second novel in the authors' series which began with The Long Earth (November 2012). That book, which I enjoyed immensely, created a vast new arena for storytelling: a large, perhaps infinite, number of parallel Earths, all synchronised in time, among which people can “step” with the aid of a simple electronic gizmo (incorporating a potato) whose inventor posted the plans on the Internet on what has since been called Step Day. Some small fraction of the population has always been “natural steppers”—able to move among universes without mechanical assistance, but other than that tiny minority, all of the worlds of the Long Earth beyond our own (called the Datum) are devoid of humans. There are natural stepping humanoids, dubbed “elves” and “trolls”, but none with human-level intelligence.

As this book opens, a generation has passed since Step Day, and the human presence has begun to expand into the vast expanses of the Long Earth. Most worlds are pristine wilderness, with all the dangers to pioneers venturing into places where large predators have never been controlled. Joshua Valienté, whose epic voyage of exploration with Lobsang (who from moment to moment may be a motorcycle repairman, computer network, Tibetan monk, or airship) discovered the wonders of these innumerable worlds in the first book, has settled down to raise a family on a world in the Far West.

Humans being humans, this gift of what amounts to an infinitely larger scope for their history has not been without its drawbacks and conflicts. With the opening of an endless frontier, the restless and creative have decamped from the Datum to seek adventure and fortune free of the crowds and control of their increasingly regimented home world. This has meant a drop in innovation and an economic hit to the Datum, and has led Datum politicians (particularly in the United States, the grabbiest of all jurisdictions) to seek to expand their control (and particularly their ability to loot) to all residents of the so-called “Aegis”—the geographical footprint of its territory across the multitude of worlds. The trolls, who mostly get along with humans and work for them, hear news from across the worlds through their “long call” of scandalous mistreatment of their kind by humans in some places, and now appear to have vanished from many human settlements to parts unknown. A group of worlds in the American Aegis in the distant West have adopted the Valhalla Declaration, asserting their independence from the greedy and intrusive government of the Datum and, in response, the Datum is sending a fleet of stepping airships (or “twains”, named for the Mark Twain of the first novel) to assert its authority over these recalcitrant emigrants. Joshua and Sally Linsay, pioneer explorers, return to the Datum to make their case for the rights of trolls. China mounts an ambitious expedition to the unseen worlds of its footprint in the Far East.

And so it goes, for more than four hundred pages. This really isn't a novel at all, but rather four or five novellas interleaved with one another, where the individual stories barely interact before most of the characters meet at a barbecue in the next-to-last chapter. When I put down The Long Earth, I concluded that the authors had created a stage in which all kinds of fiction could play out and looked forward to seeing what they'd do with it. What a disappointment! There are a few interesting concepts, such as the evolutionary consequences of travel between parallel Earths and technologies which oppressive regimes use to keep their subjects from just stepping away to freedom, but they are few and far between. There is no war! If you're going to title your book The Long War, many readers are going to expect one, and it doesn't happen. I can recall only two laugh-out-loud lines in the entire book, which is hardly what you expect when picking up a book with Terry Pratchett's name on the cover. I shall not be reading the remaining books in the series, which, if Amazon reviews are to be believed, go downhill from here.


April 2017

Houellebecq, Michel. Soumission. Paris: J'ai Lu, [2015] 2016. ISBN 978-2-290-11361-5.
If you examine the Pew Research Center's table of Muslim Population by Country, giving the percent Muslim population for countries and territories, one striking thing is apparent. Here are the results, binned into quintiles.

Quintile    % Muslim    Countries
    1        100–80         36
    2         80–60          5
    3         60–40          8
    4         40–20          7
    5          20–0        132

The distribution in this table is strongly bimodal—instead of the Gaussian (normal, or “bell curve”) distribution one encounters so often in the natural and social sciences, the countries cluster at the extremes: 36 are 80% or more Muslim, 132 are 20% or less Muslim, and only a total of 20 fall in the middle between 20% and 80%. What is going on?
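For the programmatically inclined, here is a minimal Python sketch of the binning; the quintile function and the percentages list are illustrative stand-ins of my own devising, with the real values coming from the Pew table.

    from collections import Counter

    def quintile(pct):
        """Map a percentage (0-100) into quintile 1 (100-80) through 5 (20-0)."""
        if pct >= 80: return 1
        if pct >= 60: return 2
        if pct >= 40: return 3
        if pct >= 20: return 4
        return 5

    percentages = [99.9, 90.0, 70.5, 50.0, 30.2, 12.5, 0.1]   # stand-in values
    counts = Counter(quintile(p) for p in percentages)
    for q in range(1, 6):
        print("Quintile %d: %d countries" % (q, counts[q]))

Run over the full Pew data set, this should reproduce the counts in the table above.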

I believe this is evidence for an Islamic population fraction greater than some threshold above 20% being an attractor in the sense of dynamical systems theory. With the Islamic doctrine of its superiority to other religions and destiny to bring other lands into its orbit, plus scripturally-sanctioned discrimination against non-believers, once a Muslim community reaches a certain critical mass, and if it retains its identity and coherence, resisting assimilation into the host culture, it will tend to grow not just organically but by making conversion (whether sincere or motivated by self-interest) an attractive alternative for those who encounter Muslims in their everyday life.

If this analysis is correct, what is the critical threshold? Well, that's the big question, particularly for countries in Europe which have admitted substantial Muslim populations that are growing faster than the indigenous population due to a higher birthrate and ongoing immigration, and where there is substantial evidence that subsequent generations are retaining their identity as a distinct culture apart from that of the country where they were born. What happens as the threshold is crossed, and what does it mean for the original residents and institutions of these countries?

That is the question explored in this satirical novel set in the year 2022, in the period surrounding the French presidential election of that year. In the 2017 election, the Front national narrowly won the first round of the election, but was defeated in the second round by an alliance between the socialists and traditional right, resulting in the election of a socialist president in a country with a centre-right majority.

Five years after an election which satisfied few people, the electoral landscape has shifted substantially. A new party, the Fraternité musulmane (Muslim Brotherhood), led by the telegenic, pro-European, and moderate Mohammed Ben Abbes, French-born son of a Tunisian immigrant, has grown to rival the socialist party for second place behind the Front national, which remains safely ahead in projections for the first round. When the votes are counted, the unthinkable has happened: all of the traditional government parties are eliminated, and the second round will be a run-off between FN leader Marine Le Pen and Ben Abbes.

These events are experienced and recounted by “François” (no last name is given), a fortyish professor of literature at the Sorbonne and a leading expert on the 19th century French writer Joris-Karl Huysmans, who was considered a founder of the decadent movement but later in life reverted to Catholicism and became a Benedictine oblate. François is living what may be described as a modern version of the decadent life. Single, living alone in a small apartment where he subsists mostly on microwaved dinners, he has become convinced that his intellectual life peaked with the publication of his thesis on Huysmans, and his career now holds nothing for him beyond going through the motions of teaching his classes at the university. His amorous life is largely confined to a series of affairs with his students, most of which end with the academic year when they “meet someone”, and, in the gaps, liaisons with “escorts” in which he indulges in the kind of perversion the decadents celebrated in their writings.

About the only thing which interests him is politics and the election, not as a participant but as an observer, watching television by himself. After the first round of the election, there is the stunning news that, in order to prevent a Front national victory, the Muslim Brotherhood, socialist, and traditional right parties have formed an alliance supporting Ben Abbes for president, with an agreed division of ministries among the parties. Myriam, François' current girlfriend, leaves with her Jewish family to settle in Israel, joining many of her faith who anticipate what is coming, having seen it so many times before in the history of their people.

François follows in the footsteps of Huysmans, visiting the Benedictine monastery in Martel, a village said to have been founded by Charles Martel, who defeated the Muslim invasion of Europe in a.d. 732 at the Battle of Tours. He finds neither solace nor inspiration there and returns to Paris where, with the alliance triumphant in the second round of the election and Ben Abbes president, changes are immediately apparent.

Ethnic strife has fallen to a low level: the Muslim community sees itself ascendant and has no need for political agitation. The unemployment rate has fallen to historical lows: forcing women out of the workforce will do that, especially when they are no longer counted in the statistics. Polygamy has been legalised, as part of the elimination of gender equality under the law. More and more women on the street dress modestly and wear the veil. The Sorbonne has been “privatised”, becoming the Islamic University of Paris, and all non-Muslim faculty, including François, have been dismissed. With generous funding from the petro-monarchies of the Gulf, François and other now-redundant academics receive lifetime pensions sufficient that they never need work again, but it grates upon them to see intellectual inferiors, after a cynical and insincere conversion to Islam, replace them at salaries often three times higher than they received.

Unemployed, François grasps at an opportunity to edit a new edition of Huysmans for Pléiade, and encounters Robert Rediger, an ambitious academic who has been appointed rector of the Islamic University and has the ear of Ben Abbes. They later meet at Rediger's house, where, over a fine wine, he gives François a copy of his introductory book on Islam, explains the benefits of polygamy and arranged marriage to a man of his social standing, and the opportunities open to Islamic converts in the new university.

Eventually, François, like France, ends in submission.

As G. K. Chesterton never actually said, “When a man stops believing in God he doesn't then believe in nothing; he believes anything.” (The false quotation appears to be a synthesis of similar sentiments expressed by Chesterton in a number of different works.) Whatever the attribution, there is truth in it. François is an embodiment of post-Christian Europe, where the nucleus around which Western civilisation has been built since the fall of the Roman Empire has evaporated, leaving a void which deprives people of the purpose, optimism, and self-confidence of their forebears. Such a vacuum is more likely to be filled with something—anything—than to long endure, especially when an aggressive, virile, ambitious, and prolific competitor has established itself in the lands of the decadent.

An English translation is available. This book is not recommended for young readers due to a number of sex scenes I found gratuitous and, even to this non-young reader, somewhat icky. This is a social satire, not a forecast of the future, but I found it more plausible than many scenarios envisioned for a Muslim conquest of Europe. I'll leave you to discover for yourself how the clever Ben Abbes envisions co-opting Eurocrats in his project of grand unification.


May 2017

Jacobsen, Annie. Phenomena. New York: Little, Brown, 2017. ISBN 978-0-316-34936-9.
At the end of World War II, it was clear that science and technology would be central to competition among nations in the postwar era. The development of nuclear weapons, German deployment of the first operational ballistic missile, and the introduction of jet propelled aircraft pointed the way to a technology-driven arms race, and both the U.S. and the Soviet Union scrambled to lay hands on the secret super-weapon programs of the defeated Nazi regime. On the U.S. side, the Alsos Mission not only sought information on German nuclear and missile programs, but also came across even more bizarre projects, such as those undertaken by Berlin's Ahnenerbe Institute, founded in 1935 by SS leader Heinrich Himmler. Investigating the institute's headquarters in a Berlin suburb, Samuel Goudsmit, chief scientist of Alsos, found what he described as “Remnants of weird Teutonic symbols and rites … a corner with a pit of ashes in which I found the skull of an infant.” What was going on? Had the Nazis attempted to weaponise black magic? And, to the ever-practical military mind, did it work?

In the years after the war, the intelligence community and military services in both the U.S. and Soviet Union would become involved in the realm of the paranormal, funding research and operational programs based upon purported psychic powers for which mainstream science had no explanation. Both superpowers were not only seeking super powers for their spies and soldiers, but also looking over their shoulders afraid the other would steal a jump on them in exploiting these supposed powers of mind. “We can't risk a ‘woo-woo gap’ with the adversary!”

Set aside for a moment (as did most of the agencies funding this research) the question of just how these mental powers were supposed to work. If they did, in fact, exist and if they could be harnessed and reliably employed, they would confer a tremendous strategic advantage on their possessor. Consider: psychic spies could project their consciousness out of body and penetrate the most secure military installations; telepaths could read the minds of diplomats during negotiations or perhaps even plant thoughts and influence their judgement; telekinesis might be able to disrupt the guidance systems of intercontinental missiles or space launchers; and psychic assassins could undetectably kill by stopping the hearts of their victims remotely by projecting malign mental energy in their direction.

All of this may seem absurd on its face, but work on all of these phenomena and more was funded, between 1952 and 1995, by agencies of the U.S. government including the U.S. Army, Air Force, Navy, the CIA, NSA, DIA, and ARPA/DARPA, expending tens of millions of dollars. Between 1978 and 1995 the Defense Department maintained an operational psychic espionage program under various names, using “remote viewing” to provide information on intelligence targets for clients including the Secret Service, Customs Service, Drug Enforcement Administration, and the Coast Guard.

What is remote viewing? Experiments in parapsychology laboratories usually employ a protocol called “outbounder-beacon”, where a researcher travels to a location selected randomly from a set of targets and observes the locale while a subject in the laboratory, usually isolated from sensory input which might provide clues, attempts to describe, either in words or by a drawing, what the outbounder is observing. At the conclusion of the experiment, the subject's description is compared with pictures of the targets by an independent judge (unaware of which was the outbounder's destination), who selects the one which is the closest match to the subject's description. If each experiment picked the outbounder's destination from a set of five targets, you'd expect from chance alone that in an ensemble of experiments the remote viewer's perception would match the actual target around 20% of the time. Experiments conducted in the 1970s at the Stanford Research Institute (and subsequently the target of intense criticism by skeptics) claimed in excess of 65% accuracy by talented remote viewers.
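To get a sense of how striking such a hit rate would be, here is a minimal Python sketch (an illustration only: the session count of 50 is my assumption, not a figure from the book) of the probability of matching at least 33 of 50 five-choice targets by luck alone:

    from math import comb

    def binom_tail(n, k, p):
        """Probability of k or more successes in n trials, each with success probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 50      # hypothetical number of sessions
    k = 33      # 33/50 = 66%, about the claimed accuracy
    print("P(at least %d/%d matches by chance) = %.2g" % (k, n, binom_tail(n, k, 0.20)))

The probability is vanishingly small, which is why the critiques mentioned above concentrated on the experimental protocols rather than the arithmetic.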

While outbounder-beacon experiments were used to train and test candidate remote viewers, operational military remote viewing as conducted by the Stargate Project (and under assorted other code names over the years) was quite different. Usually the procedure involved “coordinate remote viewing”. The viewer would simply be handed a slip of paper containing the latitude and longitude of the target and then, relaxing and clearing his or her mind, would attempt to describe what was there. In other sessions, the viewer might be handed a sealed envelope containing a satellite reconnaissance photograph. The results were sometimes stunning. In 1979, a KH-9 spy satellite photographed a huge building which had been constructed at Severodvinsk Naval Base in the Soviet Arctic. Analysts thought the Soviets might be building their first aircraft carrier inside the secret facility. Joe McMoneagle, an Army warrant officer and Vietnam veteran who was assigned to the Stargate Project as its first remote viewer, was given the target in the form of an envelope with the satellite photo sealed inside. Concentrating on the target, he noted “There's some kind of a ship. Some kind of a vessel. I'm getting a very, very strong impression of props [propellers]”. Then, “I'm seeing fins…. They look like shark fins.” He continued, “I'm seeing what looks like part of a submarine in this building.” The entire transcript was forty-seven pages long.

McMoneagle's report was passed on to the National Security Council, which dismissed it because it didn't make any sense for the Soviets to build a huge submarine in a building located one hundred metres from the water. McMoneagle had described a canal between the building and the shore, but the satellite imagery showed no such structure. Then, four months later, in January 1980, another KH-9 pass showed a large submarine at a dock at Severodvinsk, along with a canal between the mystery building and the sea, which had been constructed in the interim. This was the prototype of the new Typhoon-class ballistic missile submarine, which was a complete surprise to Western analysts, but not to Joe McMoneagle. This is what was referred to as an “eight martini result”. When McMoneagle retired in 1984, he was awarded the Legion of Merit for exceptionally meritorious service in the field of human intelligence.

A decade later the U.S. Customs Service approached the remote viewing unit for assistance in tracking down a rogue agent accused of taking bribes from cocaine smugglers in Florida. He had been on the run for two years, and appeared on the FBI's Most Wanted List. He was believed to be in Florida or somewhere in the Caribbean. Self-taught remote viewer Angela Dellafiora concentrated on the case and immediately said, “He's in Lowell, Wyoming.” Wyoming? There was no reason for him to be in such a place. Further, there was no town named Lowell in the state. Agents looked through an atlas and found there was, however, a Lovell, Wyoming. Dellafiora said, “Well, that's probably it.” Several weeks later, she was asked to work the case again. Her notes included, “If you don't get him now you'll lose him. He's moving from Lowell.” She added that he was “at or near a campground that had a large boulder at its entrance”, and that she “sensed an old Indian burial ground is located nearby”. After being spotted by a park ranger, the fugitive was apprehended at a campground next to an Indian burial ground, about fifty miles from Lovell, Wyoming, where he had been a few weeks before. Martinis all around.

A total of 417 operational sessions were run in 1989 and 1990 for the counter-narcotics mission; 52% were judged as producing results of intelligence value while 47% were of no value. Still, what was produced was considered of sufficient value that the customers kept coming back.

Most of this work and its products were classified, in part to protect the program from ridicule by journalists and politicians. Those running the projects were afraid of being accused of dabbling in the occult, so they endorsed an Army doctrine that remote viewing, like any other military occupational specialty, was a normal human facility which could be taught to anybody with a suitable training process, and a curriculum was developed to introduce new people to the program. This was despite abundant evidence that the ability to remote view, if it exists at all, is a rare trait some people acquire at birth, and cannot be taught to randomly selected individuals any more than they can be trained to become musical composers or chess grand masters.

Under a similar shroud of secrecy, paranormal research for military applications appears to have been pursued in the Soviet Union and China. From time to time information would leak out into the open literature, such as the Soviet experiments with Ninel Kulagina. In China, H. S. Tsien (Qian Xuesen), a co-founder of the Jet Propulsion Laboratory in the United States who, after being stripped of his security clearance and moving to mainland China in 1955, led the Chinese nuclear weapons and missile programs, became a vocal and powerful advocate of research into the paranormal which, in accordance with Chinese Communist doctrine, was called “Extraordinary Human Body Functioning” (EHBF), and linked to the concept of qi, an energy field which is one of the foundations of traditional Chinese medicine and martial arts. It is likely this work continues today in China.

The U.S. remote viewing program came to an end in June 1995, when the CIA ordered the Defense Intelligence Agency to shut down the Stargate project. Many documents relating to the project have since been declassified but, oddly for a program which many claimed produced no useful results, others remain secret to this day. The paranormal continues to appeal to some in the military. In 2014, the Office of Naval Research launched a four-year project funded with US$ 3.85 million to investigate premonitions, intuition, and hunches—what the press release called “Spidey sense”. In the 1950s, during a conversation between physicist Wolfgang Pauli and psychiatrist Carl Jung about psychic phenomena, Jung remarked, “As is only to be expected, every conceivable kind of attempt has been made to explain away these results, which seem to border on the miraculous and frankly impossible. But all such attempts come to grief on the facts, and the facts refuse so far to be argued out of existence.” A quarter century later, in 1975, a CIA report concluded “A large body of reliable experimental evidence points to the inescapable conclusion that extrasensory perception does exist as a real phenomenon.”

To those who have had psychic experiences, there is no doubt of the reality of the phenomena. But research into them or, even more shockingly, attempts to apply them to practical ends, runs squarely into a paradigm of modern science which puts theory ahead of observation and experiment. A 1986 report by the U.S. Army said that its research had “succeeded in documenting general anomalies worthy of scientific interest,” but that “in the absence of a confirmed paranormal theory…paranormality could be rejected a priori.” When the remote viewing program was cancelled in 1995, a review of its work stated that “a statistically significant effect has been observed in the laboratory…[but] the laboratory studies do not provide evidence regarding the sources or origins of the phenomenon.” In other words, experimental results can be discarded if there isn't a theory upon which to hang them, and there is no general theory of paranormal phenomena. Heck, they could have asked me.

One wonders where many currently mature fields of science would be today had this standard been applied during their formative phases: rejecting experimental results due to lack of a theory to explain them. High-temperature superconductivity was discovered in 1986 and earned its discoverers the Nobel Prize in 1987, yet still today there is no accepted theory that explains how it works. Perhaps it is only because it is so easily demonstrated with a desktop experiment that it, too, has not been relegated to the realm of “fringe science”.

This book provides a comprehensive history of the postwar involvement of the military and intelligence communities with the paranormal, focusing on the United States. The author takes a neutral stance: both believers and skeptics are given their say. One notes a consistent tension between scientists who reject the phenomena because “it can't possibly work” and intelligence officers who couldn't care less about how it works as long as it is providing them useful results.

The author has conducted interviews with many of the principals still alive, and documented the programs with original sources, many obtained by her under the Freedom of Information Act. Extensive end notes and source citations are included. I wish I could be more confident in the accuracy of the text, however. Chapter 7 relates astronaut Edgar Mitchell's Apollo 14 mission to the Moon, during which he conducted, on his own initiative, some unauthorised ESP experiments. But most of the chapter is about the mission itself, and it is riddled with errors, all of which could be corrected with no more research than consulting Wikipedia pages about the mission and the Apollo program. When you read something you know about and discover much of it is wrong, you have to guard against what Michael Crichton called the Gell-Mann amnesia effect: turning the page and assuming what you read there, about which you have no personal knowledge, is to be trusted. When dealing with spooky topics and programs conducted in secret, one should be doubly cautious. The copy editing is only of fair quality, and the Kindle edition has no index (the print edition does have an index).

Napoléon Bonaparte said, “There are but two powers in the world, the sword and the mind. In the long run, the sword is always beaten by the mind.” The decades of secret paranormal research were an attempt to apply this statement literally, and provide a fascinating look inside a secret world where nothing was dismissed as absurd if it might provide an edge over the adversary. Almost nobody knew about this work at the time. One wonders what is going on today.


June 2017

Shute, Nevil. Kindling. New York: Vintage Books, [1938, 1951] 2010. ISBN 978-0-307-47417-9.
It is the depth of the Great Depression, and yet business is booming at Warren Sons and Mortimer, merchant bankers, in the City of London. Henry Warren, descendant of the founder of the bank in 1750 and managing director, has never been busier. Despite the general contraction in the economy, firms failing, unemployment hitting record after record, and a collapse in international trade, his bank, which specialises in floating securities in London for foreign governments, has more deals pending than he can handle as those governments seek to raise funds to bolster their tottering economies. A typical week might see him in Holland, Sweden, Finland, Estonia, Germany, Holland again, and back to England in time for a Friday entirely on the telephone and in conferences at his office. It is an exhausting routine and, truth be told, he is sufficiently wealthy not to have to work if he doesn't wish to, but it is the Warren and Mortimer bank, he is this generation's Warren in charge, and that's what Warrens do.

But in the few moments he has to reflect upon his life, there is little joy in it. He works so hard he rarely sees others outside work except at his wife Elise's social engagements, which he finds tedious and her circle of friends annoying and superficial, but endures out of a sense of duty. He suspects Elise may be cheating on him with the suave but thoroughly distasteful Prince Ali Said, and he isn't the only one: there are whispers and snickers behind his back in the City. He has no real friends, only business associates; with no children, he has no legacy to work for other than the firm. Sleep comes only with sleeping pills. He knows his health is declining from stress, sleep deprivation, and lack of exercise.

After confirming his wife's affair, he offers her an ultimatum: move away from London to a quiet life in the country or put an end to the marriage. Independently wealthy, she immediately opts for the latter and leaves him to work out the details. What is he now to do with his life? He informs the servants he is closing the house and offers them generous severance, tells the bank he is taking an indefinite leave to travel and recuperate, and tells his chauffeur to prepare for a long trip, details to come. They depart in the car, northbound. He vows to walk twenty miles a day, every day, until he recovers his health, mental equilibrium, and ability to sleep.

After a few days of walking, eating and sleeping at inns and guest houses in the northlands, he collapses in excruciating pain by the side of the road. A passing lorry driver takes him to a small hospital in the town of Sharples. He is barely conscious; a surgeon diagnoses an intestinal obstruction and says an operation will be necessary. He is wheeled to the operating theatre. The hospital staff speculate on who he might be: he has no wallet or other identification. “Probably one of the men on the road, seeking work in the South”, they guess.

As he begins his recovery in the hospital, Warren decides not to complicate matters with regard to his identity: “He had no desire to be a merchant banker in a ward of labourers.” He confirms their assumption, adding that he is a bank clerk recently returned from America, where there was no work at all, in hopes of finding something in the home country. He recalls that Sharples had been known for the Barlow shipyard, once a prosperous enterprise, which closed five years ago, taking down the plate mill and other enterprises it and its workers supported. There is little work in Sharples, and most of the population is on relief. He begins to notice that patients in the ward seem to be dying at an inordinate rate, of maladies not normally thought life-threatening. He asks Miss McMahon, the hospital's Almoner, who tells him the causes are the poor nutrition affordable on relief and the loss of hope and sense of purpose in life after long unemployment. As he recovers and begins to take walks in the vicinity, he sees the boarded-up stores and the derelict shipyard and rolling mill. Curious, he arranges to tour them. When people speak to him of their hope that the economy will recover and the yard re-open, he is grimly realistic and candid: with the equipment sold off or in ruins and the skilled workforce dispersed, how would it win an order even if there were any orders to be had?

As he is heading back to London to pick up his old life, feeling better mentally and physically than he had for years, ideas and numbers begin to swim in his mind.

It was impossible. Nobody, in this time of depression, could find an order for a single ship…—let alone a flock of them.

There was the staff. … He could probably get them together again at a twenty per cent rise in salary—if they were any good. But how was he to judge of that?

The whole thing was impossible, sheer madness to attempt. He must be sensible, and put it from his mind.

It would be damn good fun…

Three weeks later, acting through a solicitor to conceal his identity, Mr. Henry Warren, merchant banker of the City, became the owner of Barlows' Yard, purchasing it outright for the sum of £5500. Thus begins one of the most entertaining, realistic, and heartwarming tales of entrepreneurship (or perhaps “rentrepreneurship”) I have ever read. The fact that the author was himself founder and director of an aircraft manufacturing company during the depression, and well aware of the need to make payroll every week, get orders to keep the doors open even if they didn't make much business sense, and do whatever it takes so that the business can survive and meet its obligations to its customers, investors, employees, suppliers, and creditors, contributes to the authenticity of the tale. (See his autobiography, Slide Rule [July 2011], for details of his career.)

Back in his office at the bank, there is the matter of the oil deal in Laevatia. Having defaulted on its last loan, the Balkan country is viewed as a laughingstock and pariah in the City, but Warren has an idea. If they are to develop oil in the country, they will need to ship it, and how better to ship it than in their own ships, built in Britain on advantageous terms? Before long, he's off to the Balkans to do a deal in the Balkan manner (involving bejewelled umbrellas, cases of Worcestershire sauce, losing to the Treasury minister in the local card game at a dive in the capital, and working out an arrangement whereby the dividends on the joint stock oil company will be secured by profits from the national railway). And there's the matter of the ships, which will be contracted for by Warren's bank.

Then it's back to London to pitch the deal. Warren's reputation counts for a great deal in the City, and the preference shares are placed. That done, the Hawside Ship and Engineering Company Ltd. is registered with cut-out directors, and the process of awarding the contract for the tankers to it is undertaken. As Warren explains to Miss McMahon, whom he has begun to see more frequently, once the order is in hand, it can be used to float shares in the company to fund the equipment and staff to build the ships. At least if the prospectus is sufficiently optimistic—perhaps too optimistic….

Order in hand, life begins to return to Sharples. First a few workers, then dozens, then hundreds. The welcome sound of riveting and welding begins to issue from the yard. A few boarded-up shops re-open, and then more. Then another order for a ship comes in, thanks to arm-twisting by one of the yard's directors. With talk of Britain re-arming, there is the prospect of Admiralty business. There is still only one newspaper a week in Sharples, brought in from Newcastle and sold to readers interested in the football news. On one of his more frequent visits to the town, yard, and Miss McMahon, Warren sees the headline: “Revolution in Laevatia”. “This is a very bad one,” Warren says. “I don't know what this is going to mean.”

But, one suspects, he did. As anybody who has been in the senior management of a publicly-traded company is well aware, what happens next is well-scripted: the shareholder suit by a small investor, the press pile-on, the back-turning by the financial community, the securities investigation, the indictment, and, eventually, the slammer. Warren understands this, and works diligently to ensure the Yard survives. There is a deep mine of wisdom here for anybody facing a bad patch.

“You must make this first year's accounts as bad as they ever can be,” he said. “You've got a marvellous opportunity to do so now, one that you'll never have again. You must examine every contract that you've got, with Jennings, and Grierson must tell the auditors that every contract will be carried out at a loss. He'll probably be right, of course—but he must pile it on. You've got to make reserves this year against every possible contingency, probable or improbable.”

“Pile everything into this year's loss, including a lot that really ought not to be there. If you do that, next year you'll be bound to show a profit, and the year after, if you've done it properly this year. Then as soon as you're showing profits and a decent show of orders in hand, get rid of this year's losses by writing down your capital, pay a dividend, and make another issue to replace the capital.”

Sage advice—I've been there. We had cash in the till, so we were able to do a stock buy-back at the bottom, but the principle is the same.

Having been brought back to life by almost dying in a small town hospital, Warren is rejuvenated by his time in gaol. In November 1937, he is released and returns to Sharples where, amidst evidence of prosperity everywhere, he approaches the Yard to see a plaque on the wall with his face in profile: “HENRY WARREN — 1934 — HE GAVE US WORK”. Then he is off to see Miss McMahon.

The only print edition currently available new is a very expensive hardcover. Used paperbacks are readily available: check under both Kindling and the original British title, Ruined City. I have linked to the Kindle edition above.


Ringo, John. Into the Looking Glass. Riverdale, NY: Baen Publishing, 2005. ISBN 978-1-4165-2105-1.
Without warning, on a fine spring day in central Florida, an enormous explosion destroys the campus of the University of Central Florida and the surrounding region. The flash, heat pulse, and mushroom cloud are observed far from the site of the detonation. It is clear that casualties will be massive. First responders, fearing the worst, break out their equipment to respond to what seems likely to be nuclear terrorism. The yield of the explosion is estimated at 60 kilotons of TNT.

But upon closer examination, things seem distinctly odd. There is none of the residual radiation one would expect from a nuclear detonation, nor any sign of the prompt radiation or electromagnetic pulse expected from a nuclear blast. A university campus seems an odd target for nuclear terrorism, in any case. What else could cause a blast of such magnitude? Well, an asteroid strike could do it, but the odds against such an event are very long, and there is no evidence of ejecta falling back as you'd expect from an impact.

Faced with a catastrophic yet seemingly inexplicable event, senior government officials turn to a person with the background and security clearances to investigate further: Dr. Bill Weaver, a “redneck physicist” from Huntsville who works as a consultant to one of the “Beltway bandit” contractors who orbit the Pentagon. Weaver recalls that a physicist at the university, Ray Chen, was working on a shortcut to produce a Higgs boson, bypassing the need for an enormous particle collider. Weaver's guess is that Chen's idea worked better than he imagined, releasing a pulse of energy which caused the detonation.

If things so far seemed curious, now they begin to get weird. Approaching the site of the detonation, teams observe a black globe, seemingly absorbing all light, where Dr. Chen's laboratory used to be. Then one giant bug, and another, emerges from the globe. Floridians are accustomed to large, ugly-looking bugs, but nothing like this—these are creatures from another world, or maybe another universe. A little girl, unharmed, wanders into the camp, giving a home address in an area completely obliterated by the explosion. She is clutching a furry alien with ten legs: “Tuffy”, who she says speaks to her. Scientists try to examine the creature and quickly learn the wisdom of the girl's counsel not to mess with Tuffy.

Police respond to a home invasion call some distance from the site of the detonation: a report that demons are attacking their house. Investigating, another portal is discovered in the woods behind the house, from which monsters begin to issue, quickly overpowering the light military force summoned to oppose them. It takes a redneck militia to reinforce a perimeter around the gateway, while waiting for the Army to respond.

Apparently, whatever happened on the campus not only opened a gateway there, but is spawning gateways further removed. Some connect to worlds seemingly filled with biologically-engineered monsters bent upon conquest, while others connect to barren planets, a race of sentient felines, and other aliens who may be allies or enemies. Weaver has to puzzle all of this out, while participating in the desperate effort to prevent the invaders, “T!Ch!R!” or “Titcher”, from establishing a beachhead on Earth. And the stakes may be much greater than the fate of the Earth.

This is an action-filled romp, combining the initiation of humans into a much larger universe worthy of Golden Age science fiction with military action fiction. I doubt that in the real world Weaver, the leading expert on the phenomenon and chief investigator into it, would be allowed to participate in what amounts to commando missions in which his special skills are not required but, hey, it makes the story more exciting, and if a thriller doesn't thrill, it has failed in its mission.

I loved one aspect of the conclusion: never let an alien invasion go to waste. You'll understand what I'm alluding to when you get there. And, in the Golden Age tradition, the story sets up for further adventures. While John Ringo wrote this book by himself, the remaining three novels in the Looking Glass series are co-authored with Travis S. Taylor, upon whom the character of Bill Weaver was modeled.


Haffner, Sebastian [Raimund Pretzel]. Defying Hitler. New York: Picador, [2000] 2003. ISBN 978-0-312-42113-7.
In 1933, the author was pursuing his ambition to follow his father into a career in the Prussian civil service. While completing his law degree, he had obtained a post as a Referendar, the lowest rank in the civil service, performing what amounted to paralegal work for higher ranking clerks and judges. He enjoyed the work, especially doing research in the law library and drafting opinions, and was proud to be a part of the Prussian tradition of an independent judiciary. He had no strong political views nor much interest in politics. But, as he says, “I have a fairly well developed figurative sense of smell, or to put it differently, a sense of the worth (or worthlessness!) of human, moral, political views and attitudes. Most Germans unfortunately lack this sense almost completely.”

When Hitler came to power in January 1933, “As for the Nazis, my nose left me with no doubts. … How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset. What was not at all clear to me was what terrible enemies they would turn out to be.” Initially, little changed: it was a “matter for the press”. The new chancellor might rant to enthralled masses about the Jews, but in the court where Haffner clerked, a Jewish judge continued to sit on the bench and work continued as before. He hoped that the political storm on the surface would leave the depths of the civil service unperturbed. This was not to be the case.

Haffner was a boy during the First World War, and, like many of his schoolmates, saw the war as a great adventure which unified the country. Coming of age in the Weimar Republic, he experienced the great inflation of 1921–1924 as up-ending the society: “Amid all the misery, despair, and poverty there was an air of light-headed youthfulness, licentiousness, and carnival. Now, for once, the young had money and the old did not. Its value lasted only a few hours. It was spent as never before or since; and not on the things old people spend their money on.” A whole generation whose ancestors had grown up in a highly structured society where most decisions were made for them now were faced with the freedom to make whatever they wished of their private lives. But they had never learned to cope with such freedom.

After the Reichstag fire and the Nazi-organised boycott of Jewish businesses (enforced by SA street brawlers standing in doors and intimidating anybody who tried to enter), the fundamental transformation of the society accelerated. Working in the library at the court building, Haffner is shocked to see this sanctum of jurisprudence defiled by the SA, who had come to eject all Jews from the building. A Jewish colleague is expelled from university, fired from the civil service, and opts to emigrate.

The chaos of the early days of the Nazi ascendency gives way to Gleichschaltung, the systematic takeover of all institutions by placing Nazis in key decision-making positions within them. Haffner sees the Prussian courts, which famously stood up to Frederick the Great a century and a half before, meekly toe the line.

Haffner begins to consider emigrating from Germany, but his father urges him to complete his law degree before leaving. His close friends among the Referendars run the gamut from Communist sympathisers to ardent Nazis. As he is preparing for the Assessor examination (the next rank in the civil service, and the final step for a law student), he is called up for mandatory political and military indoctrination now required for the rank. The barrier between the personal, professional, and political had completely fallen. “Four weeks later I was wearing jackboots and a uniform with a swastika armband, and spent many hours each day marching in a column in the vicinity of Jüterbog.”

He discovers that, despite his viewing the Nazis as essentially absurd, there is something about order, regimentation, discipline, and forced camaraderie that resonates in his German soul.

Finally, there was a typically German aspiration that began to influence us strongly, although we hardly noticed it. This was the idolization of proficiency for its own sake, the desire to do whatever you are assigned to do as well as it can possibly be done. However senseless, meaningless, or downright humiliating it may be, it should be done as efficiently, thoroughly, and faultlessly as could be imagined. So we should clean lockers, sing, and march? Well, we would clean them better than any professional cleaner, we would march like campaign veterans, and we would sing so ruggedly that the trees bent over. This idolization of proficiency for its own sake is a German vice; the Germans think it is a German virtue.

That was our weakest point—whether we were Nazis or not. That was the point they attacked with remarkable psychological and strategic insight.

And here the memoir comes to an end; the author put it aside. He moved to Paris, but failed to become established there and returned to Berlin in 1934. He wrote apolitical articles for art magazines, but as the circle began to close around him and his new Jewish wife, in 1938 he obtained a visa for the U.K. and left Germany. He began a writing career, using the nom de plume Sebastian Haffner instead of his real name, Raimund Pretzel, to reduce the risk of reprisals against his family in Germany. With the outbreak of war, he was deemed an enemy alien and interned on the Isle of Man. His first book written since emigration, Germany: Jekyll and Hyde, was a success in Britain, and questions were raised in Parliament as to why the author of such an anti-Nazi work was interned: he was released in August 1940, and went on to a distinguished career in journalism in the U.K. He never prepared the manuscript of this work for publication—he may have been embarrassed at the youthful naïveté in evidence throughout. After his death in 1999, his son, Oliver Pretzel (who had taken the original family name), prepared the manuscript for publication. It went straight to the top of the German bestseller list, where it remained for forty-two weeks. Why? Oliver Pretzel says, “Now I think it was because the book offers direct answers to two questions that Germans of my generation had been asking their parents since the war: ‘How were the Nazis possible?’ and ‘Why didn't you stop them?’ ”.

This is a period piece, not a work of history. Set aside by the author in 1939, it provides a look through the eyes of a young man who sees his country becoming something which repels him and the madness that ensues when the collective is exalted above the individual. The title is somewhat odd—there is precious little defying of Hitler here—the ultimate defiance is simply making the decision to emigrate rather than give tacit support to the madness by remaining. I can appreciate that.

This edition was translated from the original German and annotated by the author's son, Oliver Pretzel, who wrote the introduction and afterword which place the work in the context of the author's career and describe why it was never published in his lifetime. A Kindle edition is available.

Thanks to Glenn Beck for recommending this book.


July 2017

Segrè, Gino and Bettina Hoerlin. The Pope of Physics. New York: Henry Holt, 2016. ISBN 978-1-62779-005-5.
By the start of the 20th century, the field of physics had bifurcated into theoretical and experimental specialties. While theorists and experimenters were acquainted with the same fundamentals and collaborated, with theorists suggesting phenomena to be explored in experiments and experimenters providing hard data upon which theorists could build their models, rarely did one individual do breakthrough work in both theory and experiment. One outstanding exception was Enrico Fermi, whose numerous achievements seemed to jump effortlessly between theory and experiment.

Fermi was born in 1901 to a middle class family in Rome, the youngest of three children born in consecutive years. As was common at the time, Enrico and his brother Giulio were sent to be wet-nursed and raised by a farm family outside Rome and only returned to live with their parents when two and a half years old. His father was a division head in the state railway and his mother taught elementary school. Neither parent had attended university, but hoped all of their children would have the opportunity. All were enrolled in schools which concentrated on the traditional curriculum of Latin, Greek, and literature in those languages and Italian. Fermi was attracted to mathematics and science, but little instruction was available to him in those fields.

At age thirteen, the young Fermi made the acquaintance of Adolfo Amidei, an engineer who worked with his father. Amidei began to loan the lad mathematics and science books, which Fermi devoured—often working out solutions to problems which Amidei was unable to solve. Within a year, studying entirely on his own, he had mastered geometry and calculus. In 1915, Fermi bought a used book, Elementorum Physicæ Mathematica, at a flea market in Rome. Published in 1830 and written entirely in Latin, it was a 900 page compendium covering mathematical physics of that era. By that time, he was completely fluent in the language and the mathematics used in the abundant equations, and worked his way through the entire text. As the authors note, “Not only was Fermi the only twentieth-century physics genius to be entirely self-taught, he surely must be the only one whose first acquaintance with the subject was through a book in Latin.”

At sixteen, Fermi skipped the final year of high school, concluding it had nothing more to teach him, and with Amidei's encouragement, sat for a competitive examination for a place at the elite Scuola Normale Superiore, which provided a complete scholarship including room and board to the winners. He ranked first in all of the examinations and left home to study in Pisa. Despite his talent for and knowledge of mathematics, he chose physics as his major—he had always been fascinated by mechanisms and experiments, and looked forward to working with them in his career. Italy, at the time a leader in mathematics, was a backwater in physics. The university in Pisa had only one physics professor who, besides having already retired from research, had knowledge in the field not much greater than Fermi's own. Once again, this time within the walls of a university, Fermi would teach himself, taking advantage of the university's well-equipped library. He taught himself German and English in addition to Italian and French (in which he was already fluent) in order to read scientific publications. The library subscribed to the German journal Zeitschrift für Physik, one of the most prestigious sources for contemporary research, and Fermi was probably the only person to read it there. In 1922, after completing a thesis on X-rays and having already published three scientific papers, two on X-rays and one on general relativity (introducing what are now called Fermi coordinates, the first of many topics in physics which would bear his name), he received his doctorate in physics, magna cum laude. Just twenty-one, he had his academic credential, published work to his name, and the attention of prominent researchers aware of his talent. What he lacked was the prospect of a job in his chosen field.

Returning to Rome, Fermi came to the attention of Orso Mario Corbino, a physics professor and politician who had become a Senator of the Kingdom and appointed minister of public education. Corbino's ambition was to see Italy enter the top rank of physics research, and he saw in Fermi the kind of talent needed to achieve this goal. He arranged a scholarship so Fermi could study physics in one of the centres of research in northern Europe. Fermi chose Göttingen, Germany, a hotbed of work in the emerging field of quantum mechanics. Fermi was neither particularly happy nor notably productive during his eight months there, but was impressed with the German style of research and the intellectual ferment of the large community of German physicists. Henceforth, he published almost all of his research in either German or English, with a parallel paper submitted to an Italian journal. A second fellowship allowed him to spend 1924 in the Netherlands, working with Paul Ehrenfest's group at Leiden, deepening his knowledge of statistical and quantum mechanics.

Finally, upon Fermi's return to Italy, Corbino and his colleague Antonio Garbasso found him a post as a lecturer in physics in Florence. The position paid poorly and had little prestige, but at least it was a step onto the academic ladder, and Fermi was happy to accept it. There, Fermi and his colleague Franco Rasetti did experimental work measuring the spectra of atoms under the influence of radio frequency fields. Their work was published in prestigious journals such as Nature and Zeitschrift für Physik.

In 1925, Fermi took up the problem of reconciling the field of statistical mechanics with the discovery by Wolfgang Pauli of the exclusion principle, a purely quantum mechanical phenomenon which prevents certain kinds of identical particles from occupying the same state at the same time. Fermi's paper, published in 1926, resolved the problem, creating what is now called Fermi-Dirac statistics (British physicist Paul Dirac independently discovered the phenomenon, but Fermi published first) for the particles now called fermions, which include all of the fundamental particles that make up matter. (Forces are carried by other particles called bosons, which are beyond the scope of this discussion.)

This paper immediately elevated the twenty-five-year-old Fermi to the top tier of theoretical physicists. It provided the foundation for understanding the behaviour of electrons in solids, and thus the semiconductor technology upon which all our modern computing and communications equipment is based. Finally, Fermi won what he had aspired to: a physics professorship in Rome. In 1928, he married Laura Capon, whom he had first met in 1924. The daughter of an admiral in the World War I Italian navy, she was a member of one of the many secular and assimilated Jewish families in Rome. She was less than impressed on first encountering Fermi:

He shook hands and gave me a friendly grin. You could call it nothing but a grin, for his lips were exceedingly thin and fleshless, and among his upper teeth a baby tooth too lingered on, conspicuous in its incongruity. But his eyes were cheerful and amused.

Both Laura and Enrico shared the ability to see things precisely as they were, then see beyond that to what they could become.

In Rome, Fermi became head of the mathematical physics department at the Sapienza University of Rome, which his mentor, Corbino, saw as Italy's best hope to become a world leader in the field. He helped Fermi recruit promising physicists, all young and ambitious. They gave each other nicknames, ecclesiastical in nature, befitting their location in Rome. Fermi was dubbed Il Papa (The Pope), not only due to his leadership and seniority, but because he had already developed a reputation for infallibility: when he made a calculation or expressed his opinion on a technical topic, he was rarely if ever wrong. Meanwhile, Mussolini was tightening his grip on the country. In 1929, he announced the appointment of the first thirty members of the Royal Italian Academy, with Fermi among them. In return for a lifetime stipend which would put an end to his financial worries, he would have to join the Fascist party. He joined. He did not take the Academy seriously and thought its comic opera uniforms absurd, but appreciated the money.

By the 1930s, one of the major mysteries in physics was beta decay. When a radioactive nucleus decayed, it could emit one or more kinds of radiation: alpha, beta, or gamma. Alpha particles had been identified as the nuclei of helium, beta particles as electrons, and gamma rays as photons: like light, but with a much shorter wavelength and correspondingly higher energy. When a given nucleus decayed by alpha or gamma, the emission always had the same energy: you could calculate the energy carried off by the particle emitted and compare it to the nucleus before and after, and everything added up according to Einstein's equation of E=mc². But something appeared to be seriously wrong with beta (electron) decay. Given a large collection of identical nuclei, the electrons emitted flew out with energies all over the map: from very low to an upper limit. This appeared to violate one of the most fundamental principles of physics: the conservation of energy. If the nucleus after plus the electron (including its kinetic energy) didn't add up to the energy of the nucleus before, where did the energy go? Few physicists were ready to abandon conservation of energy, but, after all, theory must ultimately conform to experiment, and if a multitude of precision measurements said that energy wasn't conserved in beta decay, maybe it really wasn't.

Fermi thought otherwise. In 1933, he proposed a theory of beta decay in which the emission of a beta particle (electron) from a nucleus was accompanied by emission of a particle he called a neutrino, which had been proposed earlier by Pauli. In one leap, Fermi introduced a third force, alongside gravity and electromagnetism, which could transform one particle into another, plus a new particle: without mass or charge, and hence extraordinarily difficult to detect, which nonetheless was responsible for carrying away the missing energy in beta decay. But Fermi did not just propose this mechanism in words: he presented a detailed mathematical theory of beta decay which made predictions for experiments which had yet to be performed. He submitted the theory in a paper to Nature in 1934. The editors rejected it, saying “it contained abstract speculations too remote from physical reality to be of interest to the reader.” This rejection was quickly recognised as a mistake, and is now acknowledged as one of the most epic face-plants of peer review in theoretical physics. Fermi's theory rapidly became accepted as the correct model for beta decay. In 1956, the neutrino (actually, the antineutrino) was detected with precisely the properties predicted by Fermi. His theory remained the standard explanation for beta decay until it was extended in the 1970s by the theory of the electroweak interaction, which is valid at higher energies than were available to experimenters in Fermi's lifetime.

Perhaps soured on theoretical work by the initial rejection of his paper on beta decay, Fermi turned to experimental exploration of the nucleus, using the newly-discovered particle, the neutron. Unlike alpha particles emitted by the decay of heavy elements like uranium and radium, neutrons had no electrical charge and could penetrate the nucleus of an atom without being repelled. Fermi saw this as the ideal probe to examine the nucleus, and began to use neutron sources to bombard a variety of elements to observe the results. One experiment directed neutrons at a target of silver and observed the creation of isotopes of silver when the neutrons were absorbed by the silver nuclei. But something very odd was happening: the results of the experiment seemed to differ when it was run on a laboratory bench with a marble top compared to one of wood. What was going on? Many people might have dismissed the anomaly, but Fermi had to know. He hypothesised that the probability a neutron would interact with a nucleus depended upon its speed (or, equivalently, energy): a slower neutron would effectively have more time to interact than one which whizzed through more rapidly. Neutrons which were reflected by the wood table top were “moderated” and had a greater probability of interacting with the silver target.

Fermi quickly tested this supposition by using paraffin wax and water as neutron moderators and measuring the dramatically increased probability of interaction (or as we would say today, neutron capture cross section) when neutrons were slowed down. This is fundamental to the design of nuclear reactors today. It was for this work that Fermi won the Nobel Prize in Physics for 1938.

By 1938, conditions for Italy's Jewish population had seriously deteriorated. Laura Fermi, despite her father's distinguished service as an admiral in the Italian navy, was now classified as a Jew, and therefore subject to travel restrictions, as were their two children. The Fermis went to their local Catholic parish, where they were (re-)married in a Catholic ceremony and their children baptised. With that paperwork done, the Fermi family could apply for passports and permits to travel to Stockholm to receive the Nobel prize. The Fermis locked their apartment, took a taxi, and boarded the train. Unbeknownst to the fascist authorities, they had no intention of returning.

Fermi had arranged an appointment at Columbia University in New York. His Nobel Prize award was US$45,000 (US$789,000 today). Had he returned to Italy with the sum, he would have been forced to convert it to lire and then been able to take only the equivalent of US$50 out of the country on subsequent trips. Professor Fermi may not have been much interested in politics, but he could do arithmetic. The family went from Stockholm to Southampton, and then on an ocean liner to New York, with nothing other than their luggage, prize money, and, most importantly, freedom.

In his neutron experiments back in Rome, there had been curious results he and his colleagues never explained. When bombarding nuclei of uranium, the heaviest element then known, with neutrons moderated by paraffin wax, they had observed radioactive products which didn't make any sense. They expected to create new elements, heavier than uranium, but what they saw didn't agree with the expectations for such elements. Another mystery…in those heady days of nuclear physics, there was one wherever you looked. At just about the time Fermi's ship was arriving in New York, news arrived from Germany about what his group had observed, but not understood, four years before. Slow neutrons, which Fermi's group had pioneered, were able to split, or fission, the nucleus of uranium into two lighter elements, releasing not only a large amount of energy, but additional neutrons which might be able to propagate the process into a “chain reaction”, producing either a large amount of energy or, perhaps, an enormous explosion.

As one of the foremost researchers in neutron physics, Fermi immediately saw that his new life in America was about to take a direction he'd never anticipated. By 1941, he was conducting experiments at Columbia with the goal of evaluating the feasibility of creating a self-sustaining nuclear reaction with natural uranium, using graphite as a moderator. In 1942, he was leading a project at the University of Chicago to build the first nuclear reactor. On December 2nd, 1942, Chicago Pile-1 went critical, producing all of half a watt of power. But the experiment proved that a nuclear chain reaction could be initiated and controlled, and it paved the way for both civil nuclear power and plutonium production for nuclear weapons. At the time he achieved one of the first major milestones of the Manhattan Project, Fermi's classification as an “enemy alien” had been removed only two months before. He and Laura Fermi did not become naturalised U.S. citizens until July of 1944.

Such was the breakneck pace of the Manhattan Project that even before the critical test of the Chicago pile, the DuPont company was already at work planning for the industrial-scale production of plutonium at a facility which would eventually be built at the Hanford site near Richland, Washington. Fermi played a part in the design and commissioning of the X-10 Graphite Reactor in Oak Ridge, Tennessee, which served as a pathfinder and began operation in November 1943, operating at a power level which was increased over time to 4 megawatts. This reactor produced the first substantial quantities of plutonium for experimental use, revealing the plutonium-240 contamination problem which necessitated the use of implosion for the plutonium bomb. Concurrently, he contributed to the design of the B Reactor at Hanford, which went critical in September 1944 and, running at 250 megawatts, produced the plutonium for the Trinity test and the Fat Man bomb dropped on Nagasaki.

During the war years, Fermi divided his time among the Chicago research group, Oak Ridge, Hanford, and the bomb design and production group at Los Alamos. As General Leslie Groves, head of the Manhattan Project, had forbidden the top atomic scientists from travelling by air, Fermi, travelling under his wartime alias “Henry Farmer”, spent much of his time riding the rails, accompanied by a bodyguard. As plutonium production ramped up, he increasingly spent his time with the weapon designers at Los Alamos, where Oppenheimer appointed him associate director and put him in charge of “Division F” (for Fermi), which acted as a consultant to all of the other divisions of the laboratory.

Fermi believed that while scientists could make major contributions to the war effort, how their work and the weapons they created were used were decisions which should be made by statesmen and military leaders. When appointed in May 1945 to the Interim Committee charged with determining how the fission bomb was to be employed, he largely confined his contributions to technical issues such as weapons effects. He joined Oppenheimer, Compton, and Lawrence in the final recommendation that “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use.”

On July 16, 1945, Fermi witnessed the Trinity test explosion in New Mexico at a distance of ten miles from the shot tower. A few seconds after the blast, he began to tear little pieces of paper from a sheet and drop them toward the ground. When the shock wave arrived, he paced out the distance they had been blown and rapidly computed the yield of the bomb as around ten kilotons of TNT. Nobody familiar with Fermi's reputation for making off-the-cuff estimates of physical phenomena was surprised that his calculation, done within a minute of the explosion, agreed within the margin of error with the actual yield of 20 kilotons, determined much later.

After the war, Fermi wanted nothing more than to return to his research. He opposed extending wartime secrecy to postwar nuclear research but, unlike some other prominent atomic scientists, did not involve himself in public debates over nuclear weapons and energy policy. When he returned to Chicago, he was asked by a funding agency simply how much money he needed. From his experience at Los Alamos he wanted both a particle accelerator and a big computer. By 1952, he had both, and began to produce results in scattering experiments which hinted at the new physics which would be uncovered throughout the 1950s and '60s. He continued to spend time at Los Alamos, and between 1951 and 1953 worked two months a year there, contributing to the hydrogen bomb project and analysis of Soviet atomic tests.

Everybody who encountered Fermi remarked upon his talents as an explainer and teacher. Seven of his students (six from Chicago and one from Rome) would go on to win Nobel Prizes in physics, in both theory and experiment. He became famous for posing “Fermi problems”, often at lunch, exercising the ability to make and justify order-of-magnitude estimates of difficult questions. When Freeman Dyson met with Fermi to present a theory he and his graduate students had developed to explain the scattering results Fermi had published, Fermi asked him how many free parameters Dyson had used in his model. Upon being told the number was four, he said, “I remember my old friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Chastened, Dyson soon concluded his model was a blind alley.
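
To illustrate the genre (this is the classic textbook specimen, not one of the examples from the book): how many piano tuners are there in Chicago? Call it 3 million inhabitants; guess one piano per hundred people, or 30,000 pianos; assume each is tuned about once a year and that a tuner can do on the order of a thousand tunings a year. The city can then support roughly 30,000 ÷ 1,000 = 30 piano tuners. Every input is a guess, but the errors tend to cancel, and the result is reliably within a factor of a few of reality, which is the whole point of the exercise.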

After returning from a trip to Europe in the fall of 1954, Fermi, who had enjoyed robust good health all his life, began to suffer from problems with digestion. Exploratory surgery found metastatic stomach cancer, for which no treatment was possible at the time. He died at home on November 28, 1954, two months past his fifty-third birthday. He had made a Fermi calculation of how long to rent the hospital bed in which he died: the rental expired two days after he did.

There was speculation that Fermi's life may have been shortened by his work with radiation, but there is no evidence of this. He was never exposed to unusual amounts of radiation in his work, and none of his colleagues, who did the same work at his side, experienced any medical problems.

This is a masterful biography of one of the singular figures in twentieth century science. The breadth of his interests and achievements is reflected in the list of things named after Enrico Fermi. Given the hyper-specialisation of modern science, it is improbable we will ever again see his like.


Schulman, J. Neil. The Robert Heinlein Interview. Pahrump, NV: Pulpless.Com, [1990, 1996, 1999] 2017. ISBN 978-1-58445-015-3.
Today, J. Neil Schulman is an accomplished novelist, filmmaker, screenwriter, actor, journalist, and publisher: winner of the Prometheus Award for libertarian science fiction. In the summer of 1973, he was none of those things: just an avid twenty-year-old science fiction fan who credited the works of Robert A. Heinlein for saving his life—replacing his teenage depression with visions of a future worth living for and characters worthy of emulation who built that world. As Schulman describes it, Heinlein was already in his head, and in his ambition to follow in Heinlein's footsteps he wanted nothing more than to get into the head of the master storyteller. He managed to parlay a book review into a commission to interview Heinlein for the New York Sunday News. Heinlein consented to a telephone interview, and on June 30, 1973, Schulman and Heinlein spoke for three and a half hours, pausing only for hourly changes of cassettes.

The agenda for the interview had been laid out in three pages of questions Schulman had mailed Heinlein a few days before, but the letter had only arrived shortly before the call and Heinlein hadn't yet read the questions, so he read them as they spoke. After the interview, Schulman prepared a transcript, which was edited by Robert Heinlein and Virginia, his wife. The interview was published by the newspaper in a much abridged and edited form, and did not see print in its entirety until 1990, two years after Heinlein's death. On the occasion of its publication, Virginia Heinlein said “To my knowledge, this is the longest interview Robert ever gave. Here is a book that should be on the shelves of everyone interested in science fiction. Libertarians will be using it as a source for years to come.”

Here you encounter the authentic Heinlein, consistent with the description from many who knew him over his long career: simultaneously practical, visionary, contrary, ingenious, inner-directed, confident, and able to observe the world and humanity without the filter of preconceived notions. Above all, he was a master storyteller who never ceased to be amazed people would pay him to spin yarns. As Schulman describes it, “Talking with Robert Heinlein is talking with the Platonic archetype of all his best characters.”

If you have any interest in Heinlein or the craft of science fiction, this should be on your reading list. I will simply quote a few morsels chosen from the wealth of insights and wisdom in these pages.

On aliens and first contact:
The universe might turn out to be a hell of a sight nastier and tougher place than we have any reason to guess at this point. That first contact just might wipe out the human race, because we would encounter somebody who was meaner and tougher, and not at all inclined to be bothered by genocide. Be no more bothered by genocide than I am when I put out ant poison in the kitchen when the ants start swarming in.
On the search for deep messages in his work:
[Quoting Schulman's question] “Isn't ‘Coventry’ still an attempt by the state (albeit a relatively benign one) to interfere with the natural market processes and not let the victim have his restitution?” Well, “Coventry” was an attempt on the part of a writer to make a few hundred dollars to pay off a mortgage.
On fans who complain his new work isn't consistent with his earlier writing:
Over the course of some thirty-four years of writing, every now and then I receive things from people condemning me for not having written a story just like my last one. I never pay attention to this, Neil, because it has been my intention—my purpose—to make every story I've written—never to write a story just like my last one…I'm going to write what it suits me to write and if I write another story that's just like any other story I've ever written, I'll be slipping. … I'm trying to write to please not even as few as forty thousand people in the hardcover, but a million and up in the softcover. If an author let these self-appointed mentors decide for him what he's going to write and how he's going to write it, he'd never get anywhere….
On his writing and editing habits:
I've never written more than about three months of the year the whole time I've been writing. Part of that is because I never rewrite. I cut, but I don't rewrite.
On the impact of technologies:
When I see how far machine computation has gone since that time [the 1930s], I find it the most impressive development—more impressive than the atom bomb, more impressive than space travel—in its final consequences.
On retirement:
Well, Tony Boucher pointed that out to me years ago. He said that there are retired everything else—retired schoolteachers, retired firemen, retired bankers—but there are no retired writers. There are simply writers who are no longer selling. [Heinlein's last novel, To Sail Beyond the Sunset, was published in 1987, the year before his death at age 80. —JW]
On the conflict between high technology and personal liberty:
The question of how many mega-men [millions of population] it takes to maintain a high-technology society and how many mega-men it takes to produce oppressions simply through the complexity of the society is a matter I have never satisfactorily solved in my own mind. But I am quite sure that one works against the other, that it takes a large-ish population for a high technology, but if you get large populations human liberties are automatically restricted even if you don't have legislation about it. In fact, the legislation in many cases is intended to—and sometimes does—lubricate the frictions that take place between people simply because they're too close together.
On seeking solutions to problems:
I got over looking for final solutions a good, long time ago because once you get this point shored up, something breaks out somewhere else. The human race gets along by the skin of its teeth, and it's been doing so for some hundreds of thousands or millions of years. … It is the common human condition all through history that every time you solve a problem you discover that you've created a new problem.

I did not cherry-pick these: they are but a few of a multitude from the vast cherry tree which is this interview. Enjoy! Also included in the book is other Heinlein-related material by Schulman: book reviews, letters, and speeches.

I must caution prospective readers that the copy-editing of this book is embarrassingly bad. I simply do not understand how a professional author—one who owns his own publishing house—can bring a book to market which clearly nobody has ever read with a critical eye, even at a cursory level. There are dozens of howlers here: not subtle things, but words run together, sentences which don't begin with a capital letter, spaces in the middle of hyphenated words, commas where periods were intended, and apostrophes transformed into back-tick characters surrounded by spaces. And this is not a bargain-bin special—the paperback has a list price of US$19.95 and is listed at this writing at US$18.05 at Amazon. The Heinlein interview was sufficiently enlightening that I was willing to put up with the production values, which made something which ought to be a triumph look shabby and sad, but then I obtained the Kindle edition for free (see below). If I'd paid full freight for the paperback, I'm not sure even my usually mellow disposition would have remained unperturbed by the desecration of the words of an author I cherish and the feeling my pocket had been picked.

The Kindle edition is available for free to Kindle Unlimited subscribers.


Mills, Kyle. The Survivor. New York: Pocket Books, 2015. ISBN 978-1-4767-8346-8.
Over the last fifteen years, CIA counter-terrorism operative Mitch Rapp (warning—the article at this link contains minor spoilers) has survived myriad adventures and attempts to take him out by terrorists, hostile governments, subversive forces within his own agency, and ambitious and unscrupulous Washington politicians looking to nail his scalp to their luxuriously appointed office walls, chronicled in the thirteen thrillers by his creator, Vince Flynn. Now, Rapp must confront one of the most formidable challenges any fictional character can face—outliving the author who invented him. With the death of Vince Flynn in 2013 from cancer, the future of the Mitch Rapp series was uncertain. Subsequently, Flynn's publisher announced that veteran thriller writer Kyle Mills, with fourteen novels already published, would be continuing the Mitch Rapp franchise. This is the first novel in the series by Mills. Although the cover has Flynn's name in much larger type than Mills', the latter is the sole author.

In this installment of the Rapp saga, Mills opted to dive right in just days after the events in the conclusion of the previous novel, The Last Man (February 2013). The CIA is still reeling from its genius black operations mastermind, Joseph Rickman, having gone rogue, faked his own kidnapping, and threatened to reveal decades of the CIA's secrets, including deep cover agents in place around the world and operations in progress, potentially crippling the CIA and opening up enough cans of worms to sustain the congressional committee surrender-poultry for a decade. With the immediate Rickman problem dealt with in the previous novel, the CIA is dismayed to learn that the ever-clever Rickman is himself a survivor, and continues to wreak his havoc on the agency from beyond the grave, using an almost impenetrable maze of digital and human cut-outs devised by his wily mind.

The CIA risks not only embarrassment and the exposure of its most valuable covert assets: an ambitious spymaster in Pakistan sees the Rickman intelligence trove not only as a way to destroy the CIA's influence in his country and around the world, but as the means to co-opt its network for his own ends, providing his path to slither to the top of the seething snake-mountain which is Pakistani politics and, with control over his country's nuclear arsenal and the CIA's covert resources, to become a player on the regional, if not world, scale.

Following Rickman's twisty cyber trail as additional disclosure bombshells drop on the CIA, Rapp and his ailing but still prickly mentor Stan Hurley must make an uneasy but unavoidable alliance with Louis Gould, the murderer of Rapp's wife and unborn child, who almost killed him in the previous novel, in order to penetrate the armed Swiss compound (which has me green with envy and scribbling notes) of Leo Obrecht, rogue private banker implicated in the Rickman operation and its Pakistani connections.

The action takes Rapp and his team to a remote location in Russia, and finally to a diplomatic banquet in Islamabad where Rapp reminds an American politician which fork to use, and how.

Mitch Rapp has survived. I haven't read any of Kyle Mills' other work, so I don't know whether it's a matter of his already aligning with Vince Flynn's style or, as a professional author, adopting it along with Flynn's worldview, but had I not known this was the work of a different author, I'd never have guessed. I enjoyed this story and look forward to further Mitch Rapp adventures by Kyle Mills.


van Creveld, Martin. Hitler in Hell. Kouvola, Finland: Castalia House, 2017. ASIN B0738YPW2M.
Martin van Creveld is an Israeli military theorist and historian, professor emeritus at Hebrew University in Jerusalem, and author of seventeen books of military history and strategy, including The Transformation of War, which has been hailed as one of the most significant recent works on strategy. In this volume he turns to fiction, penning the memoirs of the late, unlamented Adolf Hitler from his current domicile in Hell, “the place to which the victors assign their dead opponents.” In the interest of concision, in the following discussion I will use “Hitler” to mean the fictional Hitler in this work.

Hitler finds Hell more boring than hellish—“in some ways it reminds me of Landsberg Prison”. There is no torture or torment, just a never-changing artificial light and routine in which nothing ever happens. A great disappointment is that neither Eva Braun nor Blondi is there to accompany him. As to the latter, apparently all dogs go to heaven. Rudolf Hess is there, however, and with that 1941 contretemps over the flight to Scotland put behind them, has resumed helping Hitler with his research and writing as he did during the former's 1924 imprisonment. Hell has broadband!—Hitler is even able to access the “Black Internetz” and read, listen to, and watch everything up to the present day. (That sounds pretty good—my own personal idea of Hell would be an Internet connection which only allows you to read Wikipedia.)

Hitler tells the story of his life: from childhood, his days as a struggling artist in Vienna and Munich, the experience of the Great War, his political awakening in the postwar years, rise to power, implementation of his domestic and foreign policies, and the war and final collapse of Nazi Germany. These events, and the people involved in them, are often described from the viewpoint of the present day, with parallels drawn to more recent history and figures.

What makes this book work so well is that van Creveld's Hitler makes plausible arguments supporting decisions which many historians argue were irrational or destructive: going to war over Poland, allowing the British evacuation from Dunkirk, attacking the Soviet Union while Britain remained undefeated in the West, declaring war on the U.S. after Pearl Harbor, forbidding an orderly retreat from Stalingrad, failing to commit armour to counter the Normandy landings, and fighting to the bitter end, regardless of the consequences to Germany and the German people. Each decision is justified with arguments which are plausible when viewed from what is known of Hitler's world view, the information available to him at the time, and the constraints under which he was operating.

Much is made of those constraints. Although embracing totalitarianism (“My only regret is that, not having enough time, we did not make it more totalitarian still”), he sees himself surrounded by timid and tradition-bound military commanders and largely corrupt and self-serving senior political officials, yet compelled to try to act through them, as even a dictator can only dictate, then hope others implement his wishes. “Since then, I have often wondered whether, far from being too ruthless, I had been too soft and easygoing.” Many apparent blunders are attributed to lack of contemporary information, sometimes due to poor intelligence, but often simply by not having the historians' advantage of omniscient hindsight.

This could have been a parody, but in the hands of a distinguished historian like the author, who has been thinking about Hitler for many years (he wrote his 1971 Ph.D. thesis on Hitler's Balkan strategy in World War II), it provides a serious look at how Hitler's policies and actions, far from being irrational or a madman's delusions, may make perfect sense when one starts from the witches' brew of bad ideas and ignorance which the real Hitler's actual written and spoken words abundantly demonstrate. The fictional Hitler illustrates this in many passages, including a particularly chilling one where, after dismissing those who claim he was unaware of the extermination camps, he says, “I particularly needed to prevent the resurgence of Jewry by exterminating every last Jewish man, woman, and child I could. Do you say they were innocent? Bedbugs are innocent! They do what nature has destined them to, no more, no less. But is that any reason to spare them?” Looking backward, he observes that notwithstanding the utter defeat of the Third Reich, the liberal democracies that vanquished it have implemented many of his policies in the areas of government supervision of the economy, consumer protection, public health (including anti-smoking policies), environmentalism, shaping the public discourse (then, propaganda; now, political correctness), and implementing a ubiquitous surveillance state of which the Gestapo never dreamed.

In an afterword, van Creveld explains that, after on several occasions having started to write a biography of Hitler and then set the project aside, concluding he had nothing to add to existing works, in 2015 it occurred to him that the one perspective which did not exist was Hitler's own, and that the fictional device of a memoir from Hell, drawing parallels between historical and contemporary events, would provide a vehicle to explore the reasoning which led to the decisions Hitler made. The author concludes, “…my goal was not to set forth my own ideas. Instead, I tried to understand Hitler's actions, views, and thoughts as I think he, observing the past and the present from Hell, would have explained them. So let the reader judge whether I have succeeded in this objective.” In the opinion of this reader, he has succeeded, and brilliantly.

This book is presently available only in a Kindle edition; it is free for Kindle Unlimited subscribers.


Cowie, Ian, Dim Jones, and Chris Long, eds. Out of the Blue. Farnborough, UK, 2011. ISBN 978-0-9570928-0-8.
Flying an aircraft has long been described by those who do it for a living as hours of boredom punctuated by moments of stark terror. The ratio of terror to boredom depends upon the equipment and mission the pilot is flying, and tends to be much higher as these approach the ragged edge, as is the case for military aviation in high-performance aircraft. This book collects ninety anecdotes from pilots in the Royal Air Force, most dating from the Cold War era, illustrating that you never know for sure what is going to happen when you strap into an airplane and take to the skies, and that any lapse in attention to detail, situational awareness, or resistance to showing off may be swiftly rewarded not only with stark terror but costly, unpleasant, and career-limiting consequences. All of the stories are true (or at least those relating them say they are—with pilots you never know for sure), and most are just a few pages. You can pick the book up at any point; except for a few two-parters, the chapters are unrelated to one another. This is thus an ideal “bathroom book”, or way to fill a few minutes' downtime in a high distraction environment.

Because most of the flying takes place in Britain and in NATO deployments in Germany and other countries in northern Europe, foul weather plays a part in many of these adventures. Those who fly in places like Spain and California seldom find themselves watching the fuel gauge count down toward zero while divert field after divert field goes RED for weather just as they arrive and begin their approach—that happens all the time in the RAF.

Other excitement comes from momentary lapses of judgment or excessive enthusiasm, such as finding yourself at 70,000 feet over Germany in a Lightning whose two engines have flamed out after passing the plane's service ceiling of 54,000 feet. While in this case the intrepid aeronaut got away without a scratch (writing up the altimeter as reading much too high), other incidents ended with ejections from aircraft soon to litter the countryside with flaming debris. Then there's ejecting from a perfectly good Hunter after a spurious fire warning light, and the Flight Commander wingman ordering an ejection after observing “lots of smoke” which turned out, after the fact, to be just hydraulic fluid automatically dumped after a precautionary engine shutdown.

Sometimes you do nothing wrong and still end up in a spot of bother. There's the crew of a Victor which, shortly after departing RAF Gan in the Maldive Islands, had a hydraulic system failure. No big thing—the Victor has two completely independent hydraulic systems, so there wasn't any great worry as the plane turned around to return to Gan. But when the second hydraulic system then proceeded to fail, there was worry aplenty, because that meant there was no nose-wheel steering and a total of eight applications of the brakes before residual pressure in the system was exhausted. Then came the call from Gan: a series of squalls was crossing the atoll, with crosswinds approaching the Victor's limit and heavy rain on the runway. On landing, a gust of wind caught the drag parachute and sent the bomber veering off the edge of the runway, and without nose-wheel steering, nothing could be done to counteract it. The Victor ended up ploughing a furrow in the base's just-refurbished golf course before coming to a stop. Any landing you walk away from…. The two hydraulic systems were determined to have failed from completely independent and unrelated causes, something that “just can't happen”—until it happens to you.

Then there's RAF pilot Alan Pollock, who, upset at the RAF's opting in 1968 not to commemorate the 50th anniversary of its founding, decided to mount his own celebration of the milestone. He flew his Hunter at high subsonic speed and low altitude down the Thames, twisting and turning with the river, and circling the Houses of Parliament as Big Ben struck noon. He then continued down the Thames and, approaching Tower Bridge, became the first and so far only pilot to fly between the two spans of the London landmark. This concluded his RAF career: he was given a medical discharge, which avoided a court martial that would likely have sparked public support for his unauthorised aerial tattoo. His first-hand recollection of the exploit appears here.

Other stories recount how a tiny blob of grease where it didn't belong turned a Hunter into rubble in Cornwall, the strange tale of the world's only turbine-powered biplane, the British pub on the Italian base at Decimomannu, Sardinia: “The Pig and Tapeworm”, and working as an engineer on the Shackleton maritime patrol aircraft: “Along the way, you will gain the satisfaction of ensuring the continued airworthiness of a bona fide museum piece, so old that the pointed bit is at the back, and so slow that birds collide with the trailing edge of the wing.” There's nothing profound here, but it's a lot of fun.

The paperback is currently out of print, but used copies are available at reasonable cost. The Kindle edition is available, and is free for Kindle Unlimited subscribers.


Howey, Hugh. Wool. New York: Simon & Schuster, [2011] 2013. ISBN 978-1-4767-3395-1.
Wool was originally self-published as a stand-alone novella. The series grew to a total of six novellas, collected into three books. This “Omnibus Edition” contains all three books, now designated “Volume 1 of the Silo Trilogy”. Two additional volumes in the series, Shift and Dust, are respectively a prequel and a sequel to the present work.

The Silo is the universe to its inhabitants. It consists of a cylinder whose top is level with the surrounding terrain and extends downward into the Earth for 144 levels, with a central spiral staircase connecting them. Transport among the levels is purely by foot traffic on the staircase, and most news and personal messages are carried by porters who constantly ascend and descend the stairs. Electronic messages can be sent, but are costly and rarely used. Levels are divided by functionality, and those who live in them essentially compose castes defined by occupation. Population is strictly controlled and static. Administration is at the top (as is usually the case), while the bottom levels are dedicated to the machines which produce power, circulate and purify the air, pump out ground water which would otherwise flood the structure, and drill for energy and mine resources required to sustain the community. Intermediate levels contain farms, hospitals and nurseries, schools, and the mysterious and secretive IT (never defined, but one assumes “Information Technology”, which many suspect is the real power behind the scenes [isn't it always?]). There is some mobility among levels and occupations, but many people live most of their lives within a few levels of where they were born, taking occasional rare (and exhausting) trips to the top levels for special occasions.

The most special of occasions is a “cleaning”. From time to time, some resident of the silo demands to leave or, more often, is deemed a threat to the community due to challenging the social order, delving too deeply into its origins, or expressing curiosity about what exists outside, and is condemned to leave the silo wearing a protective suit against the forbiddingly hostile environment outside, to clean the sensors which provide denizens their only view of the surroundings: a barren landscape with a ruined city in the distance. The suit invariably fails, and the cleaner's body joins those of others scattered along the landscape. Why do those condemned always clean? They always have, and it's expected they always will.

The silo's chief is the mayor, and order is enforced by the sheriff, to whom deputies in offices at levels throughout the silo report. The current sheriff's own wife was sent to cleaning just three years earlier, after becoming obsessed with what she believed to be a grand deception by IT and eventually breaking down in public. Sheriff Holston's own obsession grows until he confronts the same fate.

This is a claustrophobic, dystopian novel in which the reader begins as mystified with what is going on and why as are the residents of the silo, at least those who dare to be curious. As the story progresses, much of which follows the career of a new sheriff appointed from the depths of the silo, we piece together, along with the characters, what is happening and how it came to be and, with them, glimpse a larger world and its disturbing history. The writing is superb and evocative of the curious world in which the characters find themselves.

Spoiler warning: Plot and/or ending details follow.  
There are numerous mysteries in this story, many of which are explained as the narrative progresses, but there's one central enigma which is never addressed. I haven't read the prequel or the sequel, and perhaps they deal with it, but this book was written first as a stand-alone and, read as one, it leaves this reader puzzled.

The silo has abundant energy produced from oil wells drilled from the lower levels, sufficient to provide artificial lighting throughout, including enough to grow crops on the farm levels. There is heavy machinery: pumps, generators, air circulation and purification systems, advanced computer technology in IT, and the infrastructure to maintain all of this, along with a logistics, maintenance, and spares operation to keep it all running. And, despite all of this, there's no elevator! The only way to move people and goods among the levels is to manually carry them up and down the circular staircase. Now, I can understand how important this is to the plot of the novel, but it would really help if the reader were given a clue why this is and how it came to be. My guess is that it was part of the design of the society: to impose a stratification and reinforce its structure like the rule of a monastic community (indeed, we later discover the silo is regulated according to a book of Order). I get it—if there's an elevator, much of the plot goes away, but it would be nice to have a clue why there isn't one, when it would be the first thing anybody with the technology to build something like the silo would design into what amounts to a 144-storey building.

Spoilers end here.  

The Kindle edition is presented in a very unusual format. It is illustrated with drawings, some of which are animated—not full motion, but perspectives change, foregrounds and backgrounds shift, and light sources move around. The drawings do not always correspond to descriptions in the text. The illustrations appear to have been adapted from a graphic novel based upon the book. The Kindle edition is free for Kindle Unlimited subscribers.


Fulton, Steve and Jeff Fulton. HTML5 Canvas. Sebastopol, CA: O'Reilly, 2013. ISBN 978-1-4493-3498-7.
I only review computer books if I've read them in their entirety, as opposed to using them as references while working on projects. For much of 2017 I've been living with this book open, referring to it as I performed a comprehensive overhaul of my Fourmilab site, and I just realised that by now I have actually read every page, albeit not in linear order, so a review is in order; here goes.

The original implementation of the World Wide Web supported only text and, shortly thereafter, embedded images in documents. If you wanted to do something as simple as embed an audio or video clip, you were on your own, wading into a morass of browser- and platform-specific details, plug-ins the user might have to install and then forever keep up to date, and security holes due to all of this non-standard and often dodgy code. Implementing interactive content on the Web, for example scientific simulations for education, required using an embedded language such as Java, whose initial bright promise of “Write once, run anywhere” quickly acquired the rejoinder “—yeah, right” as bloat in the language, incessant security problems, cross-platform incompatibilities, and the need for the user to forever keep external plug-ins updated lest existing pages cease working caused Java to be regarded as a joke—a cruel joke upon those who developed Web applications based upon it. By the latter half of the 2010s, the major browsers had either discontinued support for Java or announced its removal in future releases.

Fortunately, in 2014 the HTML5 standard was released. For the first time, native, standardised support for embedded audio, video, and interactive content was added to the Web's fundamental document format, along with Application Programming Interfaces (APIs) in the JavaScript language, interacting with the document via the Document Object Model (DOM), which has now been incorporated into the HTML5 standard. At last it became possible, using only standards officially adopted by the World Wide Web Consortium, to create interactive Web pages incorporating multimedia content. The existence of this standard provides a strong incentive for browser vendors to fully implement and support it, and increases the confidence of Web developers that standards-compliant pages they create will work on the multitude of browsers, operating systems, and hardware platforms which exist today.

(That encomium apart, I find much to dislike about HTML5. In my opinion its sloppy syntax [not requiring quotes on tag attributes nor the closing of many tags] is a great step backward from XHTML 1.0, which strictly conforms to XML syntax and can be parsed by a simple and generic XML parser, without the Babel-sized tower of kludges and special cases which are required to accommodate the syntactic mumbling of HTML5. A machine-readable language should be easy to read and parse by a machine, especially in an age where only a small minority of Web content creators actually write HTML themselves, as opposed to using a content management system of some kind. Personally, I continue to use XHTML 1.0 for all content on my Web site which does not require the new features in HTML5, and I observe that the home page of the World Wide Web Consortium is, itself, in XHTML 1.0 Strict. And there's no language version number in the header of an HTML5 document. Really—what's up with that? But HTML5 is the standard we've got, so it's the standard we have to use in order to benefit from the capabilities it provides: onward.)
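
Before moving on, a concrete taste of that looser syntax (a minimal illustration of my own, not an example from the book): the following is a complete, conforming HTML5 document. Note the version-free doctype, the unquoted attribute value, and the unclosed p tags, all of which the standard permits:

    <!DOCTYPE html>
    <html lang=en>
      <head><title>Syntax demo</title></head>
      <body>
        <p>First paragraph
        <p>Second paragraph; the first is closed implicitly
      </body>
    </html>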

One of the most significant new features in HTML5 is its support for the Canvas element. A canvas is a rectangular area within a page which is treated as an RGBA bitmap (the “A” denotes “alpha”, which implements transparency for overlapping objects). A canvas is just what its name implies: a blank area on which you can draw. The drawing is done in JavaScript code via the Canvas API, which is documented in this book, along with tutorials and abundant examples which can be downloaded from the publisher's Web site. The API provides the usual functions of a two-dimensional drawing model, including lines, arcs, paths, filled objects, transformation matrices, clipping, and colours, including gradients. A text API allows drawing text on the canvas, using a subset of CSS properties to define fonts and their display attributes.
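
To give a flavour of the API, here is a minimal sketch of my own (not code from the book; the element id “demo” is arbitrary) which fills a background, strokes a path composed of a line and an arc, and draws some text:

    <canvas id="demo" width="320" height="240"></canvas>
    <script>
      var canvas = document.getElementById("demo");
      var ctx = canvas.getContext("2d");      // Obtain the 2D drawing context

      ctx.fillStyle = "#dde";                 // Paint a background rectangle
      ctx.fillRect(0, 0, canvas.width, canvas.height);

      ctx.strokeStyle = "navy";               // Stroke a path: a line joined to an arc
      ctx.lineWidth = 3;
      ctx.beginPath();
      ctx.moveTo(20, 200);
      ctx.lineTo(160, 60);                    // The arc begins where the line ends
      ctx.arc(160, 120, 60, -Math.PI / 2, Math.PI / 2);
      ctx.stroke();

      ctx.font = "16px sans-serif";           // Text uses CSS-style font properties
      ctx.fillStyle = "black";
      ctx.fillText("Hello, canvas!", 20, 230);
    </script>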

Bitmap images may be painted on the canvas, scaled and rotated, if you wish, using the transformation matrix. It is also possible to retrieve the pixel data from a canvas or a portion of it, manipulate it at a low level, and copy it back to that or another canvas using JavaScript typed arrays. This allows implementation of arbitrary image processing. You might think that pixel-level image manipulation in JavaScript would be intolerably slow, but with modern implementations of JavaScript in current browsers, it often runs within a factor of two of the speed of optimised C code and, unlike the C code, works on any platform from within a Web page which requires no twiddling by the user to build and install on their computer.
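
As an illustration (again a sketch of my own, reusing the canvas and ctx variables from the example above; the function name is mine, while getImageData and putImageData are the standard API calls), here is how one might invert the colours of a canvas in place:

    // Invert the colours of everything drawn on a canvas.
    function invertCanvas(canvas, ctx) {
        var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
        var px = img.data;                  // Uint8ClampedArray of RGBA bytes
        for (var i = 0; i < px.length; i += 4) {
            px[i]     = 255 - px[i];        // Red
            px[i + 1] = 255 - px[i + 1];    // Green
            px[i + 2] = 255 - px[i + 2];    // Blue
                                            // px[i + 3], alpha, is left unchanged
        }
        ctx.putImageData(img, 0, 0);        // Copy the modified pixels back
    }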

The canvas API allows capturing mouse and keyboard events, permitting user interaction. Animation is implemented using JavaScript's standard setTimeout method. Unlike some other graphics packages, the canvas API does not maintain a display list or refresh buffer. It is the responsibility of your code to repaint the image on the canvas from scratch whenever it changes. Contemporary browsers buffer the image under construction to prevent this process from being seen by the user.
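
A typical animation loop therefore looks something like the following sketch (my own, once more assuming the canvas and ctx variables from the examples above). Each frame erases the canvas, redraws the scene from scratch, advances the state, and schedules its successor:

    var x = 0;
    function drawFrame() {
        ctx.clearRect(0, 0, canvas.width, canvas.height);   // Erase the previous frame
        ctx.fillStyle = "crimson";
        ctx.fillRect(x, 100, 40, 40);       // Redraw the scene from scratch
        x = (x + 2) % canvas.width;         // Advance the animation state
        setTimeout(drawFrame, 33);          // Schedule the next frame (about 30 per second)
    }
    drawFrame();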

HTML5 audio and video are not strictly part of the canvas facility (although you can display a video on a canvas), but they are discussed in depth here, each in its own chapter. Although the means for embedding this content into Web pages are now standardised, the file formats for audio and video are, more than a quarter century after the creation of the Web, “still evolving”. There is sage advice for developers about how to maximise portability of pages across browsers and platforms.
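
The gist of that advice is to offer the same clip in several encodings and let each browser play the first source it supports. A minimal sketch (the file names, dimensions, and formats here are placeholders, not recommendations):

    <!-- Each browser plays the first source it can decode; the text
         is a fallback for browsers without HTML5 video support. -->
    <video width="640" height="360" controls>
      <source src="clip.webm" type="video/webm">
      <source src="clip.mp4" type="video/mp4">
      Your browser does not support HTML5 video.
    </video>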

Two chapters, 150 pages of this 750-page book (don't be intimidated by its length—a substantial fraction is code listings you don't need to read unless you're interested in the details), are devoted to game development using the HTML5 canvas and multimedia APIs. A substantial part of this covers topics such as collision detection, game physics, smooth motion, and detecting mouse hits in objects, which are generic subjects in computer graphics and not specific to their HTML5 implementation. Reading them, however, may give you some tips useful in non-game applications.

Projects at Fourmilab which now use HTML5 canvas are:

Numerous other documents on the site have been updated to HTML5, using the audio and video embedding capabilities described in the book.

All of the information on the APIs described in the book is available on the Web for free. But you won't know what to look for unless you've read an explanation of how they work and looked at sample code which uses them. This book provides that information, and is useful as a desktop reference while you're writing code.

A Kindle edition is available, which you can rent for a limited period of time if you only need to refer to it for a particular project.


Smith, L. Neil. Blade of p'Na. Rockville, MD: Phoenix Pick, 2017. ISBN 978-1-61242-218-3.
This novel is set in the “Elders” universe, originally introduced in the 1990 novels Contact and Commune and its sequel Converse and Conflict, and now collected in an omnibus edition with additional material, Forge of the Elders. Around four hundred million years ago the Elders, giant mollusc-like aquatic creatures with shells the size of automobiles, conquered aging, and since then none has died except due to accident or violence. And precious few have succumbed to those causes: accident because the big squid are famously risk averse, and violence because, after a societal adolescence in which they tried and rejected many political and economic bad ideas, they settled on p'Na as the central doctrine of their civilisation: the principle that nobody has the right to initiate physical force against anybody else for any reason—much like the Principle of Non-Aggression, don't you know.

On those rare occasions order is disturbed, the services of a p'Nan “debt assessor” are required. Trained in the philosophy of p'Na, martial arts, psychology, and burnished through a long apprenticeship, assessors are called in either after an event in which force has been initiated or by those contemplating a course which might step over the line. The assessor has sole discretion in determining culpability, the form and magnitude of restitution due, and when no other restitution is possible, enforcing the ultimate penalty on the guilty. The assessor's sword, the Blade of p'Na, is not just a badge of office but the means of restitution in such cases.

The Elders live on one of a multitude, possibly infinite, parallel Earths in a multiverse where each planet's history has diverged due to contingent events in its past. Some millennia after adopting p'Na, they discovered the means of observing, then moving among these different universes and their variant Earths. Some millennia after achieving biological immortality and peace through p'Na, their curiosity and desire for novelty prompted them to begin collecting beings from across the multiverse. Some were rescues of endangered species, while others would be more accurately described as abductions. They referred to this with the euphemism of “appropriation”, as if that made any difference. The new arrivals: insectoid, aquatic, reptilian, mammalian, avian, and even sentient plants, mostly seemed happy in their new world, where the Elders managed to create the most diverse and peaceful society known in the universe.

This went on for a million years or so until, just like the revulsion against slavery in the 19th century in our timeline, somesquid happened to notice that the practice violated the fundamental principle of their society. Appropriations immediately ceased, debt assessors were called in, and before long all of the Elders implicated in appropriation committed suicide (some with a little help). But that left the question of restitution to the appropriated. Dumping them back into their original universes, often war-torn, barbarous, primitive, or with hostile and unstable environments after up to a million years of peace and prosperity on the Elders' planet didn't make the ethical cut. They settled on granting full citizenship to all the appropriated, providing them the gift of biological immortality, cortical implants to upgrade the less sentient to full intelligence, and one more thing…. The Elders had developed an unusual property: the tips of their tentacles could be detached and sent on errands on behalf of their parent bodies. While not fully sentient, the tentacles could, by communicating via cortical implants, do all kinds of useful work and allow the Elders to be in multiple places at once (recall that the Elders, like terrestrial squid, have ten tentacles—if they had twelve, they'd call them twelvicles, wouldn't they?). So for each of the appropriated species, the Elders chose an appropriate symbiote who, upgraded in intelligence and self-awareness and coupled to the host by their own implant, provided a similar benefit to them. For humanoids, it was dogs, or their species' canids.

(You might think that all of this constitutes spoilers, but it's just the background for the Elders' universe which is laid out in the first few chapters for the benefit of readers who haven't read the earlier books in the series.)

Hundreds of millions of years after the Great Restitution, Eichra Oren (those of his humanoid species always use both names) is a p'Na debt assessor. His symbiote, Oasam Otusam, a super-intelligent, indiscriminately libidinous, and wisecracking dog, prefers to go by “Sam”. So peaceful is the planet of the Elders that most of the cases Eichra Oren is called upon to resolve are routine and mundane, such as that of the current client, an arachnid about the size of a dinner table, seeking help in tracking down her fiancé, who has vanished three days before the wedding. This raises some ethical issues because, among their kind, traditionally “Saying ‘I do’ is the same as saying ‘bon appétit’ ”. Many among sapient spiders have abandoned the Old Ways, but some haven't. After discussion, in which Sam says, “You realize that in the end, she's going to eat him”, they decide, nonetheless, to take the case.

The caseload quickly grows as the assessor is retained by investors in a project led by an Elder named Misterthoggosh, whose fortune comes from importing reality TV from other universes (there is no multiverse copyright convention—the p'Na is cool with cultural appropriation) and distributing it to the multitude of species on the Elders' world. He (little is known of the Elders' biology…some say the females are non-sentient and vestigial) is now embarking on a new project, and the backers want a determination by an assessor that it will not violate p'Na, for which they would be jointly and severally responsible. The lead investor is a star-nosed mole obsessed by golf.

Things become even more complicated after a mysterious attack which appears to have been perpetrated by the “greys”, creatures who inhabit the mythology and nightmares of a million sapient species. Suspicion and fear grow that somewhere else in the multiverse, another species, this one with wicked intent, has developed the technology of opening gates between universes, something so far achieved only by the now-benign Elders.

What follows is a romp filled with interesting questions. Should you order the vegan plate in a restaurant run by intelligent plants? What are the ethical responsibilities of a cyber-assassin who is conscious yet incapable of refusing orders to kill? What is a giant squid's idea of a pleasure yacht? If two young spiders are amorously attracted, is it only pupæ love? The climax forces the characters to confront the question of the extent to which beings which are part of a hive mind are responsible for the actions of the collective.

L. Neil Smith's books have sometimes been criticised for being preachy libertarian tracts with a garnish of science fiction. I've never found them to be such, but you certainly can't accuse this one of that. It's set in a world governed for æons by the principle of non-aggression, but that foundation of civil society works so well that it takes an invasion from another universe to create the conflict which is central to the plot. Readers are treated to the rich and sometimes zany imagination of a world inhabited by almost all imaginable species where the only tensions among them are due to atavistic instincts such as those of dogs toward tall plants, combined with the humour, ranging from broad to wry, of our canine narrator, Sam.

 Permalink

August 2017

Rahe, Paul A. The Spartan Regime. New Haven, CT: Yale University Press, 2016. ISBN 978-0-300-21901-2.
This thin volume (just 232 pages in the hardcover edition, only around 125 of which are the main text and appendices—the rest being extensive source citations, notes, and indices of subjects and people and place names) is intended as the introduction to an envisioned three volume work on Sparta covering its history from the archaic period through the second battle of Mantinea in 362 b.c., where the defeat of a Sparta-led alliance at the hands of the Thebans paved the way for the Macedonian conquest of Greece.

In this work, the author adopts the approach to political science used in antiquity by writers such as Thucydides, Xenophon, and Aristotle: that the principal factor determining the character of a political community is its constitution, or form of government (the rules which defined membership in the community and which its members were expected to obey), with the character of the citizens largely formed by the community's system of education and moral formation.

Discerning these characteristics in any ancient society is difficult, but especially so in the case of Sparta, which was a society of warriors, not philosophers and historians. Almost all of the contemporary information we have about Sparta comes from outsiders who either visited the city at various times in its history or based their work upon the accounts of others who had. Further, the Spartans were famously secretive about the details of their society, so when ancient accounts differ, it is difficult to determine which, if any, is correct. One gets the sense that all of the direct documentary information we have about Sparta would fit on one floppy disc: everything else is interpretation based upon that meagre foundation. In recent centuries, scholars studying Sparta have seen it as everything from the prototype of constitutional liberty to a precursor of modern-day militaristic totalitarianism.

Another challenge facing the modern reader and, one suspects, many ancients, in understanding Sparta was how profoundly weird it was. On several occasions whilst reading the book, I was struck that rarely in science fiction does one encounter a description of a society so thoroughly alien to those with which we are accustomed from our own experience or a study of history. First of all, Sparta was tiny: there were never as many as ten thousand full-fledged citizens. These citizens were descended from Dorians who had invaded the Peloponnese in the archaic period and subjugated the original inhabitants, who became helots: essentially serfs who worked the estates of the Spartan aristocracy in return for half of the crops they produced (about the same fraction of the fruit of their labour the helots of our modern enlightened self-governing societies are allowed to retain for their own use). Every full citizen, or Spartiate, was a warrior, trained from boyhood to that end. Spartiates not only did not engage in trade or work as craftsmen: they were forbidden to do so—such work was performed by non-citizens. With the helots outnumbering Spartiates by a factor of four to seven (and even more as the Spartan population shrank toward the end), the fear of an uprising was ever-present, and required maintenance of martial prowess among the Spartiates and subjugation of the helots.

How were these warriors formed? Boys were taken from their families at the age of seven and placed in a barracks with others of their age. Henceforth, they would return to their families only as visitors. They were subjected to a regime of physical and mental training, including exercise, weapons training, athletics, mock warfare, plus music and dancing. They learned the poetry, legends, and history of the city. All learned to read and write. After intense scrutiny and regular tests, the young man would face a rite of passage, krupteia, in which, for a full year, armed only with a dagger, he had to survive on his own in the wild, stealing what he needed, and instilling fear among the helots, whom he was authorised to kill if found in violation of curfew. Only after surviving this ordeal would the young Spartan be admitted as a member of a sussition, a combination of a men's club, a military mess, and the basic unit in the Spartan army. A Spartan would remain a member of this same group all his life and, even after marriage and fatherhood, would live and dine with them every day until the age of forty-five.

From the age of twelve, a boy in training would usually have a patron, or surrogate father, who was expected to initiate him into the world of the warrior and instruct him in the duties of citizenship. It was expected that there would be a homosexual relationship between the two, and that this would further cement the bond of loyalty to his brothers in arms. Upon becoming a full citizen and warrior, the young man was expected to take on a boy and continue the tradition. As with many modern utopian social engineers, the family was seen as an obstacle to the citizen's identification with the community (or, in modern terminology, the state), and the entire process of raising citizens seems to have been designed to transfer this inherent biological solidarity with kin to peers in the army and the community as a whole.

The political structure which sustained and, in turn, was sustained by these cultural institutions was similarly alien and intricate—so much so that I found myself wishing that Professor Rahe had included a diagram to help readers understand all of the moving parts and how they interacted. After finishing the book, I found this one on Wikipedia.

[Image: Structure of Government in Sparta. Diagram by Wikipedia user Putinovac, licensed under the Creative Commons Attribution 3.0 Unported license.]

The actual relationships are even more complicated and subtle than expressed in this diagram, and given the extent to which scholars dispute the details of the Spartan political institutions (which occupy many pages in the end notes), it is likely the author may find fault with some aspects of this illustration. I present it purely because it provides a glimpse of the complexity and helped me organise my thoughts about the description in the text.

Start with the kings. That's right, “kings”—there were two of them—both traditionally descended from Hercules, but through different lineages. The kings shared power and acted as a check on each other. They were commanders of the army in time of war, and high priests in peace. The kingship was hereditary and for life.

Five overseers, or ephors, were elected annually by the citizens as a whole. Scholars dispute whether ephors could serve more than one term, but the author notes that no ephor is known to have done so, and it is thus likely they were term-limited to a single year. During their year in office, the board of five ephors (one from each of the villages of Sparta) exercised almost unlimited power in both domestic and foreign affairs. Even the kings were not immune to their power: the ephors could arrest a king and bring him to trial on a capital charge just like any other citizen, and this happened. On the other hand, at the end of their one year term, ephors were subject to a judicial examination of their acts in office and liable for misconduct. (Wouldn't it be great if present-day “public servants” received the same kind of scrutiny at the end of their terms in office? It would be interesting to see what a prosecutor could discover about how so many of these solons manage to amass great personal fortunes incommensurate with their salaries.) And then there was the “fickle meteor of doom” rule.

Every ninth year, the five [ephors] chose a clear and moonless night and remained awake to watch the sky. If they saw a shooting star, they judged that one or both kings had acted against the law and suspended the man or men from office. Only the intervention of Delphi or Olympia could effect a restoration.

I can imagine the kings hoping they didn't pick a night in mid-August for their vigil!

The ephors could also summon the council of elders, or gerousia, into session. This body was made up of thirty men: the two kings, plus twenty-eight others, all sixty years or older, who were elected for life by the citizens. They tended to be wealthy aristocrats from the oldest families, and were seen as protectors of the stability of the city from the passions of youth and the ambition of kings. They proposed legislation to the general assembly of all citizens, and could veto its actions. They also acted as a supreme court in capital cases. The general assembly of all citizens, which could also be summoned by the ephors, was restricted to an up or down vote on legislation proposed by the elders, and, perhaps, on sentences of death passed by the ephors and elders.

All of this may seem confusing, if not downright baroque, especially for a community which, in the modern world, would be considered a medium-sized town. Once again, it's something which, if you encountered it in a science fiction novel, you might take for the work of a Golden Age author, paid by the word, making ends meet by inventing fairy castles of politics. But this is how Sparta seems to have worked (again, within the limits of that single floppy disc we have to work with, and with almost every detail a matter of dispute among those who have spent their careers studying Sparta over the millennia). Unlike the U.S. Constitution, which was the product of a group of people toiling over a hot summer in Philadelphia, the Spartan constitution, like that of Britain, evolved organically over centuries, incorporating tradition, the consequences of events, experience, and cultural evolution. And, like the British constitution, it was unwritten. But it incorporated, among all its complexity and ambiguity, something very important, which can be seen as a milestone in humankind's millennia-long struggle against arbitrary authority and quest for individual liberty: the separation of powers. Unlike almost all other political systems in antiquity and all too many today, there was no pyramid with a king, priest, dictator, judge, or even popular assembly at the top. Instead, there was a complicated network of responsibility, in which any individual player or institution could be called to account by others. The regimentation, destruction of the family, obligatory homosexuality, indoctrination of the youth into identification with the collective, foundation of the society's economics on serfdom, and suppression of individual initiative and innovation were, indeed, almost a model for the most dystopian of modern tyrannies, yet darned if they didn't get the separation of powers right! We owe much of what remains of our liberties to that heritage.

Although this is a short book and this is a lengthy review, there is much more here to merit your attention and consideration. It's a chore getting through the end notes, as many of them are source citations in the dense jargon of classical scholars, but embedded therein are interesting discussions and asides which expand upon the text.

In the Kindle edition, all of the citations and index references are properly linked to the text. Some Greek letters with double diacritical marks are rendered as images and look odd embedded in text; I don't know if they appear correctly in print editions.

 Permalink

Gleick, James. Time Travel. New York: Pantheon Books, 2016. ISBN 978-0-307-90879-7.
In 1895, a young struggling writer who earned his precarious living by writing short humorous pieces for London magazines, often published without a byline, buckled down and penned his first long work: a novella of some 33,000 words. When published, H. G. Wells's The Time Machine would not only help to found the new literary genre of science fiction, but would introduce an entirely new concept to storytelling: time travel. Many of the themes of modern fiction can be traced to the myths of antiquity, but here was something entirely new: imagining a voyage to the future to see how current trends would develop, or back into the past, perhaps not just to observe history unfold and resolve its persistent mysteries, but possibly to change the past, opening the door to paradoxes which have been the subject not only of a multitude of subsequent stories but of theories and speculation by serious scientists. So new was the concept of travel through time that the phrase “time travel” first appeared in the English language only in 1914, in a reference to Wells's story.

For much of human history, there was little concept of a linear progression of time. People lived lives much the same as those of their ancestors, and expected their descendants to inhabit much the same kind of world. Their lives seemed to be governed by a series of cycles: day and night, the phases of the Moon, the seasons, planting and harvesting, and successive generations of humans, rather than the ticking of an inexorable clock. Even great disruptive events such as wars, plagues, and natural disasters seemed to recur over time, even if not on a regular, predictable schedule. This led to the philosophical view of “eternal return”, which appears in many ancient cultures and in Western philosophy from Pythagoras to Nietzsche. In mathematics, the Poincaré recurrence theorem formally demonstrated that an isolated finite system will eventually return (although possibly only after a time much longer than the age of the universe) to a given state and repeat its evolution an infinite number of times.

But nobody (except perhaps a philosopher) who had lived through the 19th century in Britain could really believe that. Over the space of a human lifetime, the world and the human condition had changed radically and seemed to be careening into a future difficult to envision. Steam power, railroads, industrialisation of manufacturing, the telegraph and telephone, electricity and the electric light, anaesthesia, antiseptics, steamships and global commerce, submarine cables and near-instantaneous international communications, had all remade the world. The idea of progress was not just an abstract concept of the Enlightenment, but something anybody could see all around them.

But progress through what? In the fin de siècle milieu that Wells inhabited, through time: a scroll of history being written continually by new ideas, inventions, creative works, and the social changes flowing from these events which changed the future in profound and often unknowable ways. The intellectual landscape was fertile for utopian ideas, many of which Wells championed. Among the intellectual élite, the fourth dimension was much in vogue, often a fourth spatial dimension but also the concept of time as a dimension comparable to those of space. This concept first appears in the work of Edgar Allan Poe in 1848, but was fully fleshed out by Wells in The Time Machine: “ ‘Clearly,’ the Time Traveller proceeded, ‘any real body must have extension in four dimensions: it must have Length, Breadth, Thickness, and—Duration.’ ” But if we can move freely through the three spatial directions (although less so in the vertical in Wells's day than the present), why cannot we also move back and forth in time, unshackling our consciousness and will from the tyranny of the timepiece just as the railroad, steamship, and telegraph had loosened the constraints of locality?

Just ten years after The Time Machine, Einstein's special theory of relativity resolved puzzles in electrodynamics and mechanics by demonstrating that time and space mixed depending upon the relative states of motion of observers. In 1908, Hermann Minkowski reformulated Einstein's theory in terms of a four dimensional space-time. He declared, “Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” (Einstein was, initially, less than impressed with this view, calling it “überflüssige Gelehrsamkeit”: superfluous learnedness, but eventually accepted the perspective and made it central to his 1915 theory of gravitation.) But further, embedded within special relativity, was time travel—at least into the future.

According to the equations of special relativity, which have been experimentally verified as precisely as anything in science and are fundamental to the operation of everyday technologies such as the Global Positioning System, an observer will measure a clock moving relative to him to run more slowly than an identical clock at rest. We don't observe this effect in everyday life because the phenomenon only becomes pronounced at velocities which are a substantial fraction of the speed of light, but even at the modest velocity of orbiting satellites, it cannot be neglected. Due to this effect of time dilation, if you had a space ship able to accelerate at a constant rate of one Earth gravity (people on board would experience the same gravity as they do while standing on the Earth's surface), you would be able to travel from the Earth to the Andromeda galaxy and back to Earth, a distance of around four million light years, in less than fifty-seven years as measured by the ship's clock and your own subjective and biological perception of time. But when you arrived back at the Earth, you'd discover that in its reference frame, more than four million years of time would have elapsed. What wonders would our descendants have accomplished in that distant future, or would they be digging for grubs with blunt sticks while living in a sustainable utopia having finally thrown off the shackles of race, class, and gender which make our present civilisation a living Hell?
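For the curious, these figures are easy to check with the standard relativistic rocket equations for constant proper acceleration. Here is a minimal sketch in Python (my own back-of-the-envelope check, not anything from Gleick's book); the round trip is modelled as four legs of one million light years each: accelerate and decelerate on the way out, and again on the way home.

    # Relativistic rocket at constant proper acceleration, in units
    # where c = 1 light-year/year. My own illustrative sketch.
    import math

    A = 1.0324            # one Earth gravity expressed in ly/yr^2
    C = 1.0               # speed of light in ly/yr

    def leg(distance):
        """Ship (proper) and Earth (coordinate) time to cover one leg,
        starting from rest at constant proper acceleration."""
        k = A * distance / C**2 + 1.0
        tau = (C / A) * math.acosh(k)            # ship's clock
        t = (C / A) * math.sqrt(k * k - 1.0)     # Earth's clock
        return tau, t

    # Andromeda round trip: four legs of 1e6 light years each.
    tau, t = leg(1.0e6)
    print(f"ship time:  {4 * tau:12.1f} years")   # about 56.3 years
    print(f"Earth time: {4 * t:12.0f} years")     # about 4 million years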

This is genuine time travel into the future and, although it's far beyond our present technological capabilities, it violates no law of physics and, to a more modest yet still measurable degree, happens every time you travel in an automobile or airplane. But what about travel into the past? Travel into the future doesn't pose any potential paradoxes. It's entirely equivalent to going into hibernation and awaking after a long sleep—indeed, this is a frequently-used literary device in fiction depicting the future. Travel into the past is another thing entirely. For example, consider the grandfather paradox: suppose you have a time machine able to transport you into the past. You go back in time and kill your own grandfather (it's never the grandmother—beats me). Then who are you, and how did you come into existence in the first place? The grandfather paradox exists whenever altering an event in the past changes conditions in the future so as to be inconsistent with the alteration of that event.

Or consider the bootstrap paradox or causal loop. An elderly mathematician (say, age 39), having struggled for years and finally succeeded in proving a difficult theorem, travels back in time and provides a key hint to his twenty year old self to set him on the path to the proof—the same hint he remembers finding on his desk that morning so many years before. Where did the idea come from? In 1991, physicist David Deutsch demonstrated that a computer incorporating travel back in time (formally, a closed timelike curve) could solve NP problems in polynomial time. I wonder where he got that idea….

All of this would be academic were time travel into the past just a figment of fictioneers' imagination. This has been the view of many scientists, and the chronology protection conjecture asserts that the laws of physics conspire to prevent travel to the past which, in the words of a 1992 paper by Stephen Hawking, “makes the universe safe for historians.” But the laws of physics, as we understand them today, do not rule out travel into the past! Einstein's 1915 general theory of relativity, which so far has withstood every experimental test for over a century, admits solutions, such as the Gödel metric, discovered in 1949 by Einstein's friend and colleague Kurt Gödel, which contain closed timelike curves. In the Gödel universe, which consists of a homogeneous, rotating sea of dust particles with a nonzero cosmological constant, it is possible, by travelling on a closed path and never reaching or exceeding the speed of light, to return to a point in one's own past. Now, the Gödel solution is highly contrived, and there is no evidence that it describes the universe we actually inhabit, but the existence of such a solution leaves the door open that somewhere in the other exotica of general relativity such as spinning black holes, wormholes, naked singularities, or cosmic strings, there may be a loophole which allows travel into the past. If you discover one, could you please pop back and send me an E-mail about it before I finish this review?

This book is far more about the literary and cultural history of time travel than scientific explorations of its possibility and consequences. Thinking about time travel forces one to confront questions which can usually be swept under the rug: is the future ours to change, or do we inhabit a block universe where our perception of time is just a delusion as the cursor of our consciousness sweeps out a path in a space-time whose future is entirely determined by its past? If we have free will, where does it come from, when according to the laws of physics the future can be computed entirely from the past? If we can change the future, why not the past? If we changed the past, would it change the present for those living in it, or create a fork in the time line along which a different history would develop? All of these speculations are rich veins to be mined in literature and drama, and are explored here. Many technical topics are discussed only briefly, if at all; for example, the Wheeler-Feynman absorber theory, which resolves a mystery in electrodynamics by positing a symmetrical solution to Maxwell's equations in which the future influences the past just as the present influences the future. Gleick doesn't go anywhere near my own experiments with retrocausality or the “presponse” experiments of investigators such as Dick Bierman and Dean Radin. I get it—pop culture beats woo-woo on the bestseller list.

The question of time has puzzled people for millennia. Only recently have we thought seriously about travel in time and its implications for our place in the universe. Time travel has been, and will doubtless continue to be, a source of speculation and entertainment, and this book is an excellent survey of its short history as a genre of fiction and of the science upon which it is founded.

 Permalink

Cline, Ernest. Ready Player One. New York: Broadway Books, 2011. ISBN 978-0-307-88744-3.
By the mid-21st century, the Internet has become largely subsumed as the transport layer for the OASIS (Ontologically Anthropocentric Sensory Immersive Simulation), a massively multiuser online virtual reality environment originally developed as a multiplayer game, but which rapidly evolved into a platform for commerce, education, social interaction, and entertainment used by billions of people around the world. The OASIS supports immersive virtual reality, limited only by the user's budget for hardware used to access the network. With top-of-the-line visors and sound systems, body motion sensors, and haptic feedback, coupled to a powerful interface console, a highly faithful experience is possible. The OASIS was the creation of James Halliday, a legendary super-nerd who made his first fortune designing videogames for home computers in the 1980s, and then re-launched his company in 2012 as Gregarious Simulation Systems (GSS), with the OASIS as its sole product. The OASIS was entirely open source: users could change things within the multitude of worlds within the system (within the limits set by those who created them), or create their own new worlds. Using a distributed computing architecture which pushed much of the processing power to the edge of the network, onto users' own consoles, the system could grow without bound, without requiring commensurate growth in GSS data centres. And it was free, or almost so. To access the OASIS, you paid only a one-time lifetime sign-up fee of twenty-five cents, just like the quarter you used to drop into the slot of an arcade videogame. Users paid nothing to use the OASIS itself: their only costs were the hardware they used to connect (which varied widely in cost and quality of the experience) and the bandwidth to connect to the network. But since most of the processing was done locally, the latter cost was modest. GSS made its money selling or renting virtual real estate (“surreal estate”) within the simulation. If you wanted to open, say, a shopping mall or build your own Fortress of Solitude on an asteroid, you had to pay GSS for the territory. GSS also sold virtual goods: clothes, magical artefacts, weapons, vehicles of all kinds, and buildings. Most were modestly priced but, since they cost nothing to manufacture, they were pure profit for the company.

As the OASIS permeated society, GSS prospered. Halliday remained the majority shareholder in the company, having bought back the share once owned by his co-founder and partner Ogden (“Og”) Morrow, after what was rumoured to be a dispute between the two the details of which had never been revealed. By 2040, Halliday's fortune, almost all in GSS stock, had grown to more than two hundred and forty billion dollars. And then, after fifteen years of self-imposed isolation which some said was due to insanity, Halliday died of cancer. He was a bachelor, with no living relatives, no heirs, and, it was said, no friends. His death was announced on the OASIS in a five minute video titled Anorak's Invitation (“Anorak” was the name of Halliday's all-powerful avatar within the OASIS). In the film, Halliday announces that his will places his entire fortune in escrow until somebody completes the quest he has programmed within the OASIS:

Three hidden keys open three secret gates,
Wherein the errant will be tested for worthy traits,
And those with the skill to survive these straits,
Will reach The End where the prize awaits.

The prize is Halliday's entire fortune and, with it, super-user control of the principal medium of human interaction, business, and even politics. Before fading out, Halliday shows three keys: copper, jade, and crystal, which must be obtained to open the three gates. Only after passing through the gates and passing the tests within them, will the intrepid paladin obtain the Easter egg hidden within the OASIS and gain control of it. Halliday provided a link to Anorak's Almanac, more than a thousand pages of journal entries made during his life, many of which reflect his obsession with 1980s popular culture, science fiction and fantasy, videogames, movies, music, and comic books. The clues to finding the keys and the Egg were widely believed to be within this rambling, disjointed document.

Given the stakes, and the contest's being open to anybody in the OASIS, what immediately came to be called the Hunt became a social phenomenon, all-consuming to some. Egg hunters, or “gunters”, immersed themselves in Halliday's journal and every pop culture reference within it, however obscure. All of this material was freely available on the OASIS, and gunters memorised every detail of anything which had caught Halliday's attention. As time passed, and nobody succeeded in finding even the copper key (Halliday's memorial site displayed a scoreboard of those who achieved goals in the Hunt, so far blank), many lost interest in the Hunt, but a dedicated hard core persisted, often to the exclusion of all other diversions. Some gunters banded together into “clans”, some very large, agreeing to exchange information and, if one found the Egg, to share the proceeds with all members. More sinister were the activities of Innovative Online Industries—IOI—a global Internet and communications company which controlled much of the backbone that underlay the OASIS. It had assembled a large team of paid employees, backed by the research and database facilities of IOI, with their sole mission to find the Egg and turn control of the OASIS over to IOI. These players, all with identical avatars and names consisting of their six-digit IOI employee numbers, all of which began with the digit “6”, were called “sixers” or, more often in the gunter argot, “Sux0rz”.

Gunters detested IOI and the sixers, because it was no secret that if they found the Egg, IOI's intention was to close the architecture of the OASIS, begin to charge fees for access, plaster everything with advertising, destroy anonymity, snoop indiscriminately, and use their monopoly power to put their thumb on the scale of all forms of communication including political discourse. (Fortunately, that couldn't happen to us with today's enlightened, progressive Silicon Valley overlords.) But IOI's financial resources were such that whenever a rare and powerful magical artefact (many of which had been created by Halliday in the original OASIS, usually requiring the completion of a quest to obtain, but freely transferrable thereafter) came up for auction, IOI was usually able to outbid even the largest gunter clans and add it to their arsenal.

Wade Watts, a lone gunter whose avatar is named Parzival, became obsessed with the Hunt on the day of Halliday's death, and, years later, devotes almost every minute of his life not spent sleeping or in school (like many, he attends school in the OASIS, and is now in the last year of high school) to the Hunt, reading and re-reading Anorak's Almanac, and reading, listening to, playing, and viewing everything mentioned therein, to the extent that he can recite the dialogue of the movies from memory. He makes copious notes in his “grail diary”, named after the one kept by Indiana Jones. His friends, none of whom he has ever met in person, are all gunters who congregate on-line in virtual reality chat rooms such as that run by his best friend, Aech.

Then, one day, bored to tears and daydreaming in Latin class, Parzival has a flash of insight. Putting together a message buried in the Almanac that he and many other gunters had discovered but failed to understand, with a bit of Latin and his encyclopedic knowledge of role playing games, he decodes the clue and, after a demanding test, finds himself in possession of the Copper Key. His name, alone, now appears at the top of the scoreboard, with 10,000 points. The path to the First Gate was now open.

Discovery of the Copper Key is a sensation: suddenly Parzival, a humble level 10 gunter, is a worldwide celebrity (although his real identity remains unknown, as he refuses all media offers which would reveal or compromise it). Knowing that the key can be found re-energises other gunters, not to speak of IOI, and Parzival's footprints in the OASIS are scrupulously examined for clues to his achievement. (Finding a key and opening a gate does not render it unavailable to others. Those who subsequently pass the tests will receive their own copies of the key, although there is a point bonus for finding it first.)

So begins an epic quest by Parzival and other gunters, contending with the evil minions of IOI, whose potential gain is so high and ethics so low that the risks may extend beyond the OASIS into the real world. For the reader, it is a nostalgic romp through every aspect of the popular culture of the 1980s: the formative era of personal computing and gaming. The level of detail is just staggering: this may be the geekiest nerdfest ever published. Heck, there's even a reference to an erstwhile Autodesk employee! The only goof I noted is a mention of the “screech of a 300-baud modem during the log-in sequence”. Three hundred baud modems did not have the characteristic squawk and screech sync-up of faster modems which employ trellis coding. While there are a multitude of references to details which will make people who were there, then, smile, readers who were not immersed in the 1980s and/or less familiar with its cultural minutiæ can still enjoy the challenges, puzzles solved, intrigue, action, and epic virtual reality battles which make up the chronicle of the Hunt. The conclusion is particularly satisfying: there may be a bigger world than even the OASIS.

A movie based upon the novel, directed by Steven Spielberg, is scheduled for release in March 2018.

 Permalink

Hirsi Ali, Ayaan. The Challenge of Dawa. Stanford, CA: Hoover Institution Press, 2017.
Ayaan Hirsi Ali was born in Somalia in 1969. In 1992 she was admitted to the Netherlands and granted political asylum on the basis of escaping an arranged marriage. She later obtained Dutch citizenship, and was elected to the Dutch parliament, where she served from 2001 through 2006. In 2004, she collaborated with Dutch filmmaker Theo van Gogh on the short film Submission, about the abuse of women in Islamic societies. After release of the film, van Gogh was assassinated, with a note containing a death threat for Hirsi Ali pinned to his corpse with a knife. Thereupon, she went into hiding with a permanent security detail to protect her against ongoing threats. In 2006, she moved to the U.S., taking a position at the American Enterprise Institute. She is currently a Fellow at the Hoover Institution.

In this short book (or long pamphlet: it is just 105 pages, with 70 pages of main text), Hirsi Ali argues that almost all Western commentators on the threat posed by Islam have fundamentally misdiagnosed the nature of the challenge it poses to Western civilisation and the heritage of the Enlightenment. Failing to understand the tactics of Islam's ambition to dominate the world, which date to Mohammed's revelations in Medina and his actions in that period of his life, they have adopted strategies which are ineffective and in some cases counterproductive in confronting the present danger.

The usual picture of Islam presented by politicians and analysts in the West (at least those who admit there is any problem at all) is that most Muslims are peaceful, productive people who have no problems becoming integrated in Western societies, but there is a small minority, variously called “radical”, “militant”, “Islamist”, “fundamentalist”, or other names, who are bent on propagating their religion by means of violence, either in guerrilla or conventional wars, or by terror attacks on civilian populations. This view has led to involvement in foreign wars, domestic surveillance, and often intrusive internal security measures to counter the threat, which is often given the name of “jihad”. A dispassionate analysis of these policies over the last decade and a half must conclude that they are not working: despite trillions of dollars spent and thousands of lives lost, turning air travel into a humiliating and intimidating circus, and invading the privacy of people worldwide, the Islamic world seems to be, if anything, more chaotic than it was in the year 2000, and the frequency and seriousness of so-called “lone wolf” terrorist attacks against soft targets does not seem to be abating. What if we don't really understand what we're up against? What if jihad isn't the problem, or only a part of something much larger?

Dawa (or dawah, da'wah, daawa, daawah—there doesn't seem to be anything associated with this religion which isn't transliterated at least three different ways—the Arabic is “دعوة”) is an Arabic word which literally means “invitation”. In the context of Islam, it is usually translated as “proselytising” or spreading the religion by nonviolent means, as is done by missionaries of many other religions. But here, Hirsi Ali contends that dawa, which is grounded in the fundamental scripture of Islam: the Koran and Hadiths (sayings of Mohammed), is something very different when interpreted and implemented by what she calls “political Islam”. As opposed to a distinction between moderate and radical Islam, she argues that Islam is more accurately divided into “spiritual Islam”, as revealed in the earlier Mecca suras of the Koran, and “political Islam”, embodied in those dating from Medina. Spiritual Islam defines a belief system, prayers, rituals, and duties of believers, and stays largely within the bounds familiar from other major religions. Political Islam, however, is a comprehensive system of politics, civil and criminal law, economics, the relationship with and treatment of nonbelievers, and military strategy, and imposes a duty to spread Islam into new territories.

Seen through the lens of political Islam, dawa and those engaged in it, often funded today by the deep coffers of petro-tyrannies, are nothing like the activities of, say, Roman Catholic or Mormon missionaries. Implemented through groups such as the Council on American-Islamic Relations (CAIR), centres on Islamic and Middle East studies on university campuses, mosques and Islamic centres in communities around the world, and so-called “charities” and non-governmental organisations, all bankrolled by fundamentalist champions of political Islam, dawa in the West operates much like the apparatus of Communist subversion described almost sixty years ago by J. Edgar Hoover in Masters of Deceit. You have the same pattern of apparently nonviolent and innocuously-named front organisations, efforts to influence the influential (media figures, academics, politicians), infiltration of institutions along the lines of Antonio Gramsci's “long march”, exploitation of Western traditions such as freedom of speech and freedom of religion to achieve goals diametrically opposed to them, and redefinition of the vocabulary and intimidation of any who dare state self-evident facts (mustn't be called “islamophobic”!), all funded from abroad. Unlike communists in the heyday of the Comintern and afterward the Cold War, Islamic subversion is assisted by large scale migration of Muslims into Western countries, especially in Europe, where the organs of dawa encourage them to form their own separate communities, avoiding assimilation, and demanding the ability to implement their own sharia law and that others respect their customs. Dawa is directed at these immigrants as well, with the goal of increasing their commitment to Islam and recruiting them for its political agenda: the eventual replacement of Western institutions with sharia law and submission to a global Islamic caliphate. This may seem absurdly ambitious for communities which, in most countries, aren't much greater than 5% of the population, but they're patient: they've been at it for fourteen centuries, and they're out-breeding the native populations in almost every country where they've become established.

Hirsi Ali argues persuasively that the problem isn't jihad: jihad is a tactic which can be employed as part of dawa when persuasion, infiltration, and subversion prove insufficient, or as a final step to put the conquest over the top, but it's the commitment to global hegemony, baked right into the scriptures of Islam, which poses the most dire risk to the West, especially since so few decision makers seem to be aware of it or, if they are, dare not speak candidly of it lest they be called “islamophobes” or worse. This is something about which I don't need to be persuaded: I've been writing about it since 2015; see “Clash of Ideologies: Communism, Islam, and the West”. I sincerely hope that this work by an eloquent observer who has seen political Islam from the inside will open more eyes to the threat it poses to the West. A reasonable set of policy initiatives to confront the threat is presented at the end. The only factual error I noted is the claim on p. 57 that Joseph R. McCarthy was in charge of the House Committee on Un-American Activities—in fact, McCarthy, a Senator, presided over the Senate Permanent Subcommittee on Investigations.

This is a publication of the Hoover Institution. It has no ISBN and cannot be purchased through usual booksellers. Here is the page for the book, whence you can download the PDF file for free.

 Permalink

Egan, Greg. Dichronauts. New York: Night Shade Books, 2017. ISBN 978-1-59780-892-7.
One of the more fascinating sub-genres of science fiction is “world building”: creating the setting in which a story takes place by imagining an environment radically different from any in the human experience. This can run the gamut from life in the atmosphere of a gas giant planet (Saturn Rukh), on the surface of a neutron star (Dragon's Egg), or on an enormous alien-engineered wheel surrounding a star (Ringworld). When done well, the environment becomes an integral part of the tale, shaping the characters and driving the plot. Greg Egan is one of the most accomplished of world builders. His fiction includes numerous examples of alien environments, with the consequences worked out and woven into the story.

The present novel may be his most ambitious yet: a world in which the fundamental properties of spacetime are different from those in our universe. Unfortunately, for this reader, the execution was unequal to the ambition and the result disappointing. I'll explain this in more detail, but let's start with the basics.

We inhabit a spacetime which is well-approximated by Minkowski space. (In regions where gravity is strong, spacetime curvature must be taken into account, but this can be neglected in most circumstances, including those in this novel.) Minkowski space is a flat four-dimensional space where each point is identified by three space coordinates and one time coordinate. It is thus spoken of as a 3+1 dimensional space. The space and time dimensions are not interchangeable: when computing the spacetime separation of two events, their distance or spacetime interval is given (in units where the speed of light c = 1) by the quantity −t²+x²+y²+z². Minkowski space is said to have a metric signature of (−,+,+,+), from the signs of the four coordinates in the distance (metric) equation.
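To make the sign convention concrete, here is a minimal illustration in Python (my own sketch, purely restating the formula above, not anything from the book):

    # Squared spacetime interval in Minkowski space, signature (-,+,+,+),
    # in units where c = 1.
    def interval2(t, x, y, z):
        return -t**2 + x**2 + y**2 + z**2

    print(interval2(1, 0, 0, 0))   # -1: timelike separation
    print(interval2(0, 1, 0, 0))   #  1: spacelike separation
    print(interval2(1, 1, 0, 0))   #  0: lightlike (null) separation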

Why does our universe have a dimensionality of 3+1? Nobody knows—string theorists who argue for a landscape of universes in an infinite multiverse speculate that the very dimensionality of a universe may be set randomly when the baby universe is created in its own big bang bubble. Max Tegmark has argued that universes with other dimensionalities would not permit the existence of observers such as us, so we shouldn't be surprised to find ourselves in one of the universes which is compatible with our own existence, nor should we rule out a multitude of other universes with different dimensionalities, all of which may be devoid of observers.

But need they necessarily be barren? The premise of this novel is “not necessarily so”, and Egan has created a universe with a metric signature of (−,−,+,+): a 2+2 dimensional spacetime with two spacelike dimensions and two timelike dimensions. Note that “timelike” refers to the sign of the dimension in the distance equation, and the presence of two timelike dimensions is not equivalent to two time dimensions. There is still a single dimension of time, t, in which events occur in a linear order just as in our universe. The second timelike dimension, which we'll call u, behaves like a spatial dimension in that objects can move within it as they can along the other x and y spacelike dimensions, but its contribution in the distance equation is negative: −t²−u²+x²+y². This results in a seriously weird, if not bizarre, world.
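Again as a purely illustrative sketch of my own, the only change from the Minkowski example above is the sign of the new u coordinate:

    # Squared interval in the novel's 2+2 spacetime, signature (-,-,+,+).
    def interval2_2p2(t, u, x, y):
        return -t**2 - u**2 + x**2 + y**2

    # A step along u alone contributes like elapsed time, not distance:
    print(interval2_2p2(0, 1, 0, 0))   # -1: as if a clock had ticked once
    print(interval2_2p2(0, 1, 1, 0))   #  0: on the 45-degree cone which
                                       #     bounds vision in the story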

From this point on, just about everything I'm going to say can be considered a spoiler if your intention is to read the book from front to back and not consult the extensive background information on the author's Web site. Conversely, I shall give away nothing regarding the plot or ending which is not disclosed in the background information or the technical afterword of the novel. I do not consider this material as spoilers; in fact, I believe that many readers who do not first understand the universe in which the story is set are likely to abandon the book as simply incomprehensible. Some of the masters of world building science fiction introduce the reader to the world as an ongoing puzzle as the story unfolds but, for whatever reason, Egan did not choose to do that here, or else he did so sufficiently poorly that this reader didn't even notice the attempt. I think the publisher made a serious mistake in not alerting the reader to the existence of the technical afterword, the reading of which I consider a barely sufficient prerequisite for understanding the setting in which the novel takes place.

In the Dichronauts universe, there is a “world” around which a smaller “star” orbits (or maybe the other way around; it's just a coordinate transformation). The geometry of the spacetime dominates everything. While in our universe we're free to move in any of the three spatial dimensions, in this spacetime motion in the x and y dimensions is as for us, but if you're facing in the positive x dimension—let's call it east—you cannot rotate outside the wedge from northeast to southeast, and as you rotate, the distance equation causes a stretching to occur, like the distortions of relativistic motion in special relativity. It is no more possible to turn all the way to the northeast than it is to attain the speed of light in our universe. If you were born east-facing, the only way you can see to the west is to bend over and look between your legs. The beings who inhabit this world seem to be born randomly east- or west-facing.
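The reason turning behaves this way is that, with the metric above, a “rotation” mixing the x and u axes must preserve x² − u² rather than x² + u², making it the same hyperbolic transformation as a Lorentz boost. A small sketch (again my own, not the author's) shows a unit east-facing vector stretching as it turns, approaching but never reaching northeast:

    # Hyperbolic "rotation" in the x-u plane (invariant: x^2 - u^2).
    import math

    def turn(x, u, w):
        return (x * math.cosh(w) + u * math.sinh(w),
                x * math.sinh(w) + u * math.cosh(w))

    for w in (0.5, 1.0, 2.0, 5.0):
        x, u = turn(1.0, 0.0, w)    # start facing due east
        print(f"w={w:3.1f}  heading {math.degrees(math.atan2(u, x)):6.2f} deg"
              f"  stretch {math.hypot(x, u):8.2f}  x^2-u^2 = {x*x - u*u:+.3f}")

No matter how large the parameter w grows, the heading only approaches 45° asymptotically while the vector's apparent length diverges, just as a massive object approaches but never reaches the speed of light.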

Light only propagates within the cone defined by the spacelike dimensions. Any light source has a “dark cone” defined by a 45° angle around the timelike u dimension. In this region, vision does not work, so beings are blind to their sides. The creatures who inhabit the world are symbiotic pairs: bipeds who call themselves “walkers”, and slug-like creatures, “siders”, who live inside the walkers' skulls and receive their nutrients from the host's bloodstream. Siders are equipped with “pingers”, which use echolocation like terrestrial bats to sense within the dark cone. While light cannot propagate there, physical objects can move in that direction, including the density waves which carry sound. Walkers and siders are linked at the brain level and can directly perceive each other's views of the world and communicate without speaking aloud. Both symbionts are independently conscious, bonded at a young age, and can, like married couples, have acrimonious disputes. While walkers cannot turn outside the 90° cone, they can move in the timelike north-south direction by “sidling”, relying upon their siders to detect obstacles within their cone of blindness.

Due to details of the structure of their world, the walker/sider society, which seems to be at a pre-industrial level (perhaps due to the fact that many machines would not work in the weird geometry they inhabit), is forced to permanently migrate to stay within the habitable zone between latitudes which are seared by the rays of the star and those too cold for agriculture. For many generations, the town of Baharabad has migrated along a river, but now the river appears to be drying up, creating a crisis. Seth (walker) and Theo (sider) are surveyors, charged with charting the course of their community's migration. Now they are faced with the challenge of finding a new river to follow, one which has not already been claimed by another community. On an expedition to the limits of the habitable zone, they encounter what seems to be the edge of the world. Is it truly the edge, and if not, what lies beyond? They join a small group of explorers who probe regions of their world never before seen, and discover clues to the origin of their species.

This didn't work for me. If you read all of the background information first (which, if you're going to dig into this novel, I strongly encourage you to do), you'll appreciate the effort the author went to in order to create a mathematically consistent universe with two timelike dimensions, and to work out the implications of this for a world within it and the beings who live there. But there is a tremendous amount of arm waving behind the curtain which, if you peek, subverts the plausibility of everything. For example, the walker/sider creatures are described as having what seems to be a relatively normal metabolism: they eat fruit, grow crops, breathe, drink, urinate and defecate, and otherwise behave as biological organisms. But biology as we know it, and all of these biological functions, requires the complex stereochemistry of the organic molecules upon which organisms are built. If the motion of molecules were constrained to a cone, and their shape stretched with rotation, the operation of enzymes and other biochemistry wouldn't work. And yet that doesn't seem to be a problem for these beings.

Finally, the story simply stops in the middle, with the great adventure unfinished and the central crisis unresolved. There will probably be a sequel. I shall not read it.

 Permalink

Casey, Doug and John Hunt. Drug Lord. Charlottesville, VA: HighGround Books, 2017. ISBN 978-1-947449-07-7.
This is the second novel in the authors' “High Ground” series, chronicling the exploits of Charles Knight, an entrepreneur and adventurer determined to live his life according to his own moral code, constrained as little as possible by the rules and regulations of coercive and corrupt governments. The first novel, Speculator (October 2016), follows Charles's adventures in Africa as an investor in a junior gold exploration company which just might have made the discovery of the century, and in the financial markets as he seeks to profit from what he's learned digging into the details. Charles comes onto the radar of ambitious government agents seeking to advance their careers by collecting his scalp.

Charles ends up escaping with his freedom and ethics intact, but with much of his fortune forfeit. He decides he's had enough of “the land of the free” and sets out on his sailboat to explore the world and sample the pleasures and opportunities it holds for one who thinks for himself. Having survived several attempts on his life and prevented a war in Africa in the previous novel, seven years later he returns to a really dangerous place, Washington DC, populated by the Morlocks of Mordor.

Charles has an idea for a new business. The crony capitalism of the U.S. pharmaceutical-regulatory complex has inflated the price of widely-used prescription drugs to many times that paid outside the U.S., where these drugs, whose patents have expired under legal regimes less easily manipulated than that of the U.S., are manufactured in a chemically-identical form by thoroughly professional generic drug producers. Charles understands, as fully as any engineer, that wherever there is nonlinearity the possibility for gain exists, and when that nonlinearity is the result of the action of coercive government, the potential profits from circumventing its grasp on the throat of the free market can be very large, indeed.
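The arithmetic of that nonlinearity is stark. As a purely hypothetical illustration (the numbers below are mine, not the authors'): suppose a pill which retails for $30 in the U.S. can be bought, chemically identical, for fifty cents from a generic producer abroad. Selling it at even a third of the official price leaves an enormous margin:

    # Hypothetical grey-market margin on one pill (illustrative numbers only).
    cost_abroad = 0.50      # generic price from the offshore factory, $
    us_retail = 30.00       # crony-capitalist pharmacy price, $
    street_price = 10.00    # undercutting the official market by 2/3, $

    margin = street_price - cost_abroad
    print(f"gross margin per pill: ${margin:.2f} "
          f"({100 * margin / cost_abroad:.0f}% of cost)")
    # -> gross margin per pill: $9.50 (1900% of cost)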

When Charles's boat docked in the U.S., he had an undeclared cargo: a large number of those little blue pills much in demand by men of a certain age, purchased for pennies from a factory in India through a cut-out in Africa he met on his previous adventure. He has the product, and a supplier able to obtain much more. Now, all he needs is distribution. He must venture into the dark underside of DC to make the connections that can get the product to the customers, and persuade potential partners that they can make much more and far more safely by distributing his products (which don't fall under the purview of the Drug Enforcement Agency, and to which local cops not only don't pay much attention, but may be potential customers).

Meanwhile, Charles's uncle Maurice, who has been managing what was left of his fortune during his absence, has made an investment in a start-up pharmaceutical company, Visioryme, whose first product, VR-210, or Sybillene, is threading its way through the FDA regulatory gauntlet toward approval for use as an antidepressant. Sybillene works through a novel neurochemical pathway, and promises to be an effective treatment for clinical depression while avoiding the many deleterious side effects of other drugs. In fact, Sybillene doesn't appear to have any side effects at all—or hardly any—there's that one curious thing that happened in animal testing, but not wishing to commit corporate seppuku, Visioryme hasn't mentioned it to the regulators or even their major investor, Charles.

Charles pursues his two pharmaceutical ventures in parallel: one in the DC ghetto and Africa; the other in the tidy suburban office park where Visioryme is headquartered. The first business begins to prosper, and Charles must turn his ingenuity to solving the problems attendant to any burgeoning enterprise: supply, transportation, relations with competitors (who, in this sector of the economy, not only are often armed but inclined to shoot first), expanding the product offerings, growing the distribution channels, and dealing with all of the money that's coming in, entirely in cash, without coming onto the radar of any of the organs of the slavers and their pervasive snooper-state.

Meanwhile, Sybillene finally obtains FDA approval, and Visioryme begins to take off and ramp up production. Charles's connections in Africa help the company obtain the supplies of bamboo required in production of the drug. It seems like he now has two successful ventures, on the dark and light sides, respectively, of the pharmaceutical business (which is dark and which is light depending on your view of the FDA).

Then, curious reports start to come in about doctors prescribing Sybillene off-label in large doses to their well-heeled patients. Off-label prescription is completely legal and not uncommon, but one wonders what's going on. Then there's the talk Charles is picking up from his other venture of demand for a new drug on the street: Sybillene, which goes under names such as Fey, Vatic, Augur, Covfefe, and most commonly, Naked Emperor. Charles's lead distributor reports, “It helps people see lies for what they are, and liars too. I dunno. I never tried it. Lots of people are asking though. Society types. Lawyers, businessmen, doctors, even cops.” It appears that Sybillene, or Naked Emperor, taken in a high dose, is a powerful nootropic which doesn't so much increase intelligence as, unlike most psychoactive drugs, allow the user to think more clearly and see through the deception that pollutes the intellectual landscape of a modern, “developed”, society.

In that fœtid city by the Potomac, the threat posed by such clear thinking dwarfs that of other “controlled substances” which merely turn their users into zombies. Those atop an empire built on deceit, deficits, and debt cannot run the risk of a growing fraction of the population beginning to see through the funny money, Ponzi financing, Potemkin military, manipulation of public opinion, erosion of the natural rights of citizens, and the sham which is replacing the last vestiges of consensual government. Perforce, Sybillene must become Public Enemy Number One, and if a bit of lying and even murder is required, well, that's the price of preserving the government's ability to lie and murder.

Suddenly, Charles is involved in two illegal pharmaceutical ventures. As any wise entrepreneur would immediately ask himself, “might there be synergies?”

Thus begins a compelling, instructive, and inspiring tale of entrepreneurship and morality confronted with dark forces constrained by no limits whatsoever. We encounter friends and foes from the first novel, as once again Charles finds himself walking point, defending those in the enterprises he has created. As I said in my review of Speculator, this book reminds me of Ayn Rand's The Fountainhead, but it is even more effective because Charles Knight is not a super-hero but rather a person with a strong sense of right and wrong who is making up his life as he goes along and learning from the experiences he has: good and bad, success and failure. Charles Knight, even without Naked Emperor, has that gift of seeing things precisely as they are, unobscured by the fog, cant, spin, and lies which are the principal products of the city in which the story is set.

These novels are not just page-turning thrillers, they're simultaneously an introductory course in becoming an international man (or woman), transcending the lies of the increasingly obsolescent nation-state, and finding the liberty that comes from seizing control of one's own destiny. They may be the most powerful fictional recruiting tool for the libertarian and anarcho-capitalist world view since the works of Ayn Rand and L. Neil Smith. Speculator was my fiction book of the year for 2016, and this sequel is in the running for 2017.

September 2017

Scoles, Sarah. Making Contact. New York: Pegasus Books, 2017. ISBN 978-1-68177-441-1.
There are few questions in our scientific inquiry into the universe and our place within it more profound than “are we alone?” As we have learned more about our world and the larger universe in which it exists, this question has become ever more fascinating. We now know that our planet, once thought the centre of the universe, is but one of what may be hundreds of billions of planets in our own galaxy, which is one of hundreds of billions of galaxies in the observable universe. Not long ago, we knew only of the planets in our own solar system, and some astronomers believed planetary systems were rare, perhaps formed by freak encounters between two stars following their orbits around the galaxy. But now, thanks to exoplanet hunters and, especially, the Kepler spacecraft, we know that it's “planets, planets, everywhere”—most stars have planets, and many stars have planets where conditions may be suitable for the origin of life.

If this be the case, then when we gaze upward at the myriad stars in the heavens, might there be other eyes (or whatever sense organs they use for the optical spectrum) looking back from planets of those stars toward our Sun, wondering if they are alone? Many are the children, and adults, who have asked themselves that question when standing under a pristine sky. For the ten-year-old Jill Tarter, it set her on a path toward a career which has been almost coterminous with humanity's efforts to discover communications from extraterrestrial civilisations—an effort which continues today, benefitting from advances in technology unimagined when she undertook the quest.

World War II had seen tremendous advancements in radio communications, in particular the short wavelengths (“microwaves”) used by radar to detect enemy aircraft and submarines. After the war, this technology provided the foundation for the new field of radio astronomy, which expanded astronomers' window on the universe from the traditional optical spectrum into wavelengths that revealed phenomena never before observed nor, indeed, imagined, and hinted at a universe which was much larger, more complicated, and more violent than previously envisioned.

In 1959, Philip Morrison and Giuseppe Cocconi published a paper in Nature in which they calculated that using only technologies and instruments already existing on the Earth, intelligent extraterrestrials could send radio messages across the distances to the nearby stars, and that these messages could be received, detected, and decoded by terrestrial observers. This was the origin of SETI—the Search for Extraterrestrial Intelligence. In 1960, Frank Drake used a radio telescope to search for signals from two nearby star systems; he heard nothing.

As they say, absence of evidence is not evidence of absence, and this is acutely the case in SETI. First of all, you must decide what kind of signal aliens might send. If it's something which can't be distinguished from natural sources, there's little hope you'll be able to tease it out of the cacophony which is the radio spectrum. So we must assume they're sending something that doesn't appear natural. But what is the variety of natural sources? There are a dozen or so Ph.D. projects just answering that question, including some surprising discoveries of natural sources nobody imagined, such as pulsars, which were sufficiently strange that when first observed they were called “LGM” sources for “Little Green Men”. On what frequency are they sending (in other words, where do we have to turn our dial to receive them, for those geezers who remember radios with dials)? The most efficient signals will be those with a very narrow frequency range, and there are billions of possible frequencies the aliens might choose. We could be pointed in the right place, at the right time, and simply be tuned to the wrong station.

Then there's that question of “the right time”. It would be absurdly costly to broadcast a beacon signal in all directions at all times: that would require energy comparable to that emitted by a star (which, if you think about it, does precisely that). So it's likely that any civilisation with energy resources comparable to our own would transmit in a narrow beam to specific targets, switching among them over time. If we didn't happen to be listening when they were sending, we'd never know they were calling.

If you put all of these constraints together, you come up with what's called an “observational phase space”—a multidimensional space of frequency, intensity, duration of transmission, angular extent of transmission, bandwidth, and other parameters which determine whether you'll detect the signal. And that assumes you're listening at all, which depends upon people coming up with the money to fund the effort and pursue it over the years.
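Just to get a feel for the size of that phase space, here is a toy estimate in Python; every number in it is an assumption chosen for illustration, not a parameter of any actual search, and the point is only how the cells multiply:

    import math

    # Made-up but plausible dimensions of the search (all assumptions):
    sky_beams   = 4 * math.pi / math.radians(0.1)**2   # 0.1-degree beams to tile the sky
    channels    = 10e9 / 1.0                           # 1 Hz channels across 10 GHz of spectrum
    dwell_slots = (10 * 365.25 * 86400) / 300          # 5-minute listens over a decade
    cells = sky_beams * channels * dwell_slots
    print(f"{cells:.1e} pointing/frequency/time cells to examine")   # ~4e+22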

It's beyond daunting. The space to be searched is so large, and our ability to search it so limited, that negative results, even after decades of observation, are equivalent to walking down to the seashore, sampling a glass of ocean water, and concluding that based on the absence of fish, the ocean contained no higher life forms. But suppose you find a fish? That would change everything.

Jill Tarter began her career in the mainstream of astronomy. Her Ph.D. research at the University of California, Berkeley, was on brown dwarfs (bodies more massive than gas giant planets but too small to sustain the nuclear fusion reactions which cause stars to shine—a brown dwarf emits weakly in the infrared as it slowly radiates away the heat from the gravitational contraction which formed it). Her work was supported by a federal grant, which made her uncomfortable—what relevance did brown dwarfs have to those compelled to pay taxes to fund investigating them? During her Ph.D. work, she was asked by a professor in the department to help with an aged computer she'd used in an earlier project. To acquaint her with the project, the professor asked her to read the Project Cyclops report. It was a conversion experience.

Project Cyclops was a NASA study conducted in 1971 on how to perform a definitive search for radio communications from intelligent extraterrestrials. Its report [18.2 Mb PDF], issued in 1972, remains the “bible” for radio SETI, although advances in technology, particularly in computing, have rendered some of its recommendations obsolete. The product of a NASA which was still conducting missions to the Moon, it was grandiose in scale, envisioning a large array of radio telescope dishes able to search for signals from stars up to 1000 light years in distance (note that this is still a tiny fraction of the stars in the galaxy, which is around 150,000 light years in diameter). The estimated budget for the project was between 6 and 10 billion dollars (multiply those numbers by around six to get present-day funny money) spent over a period of ten to fifteen years. The report cautioned that there was no guarantee of success during that period, and that the project should be viewed as a long-term endeavour with ongoing funding to operate the system and continue the search.

The Cyclops report arrived at a time when NASA was downsizing and scaling back its ambitions: the final three planned lunar landing missions had been cancelled in 1970, and production of additional Saturn V launch vehicles had been terminated the previous year. The budget climate wasn't hospitable to Apollo-scale projects of any description, especially those which wouldn't support lots of civil service and contractor jobs in the districts and states of NASA's patrons in congress. Unsurprisingly, Project Cyclops simply landed on the pile of ambitious NASA studies that went nowhere. But to some who read it, it was an inspiration. Tarter thought, “This is the first time in history when we don't just have to believe or not believe. Instead of just asking the priests and philosophers, we can try to find an answer. This is an old and important question, and I have the opportunity to change how we try to answer it.” While some might consider searching the sky for “little green men” frivolous and/or absurd, to Tarter this, not the arcana of brown dwarfs, was something worthy of support, and of her time and intellectual effort, “something that could impact people's lives profoundly in a short period of time.”

The project to which Tarter had been asked to contribute, Project SERENDIP (a painful acronym of Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations), was extremely modest compared to Cyclops. It had no dedicated radio telescopes at all, nor even dedicated time on existing observatories. Instead, it would “piggyback” on observations made for other purposes, listening to the feed from the telescope with an instrument designed to detect the kind of narrow-band beacons envisioned by Cyclops. To cope with the problem of not knowing the frequency on which to listen, the receiver would monitor 100 channels simultaneously. Tarter's job was programming the PDP-8/S computer to monitor the receiver's output and search for candidate signals. (Project SERENDIP is still in operation today, employing hardware able to simultaneously monitor 128 million channels.)
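At its heart, watching many channels at once is a Fourier transform. Here is a minimal sketch (in no way SERENDIP's actual pipeline; the tone amplitude, block size, and threshold are all made up) of how one digitised receiver stream can be split into half a million channels and scanned for a weak narrow-band tone:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 2**20                                            # one block of receiver samples
    t = np.arange(n)
    tone = 0.02 * np.sin(2 * np.pi * 123456.7 * t / n)   # weak hypothetical "beacon"
    data = rng.normal(size=n) + tone                     # buried in receiver noise

    power = np.abs(np.fft.rfft(data))**2                 # power in each narrow channel
    # Channel powers are exponentially distributed, so set the threshold high
    # enough that the expected number of noise false alarms is well below one.
    threshold = power.mean() + 20 * power.std()
    hits = np.flatnonzero(power > threshold)
    print(f"{len(power)} channels searched; candidate bins: {hits}")

Scaling the same idea from 100 channels to 128 million is, conceptually, just a bigger transform and faster electronics.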

From this humble start, Tarter's career direction was set. All of her subsequent work was in SETI. It would be a roller-coaster ride all the way. In 1975, NASA had started a modest study to research (but not build) technologies for microwave SETI searches. In 1978, the program came into the sights of Senator William Proxmire, who bestowed upon it his “Golden Fleece” award. The program initially survived his ridicule, but in 1982 the project's funding was zeroed out. Carl Sagan personally intervened with Proxmire, and in 1983 the funding was reinstated, continuing work on a more capable spectral analyser which could be used with existing radio telescopes.

Buffeted by the start-stop support from NASA and encouraged by Hewlett-Packard executive Bernard Oliver, a supporter of SETI from its inception, Tarter decided that SETI needed its own institutional home, one dedicated to the mission and able to seek its own funding independent of the whims of congressmen and bureaucrats. In 1984, the SETI Institute was incorporated in California. Initially funded by Oliver, over the years major contributions have been made by technology moguls including William Hewlett, David Packard, Paul Allen, Gordon Moore, and Nathan Myhrvold. The SETI Institute receives no government funding whatsoever, although some researchers in its employ, mostly those working on astrobiology, exoplanets, and other topics not directly related to SETI, are supported by research grants from NASA and the National Science Foundation. Fund raising was a skill which did not come naturally to Tarter, but it was mission critical, and so she mastered the art. Today, the SETI Institute is considered one of the most savvy privately-funded research institutions, both in seeking large donations and in grass-roots fundraising.

By the early 1990s, it appeared the pendulum had swung once again, and NASA was back in the SETI game. In 1992, a program was funded to conduct a two-pronged effort: a targeted search of 800 nearby stars, and an all-sky survey looking for stronger beacons. Both would employ what were then state-of-the-art spectrum analysers able to monitor 15 million channels simultaneously. After just a year of observations, congress once again pulled the plug. The SETI Institute would have to go it alone.

Tarter launched Project Phoenix, to continue the NASA targeted search program using the orphaned NASA spectrometer hardware and whatever telescope time could be purchased from donations to the SETI Institute. In 1995, observations resumed at the Parkes radio telescope in Australia, and subsequently at a telescope at the National Radio Astronomy Observatory in Green Bank, West Virginia, and at the 300 metre dish at Arecibo Observatory in Puerto Rico. The project continued through 2004.

What should SETI look like in the 21st century? Much had changed since the early days in the 1960s and 1970s. Digital electronics and computers had increased in power a billionfold, not only making it possible to scan a billion channels simultaneously and automatically search for candidate signals, but to combine the signals from a large number of independent, inexpensive antennas (essentially, glorified satellite television dishes), synthesising the aperture of a huge, budget-busting radio telescope. With progress in electronics expected to continue in the coming decades, any capital investment in antenna hardware would yield an exponentially growing science harvest as the ability to analyse its output grew over time. But to take advantage of this technological revolution, SETI could no longer rely on piggyback observations, purchased telescope time, or allocations at the whim of research institutions: “SETI needs its own telescope”—one optimised for the mission and designed to benefit from advances in electronics over its lifetime.

In a series of meetings from 1998 to 2000, the specifications of such an instrument were drawn up: 350 small antennas, each 6 metres in diameter, independently steerable (and thus able to be used all together, or in segments to simultaneously observe different targets), with electronics to combine the signals, providing, with all dishes operating, the angular resolution of a single dish 900 metres across (the maximum baseline of the array) and a total collecting area of about one hectare. With initial funding from Microsoft co-founder Paul Allen (and with his name on the project, the Allen Telescope Array), the project began construction in 2004. In 2007, observations began with the first 42 dishes. By that time, Paul Allen had lost interest in the project, and construction of additional dishes was placed on hold until a new benefactor could be found. In 2011, a funding crisis caused the facility to be placed in hibernation, and the observatory was sold to SRI International for US$ 1. Following a crowdfunding effort led by the SETI Institute, the observatory was re-opened later that year, and continues operations to this date. No additional dishes have been installed: current work concentrates on upgrading the electronics of the existing antennas to increase sensitivity.
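A quick calculation (my own arithmetic, not from the book) separates the two things an array of small dishes buys you: sensitivity, which comes from total collecting area, and angular resolution, which comes from the widest antenna spacing:

    import math

    n_dish, d = 350, 6.0                         # full Allen Array build-out
    area = n_dish * math.pi * (d / 2)**2         # total collecting area
    equivalent = 2 * math.sqrt(area / math.pi)   # single dish of the same area
    print(f"collecting area ≈ {area:,.0f} m², like one {equivalent:.0f} m dish")
    # Angular resolution, by contrast, is set by the array's widest antenna
    # spacing, hundreds of metres, not by the collecting area.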

Jill Tarter retired as co-director of the SETI Institute in 2012, but remains active in its scientific, fundraising, and outreach programs. There has never been more work in SETI underway than at the present. In addition to observations with the Allen Telescope Array, the Breakthrough Listen project, funded at US$ 100 million over ten years by Russian billionaire Yuri Milner, is using thousands of hours of time on large radio telescopes, with a goal of observing a million nearby stars and the centres of a hundred galaxies. All data are available to the public for analysis. A new frontier, unimagined in the early days of SETI, is optical SETI. A pulsed laser, focused through a telescope of modest aperture, is able to easily outshine the Sun in a detector sensitive to its wavelength and pulse duration. In the optical spectrum, there's no need for fancy electronics to monitor a wide variety of wavelengths: all you need is a prism or diffraction grating. The SETI Institute has just successfully completed a US$ 100,000 Indiegogo campaign to crowdfund the first phase of the Laser SETI project, which has as its ultimate goal an all-sky, all-the-time search for short pulses of light which may be signals from extraterrestrials or new natural phenomena to which no existing astronomical instrument is sensitive.

People often ask Jill Tarter what it's like to spend your entire career looking for something and not finding it. But she, and everybody involved in SETI, always knew the search would not be easy, nor likely to succeed in the short term. The reward for engaging in it is being involved in founding a new field of scientific inquiry and inventing and building the tools which allow exploring this new domain. The search is vast, and to date we have barely scratched the surface. About all we can rule out, after more than half a century, is a Star Trek-like universe where almost every star system is populated by aliens chattering away on the radio. Today, the SETI enterprise, entirely privately funded and minuscule by the standards of “big science”, is strongly coupled to the exponential growth in computing power and hence roughly doubles its ability to search every two years.

The question “are we alone?” is one which has profound implications either way it is answered. If we discover one or more advanced technological civilisations (and they will almost certainly be more advanced than we—we've only had radio for a little more than a century, and there are stars and planets in the galaxy billions of years older than ours), it will mean it's possible to grow out of the daunting problems we face in the adolescence of our species and look forward to an exciting and potentially unbounded future. If, after exhaustive searches (which will take at least another fifty years of continued progress in expanding the search space), it looks like we're alone, then intelligent life is so rare that we may be its only exemplar in the galaxy and, perhaps, the universe. Then, it's up to us. Our destiny, and duty, is to ensure that this spark, lit within us, will never be extinguished.

October 2017

Morton, Oliver. The Planet Remade. Princeton: Princeton University Press, 2015. ISBN 978-0-691-17590-4.
We live in a profoundly unnatural world. Since the start of the industrial revolution, and rapidly accelerating throughout the twentieth century, the actions of humans have begun to influence the flow of energy and materials in the Earth's biosphere on a global scale. Earth's current human population and standard of living are made possible entirely by industrial production of nitrogen-based fertilisers and crop plants bred to efficiently exploit them. Industrial production of fixed (chemically reactive) nitrogen from the atmosphere now substantially exceeds all of that produced by the natural soil bacteria on the planet which, prior to 1950, accounted for almost all of the nitrogen required to grow plants. Fixing nitrogen by the Haber-Bosch process is energy-intensive, and consumes around 1.5 percent of all the world's energy usage and, as a feedstock, 3–5% of natural gas produced worldwide. When we eat these crops, or animals fed from them, we are, in a sense, eating fossil fuels. On the order of four out of five nitrogen atoms in your body were fixed in a factory by the Haber-Bosch process. We are the children, not of nature, but of industry.

The industrial production of fertilisers, along with crops tailored to use them, is entirely responsible for the rapid growth of the Earth's population, which has increased from around 2.5 billion in 1950, when industrial fertiliser and “green revolution” crops came into wide use, to more than 7 billion today. This was accompanied not by the collapse into global penury predicted by Malthusian doom-sayers, but rather by a broad-based rise in the standard of living, with extreme poverty and malnutrition falling to all-time historical lows. In the lifetimes of many people, including this scribbler, our species has taken over the flow of nitrogen through the Earth's biosphere, replacing a process mediated by bacteria for billions of years with one performed in factories. The flow of nitrogen from atmosphere to soil, to plants and the creatures who eat them, back to soil, sea, and ultimately the atmosphere is now largely in the hands of humans, and their very lives have become dependent upon it.

This is an example of “geoengineering”—taking control of what was a natural process and replacing it with an engineered one to produce a desired outcome: in this case, the ability to feed a much larger population with an unprecedented standard of living. In the case of nitrogen fixation, there wasn't a grand plan drawn up to do all of this: each step made economic sense to the players involved. (In fact, one of the motivations for developing the Haber-Bosch process was not to produce fertiliser, but rather to produce feedstocks for the manufacture of military and industrial explosives, which had become dependent on nitrates obtained from guano imported to Europe from South America.) But the outcome was the same: ours is an engineered world. Those who are repelled by such an intervention in natural processes or who are concerned by possible detrimental consequences of it, foreseen or unanticipated, must come to terms with the reality that abandoning this world-changing technology now would result in the collapse of the human population, with at least half of the people alive today starving to death, and many of the survivors reduced to subsistence in abject poverty. Sadly, one encounters fanatic “greens” who think this would be just fine (and, doubtless, imagining they'd be among the survivors).

Just mentioning geoengineering—human intervention and management of previously natural processes on a global scale—may summon in the minds of many visions of Strangelove-like technological megalomania or the hubris of Bond villains, so it's important to bear in mind that we're already doing it, and have become utterly dependent upon it. When we consider the challenges we face in accommodating a population which is expected to grow to ten billion by mid-century (and, absent catastrophe, this is almost a given: the parents of the ten billion are mostly alive today), who will demand and deserve a standard of living comparable to what they see in industrial economies, and while carefully weighing the risks and uncertainties involved, it may be unwise to rule out other geoengineering interventions to mitigate undesirable consequences of supporting the human population.

In parallel with the human takeover of the nitrogen cycle, another geoengineering project has been underway, also rapidly accelerating in the 20th century, driven both by population growth and industrialisation of previously agrarian societies. For hundreds of millions of years, the Earth also cycled carbon through the atmosphere, oceans, biosphere, and lithosphere. Carbon dioxide (CO₂) was metabolised from the atmosphere by photosynthetic plants, extracting carbon for their organic molecules and producing oxygen released to the atmosphere, then passed along as plants were eaten, returned to the soil, or dissolved in the oceans, where creatures incorporated carbonates into their shells, which eventually became limestone rock and, over geological time, was subducted as the continents drifted, reprocessed far below the surface, and expelled back into the atmosphere by volcanoes. (This is a gross oversimplification of the carbon cycle, but we don't need to go further into it for what follows. The point is that it's something which occurs on a time scale of tens to hundreds of millions of years and on which humans, prior to the twentieth century, had little influence.)

The natural carbon cycle is not leakproof. Only part of the carbon sequestered by marine organisms and immured in limestone is recycled by volcanoes; it is estimated that this loss of carbon will bring the era of multicellular life on Earth to an end around a billion years from now. The carbon in some plants is not returned to the biosphere when they die. Sometimes, the dead vegetation accumulates in dense beds where it is protected against oxidation and eventually forms deposits of peat, coal, petroleum, and natural gas. Other than natural seeps and releases of the latter substances, their carbon is also largely removed from the biosphere. Or at least it was until those talking apes came along….

The modern technological age has been powered by the exploitation of these fossil fuels: laid down over hundreds of millions of years, often under special conditions which only existed in certain geological epochs, in the twentieth century their consumption exploded, powering our present technological civilisation. For all of human history up to around 1850, world energy consumption was less than 20 exajoules per year, almost all from burning biomass such as wood. (What's an exajoule? Well, it's 10¹⁸ joules, which probably tells you absolutely nothing. That's a lot of energy: equivalent to 164 million barrels of oil, or the capacity of around sixty supertankers. But it's small compared to the energy the Earth receives from the Sun, which is around 4 million exajoules per year.) By 1900, the burning of coal had increased this number to 33 exajoules, and this continued to grow slowly until around 1950 when, with oil and natural gas coming into the mix, energy consumption approached 100 exajoules. Then it really took off. By the year 2000, consumption was 400 exajoules, more than 85% from fossil fuels, and today it's more than 550 exajoules per year.
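Those comparisons are easy to check. Here is the arithmetic in a few lines of Python; the energy content of a barrel of oil and the capacity of a supertanker are rough round numbers, and the tanker figure in particular varies a lot by ship class:

    import math

    EJ = 1e18                  # joules in an exajoule
    barrel = 6.1e9             # joules per barrel of oil (approximate)
    tanker = 2.7e6             # barrels per very large supertanker (rough)
    print(f"1 EJ ≈ {EJ/barrel/1e6:.0f} million barrels ≈ {EJ/barrel/tanker:.0f} supertanker loads")

    # Sunlight: the solar constant times Earth's cross-section, less the ~30%
    # reflected straight back to space.
    S, R, year = 1361.0, 6.371e6, 3.156e7
    absorbed = S * math.pi * R**2 * 0.7 * year
    print(f"sunlight absorbed ≈ {absorbed/EJ/1e6:.1f} million EJ/year")   # ~3.8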

Now, as with the nitrogen revolution, nobody thought about this as geoengineering, but that's what it was. Humans were digging up, or pumping out, or otherwise tapping carbon-rich substances laid down long before their clever species evolved and burning them to release energy banked by the biosystem from sunlight in ages beyond memory. This is a human intervention into the Earth's carbon cycle of a magnitude even greater than the Haber-Bosch process into the nitrogen cycle. “Look out, they're geoengineering again!” When you burn fossil fuels, the combustion products are mostly carbon dioxide and water. There are other trace products, such as ash from coal, oxides of nitrogen, and sulphur compounds, but other than side effects such as various forms of pollution, they don't have much impact on the Earth's recycling of elements. The water vapour from combustion is rapidly recycled by the biosphere and has little impact, but what about the CO₂?

Well, that's interesting. CO₂ is a trace gas in the atmosphere (less than a fiftieth of a percent), but it isn't very reactive and hence doesn't get broken down by chemical processes. Once emitted into the atmosphere, CO₂ tends to stay there until it's removed via photosynthesis by plants, weathering of rocks, or being dissolved in the ocean and used by marine organisms. Photosynthesis is an efficient consumer of atmospheric carbon dioxide: a field of growing maize in full sunlight consumes all of the CO₂ within a metre of the ground every five minutes—it's only convection that keeps it growing. You can see the yearly cycle of vegetation growth in measurements of CO₂ in the atmosphere as plants take it up as they grow and then release it after they die. The other two processes are much slower. An increase in the amount of CO₂ causes plants to grow faster (operators of greenhouses routinely enrich their atmosphere with CO₂ to promote growth), and increases the root-to-shoot ratio of the plants, tending to move carbon into the soil, from which it is recycled more slowly into the biosphere.

But since the start of the industrial revolution, and especially after 1950, the burning of fossil fuels has released, over a time scale negligible on the geological scale, a quantity of carbon into the atmosphere far beyond the ability of natural processes to recycle. For the last half million years, the CO₂ concentration in the atmosphere has varied between 280 parts per million in interglacials (warm periods) and 180 parts per million during the depths of the ice ages. The pattern is fairly consistent: a rapid rise of CO₂ at the end of an ice age, then a slow decline into the next ice age. The Earth's temperature and CO₂ concentrations are known with reasonable precision over this span thanks to ice cores taken in Greenland and Antarctica, from which temperature and atmospheric composition can be determined from isotope ratios and trapped bubbles of ancient air. While there is a strong correlation between CO₂ concentration and temperature, this doesn't imply causation: the CO₂ may affect the temperature; the temperature may affect the CO₂; they both may be caused by another factor; or the relationship may be even more complicated (which is the way to bet).

But what is indisputable is that, as a result of our burning of all of that ancient carbon, we are now in an unprecedented era or, if you like, a New Age. Atmospheric CO₂ is now around 410 parts per million, a value not seen in at least the last 800,000 years of the ice core record, and it's rising at a rate of 2 parts per million every year, accelerating as global use of fossil fuels increases. This is a situation which is not only unique in the human experience; a rise so large and so rapid may have no precedent since the emergence of complex multicellular life in the Cambrian explosion. What does it all mean? What are the consequences? And what, if anything, should we do about it?

(Up to this point in this essay, I believe everything I've written is non-controversial and based upon easily-verified facts. Now we depart into matters more speculative, where squishier science such as climate models comes into play. I'm well aware that people have strong opinions about these issues, and I'll not only try to be fair, but I'll try to stay away from taking a position. This isn't to avoid controversy, but because I am a complete agnostic on these matters—I don't think we can either measure the raw data or trust our computer models sufficiently to base policy decisions upon them, especially decisions which might affect the lives of billions of people. But I do believe that we ought to consider the armamentarium of possible responses to the changes we have wrought, and will continue to make, in the Earth's ecosystem, and not reject them out of hand because they bear scary monikers like “geoengineering”.)

We have been increasing the fraction of CO₂ in the atmosphere to levels far beyond anything in the experience of our species. What can we expect to happen? We know some things pretty well. Plants will grow more rapidly, and many will produce more roots than shoots, and hence tend to return carbon to the soil (although if the roots are ploughed up, it will go back to the atmosphere). The increase in CO₂ to date will have no physiological effects on humans: people who work in greenhouses enriched up to 1000 parts per million experience no deleterious consequences; that level is more than twice the current fraction in the Earth's atmosphere and, at the current rate of growth, won't be reached for about three centuries ((1000 − 410) / 2 ≈ 295 years). The greatest consequence of a growing CO₂ concentration is on the Earth's energy budget. The Earth receives around 1360 watts per square metre on the side facing the Sun. Some of this is immediately reflected back to space (much more from clouds and ice than from land and sea), and the rest is absorbed, processed through the Earth's weather and biosphere, and ultimately radiated back to space at infrared wavelengths. The books balance: the energy absorbed by the Earth from the Sun and that it radiates away are equal. (Other sources of energy on the Earth, such as geothermal energy from radioactive decay of heavy elements in the Earth's core and energy released by human activity, are negligible at this scale.)

Energy which reaches the Earth's surface tends to be radiated back to space in the infrared, but some of this is absorbed by the atmosphere, in particular by trace gases such as water vapour and CO₂. This raises the temperature of the Earth: the so-called greenhouse effect. The books still balance, but because the temperature of the Earth has risen, it emits more energy. (Due to the Stefan-Boltzmann law, the energy emitted from a black body rises as the fourth power of its temperature, so it doesn't take a large increase in temperature [measured in degrees Kelvin] to radiate away the extra energy.)
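That fourth-power law is easy to play with numerically. Here is a minimal energy-balance sketch (standard textbook physics, not a climate model, and with no greenhouse effect included):

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0            # solar constant, W/m^2
    ALBEDO = 0.3          # fraction reflected straight back to space

    absorbed = S * (1 - ALBEDO) / 4            # averaged over the whole sphere
    T = (absorbed / SIGMA) ** 0.25
    print(f"Equilibrium temperature: {T:.0f} K")   # ~255 K; the greenhouse
                                                   # effect lifts the actual
                                                   # surface to ~288 K

    # The fourth-power law at work: radiating away 1% more energy requires
    # only about a quarter of a percent rise in temperature.
    T1 = ((absorbed * 1.01) / SIGMA) ** 0.25
    print(f"1% more absorbed power -> {100 * (T1 / T - 1):.2f}% warmer")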

So, since CO₂ is a strong absorber in the infrared, we should expect it to be a greenhouse gas which will raise the temperature of the Earth. But wait—it's a lot more complicated. Consider: water vapour is a far greater contributor to the Earth's greenhouse effect than CO₂. As the Earth's temperature rises, there is more evaporation of water from the oceans and lakes and rivers on the continents, which amplifies the greenhouse contribution of the CO₂. But all of that water, released into the atmosphere, forms clouds which increase the albedo (reflectivity) of the Earth, and reduce the amount of solar radiation it absorbs. How does all of this interact? Well, that's where the global climate models get into the act, and everything becomes very fuzzy in a vast panel of twiddle knobs, all of which interact with one another and few of which are based upon unambiguous measurements of the climate system.

Let's assume, arguendo, that the net effect of the increase in atmospheric CO₂ is an increase in the mean temperature of the Earth: the dreaded “global warming”. What shall we do? The usual prescriptions, from the usual globalist suspects, are remarkably similar to their recommendations for everything else which causes their brows to furrow: more taxes, less freedom, slower growth, forfeit of the aspirations of people in developing countries for the lifestyle, which they see on their smartphones, of the people who got to the industrial age a century before them, and technocratic rule of the masses by their unelected self-styled betters in cheap suits from their tawdry cubicle farms of mediocrity. Now there's something to stir the souls of mankind!

But maybe there's an alternative. We've already been doing geoengineering since we began to dig up coal and deploy the steam engine. Maybe we should embrace it, rather than recoil in fear. Suppose we're faced with global warming as a consequence of our inarguable increase in atmospheric CO₂ and we conclude its effects are deleterious? (That conclusion is far from obvious: in recorded human history, the Earth has been both warmer and colder than its present mean temperature. There's an intriguing correlation between warm periods and great civilisations versus cold periods and stagnation and dark ages.) How might we respond?

Atmospheric veil. Volcanic eruptions which inject large quantities of particulates into the stratosphere have been directly shown to cool the Earth. A small fleet of high-altitude airplanes injecting sulphate compounds into the stratosphere would increase the albedo of the Earth and reflect sufficient sunlight to reduce or even cancel or reverse the effects of global warming. The cost of such a programme would be affordable by a benevolent tech billionaire or wannabe Bond benefactor (“Greenfinger”), and could be implemented in a couple of years. The effect of the veil project would be much less than a volcanic eruption, and would be imperceptible other than making sunsets a bit more colourful.

Marine cloud brightening. By injecting finely-dispersed salt water from the ocean into the atmosphere, salt crystals would provide nucleation sites which brighten low clouds above the ocean, increasing the albedo of the Earth. This could be accomplished by a fleet of low-tech ships, and could be applied locally, for example to influence weather.

Carbon sequestration. What about taking the carbon dioxide out of the atmosphere? This sounds like a great idea, and appeals to clueless philanthropists like Bill Gates who are ignorant of thermodynamics, but taking out a trace gas is really difficult and expensive. The best place to capture it is where it's densest, such as the flue of a power plant, where it's around 10%. The technology to do this, “carbon capture and sequestration” (CCS), exists, but has not yet been deployed on any full-scale power plant.

Fertilising the oceans. One of the greatest reservoirs of carbon is the ocean, and once carbon is incorporated into marine organisms, it is removed from the biosphere for tens to hundreds of millions of years. What constrains how fast critters in the ocean can take up carbon dioxide from the atmosphere and turn it into shells and skeletons? It's iron, which is rare in the oceans. A calculation made in the 1990s suggested that if you added one tonne of iron to the ocean, the bloom of organisms it would spawn would suck a hundred thousand tonnes of carbon out of the atmosphere. Now, that's leverage which would impress even the most jaded Wall Street trader. Subsequent experiments found the ratio to be maybe a hundred times less (still on the order of a thousand tonnes of carbon per tonne of iron), but then iron is cheap and it doesn't cost much to dump it from ships.

Great Mambo Chicken. All of the previous interventions are modest, feasible with existing technology, capable of being implemented incrementally while monitoring their effects on the climate, and easily and quickly reversed should they be found to have unintended detrimental consequences. But when thinking about affecting something on the scale of the climate of a planet, there's a tendency to think big, and a number of grand scale schemes have been proposed, including deploying giant sunshades, mirrors, or diffraction gratings at the L1 Lagrangian point between the Earth and the Sun. All of these would directly reduce the solar radiation reaching the Earth, and could be adjusted as required to manage the Earth's mean temperature at any desired level regardless of the composition of its atmosphere. Such mega-engineering projects are considered financially infeasible, but if the cost of space transportation falls dramatically in the future, they might become increasingly attractive. It's worth observing that the cost estimates for such alternatives, albeit in the tens of billions of dollars, are small compared to re-architecting the entire energy infrastructure of every economy in the world to eliminate carbon-based fuels, as proposed by some glib and innumerate environmentalists.

We live in the age of geoengineering, whether we like it or not. Ever since we started to dig up coal and especially since we took over the nitrogen cycle of the Earth, human action has been dominant in the Earth's ecosystem. As we cope with the consequences of that human action, we shouldn't recoil from active interventions which acknowledge that our environment is already human-engineered, and that it is incumbent upon us to preserve and protect it for our descendants. Some environmentalists oppose any form of geoengineering because they feel it is unnatural and provides an alternative to restoring the Earth to an imagined pre-industrial pastoral utopia, or because it may be seized upon as an alternative to their favoured solutions such as vast fields of unsightly bird shredders. But as David Deutsch says in The Beginning of Infinity, “Problems are inevitable” and “problems are soluble.” It is inevitable that the large-scale geoengineering which is the foundation of our developed society—taking over the Earth's natural carbon and nitrogen cycles—will cause problems. But it is not only unrealistic but foolish to imagine these problems can be solved by abandoning these pillars of modern life and returning to a “sustainable” (in other words, medieval) standard of living and population. Instead, we should get to work solving the problems we've created, employing every tool at our disposal, including new sources of energy, better means of transmitting and storing energy, and geoengineering to mitigate the consequences of our existing technologies as we incrementally transition to those of the future.

December 2017

Benford, Gregory. The Berlin Project. New York: Saga Press, 2017. ISBN 978-1-4814-8765-8.
In September 1938, Karl Cohen returned from a postdoctoral position in France to the chemistry department at Columbia University in New York, where he had obtained his Ph.D. two years earlier. Accompanying him was his new wife, Marthe, daughter of a senior officer in the French army. Cohen went to work for Harold Urey, professor of chemistry at Columbia and winner of the 1934 Nobel Prize in chemistry for the discovery of deuterium. At the start of 1939, the fields of chemistry and nuclear physics were stunned by the discovery of nuclear fission: researchers at the Kaiser Wilhelm Institute in Berlin had discovered that the nucleus of uranium-235 could be split into two lighter nuclei when it absorbed a neutron, releasing a large amount of energy and additional neutrons which might be able to fission other uranium nuclei, creating a “chain reaction” which might permit tapping the enormous binding energy of the nucleus to produce abundant power—or a bomb.

The discovery seemed to open a path to nuclear power, but it was clear from the outset that the practical challenges were going to be daunting. Natural uranium is composed of two principal isotopes, U-238 and U-235. The heavier U-238 isotope makes up 99.27% of natural uranium, while U-235 accounts for only 0.72%. Only U-235 can readily be fissioned, so in order to build a bomb, it would be necessary to separate the two isotopes and isolate near-pure U-235. Isotopes differ only in the number of neutrons in their nuclei, but have the same number of protons and electrons. Since chemistry is exclusively determined by the electron structure of an atom, no chemical process can separate two isotopes: it must be done physically, based upon their mass difference. And since U-235 and U-238 differ in mass only by around 1.25%, any process, however clever, would necessarily be inefficient and expensive. It was clear that nuclear energy or weapons would require an industrial-scale effort, not something which could be done in a university laboratory.

Several candidate processes were suggested: electromagnetic separation, thermal or gaseous diffusion, and centrifuges. Harold Urey believed a cascade of high-speed centrifuges, fed with uranium hexafluoride gas, was the best approach, and he was the world's foremost expert on gas centrifuges. The nascent uranium project, eventually to become the Manhattan Project, was inclined toward the electromagnetic and gaseous diffusion processes, since they were believed to be well-understood and only required a vast scaling up as opposed to demonstration of a novel and untested technology.
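To see why any of these routes demands an industrial-scale cascade, consider gaseous diffusion, where the ideal per-stage separation factor is the square root of the mass ratio of the UF₆ molecules. A back-of-envelope sketch (textbook cascade theory, nothing from the novel):

    import math

    M_LIGHT = 349.03     # molar mass of 235-UF6, g/mol
    M_HEAVY = 352.04     # molar mass of 238-UF6, g/mol
    alpha = math.sqrt(M_HEAVY / M_LIGHT)          # ~1.0043 per stage

    def stages(x_feed: float, x_product: float) -> float:
        """Ideal number of stages to enrich from x_feed to x_product."""
        ratio = lambda x: x / (1 - x)             # abundance ratio
        return math.log(ratio(x_product) / ratio(x_feed)) / math.log(alpha)

    print(f"separation factor per stage: {alpha:.4f}")
    print(f"stages for 0.72% -> 90% U-235: {stages(0.0072, 0.90):,.0f}")  # ~1,700

A centrifuge's separation factor depends on the absolute mass difference between the molecules rather than the square root of their ratio, so tens of stages can do what diffusion needs well over a thousand for, which is one reason Urey backed the centrifuge.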

Up to this point, everything in this alternative history novel is completely factual, and all of the characters existed in the real world (Karl Cohen is the author's father-in-law). Historically, Urey was unable to raise the funds to demonstrate the centrifuge technology, and the Manhattan Project proceeded with the electromagnetic and gaseous diffusion routes to separate U-235 while, in parallel, pursuing plutonium production from natural uranium in graphite-moderated reactors. Benford adheres strictly to the rules of the alternative history game in that only one thing is changed, and everything else follows as consequences of that change.

Here, Karl Cohen contacts a prominent Manhattan rabbi known to his mother who, seeing a way to combine protecting Jews in Europe from Hitler, advancing the Zionist cause, and making money from patents on a strategic technology, assembles a syndicate of wealthy and like-minded investors, raising a total of a hundred thousand dollars (US$ 1.8 million in today's funny money) to fund Urey's prototype centrifuge project in return for rights to patents on the technology. Urey succeeds, and by mid-1941 the centrifuge has been demonstrated and contacts made with Union Carbide to mass-produce and operate a centrifuge separation plant. Then, in early December of that year, everything changed, and by early 1942 the Manhattan Project had bought out the investors at a handsome profit and put the centrifuge separation project in high gear. As Urey's lead on the centrifuge project, Karl Cohen finds himself in the midst of the rapidly-developing bomb project, meeting and working with all of the principals.

Thus begins the story of a very different Manhattan Project and World War II. With the centrifuge project starting in earnest shortly after Pearl Harbor, by June 6th, 1944 the first uranium bomb is ready, and the Allies decide to use it on Berlin as a decapitation strike simultaneous with the D-Day landings in Normandy. The war takes a very different course, both in Europe and the Pacific, and a new Nazi terror weapon, first hinted at in a science fiction story, complicates the conflict. A different world is the outcome, seen from a retrospective at the end.

Karl Cohen's central position in the Manhattan Project introduces us to a panoply of key players including Leslie Groves, J. Robert Oppenheimer, Edward Teller, Leo Szilard, Freeman Dyson, John W. Campbell, Jr., and Samuel Goudsmit. He participates in a secret mission to Switzerland to assess German progress toward a bomb in the company of professional baseball catcher turned spy Moe Berg, who is charged with assassinating Heisenberg if Cohen judges he knows too much.

This is a masterpiece of alternative history, based firmly in fact, and entirely plausible. The description of the postwar consequences is of a world in which I would prefer to have been born. I won't discuss the details to avoid spoiling your discovery of how they all work out in the hands of a master storyteller who really knows his stuff (Gregory Benford is a Professor Emeritus of physics at the University of California, Irvine).

Cox, Joseph. The City on the Heights. Modiin, Israel: Big Picture Books, 2017. ISBN 978-0-9764659-6-6.
For more than two millennia the near east (which is sloppily called the “middle east” by ignorant pundits who can't distinguish north Africa from southwest Asia) has exported far more trouble than it has imported from elsewhere. You need only consult the chronicles of the Greeks, the Roman Empire, the histories of conflicts among them and the Persians, the expansion of Islam into the region, internecine conflicts among Islamic sects, the Crusades, Israeli-Arab wars, all the way to recent follies of “nation building” to appreciate that this is a perennial trouble spot.

People, and peoples, hate one another there. It seems like whenever you juxtapose two religions (even sects of one), ethnicities, or self-identifications in the region, before long sanguinary conflict erupts, with each incident only triggering even greater reprisals and escalation. In the words of Lenin, “What is to be done?”

Now, my inclination would be simply to erect a strong perimeter around the region, let anybody who wished enter, but nobody leave without extreme scrutiny to ensure they were not a risk and follow-up as long as they remained as guests in the civilised regions of the world. This is how living organisms deal with threats to their metabolism: encyst upon it!

In this novel, the author explores another, more hopeful and optimistic, yet perhaps less realistic alternative. When your computer ends up in a hopeless dead-end of resource exhaustion, flailing software, and errors in implementation, you reboot it, or turn it off and on again. This clears out the cobwebs and provides a fresh start. It's difficult to do this in a human community, especially one where grievances are remembered not just over generations but millennia.

Here, archetypal NGO do-gooder Steven Gold has another idea. In the midst of the European religious wars, Amsterdam grew and prospered by being a place that people of any faith could come together and do business. Notwithstanding having a nominal established religion, people of all confessions were welcome as long as they participated in the peaceful commerce and exchange which made the city prosper.

Could this work in the near east? Steven Gold thought it was worth a try, and worth betting his career upon. But where should such a model city be founded? The region was a nightmarish ever-shifting fractal landscape of warring communities with a sole exception: the state of Israel. Why on Earth would Israel consider ceding some of its territory (albeit mostly outside its security perimeter) for such an idealistic project which might prove to be a dagger aimed at its own heart? Well, Steven Gold is very persuasive, and talented at recruiting allies able to pitch the project in terms those needed to support it understand.

And so, a sanctuary city on the Israel-Syria border is born. It is anything but a refugee camp. Residents are expected to become productive members of a multicultural, multi-ethnic community which will prosper along the lines of renaissance Amsterdam or, more recently, Hong Kong and Singapore. Those who wish to move to the City are carefully vetted, but they include a wide variety of people including a former commander of the Islamic State, a self-trained engineer and problem solver who is an escapee from a forced marriage, religious leaders from a variety of faiths, and supporters including a billionaire who made her fortune in Internet payment systems.

And then, since it's the near east, it all blows up. First there are assassinations, then bombings, then a sorting out into ethnic and sectarian districts within the city, and then reprisals. It almost seems like an evil genius is manipulating the communities who came there to live in peace and prosper into conflict among one another. That this might be possible never enters the mind of Steven Gold, who probably still believes in the United Nations and votes for Democrats, notwithstanding their resolute opposition to the only consensual democracy in the region.

Can an act of terrorism redeem a community? Miryam thinks so, and acts accordingly. As the consequences play out, and the money supporting the city begins to run out, a hierarchical system of courts which mix the various contending groups is established, along with an economic system based upon electronic payments which provides a seamless transition between subsidies for the poor (but always based upon earned income: never a pure dole) and taxation for the more prosperous.

A retrospective provides a look at how it all might work. I remain dubious at the prospect. There are many existing communities in the near east which are largely homogeneous in terms of religion and ethnicity (as seen by outsiders) which might be prosperous if they didn't occupy themselves with bombing and killing one another by any means available, and yet the latter is what they choose to do. Might it be possible, by establishing sanctuaries, to select for those willing to set ancient enmities aside? Perhaps, but in this novel, grounded in reality, that didn't happen.

The economic system is intriguing but, to me, ultimately unpersuasive. I understand how the income subsidy encourages low-income earners to stay within the reported income economy, but the moment you cross the tax threshold, you have a powerful incentive to take things off the books and, absent some terribly coercive top-down means to force all transactions through the electronic currency system, free (non-taxed) exchange will find a way.
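Here is a tiny sketch of the sort of earned-income taper the novel describes; the threshold and rates are my own inventions for illustration, not numbers from the book. It also makes the incentive problem visible: the marginal reward for each reported unit of income drops abruptly at the threshold.

    THRESHOLD = 1000.0      # hypothetical monthly income where tax begins
    SUBSIDY = 0.25          # top-up rate on earned income below the threshold
    TAX = 0.25              # tax rate on income above the threshold

    def net_income(earned: float) -> float:
        """Subsidy below the threshold, tax above; zero earnings pay nothing."""
        if earned <= THRESHOLD:
            return earned * (1 + SUBSIDY)       # never a pure dole
        return THRESHOLD * (1 + SUBSIDY) + (earned - THRESHOLD) * (1 - TAX)

    for earned in (0, 400, 1000, 1600):
        print(f"earned {earned:6.0f} -> net {net_income(earned):8.2f}")

Below the threshold each reported unit of income is worth 1.25 units; above it, only 0.75, which is exactly the point at which off-the-books exchange starts to pay.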

These quibbles aside, this is a refreshing and hopeful look at an alternative to eternal conflict. In the near east, “the facts on the ground” are everything, and the author, who lives just 128 km from the centre of civil war in Syria, is far more acquainted with the reality than somebody reading his book far away. I hope his vision is viable. I hope somebody tries it. I hope it works.

Serling, Robert J. The Electra Story. New York: Bantam Books, [1963] 1991. ISBN 978-0-553-28845-2.
As the jet age dawned for commercial air transport, the major U.S. aircraft manufacturers found themselves playing catch-up to the British, who had put the first pure jet airliner, the de Havilland Comet, into service in 1952, followed shortly thereafter by the turboprop Vickers Viscount in 1953. The Comet's reputation was seriously damaged by a series of crashes caused by metal fatigue provoked by its pressurisation system, and while this was remedied in subsequent models, the opportunity to scoop the Americans and set the standard for passenger jet transportation was lost. The Viscount was very successful with a total of 445 built. In fact, demand so surpassed its manufacturer's production rate that delivery time stretched out, causing airlines to seek alternatives.

All of this created a golden opportunity for the U.S. airframers. Boeing and Douglas opted for four-engine turbojet designs, the Boeing 707 and Douglas DC-8, which were superficially similar, entering service in 1958 and 1959 respectively. Lockheed opted for a different approach. Based upon its earlier experience designing the C-130 Hercules military transport for the U.S. Air Force, Lockheed decided to build a turboprop airliner instead of a pure jet design like the 707 or DC-8. There were a number of reasons motivating this choice. First of all, Lockheed could use essentially the same engines in the airliner as in the C-130, eliminating the risks of mating a new engine to a new airframe which have caused major troubles throughout the history of aviation. Second, a turboprop, although not as fast as a pure jet, is still much faster than a piston-engined plane and able to fly above most of the weather. Turboprops are far more fuel efficient than the turbojet engines used by Boeing and Douglas, and can operate from short runways and under high altitude and hot weather conditions which ground the pure jets. All of these properties made a turboprop airliner ideal for short- and medium-range operations where speed en route was less important than the ability to operate from smaller airports. (Indeed, more than half a century later, turboprops account for a substantial portion of the regional air transport market for precisely these reasons.)

The result was the Lockheed L-188 Electra, a four-engine airliner powered by Allison 501-D13 turboprop engines, able to carry 98 passengers over a range of 3450 to 4455 km (depending on payload mass) at a cruise speed of 600 km/h. (By comparison, the Boeing 707 carried 174 passengers in a single class configuration over a range of 6700 km at a cruise speed of 977 km/h.)

A number of U.S. airlines saw the Electra as an attractive addition to their fleet, with major orders from American Airlines, Eastern Air Lines, Braniff Airways, National Airlines, and Pacific Southwest Airlines. A number of overseas airlines placed orders for the plane. The entry into service went smoothly, and both crews and passengers were satisfied with the high speed, quiet, low-vibration, and reliable operation of the turboprop airliner.

Everything changed on the night of September 29th, 1959. Braniff Airways flight 542, an Electra bound for Dallas and then on to Washington, D.C. and New York, disintegrated in the skies above Buffalo, Texas. There were no survivors. The accident investigation quickly determined that the left wing of the airplane had separated near the wing root. But how, why? The Electra had been subjected to one of the most rigorous flight test and certification regimes of its era, and no problems had been discovered. The flight was through clear skies with no violent weather. Clearly, something terrible went wrong, but there was little evidence to suggest a probable cause. One always suspects a bomb (although less in those days before millions of medieval savages were admitted to civilised countries as “refugees”), but that was quickly ruled out due to the absence of explosive residues on the wreckage.

This was before the era of flight data recorders and cockpit voice recorders, so all the investigators had to go on was the wreckage, and intense scrutiny of it failed to yield an obvious clue. Often in engineering, there are mysteries which simply require more data, and meanwhile the Electras continued to fly. Most people deemed it “just one of those things”—airliner crashes were not infrequent in the era.

Then, on March 17th, 1960, in clear skies above Tell City, Indiana, Northwest Airlines flight 710 fell out of the sky, making a crater in a soybean field in which almost nothing was recognisable. Investigators quickly determined that the right wing had separated in flight, dooming the aircraft.

Wings are not supposed to fall off airliners. Once is chance, but twice indicates a serious design or operational problem. This set in motion one of the first large-scale aircraft accident investigations of the modern era. Not only did federal investigators, research laboratories, and Lockheed invest massive resources; even competitors Boeing and Douglas contributed expertise and diagnostic hardware, because they realised that the public perception of the safety of passenger jet aviation was at stake.

After an extensive and protracted investigation, it was concluded that the Electra was vulnerable to a “whirl mode” failure, in which oscillations arising from a weakness in the mountings of the outboard engines could resonate with a bending mode of the wing and lead to failure of its attachment to the fuselage. This conclusion was highly controversial: Lockheed pointed out that no such problem had been experienced in the C-130, while Allison, the engine manufacturer, cited the same experience to argue that Lockheed's wing design was deficient. Suits and counter-suits erupted between the companies, amid an avalanche of litigation against Lockheed, Allison, and the airlines by families of those killed in the accidents.
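To get a feel for why a modest, sustained oscillating force can destroy a structure that easily withstands far larger static loads, here is a minimal sketch in Python of the steady-state response of a lightly damped oscillator, standing in for a wing bending mode, driven by a periodic force, standing in for engine whirl. The natural frequency and damping ratio are illustrative assumptions, not figures from the actual investigation:

    import math

    f_n = 3.0     # assumed wing-mode natural frequency, Hz (illustrative only)
    zeta = 0.02   # assumed damping ratio of a lightly damped structure

    def amplification(f_drive):
        """Steady-state amplification, relative to static deflection,
        of a damped oscillator driven sinusoidally at f_drive."""
        r = f_drive / f_n
        return 1.0 / math.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

    for f in (1.0, 2.0, 2.9, 3.0, 4.0):
        print(f"drive {f:3.1f} Hz -> {amplification(f):5.1f}x static deflection")

    # At resonance the response approaches 1/(2*zeta) = 25 times the static
    # deflection; a few tenths of a hertz away it is a fraction of that.

Near resonance the response is limited only by damping, which is why the same force a structure shrugs off at other frequencies can drive wing loads far beyond anything seen in normal flight.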

The engine mountings and wings were strengthened, and the modified aircraft were put through a gruelling series of tests intended to induce the whirl mode failure. They passed without incident, and the Electra was returned to service without any placard limitations on speed. No further incidents of this kind occurred, although a number of Electras were lost in accidents which had nothing to do with the design, but were due to causes all too common in commercial aviation at the time.

Even before the Tell City crash, Lockheed had decided to close down the Electra production line. Passenger and airline preference had shifted in favour of pure jet airliners (in an age of cheap oil, the substantial fuel economy of turboprops counted for less than the speed of pure jets and how cool it was to fly without propellers). A total of 170 Electras were sold. Remarkably, almost a dozen remain in service today, mostly as firefighting water bombers. A derivative, the P-3 Orion maritime patrol aircraft, also remains in service, with a total of 757 produced.

This is an excellent contemporary view of the history of a controversial airliner and of one of the first in-depth investigations of accidents under ambiguous circumstances and intense media and political pressure. The author, an aviation journalist, is the brother of Rod Serling.

The paperback is currently out of print, but used copies are available, albeit expensive. The Kindle edition, which is free for Kindle Unlimited subscribers, was obviously scanned from a print edition, and exhibits the errors you expect in scanned text insufficiently scrutinised by a copy editor: for example, “modem” where “modern” appeared in print.

 Permalink

Mills, Kyle. Order to Kill. New York: Pocket Books, 2016. ISBN 978-1-4767-8349-9.
This is the second novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. In the first novel by Mills, The Survivor (July 2017), he picked up the story of the last Vince Flynn installment, The Last Man (February 2013), right where it left off and seemed to effortlessly assume the voice of Vince Flynn and his sense for the character of Mitch Rapp. This was a most promising beginning, which augured well for further Mitch Rapp adventures.

In this, the fifteenth novel in the Mitch Rapp series (Flynn's first novel, Term Limits [November 2009], is set in the same world and shares characters with the Mitch Rapp series, but Rapp does not appear in it, so it isn't considered a Rapp novel), Mills steps out of the shadow of Vince Flynn's legacy and takes Rapp and the story line into new territory. The result is…mixed.

In keeping with current events and the adversary du jour, the troublemakers this time are the Russkies, with President Maxim Vladimirovich Krupin at the top of the tottering pyramid. And tottering it is, as the fall in oil prices has undermined Russia's resource-based economy and destabilised the enterprises run by the oligarchs who keep him in power. He may be on top, but he is as much a tool of those in the shadows as master of his nation.

But perhaps there is a grand coup or, one might even say in the new, nominally pious Russia, a Hail Mary pass which might simultaneously rescue the Russian economy and restore Russia to its rightful place on the world stage.

The problem is those pesky Saudis. Sitting atop a large fraction of the Earth's oil, they can turn the valve on and off and set the price per barrel wherever they wish; recently, they have chosen to push the price down, both to appease their customers in Europe and Asia and to drive the competition from hydraulic fracturing (which has a higher cost of production than simply pumping oil from beneath the desert) out of the market. Suppose the Saudis could be taken out? But Russia could never do it directly. There would need to be a cut-out, and perfect deniability.

Well, the Islamic State (IS, or ISIS, or ISIL, or whatever they're calling it this week in the Court Language of the Legacy Empire) is sworn to extend its Caliphate to the holiest places of Islam and depose the illegitimate usurpers who rule them, so what better puppet to take down the Saudi petro-hegemony? Mitch Rapp finds himself in the middle of this conspiracy, opting to endure grave physical injury to insinuate himself into its midst.

But it is in the very nature of the plot that everything falls apart, over one of those details which Vince Flynn and his brain trust would never have flubbed. This isn't a quibble, but a torpedo below the water line. We must, perforce, step behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
You clicked the Spoiler link, right? Now I'm going to spoil the whole thing, so if you clicked it by accident, please close this box and imagine you never saw what follows.

The central plot of this novel is obtaining plutonium from Pakistani nuclear weapons and delivering it to ISIS, not to build a fission weapon but rather a “dirty bomb” which uses conventional explosives to disperse radioactive material to contaminate an area and deny it to the enemy.

But a terrorist who had done no more research than reading Wikipedia would know that plutonium is utterly useless as a radiological contaminant for a dirty bomb. The isotope used in nuclear weapons, plutonium-239, has a half-life of around 24,000 years; since radioactivity scales inversely with half-life, its specific activity is so low that dispersing the amount used in the pits of several bombs would only marginally increase the background radiation in the oil fields. In other words, it would have no effect whatsoever.
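The arithmetic is easy to check: the specific activity of a pure isotope is A = ln 2 · N_A / (T_half · M), falling with both half-life and molar mass. Here is a back-of-the-envelope sketch in Python comparing weapons plutonium with caesium-137, a fission product abundant in spent reactor fuel; the half-lives and masses are standard reference values:

    import math

    N_A = 6.022e23               # Avogadro's number, atoms per mole
    SECONDS_PER_YEAR = 3.156e7

    def specific_activity(half_life_years, molar_mass_g):
        """Activity of a pure isotope, in becquerels per gram."""
        t_half_s = half_life_years * SECONDS_PER_YEAR
        return math.log(2) * N_A / (t_half_s * molar_mass_g)

    pu239 = specific_activity(24_100, 239)   # weapons-grade plutonium
    cs137 = specific_activity(30.1, 137)     # spent-fuel fission product

    print(f"Pu-239: {pu239:.1e} Bq/g")       # ~2.3e9  Bq/g
    print(f"Cs-137: {cs137:.1e} Bq/g")       # ~3.2e12 Bq/g
    print(f"ratio:  {cs137 / pu239:.0f}x")   # roughly 1400 times hotter per gram

Gram for gram, the fission product is more than a thousand times as radioactive, which is the whole point of a radiological weapon.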

If you want to make a dirty bomb, the easiest way is to use spent fuel rods from civil nuclear power stations. These are far easier to obtain (although difficult to handle safely), and are rich in highly radioactive nuclides which can effectively contaminate an area into which they are dispersed. But this blows away the entire plot and most of the novel.

Vince Flynn never made, and would never have made, such a blunder. I urge Kyle Mills to reconnect with Mr Flynn's brain trust and run his plots past them, or to develop an equivalent deep well of expertise, to make sure things fundamentally make sense.

Spoilers end here.  

All right, we're back from the spoilers. Whether you've read them or not, this is a well-crafted thriller which works as long as you don't trip over the central absurdity of its plot. Rapp not only suffers grievous injury but encounters an adversary who is not just his equal but his better. He confronts his age, and its limitations. It happens to us all.

The gaping plot hole could have been easily fixed—not in the final manuscript but in the outline. Let's hope that future Mitch Rapp adventures will be subjected to the editorial scrutiny which makes them not just page-turners but books where, as you're turning the pages, you don't laugh out loud at easily avoided blunders.

 Permalink

Schantz, Hans G. The Hidden Truth. Huntsville, AL: ÆtherCzar, 2016. ISBN 978-1-5327-1293-7.
This is a masterpiece of alternative history techno-thriller science fiction. It is rich in detail, full of interesting characters who interact and develop as the story unfolds, sound in the technical details which intersect with our world, insightful about science, technology, economics, government, and the agenda of the “progressive” movement, and plausible in its presentation of the vast, ruthless, and shadowy conspiracy which lies beneath the surface of its world. And, above all, it is charming—these are characters you'd like to meet, even some of the villains, because you want to understand what motivates them.

The protagonist and narrator is a high school junior (a senior later in the tale), son of an electrical engineer who owns his own electrical contracting business and who, against the wishes of her parents, married a chemist, the daughter of one of the wealthiest and most influential families in their region of Tennessee. (We never learn the narrator's name until the last page of the novel, so I suppose it would be a spoiler if I mentioned it here, so I won't, even if it makes this review somewhat awkward.) Our young narrator wants to become a scientist, and his father not only encourages him in his pursuit, but guides him toward learning on his own by reading the original works of great scientists who actually made fundamental discoveries rather than “suffering through the cleaned-up and dumbed-down version you get from your teachers and textbooks.” His world is not ours: Al Gore, who won the 2000 U.S. presidential election, was killed in the 2001-09-11 attacks on the White House and Capitol, and President Lieberman pushed through the “Preserving our Planet's Future Act”, popularly known as the “Gore Tax”, in his memory; its tax on carbon emissions is predictably shackling the economy.

Pursuing his study of electromagnetism from original sources, he picks up at the local library a copy of a book published in 1909. The library was originally the collection of a respected institute of technology until it was destroyed by innovative educationalists and their pointy-headed progressive ideas. But the books remained, and in one of them he reads an enigmatic passage about Oliver Heaviside having developed a theory of electromagnetic waves bouncing off one another in free space, to be published in a forthcoming book. This didn't make any sense: electromagnetic waves add linearly, and while they can be reflected and refracted by various media, in free space they superpose without interaction. He asks his father about the puzzling passage, and they look up the scanned text on-line and find the passage he read is missing. Was his memory playing tricks?
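For readers wondering why this struck him as nonsense: Maxwell's equations in vacuum are linear, so each field component obeys a linear wave equation and any weighted sum of solutions is itself a solution. In the notation of the texts he was reading, with c the speed of light:

\[
  \left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)\mathbf{E}_{1} = 0
  \quad\text{and}\quad
  \left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)\mathbf{E}_{2} = 0
  \;\Longrightarrow\;
  \left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)\!\left(a\,\mathbf{E}_{1} + b\,\mathbf{E}_{2}\right) = 0 .
\]

Crossing waves therefore pass through one another unchanged; “bouncing” would require a nonlinear term which, classically, free space does not have.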

So, back to the library, where, indeed, the copy on the shelf contains the mention of bouncing waves. And yet the publication date and edition number of the print and on-line versions were identical. As Isaac Asimov observed, many great discoveries aren't heralded by an exclamation of “Eureka!” but rather of “That's odd.” This was odd….

Soon other discrepancies appear, and along with his best friend, computer and Internet wizard Amit Patel, he embarks on a project to scan original print editions of foundational works on electromagnetism from the library and compare them with on-line versions of these public domain works. A pattern emerges: mentions of Heaviside's bouncing waves seem to have been scrubbed from the readily available editions of these books, both print and on-line, and survive only in dusty volumes in forgotten provincial libraries.

As their investigations continue, it becomes increasingly clear they have stirred up a hornets' nest. Fake feds start to follow their trail, with bogus stories of “cyber-terrorism”. And, ominously, they learn that those who dig too deeply into these curiosities have a way of meeting tragic ends. Indeed, many of the early researchers into electromagnetism died young: Maxwell at age 48, Hertz at 36, FitzGerald at 49. Was there a vast conspiracy suppressing some knowledge about electromagnetism? It sure looked like it, and Amit took to calling its members “EVIL”: the Electromagnetic Villains International League. But what was the hidden truth, and why was it so important that they were willing to kill to keep it hidden?

The game gets deadly, and deadly serious. The narrator and Amit find some powerful and some ambiguous allies, learn how to deal with the cops and other authority figures, and imbibe a great deal of wisdom about individuality, initiative, and liberty. There's even an attempt to recruit our hero to the dark side of collectivism, where its ultimate anti-human agenda is laid bare. Throughout, there are delightful tips of the hat to libertarian ideas, thinkers, and authors, including some as obscure as a reference to the Books on Benefit bookshop in Providence, Rhode Island.

The author is an inventor, entrepreneur, and scientist. He writes, “I appreciate fiction that shows how ordinary people with extraordinary courage and determination can accomplish remarkable achievements.” Mission accomplished. As the book ends, the central mystery remains unresolved, and the narrator vows to get to the bottom of it and avenge those destroyed by the keepers of the secret. In a remarkable afterword and “About the Author” section, there is a wonderful reading list for those interested in the technical topics discussed in the book and in fiction with similarly intriguing and inspiring themes. When it comes to the technical content, the author knows whereof he writes: he has literally written the book on the design of ultrawideband antennas and is co-inventor of Near Field Electromagnetic Ranging (NFER), which you can think of as “indoor GPS”.

For a self-published work, there are only a few copy-editing errors (“discrete” where “discreet” was intended, and “Capital” for “Capitol”). The Kindle edition is free for Kindle Unlimited subscribers. A sequel is now available, A Rambling Wreck, which takes our hero and the story to—where else?—Georgia Tech. I shall certainly read that book. Meanwhile, go read the present volume; if your tastes are anything like mine, you're going to love it.

 Permalink