2018  

January 2018

Bracken, Matthew. The Red Cliffs of Zerhoun. Orange Park, FL: Steelcutter Publishing, 2017. ISBN 978-0-9728310-5-5.
We first met Dan Kilmer in Castigo Cay (February 2014), where the retired U.S. Marine sniper (I tread cautiously on the terminology: some members of the Corps say there's no such thing as a “former Marine” and, perhaps, neither is there a “former sniper”) had to rescue his girlfriend from villains in the Caribbean. The novel is set in a world where the U.S. is deteriorating into chaos and the malevolent forces suppressed by civilisation have begun to assert their power on the high seas.

As this novel begins, things have progressed, and not for the better. The United States has fractured into warring provinces as described in the author's “Enemies” trilogy. Japan and China are in wreckage after the global economic crash. Much of Europe is embroiled in civil wars between the indigenous population and inbred medieval barbarian invaders imported by well-meaning politicians or allowed to land upon their shores or surge across their borders by the millions. The reaction to this varies widely depending upon the culture and history of the countries invaded. Only those wise enough to have said “no” in time have been spared.

But even they are not immune to predation. The plague of Islamic pirates on the high seas and slave raiders plundering the coasts of Europe was brought to an end only by the navies of Christendom putting down the corsairs' primitive fleets. But with Europe having collapsed economically, drawn down its defence capability to almost nothing, and daring not even to speak the word “Christendom” for fear of offending its savage invaders, the pirates are again in the ascendant, this time flying the black flag of jihad instead of the Jolly Roger.

When seventy young girls are kidnapped into sex slavery from a girls' school in Ireland by Islamic pirates and offered for auction to the highest bidder among their co-religionists, a group of the kind of hard men who say things like “This will not stand”, including a retired British SAS colonel and a former Provisional IRA combatant (are either ever “retired” or “former”?), join forces, not to deploy a military-grade fully-automatic hashtag, but to get the girls back by whatever means are required.

Due to exigent circumstances, Dan Kilmer's 18-metre steel-hulled schooner, moored in a small port in western Ireland to peddle diesel fuel he's smuggled in from a cache in Greenland, becomes one of those means. Kilmer thinks the rescue plan folly, but agrees to transport the assault team to their rendezvous point in return for payment for him and his crew in gold.

It's said that no battle plan survives contact with the enemy. In this case, the plan doesn't even get close to that point. Improvisation, leaders emerging in the midst of crisis, and people rising to the occasion dominate the story. There are heroes, but not superheroes—instead people who do what is required in the circumstances in which they find themselves. It is an inspiring story.

This book has an average review rating of 4.9 on Amazon, but you're probably hearing of it here for the first time. Why? Because it presents an accurate view of the centuries-old history of Islamic slave raiding and trading, and the reality that the only way this predation upon civilisation can be suppressed is by civilised people putting it down with violence commensurate to its assault upon what we hold most precious.

The author's command of weapons and tactics is encyclopedic, and the novel is consequently not just thrilling but authentic. And, dare I say, inspiring.

The Kindle edition is free for Kindle Unlimited subscribers.


Hamilton, Eric M. An Inconvenient Presidency. Seattle: CreateSpace, 2016. ISBN 978-1-5368-7363-4.
This novella (89 pages in the Kindle edition) is a delightful romp into alternative history and the multiverse. Al Gore was elected president in 2000 and immediately informed of a capability so secret he had never been told of it, even as Vice President. He was handed a gadget, the METTA, which allowed a limited kind of time travel. Should he, or the country, find itself in a catastrophic and seemingly unrecoverable situation, he could press its red button and be mentally transported back in time to a reset point, set just after his election, to give it another try. But, after the reset, he would retain all of his knowledge of the events which preceded it.

Haven't you imagined going back in time and explaining to your younger self all of the things you've learned by trial and error and attendant bruises throughout your life? The shadowy Government Apperception Liberation Authority—GALA—has endowed presidents with this capability. This seems so bizarre that the new President Gore pays little attention to it. But when an unanticipated and almost unimaginable event occurs, he presses the button.

~KRRZKT~

Well, we won't let that happen! And it doesn't, but something else does: reset. This job isn't as easy as it appeared: reset, reset, reset.

We've often joked about the “Gore Effect”: the correlation between unseasonably cold weather and Al Gore's appearance to promote his nostrums of “anthropogenic global warming”. Here, Al Gore begins to think there is a greater Gore Effect: that regardless of what he does and what he learns from previous experience and a myriad of disasters, something always goes wrong with catastrophic consequences.

Can he escape this loop? Who are the mysterious people behind GALA? He is determined to find out, and he has plenty of opportunities to try: ~KRRZKT~.

You will be amazed at how the author brings this tale to a conclusion. Throughout, everything was not as it seemed, but in the last few pages, well golly! Unusually for a self-published work, there are no typographical or grammatical errors which my compulsive copy-editor hindbrain detected. The author not only spins a fine yarn, but also respects his audience enough to perfect his work before presenting it to them: this is rare, and I respect and applaud it. Despite Al Gore and other U.S. political figures appearing in the story, there is no particular political tilt to the narrative: the goal is fun, and it is superbly achieved.

The Kindle edition is free for Kindle Unlimited subscribers.


Weir, Andy. Artemis. New York: Crown, 2017. ISBN 978-0-553-44812-2.
Seldom has a first-time novelist burst onto the scene so spectacularly as Andy Weir with The Martian (November 2014). Originally written for his own amusement and circulated chapter by chapter to a small but enthusiastic group of fans who provided feedback and suggestions as the story developed, the completed novel was posted as a free download on his Web site. Some people who had heard of it by word of mouth but lacked the technical savvy to download documents and transfer them to e-readers inquired whether he could make a Kindle version available. Since you can't give away Kindle books, he published it at the minimum possible price. Before long, the book was rising into the Amazon bestseller list in science fiction, and he was contacted by a major publisher about doing a print edition. Such publishers only accept manuscripts through agents, and he didn't have one (nor do agents usually work with first-time authors, which creates a chicken-and-egg problem for the legacy publishing industry), so the publisher put him in touch with a major agent and recommended the manuscript. This led to a 2014 hardcover edition and then a Hollywood movie in 2015 which was nominated for 7 Oscars and won two Golden Globes, including Best Motion Picture and Best Performance by an Actor in its category.

The question fans immediately asked themselves was, “Is this a one shot, or can he repeat?” Well, I think we have the answer: with Artemis, Andy Weir has delivered another story of grand master calibre and shown himself on track to join the ranks of the legends of the genre.

In the latter part of the 21st century commerce is expanding into space, and the Moon is home to Artemis, a small settlement of around 2000 permanent residents, situated in the southern part of the Sea of Tranquility, around 40 km from the Apollo 11 landing site. A substantial part of the economy of Artemis is based upon wealthy tourists who take the train from Artemis to the Apollo 11 Visitor Center (where they can look, but not touch or interfere with the historical relics) and enjoy the luxuries and recreations which cater to them back in the pleasure domes.

Artemis is the creation of the Kenya Space Corporation (KSC), which officially designates it “Kenya Offshore Platform Artemis” and operates under international maritime law. As space commerce burgeoned in the 21st century, Kenya's visionary finance minister, Fidelis Ngugi, leveraged Kenya's equatorial latitude (it's little appreciated that once reliable fully-reusable launch vehicles are developed, there's no need to launch over water) and hands-off regulatory regime into a golden opportunity for space entrepreneurs to escape the nanny state regulation and crushing tax burden of “developed” countries. With tax breaks and an African approach to regulation, entrepreneurs and money flowed in from around the world, making Kenya into a space superpower and enriching its economy and opportunities for its people. Twenty years later Ngugi was Administrator of Artemis; she was, in effect, ruler of the Moon.

While Artemis offered a five-star experience for the tourists who kept its economy humming, those who supported the settlement and its industries lived in something more like a frontier boom town of the 19th century. Like many such settlements, Artemis attracted opportunity-seekers and those looking to put their pasts behind them from many countries and cultures. Those already established tend to attract others like themselves, and clannish communities developed around occupations: most people in Life Support were Vietnamese, while metal-working was predominantly Hungarian. For whatever reason, welding was dominated by Saudis, including Ammar Bashara, who emigrated to Artemis with his six-year-old daughter Jasmine. Twenty years later, Ammar runs a prosperous welding business and Jasmine (“Jazz”) is, shall we say, more irregularly employed.

Artemis is an “energy intense” Moon settlement of the kind described in Steven D. Howe's Honor Bound Honor Born (May 2014). The community is powered by twin 27 megawatt nuclear reactors located behind a berm one kilometre from the main settlement. The reactors not only provide constant electricity and heat through the two-week nights and days of the Moon, they power a smelter which processes the lunar regolith into raw materials. The Moon's crust is about 40% oxygen, 20% silicon, 12% iron, and 8% aluminium. With abundant power, these elements can be separated and used to manufacture aluminium and iron for structures and glass from silicon and oxygen, all with abundant oxygen left over to breathe. There is no need for elaborate recycling of oxygen: there's always plenty more coming out of the smelter. Many denizens of Artemis subsist largely on “gunk”, an algae-based food grown locally in vats which is nutritious but unpalatable and monotonous. There are a variety of flavours, all of which are worse than the straight stuff.

Jazz works as a porter. She picks up things somewhere in the settlement and delivers them to their destinations using her personally-owned electric-powered cart. Despite the indigenous production of raw materials, many manufactured goods and substances are imported from Earth or factories in Earth orbit, and every time a cargo ship arrives, business is brisk for Jasmine and her fellow porters. Jazz is enterprising and creative, and has a lucrative business on the side: smuggling. Knowing the right people in the spaceport and how much to cut them in, she has a select clientele to which she provides luxury goods from Earth which aren't on the approved customs manifests.

For this, she is paid in “slugs”. No, not slimy molluscs, but “soft-landed grams”, credits which can be exchanged to pay KSC to deliver payload from Earth to Artemis. Slugs act as a currency, and can be privately exchanged among individuals' handheld computers much as Bitcoin today. Jazz makes around 12,000 slugs a month as a porter, and more, although variable, from her more entrepreneurial sideline.

One of her ultra-wealthy clients approaches her with a highly illegal, almost certainly unethical, and very likely perilous proposal. Surviving for as long as she has in her risky business has given Jazz a sense for where the edge is and the good sense not to step over it.

“I'm sorry but this isn't my thing. You'll have to find someone else.”

“I'll offer you a million slugs.”

“Deal.”

Thus begins an adventure in which Jazz has to summon all of her formidable intellect, cunning, and resources, form expedient alliances with unlikely parties, solve a technological mystery, balance honour with being an outlaw, and discover the economic foundation of Artemis, which is nothing like it appears from the surface. All of this is set in a richly textured and believable world which we learn about as the story unfolds: Weir is a master of “show, don't tell”. And it isn't just a page-turning thriller (although that it most certainly is); it's also funny, in the right places and amounts.

This is where I'd usually mention technical goofs and quibbles. I'll not do that because I didn't find any. The only thing I'm not sure about is Artemis' using a pure oxygen atmosphere at 20% of Earth sea-level pressure. This works for short- and moderate-duration space missions, and was used in the U.S. Mercury, Gemini, and Apollo missions. For exposure to pure oxygen longer than two weeks, a phenomenon called absorption atelectasis can develop, which is the collapse of the alveoli in the lungs due to complete absorption of the oxygen gas (see this NASA report [PDF]). The presence of a biologically inert gas such as nitrogen, helium, argon, or neon will keep the alveoli inflated and prevent this phenomenon. The U.S. Skylab missions used an atmosphere of 72% oxygen and 28% nitrogen to avoid this risk, and the Soviet Salyut and Mir space stations used a mix of nitrogen and oxygen with between 21% and 40% oxygen. The Space Shuttle used, and the International Space Station uses, sea-level atmospheric pressure with 21% oxygen and the balance nitrogen. The effects of reduced pressure on the boiling point of water and the fire hazard of pure oxygen even at reduced pressure are accurately described, but I'm not sure the physiological effects of a pure oxygen atmosphere for long-term habitation have been worked through.
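As a back-of-the-envelope check on those numbers (my own arithmetic, not from the novel, assuming a standard sea-level pressure of 101.325 kPa), the oxygen partial pressure in Artemis's atmosphere comes out almost identical to what lungs see on Earth; the open question is purely the absence of an inert diluent:

```python
# Compare the oxygen partial pressure at Earth sea level with that of a
# pure-O2 atmosphere at 20% of sea-level pressure, as used in Artemis.
SEA_LEVEL_KPA = 101.325    # standard atmosphere, kPa
O2_FRACTION_EARTH = 0.21   # oxygen fraction of Earth's air

p_o2_earth = SEA_LEVEL_KPA * O2_FRACTION_EARTH   # ~21.3 kPa
p_o2_artemis = SEA_LEVEL_KPA * 0.20              # ~20.3 kPa, all oxygen

print(f"O2 partial pressure, Earth sea level: {p_o2_earth:.1f} kPa")
print(f"O2 partial pressure, Artemis:         {p_o2_artemis:.1f} kPa")
# Oxygen delivery is nearly identical; what's missing is the inert gas
# which keeps the alveoli inflated over months of exposure.
```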

Nitpicking aside, this is a techno-thriller which is also an engaging human story, set in a perfectly plausible and believable future where not only the technology but the economics and social dynamics work. We may just be welcoming another grand master to the pantheon.


February 2018

Kroese, Robert. Starship Grifters. Seattle: 47North, 2014. ISBN 978-1-4778-1848-0.
This is the funniest science fiction novel I have read in quite a while. Set in the year 3013, not long after galactic civilisation barely escaped an artificial intelligence apocalypse and banned fully self-aware robots, the story is related by Sasha, one of a small number of Self-Arresting near Sentient Heuristic Androids built to be useful without running the risk of their taking over. SASHA robots are equipped with an impossible-to-defeat watchdog module which causes a hard reboot whenever they are on the verge of having an original thought. The limitation of the design proved a serious handicap, and all of their manufacturers went bankrupt. Our narrator, Sasha, was bought at an auction by the protagonist, Rex Nihilo, for thirty-five credits in a lot of “ASSORTED MACHINE PARTS”. Sasha is Rex's assistant and sidekick.

Rex is an adventurer. Sasha says he “never had much of an interest in anything but self-preservation and the accumulation of wealth, the latter taking clear precedence over the former.” Sasha's built-in limitations (in addition to the new-idea watchdog, she is unable to tell a lie, but if humans should draw incorrect conclusions from incomplete information she provides them, well…) pose problems in Rex's assorted lines of work, most of which seem to involve scams, gambling, and contraband of various kinds. In fact, Rex seems to fit in very well with the universe he inhabits, which appears to be firmly grounded in Walker's Law: “Absent evidence to the contrary, assume everything is a scam”. Evidence appears almost totally absent, and the oppressive tyranny called the Galactic Malarchy, those who supply it, the rebels who oppose it, entrepreneurs like Rex working in the cracks, organised religions and cults, and just about everybody else appear to be on the make or on the take, looking to grift everybody else for their own account. Cosmologists attribute this to the “Strong Misanthropic Principle, which asserts that the universe exists in order to screw with us.” Rex does his part, although he usually seems to veer between broke and dangerously in debt.

Perhaps that's due to his somewhat threadbare talent stack. As Sasha describes him, Rex doesn't have a head for numbers. Nor does he have much of a head for letters, and “Newtonian physics isn't really his strong suit either”. He is, however, occasionally lucky, or so it seems at first. In an absurdly high-stakes card game with weapons merchant Gavin Larviton, reputed to be one of the wealthiest men in the galaxy, Rex manages to win, almost honestly, not only Larviton's personal starship, but an entire planet, Schnufnaasik Six. After barely escaping a raid by Malarchian marines led by the dread and squeaky-voiced Lord Heinous Vlaak, Rex and Sasha set off in the ship Rex has won, the Flagrante Delicto, to survey the planetary prize.

It doesn't take Rex long to discover, not surprisingly, that he's been had, and that his financial situation is now far more dire than he'd previously been able to imagine. If any of the bounty hunters now on his trail should collar him, he could spend a near-eternity on the prison planet of Gulagatraz (the names are a delight in themselves). So, it's off to the rebel base on the forest moon (which is actually a swamp; the swamp moon is all desert) to try to con the Frente Repugnante (all the other names were taken by rival splinter factions, so they ended up with “Revolting Front”, which was translated to Spanish to appeal to Latino planets) into paying for a secret weapon which exists only in Rex's imagination.

Thus we embark upon a romp which has a laugh-out-loud line about every other page. This is comic science fiction in the vein of Keith Laumer's Retief stories. As with Laumer, Kroese achieves the perfect balance of laugh lines, plot development, interesting ideas, and recurring gags (there's a planet-destroying weapon called the “plasmatic entropy cannon” which the oft-inebriated Rex refers to variously as the “positronic endoscopy cannon”, “pulmonary embolism cannon”, “ponderosa alopecia cannon”, “propitious elderberry cannon”, and many other ways). There is a huge and satisfying reveal at the end—I kind of expected one was coming, but I'd have never guessed the details.

If reading this leaves you with an appetite for more Rex Nihilo, there is a prequel novella, The Chicolini Incident, and a sequel, Aye, Robot.

The Kindle edition is free for Kindle Unlimited subscribers.


Tegmark, Max. Life 3.0. New York: Alfred A. Knopf, 2017. ISBN 978-1-101-94659-6.
The Earth formed from the protoplanetary disc surrounding the young Sun around 4.6 billion years ago. Around one hundred million years later, the nascent planet, beginning to solidify, was clobbered by a giant impactor which ejected the mass that made the Moon. This impact completely re-liquefied the Earth and Moon. Around 4.4 billion years ago, liquid water appeared on the Earth's surface (evidence for this comes from Hadean zircons which date from this era). And, some time thereafter, just about as soon as the Earth became environmentally hospitable to life (lack of disruption due to bombardment by comets and asteroids, and a temperature range in which the chemical reactions of life can proceed), life appeared. When it comes to the origin of life, the evidence is subtle and it's hard to be precise. There is completely unambiguous evidence of life on Earth 3.8 billion years ago, and more subtle clues that life may have existed as early as 4.28 billion years before the present. In any case, the Earth has been home to life for most of its existence as a planet.

This was what the author calls “Life 1.0”. Initially composed of single-celled organisms (which, nonetheless, dwarf in complexity of internal structure and chemistry anything produced by other natural processes or human technology to this day), life slowly diversified and organised into colonies of identical cells, evidence for which can be seen in rocks today.

About half a billion years ago, taking advantage of the far more efficient metabolism permitted by the oxygen-rich atmosphere produced by the simple organisms which preceded them, complex multi-cellular creatures sprang into existence in the “Cambrian explosion”. These critters manifested all the body forms found today, and every living being traces its lineage back to them. But they were still Life 1.0.

What is Life 1.0? Its key characteristics are that it can metabolise and reproduce, but that it can learn only through evolution. Life 1.0, from bacteria through insects, exhibits behaviour which can be quite complex, but that behaviour can be altered only by the random variation of mutations in the genetic code and natural selection of those variants which survive best in their environment. This process is necessarily slow, but given the vast expanses of geological time, has sufficed to produce myriad species, all exquisitely adapted to their ecological niches.

To put this in present-day computer jargon, Life 1.0 is “hard-wired”: its hardware (body plan and metabolic pathways) and software (behaviour in response to stimuli) are completely determined by its genetic code, and can be altered only through the process of evolution. Nothing an organism experiences or does can change its genetic programming: the programming of its descendants depends solely upon its success or lack thereof in producing viable offspring and the luck of mutation and recombination in altering the genome they inherit.

Much more recently, Life 2.0 developed. When? If you want to set a bunch of paleontologists squabbling, simply ask them when learned behaviour first appeared, but some time between the appearance of the first mammals and the ancestors of humans, beings developed the ability to learn from experience and alter their behaviour accordingly. Although some would argue simpler creatures (particularly birds) may do this, the fundamental hardware which seems to enable learning is the neocortex, which only mammalian brains possess. Modern humans are the quintessential exemplars of Life 2.0; they not only learn from experience, they've figured out how to pass what they've learned to other humans via speech, writing, and more recently, YouTube comments.

While Life 1.0 has hard-wired hardware and software, Life 2.0 is able to alter its own software. This is done by training the brain to respond in novel ways to stimuli. For example, you're born knowing no human language. In childhood, your brain automatically acquires the language(s) you hear from those around you. In adulthood you may, for example, choose to learn a new language by (tediously) training your brain to understand, speak, read, and write that language. You have deliberately altered your own software by reprogramming your brain, just as you can cause your mobile phone to behave in new ways by downloading a new application. But your ability to change yourself is limited to software. You have to work with the neurons and structure of your brain. You might wish to have more or better memory, the ability to see more colours (as some insects do), or run a sprint as fast as the current Olympic champion, but there is nothing you can do to alter those biological (hardware) constraints other than hope, over many generations, that your descendants might evolve those capabilities. Life 2.0 can design (within limits) its software, but not its hardware.

The emergence of a new major revision of life is a big thing. In 4.5 billion years, it has only happened twice, and each time it has remade the Earth. Many technologists believe that some time in the next century (and possibly within the lives of many reading this review) we may see the emergence of Life 3.0. Life 3.0, or Artificial General Intelligence (AGI), is machine intelligence, on whatever technological substrate, which can perform, as well as or better than human beings, all of the intellectual tasks they can do. A Life 3.0 AGI will be better at driving cars, doing scientific research, composing and performing music, painting pictures, writing fiction, persuading humans and other AGIs to adopt its opinions, and every other task including, most importantly, designing and building ever more capable AGIs. Life 1.0 was hard-wired; Life 2.0 could alter its software, but not its hardware; Life 3.0 can alter both its software and hardware. This may set off an “intelligence explosion” of recursive improvement, since each successive generation of AGIs will be even better at designing more capable successors, and this cycle of refinement will not be limited to the glacial timescale of random evolutionary change, but rather an engineering cycle which will run at electronic speed. Once the AGI train pulls out of the station, it may develop from the level of human intelligence to something as far beyond human cognition as humans are compared to ants in one human sleep cycle. Here is a summary of Life 1.0, 2.0, and 3.0.

Life 1.0, 2.0, and 3.0
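The runaway character of such an intelligence explosion can be made concrete with a toy model (my own illustration, not the author's; the factor-of-two improvement per generation and one-year initial design cycle are arbitrary assumptions):

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
IMPROVEMENT = 2.0     # each generation is twice as capable as its parent
FIRST_CYCLE = 365.0   # days for generation 1 to design generation 2

capability, elapsed = 1.0, 0.0
for gen in range(1, 21):
    elapsed += FIRST_CYCLE / capability   # abler designers work faster
    capability *= IMPROVEMENT
    print(f"generation {gen:2d}: {capability:9.0f}x capability, "
          f"{elapsed:6.1f} days elapsed")

# The design cycles form a geometric series: total elapsed time converges
# toward FIRST_CYCLE * 2 = 730 days, with the million-fold capability gain
# arriving in a final flurry of ever-shorter cycles.
```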

The emergence of Life 3.0 is something about which we, exemplars of Life 2.0, should be concerned. After all, when we build a skyscraper or hydroelectric dam, we don't worry about, or rarely even consider, the multitude of Life 1.0 organisms, from bacteria through ants, which may perish as the result of our actions. Might mature Life 3.0, our descendants just as much as we are descended from Life 1.0, be similarly oblivious to our fate and concerns as it unfolds its incomprehensible plans? As artificial intelligence researcher Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Or, as Max Tegmark observes here, “[t]he real worry isn't malevolence, but competence”. It's unlikely a super-intelligent AGI would care enough about humans to actively exterminate them, but if its goals don't align with those of humans, it may incidentally wipe them out as it, for example, disassembles the Earth to use its core for other purposes.

But isn't this all just science fiction—scary fairy tales by nerds ungrounded in reality? Well, maybe. What is beyond dispute is that for the last century the computing power available at constant cost has doubled about every two years, and this trend shows no evidence of abating in the near future. Well, that's interesting, because depending upon how you estimate the computational capacity of the human brain (a contentious question), most researchers expect digital computers to achieve that capacity within this century, with most estimates falling within the years from 2030 to 2070, assuming the exponential growth in computing power continues (and there is no physical law which appears to prevent it from doing so).
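To see why the estimates span four decades, here is the compound-growth arithmetic (a sketch with assumed numbers: a present-day constant-cost capacity of 10^15 operations per second and several of the contested estimates of the brain's capacity; none of these figures comes from the book):

```python
import math

START_YEAR = 2018
CURRENT_OPS = 1e15      # ops/sec at constant cost today (assumed)
DOUBLING_YEARS = 2.0    # historical doubling period

# Contested estimates of the brain's computational capacity, in ops/sec.
for brain_ops in (1e16, 1e18, 1e20):
    doublings = math.log2(brain_ops / CURRENT_OPS)
    print(f"brain at {brain_ops:.0e} ops/s: parity around "
          f"{START_YEAR + doublings * DOUBLING_YEARS:.0f}")
# A ten-thousand-fold disagreement over the brain's capacity moves the
# answer by only about 27 years: such is sustained exponential growth.
```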

My own view of the development of machine intelligence is that of the author in this “intelligence landscape”.

The Intelligence Landscape

Altitude on the map represents the difficulty of a cognitive task. Some tasks, for example management, may be relatively simple in and of themselves, but founded on prerequisites which are difficult. When I wrote my first computer program half a century ago, this map was almost entirely dry, with the water just beginning to lap into rote memorisation and arithmetic. Now many of the lowlands of which people confidently said (often not long ago), “a computer will never…”, are submerged, and the ever-rising waters are reaching the foothills of cognitive tasks which employ many “knowledge workers” who considered themselves safe from the peril of “automation”. On the slope of Mount Science is the base camp of AI Design, which is shown in red because, when the water surges into it, it's game over: machines will now be better than humans at improving themselves and designing their more intelligent and capable successors. Will this be game over for humans and, for that matter, biological life on Earth? That depends, and it depends upon decisions we may be making today.

Assuming we can create these super-intelligent machines, what will be their goals, and how can we ensure that our machines embody them? Will the machines discard our goals for their own as they become more intelligent and capable? How would bacteria have solved this problem contemplating their distant human descendants?

First of all, let's assume we can somehow design our future and constrain the AGIs to implement it. What kind of future will we choose? That's complicated. Here are the alternatives discussed by the author. I've deliberately given just the titles without summaries to stimulate your imagination about their consequences.

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

Choose wisely: whichever you choose may be the one your descendants (if any exist) may be stuck with for eternity. Interestingly, when these alternatives are discussed in chapter 5, none appears to be without serious downsides, and that's assuming we'll have the power to guide our future toward one of these outcomes. Or maybe we should just hope the AGIs come up with something better than we could think of. Hey, it worked for the bacteria and ants, both of which are prospering despite the occasional setback due to medical interventions or kids with magnifying glasses.

Let's assume progress toward AGI continues over the next few decades. I believe that what I've been calling the “Roaring Twenties” will be a phase transition in the structure of human societies and economies. Continued exponential growth in computing power will, without any fundamental breakthroughs in our understanding of problems and how to solve them, allow us to “brute force” previously intractable problems such as driving and flying in unprepared environments, understanding and speaking natural languages, language translation, much of general practice medical diagnosis and routine legal work, interaction with customers in retail environments, and many jobs in service industries, allowing them to be automated. The cost to replace a human worker will be comparable to a year's wages, and the automated replacement will work around the clock with only routine maintenance and never vote for a union.

This is nothing new: automation has been replacing manual labour since the 1950s, but as the intelligence landscape continues to flood, the rising waters will claim not just blue collar jobs, which have already been replaced by robots in automobile plants and electronics assembly lines, but also white collar clerical and professional jobs people went into thinking them immune from automation. How will the economy cope with this? In societies with consensual government, those displaced vote; the computers who replace them don't (at least for the moment). Will there be a “robot tax” which funds a basic income for those made redundant? What are the consequences for a society where a majority of people have no job? Will voters at some point say “enough” and put an end to development of artificial intelligence (but note that this would have to be global and enforced by an intrusive and draconian regime; otherwise it would confer a huge first mover advantage on an actor who achieved AGI in a covert program)?

The following chart is presented to illustrate stagnation of income of lower-income households since around 1970.

Income per U.S. Household: 1920–2015

I'm not sure this chart supports the argument that technology has been the principal cause for the stagnation of income among the bottom 90% of households since around 1970. There wasn't any major technological innovation which affected employment that occurred around that time: widespread use of microprocessors and personal computers did not happen until the 1980s when the flattening of the trend was already well underway. However, two public policy innovations in the United States which occurred in the years immediately before 1970 (1, 2) come to mind. You don't have to be an MIT cosmologist to figure out how they torpedoed the rising trend of prosperity for those aspiring to better themselves which had characterised the U.S. since 1940.

Nonetheless, what is coming down the track is something far more disruptive than the transition from an agricultural society to industrial production, and it may happen far more rapidly, allowing less time to adapt. We need to really get this right, because everything depends on it.

Observation and our understanding of the chemistry underlying the origin of life is compatible with Earth being the only host to life in our galaxy and, possibly, the visible universe. We have no idea whatsoever how our form of life emerged from non-living matter, and it's entirely possible it may have been an event so improbable we'll never understand it and which occurred only once. If this be the case, then what we do in the next few decades matters even more, because everything depends upon us, and what we choose. Will the universe remain dead, or will life burst forth from this most improbable seed to carry the spark born here to ignite life and intelligence throughout the universe? It could go either way. If we do nothing, life on Earth will surely be extinguished: the death of the Sun is certain, and long before that the Earth will be uninhabitable. We may be wiped out by an asteroid or comet strike, by a dictator with his fat finger on a button, or by accident (as Nathaniel Borenstein said, “The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents.”).

But if we survive these near-term risks, the future is essentially unbounded. Life will spread outward from this spark on Earth, from star to star, galaxy to galaxy, and eventually bring all the visible universe to life. It will be an explosion which dwarfs both its predecessors, the Cambrian and technological. Those who create it will not be like us, but they will be our descendants, and what they achieve will be our destiny. Perhaps they will remember us, and think kindly of those who imagined such things while confined to one little world. It doesn't matter; like the bacteria and ants, we will have done our part.

The author is co-founder of the Future of Life Institute which promotes and funds research into artificial intelligence safeguards. He guided the development of the Asilomar AI Principles, which have been endorsed to date by 1273 artificial intelligence and robotics researchers. In the last few years, discussion of the advent of AGI and the existential risks it may pose and potential ways to mitigate them has moved from a fringe topic into the mainstream of those engaged in developing the technologies moving toward that goal. This book is an excellent introduction to the risks and benefits of this possible future for a general audience, and encourages readers to ask themselves the difficult questions about what future they want and how to get there.

In the Kindle edition, everything is properly linked. Citations of documents on the Web are live links which may be clicked to display them. There is no index.


Lewis, Damien. The Ministry of Ungentlemanly Warfare. New York: Quercus, 2015. ISBN 978-1-68144-392-8.
After becoming prime minister in May 1940, one of Winston Churchill's first acts was to establish the Special Operations Executive (SOE), which was intended to conduct raids, sabotage, reconnaissance, and support resistance movements in Axis-occupied countries. The SOE was not part of the military: it was a branch of the Ministry of Economic Warfare and its very existence was a state secret, camouflaged under the name “Inter-Service Research Bureau”. Its charter was, as Churchill described it, to “set Europe ablaze”.

The SOE consisted, from its chief, Brigadier Colin McVean Gubbins, who went by the designation “M”, to its recruits, of people who did not fit well with the regimentation, hierarchy, and constraints of life in the conventional military branches. They could, in many cases, be easily mistaken for blackguards, desperadoes, and pirates, and that's precisely what they were in the eyes of the enemy—unconstrained by the rules of warfare, striking by stealth, and sowing chaos, mayhem, and terror among occupation troops who thought they were far from the front.

Leading some of the SOE's early exploits was Gustavus “Gus” March-Phillipps, founder of the British Army's Small Scale Raiding Force, who was given the SOE designation “Agent W.01”, meaning the first agent assigned to the west Africa territory, with the leading zero identifying him as “trained and licensed to use all means to liquidate the enemy”—a licence to kill. The SOE's liaison with the British Navy, tasked with obtaining support for its operations and providing cover stories for them, was a fellow named Ian Fleming.

One of the SOE's first and most daring exploits was Operation Postmaster, with the goal of seizing German and Italian ships anchored in the port of Santa Isabel on the Spanish island colony of Fernando Po off the coast of west Africa. Given the green light by Churchill over the strenuous objections of the Foreign Office and Admiralty, who were concerned about the repercussions if British involvement in what amounted to an act of piracy in a neutral country were to be disclosed, the operation was mounted under the strictest secrecy and deniability, with a cover story prepared by Ian Fleming. Despite harrowing misadventures along the way, the plan was a brilliant success, capturing three ships and their crews and delivering them to the British-controlled port of Lagos without any casualties. Vindicated by the success, Churchill gave the SOE the green light to raid Nazi occupation forces on the Channel Islands and the coast of France.

Taking part in his first mission, Operation Postmaster, was Anders Lassen, an aristocratic Dane who enlisted as a private in the British Commandos after his country was occupied by the Nazis. With his silver-blond hair, blue eyes, and accent easily mistaken for German, Lassen was apprehended by the Home Guard on several occasions while on training missions in Britain and held as a suspected German spy until his commanders intervened. Lassen was given a field commission, direct from private to second lieutenant, immediately after Operation Postmaster, and went on to become one of the most successful leaders of special operations raids in the war. As long as Nazis occupied his Danish homeland, he was possessed by a desire to kill as many Nazis as possible, wherever and however he could, and when in combat was animated by a berserker drive and ability to improvise that caused those who served with him to call him the “Danish Viking”.

This book provides a look into the operations of the SOE and its successor organisations, the Special Air Service and Special Boat Service, seen through the career of Anders Lassen. So numerous were special operations, conducted in many theatres around the world, that this kind of focus is necessary. Also, attrition in these high-risk raids, often far behind enemy lines, was so high there are few individuals one can follow throughout the war. As the war approached its conclusion, Lassen was the only surviving participant in Operation Postmaster, the SOE's first raid.

Lassen went on to lead raids against Nazi occupation troops in the Channel Islands, leading Churchill to remark, “There comes from the sea from time to time a hand of steel which plucks the German sentries from their posts with growing efficiency.” While these “butcher-and-bolt” raids could not liberate territory, they yielded prisoners, code books, and radio contact information valuable to military intelligence and, more importantly, forced the Germans to strengthen their garrisons in these previously thought secure posts, tying down forces which could otherwise be sent to active combat fronts. Churchill believed that the enemy should be attacked wherever possible, and SOE was a precision weapon which could be deployed where conventional military forces could not be used.

As the SOE was absorbed into the military Special Air Service, Lassen would go on to fight in North Africa, Crete, the Aegean islands, then occupied by Italian and German troops, and mainland Greece. His raid on a German airbase on occupied Crete took out fighters and bombers which could have opposed the Allied landings in Sicily. Later, his small group of raiders, unsupported by any other force, liberated the Greek city of Salonika, bluffing the German commander into believing Lassen's forty raiders and two fishing boats were actually a British corps of thirty thousand men, with armour, artillery, and naval support.

After years of raiding in peripheral theatres, Lassen hungered to get into the “big war”, and ended up in Italy, where his irregular form of warfare and disdain for military discipline created friction with his superiors. But he got results, and his unit was tasked with reconnaissance and pathfinding for an Allied crossing of Lake Comacchio (actually, more of a swamp) in Operation Roast in the final days of the war. It was there he was to meet his end, in a fierce engagement against Nazi troops defending the north shore. For this, he posthumously received the Victoria Cross, becoming the only non-Commonwealth citizen so honoured in World War II.

It is a cliché to say that a work of history “reads like a thriller”, but in this case it is completely accurate. The description of the raid on the Kastelli airbase on Crete would, if made into a movie, probably cause many viewers to suspect it to be fictionalised, but that's what really happened, based upon after action reports by multiple participants and aerial reconnaissance after the fact.

World War II was a global conflict, and while histories often focus on grand battles such as D-day, Stalingrad, Iwo Jima, and the fall of Berlin, there was heroism in obscure places such as the Greek islands which also contributed to the victory, and combatants operating in the shadows behind enemy lines who did their part and often paid the price for the risks they willingly undertook. This is a stirring story of this shadow war, told through the short life of one of its heroes.


April 2018

Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012. ISBN 978-0-8129-7968-8.
This book is volume three in the author's Incerto series, following Fooled by Randomness (February 2011) and The Black Swan (January 2009). It continues to explore the themes of randomness, risk, and the design of systems (physical, economic, financial, and social) which perform well in the face of uncertainty and infrequent events with large consequences. He begins by posing the deceptively simple question, “What is the antonym of ‘fragile’?”

After thinking for a few moments, most people will answer with “robust” or one of its synonyms such as “sturdy”, “tough”, or “rugged”. But think about it a bit more: does a robust object or system actually behave in the opposite way to a fragile one? Consider a teacup made of fine china. It is fragile—if subjected to more than a very limited amount of force or acceleration, it will smash into bits. It is fragile because application of such an external stimulus, for example by dropping it on the floor, will dramatically degrade its value for the purposes for which it was created (you can't drink tea from a handful of sherds, and they don't look good sitting on the shelf). Now consider a teacup made of stainless steel. It is far more robust: you can drop it from ten kilometres onto a concrete slab and, while it may be slightly dented, it will still work fine and look OK, maybe even acquiring a little character from the adventure. But is this really the opposite of fragility? The china teacup was degraded by the impact, while the stainless steel one was not. But are there objects and systems which improve as a result of random events: uncertainty, risk, stressors, volatility, adventure, and the slings and arrows of existence in the real world? Such a system would not be robust, but would be genuinely “anti-fragile” (which I will subsequently write without the hyphen, as does the author): it welcomes these perturbations, and may even require them in order to function well or at all.

Antifragility seems an odd concept at first. Our experience is that unexpected events usually make things worse, and that the inexorable increase in entropy causes things to degrade with time: plants and animals age and eventually die; machines wear out and break; cultures and societies become decadent, corrupt, and eventually collapse. And yet if you look at nature, antifragility is everywhere—it is the mechanism which drives biological evolution, technological progress, the unreasonable effectiveness of free market systems in efficiently meeting the needs of their participants, and just about everything else that changes over time, from trends in art, literature, and music, to political systems, and human cultures. In fact, antifragility is a property of most natural, organic systems, while fragility (or at best, some degree of robustness) tends to characterise those which were designed from the top down by humans. And one of the paradoxical characteristics of antifragile systems is that they tend to be made up of fragile components.

How does this work? We'll get to physical systems and finance in a while, but let's start out with restaurants. Any reasonably large city in the developed world will have a wide variety of restaurants serving food from numerous cultures, at different price points, and with ambience catering to the preferences of their individual clientèles. The restaurant business is notoriously fragile: the culinary preferences of people are fickle and unpredictable, and restaurants which are behind the times frequently go under. And yet, among the population of restaurants in a given area at a given time, customers can usually find what they're looking for. The restaurant population or industry is antifragile, even though it is composed of fragile individual restaurants which come and go with the whims of diners, whose preferences will be catered to by one or more members of the current, ever-changing population of restaurants.

Now, suppose instead that some Food Commissar in the All-Union Ministry of Nutrition carefully studied the preferences of people and established a highly-optimised and uniform menu for the monopoly State Feeding Centres, then set up a central purchasing, processing, and distribution infrastructure to optimise the efficient delivery of these items to patrons. This system would be highly fragile, since while it would deliver food, there would be no feedback based upon customer preferences, and no competition to respond to shifts in taste. The result would be a mediocre product which, over time, was less and less aligned with what people wanted, and hence would have a declining number of customers. The messy and chaotic market of independent restaurants, constantly popping into existence and disappearing like virtual particles, exploring the culinary state space almost at random, does, at any given moment, satisfy the needs of its customers, and it responds to unexpected changes by adapting to them: it is antifragile.

Now let's consider an example from metallurgy. If you pour molten metal from a furnace into a cold mould, its molecules, which were originally jostling around at random at the high temperature of the liquid metal, will rapidly freeze into a structure with small crystals randomly oriented. The solidified metal will contain dislocations wherever two crystals meet, with each forming a weak spot where the metal can potentially fracture under stress. The metal is hard, but brittle: if you try to bend it, it's likely to snap. It is fragile.

To render it more flexible, it can be subjected to the process of annealing, where it is heated to a high temperature (but below melting), which allows the molecules to migrate within the bulk of the material. Existing grains will tend to grow, align, and merge, resulting in a ductile, workable metal. But critically, once heated, the metal must be cooled on a schedule which provides sufficient randomness (molecular motion from heat) to allow the process of alignment to continue, but not to disrupt already-aligned crystals. Here is a video from Cellular Automata Laboratory which demonstrates annealing. Note how sustained randomness is necessary to keep the process from quickly “freezing up” into a disordered state.

In another document at this site, I discuss solving the travelling salesman problem through the technique of simulated annealing, which is analogous to annealing metal, and like it, is a manifestation of antifragility—it doesn't work without randomness.
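For readers who'd like to see the analogy in runnable form, here is a minimal simulated annealing sketch for the travelling salesman problem (a conventional textbook formulation, not the code from that document). Just as with the metal, injected randomness, in the form of accepting some cost-increasing moves while the “temperature” is high, keeps the tour from freezing prematurely into a poor local optimum:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over the points in pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, t0=1.0, cooling=0.9995, steps=100_000):
    tour = list(range(len(pts)))
    random.shuffle(tour)
    cost, t = tour_length(tour, pts), t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        delta = tour_length(cand, pts) - cost
        # Always accept improvements; accept worsenings with probability
        # exp(-delta/t), which shrinks as the temperature falls.
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour, cost = cand, cost + delta
        t *= cooling
    return tour, cost

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]
tour, cost = anneal(cities)
print(f"tour cost after annealing: {cost:.3f}")
```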

When you observe a system which adapts and prospers in the face of unpredictable changes, it will almost always do so because it is antifragile. This is a large part of how nature works: evolution isn't able to predict the future and it doesn't even try. Instead, it performs a massively parallel, planetary-scale search, where organisms, species, and entire categories of life appear and disappear continuously, but with the ecosystem as a whole constantly adapting itself to whatever inputs may perturb it, be they a wholesale change in the composition of the atmosphere (the oxygen catastrophe at the beginning of the Proterozoic eon around 2.45 billion years ago), asteroid and comet impacts, or ice ages.

Most human-designed systems, whether machines, buildings, political institutions, or financial instruments, are the antithesis of those found in nature. They tend to be highly-optimised to accomplish their goals with the minimum resources, and to be sufficiently robust to cope with any stresses they may be expected to encounter over their design life. These systems are not antifragile: while they may be designed not to break in the face of unexpected events, at best they will survive them, but not, as natural systems often do, benefit from them.

The devil's in the details, and if you reread the last paragraph carefully, you may be able to see the horns and pointed tail peeking out from behind the phrase “be expected to”. The problem with the future is that it is full of all kinds of events, some of which are un-expected, and whose consequences cannot be calculated in advance and aren't known until they happen. Further, there's usually no way to estimate their probability. It doesn't even make any sense to talk about the probability of something you haven't imagined could happen. And yet such things happen all the time.

Today, we are plagued, in many parts of society, with “experts” the author dubs fragilistas. Often equipped with impeccable academic credentials and with powerful mathematical methods at their fingertips, afflicted by the “Soviet-Harvard delusion” (overestimating the scope of scientific knowledge and the applicability of their modelling tools to the real world), they are blind to the unknown and unpredictable, and they design and build systems which are highly fragile in the face of such events. A characteristic of fragilista-designed systems is that they produce small, visible, and apparently predictable benefits, while incurring invisible risks which may be catastrophic and occur at any time.

Let's consider an example from finance. Suppose you're a conservative investor interested in generating income from your lifetime's savings, while preserving capital to pass on to your children. You might choose to invest, say, in a diversified portfolio of stocks of long-established companies in stable industries which have paid dividends for 50 years or more, never skipping or reducing a dividend payment. Since you've split your investment across multiple companies, industry sectors, and geographical regions, your risk from an event affecting one of them is reduced. For years, this strategy produces a reliable and slowly growing income stream, while appreciation of the stock portfolio (albeit less than high flyers and growth stocks, which have greater risk and pay small dividends or none at all) keeps you ahead of inflation. You sleep well at night.

Then 2008 rolls around. You didn't do anything wrong. The companies in which you invested didn't do anything wrong. But the fragilistas had been quietly building enormous cross-coupled risk into the foundations of the financial system (while pocketing huge salaries and bonuses, while bearing none of the risk themselves), and when it all blows up, in one sickening swoon, you find the value of your portfolio has been cut by 50%. In a couple of months, you have lost half of what you worked for all of your life. Your “safe, conservative, and boring” stock portfolio happened to be correlated with all of the other assets, and when the foundation of the system started to crumble, suffered along with them. The black swan landed on your placid little pond.

What would an antifragile investment portfolio look like, and how would it behave in such circumstances? First, let's briefly consider a financial option. An option is a financial derivative contract which gives the purchaser the right, but not the obligation, to buy (“call option”) or sell (“put option”) an underlying security (stock, bond, market index, etc.) at a specified price, called the “strike price” (or just “strike”). If a call option has a strike above, or a put option a strike below, the current price of the security, it is called “out of the money”; otherwise it is “in the money”. The option has an expiration date, after which, if not “exercised” (the buyer asserts his right to buy or sell), the contract expires and the option becomes worthless.

Let's consider a simple case. Suppose Consolidated Engine Sludge (SLUJ) is trading for US$10 per share on June 1, and I buy a call option to buy 100 shares at US$15/share at any time until August 31. For this right, I might pay a premium of, say, US$7. (The premium depends upon sellers' perception of the volatility of the stock, the term of the option, and the difference between the current price and the strike price.) Now, suppose that sometime in August, SLUJ announces a breakthrough that allows them to convert engine sludge into fructose sweetener, and their stock price soars on the news to US$19/share. I might then decide to cash in: exercising the option, paying US$1500 for the 100 shares, and immediately selling them at US$19, realising a profit of US$400 on the shares or, subtracting the cost of the option, US$393 on the trade. Since my original investment was just US$7, this represents a return of 5614% on the original investment, or 22457% annualised. If SLUJ never touches US$15/share, come August 31, the option will expire unexercised, and I'm out the seven bucks. (Since options can be bought and sold at any time and prices are set by the market, it's actually a bit more complicated than that, but this will do for understanding what follows.)

You might ask yourself what would motivate somebody to sell such an option. In many cases, it's an attractive proposition. If I'm a long-term shareholder of SLUJ and have found it to be a solid but non-volatile stock that pays a reasonable dividend of, say, two cents per share every quarter, by selling the call option with a strike of 15, I pocket an immediate premium of seven cents per share, increasing my income from owning the stock by a factor of 4.5. For this, I give up the right to any appreciation should the stock rise above 15, but that seems to be a worthwhile trade-off for a stock as boring as SLUJ (at least prior to the news flash).

A put option is the mirror image: if I bought a put on SLUJ with a strike of 5, I'll only make money if the stock falls below 5 before the option expires.

Now we're ready to construct a genuinely antifragile investment. Suppose I simultaneously buy out of the money put and call options on the same security, a position called a “long strangle” (when both options are struck at the same price, it is a “straddle”). Now, as long as the price remains between the strike prices of the put and the call, both options will expire worthless, but if the price either rises above the call strike or falls below the put strike, the corresponding option will be in the money, and it pays off more the further the underlying price moves outside the band defined by the two strikes. This is, then, a pure bet on volatility: it loses a small amount of money as long as nothing unexpected happens, but when a shock occurs, it pays off handsomely.
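
Here is a sketch, with made-up strikes and a made-up premium, of how such a position behaves at expiration: a small, known loss anywhere inside the band, and an open-ended payoff outside it.

    # Payoff at expiration of a long strangle (illustrative numbers only).
    def strangle_payoff(price, put_strike, call_strike, total_premium):
        put = max(put_strike - price, 0.0)    # pays off below the put strike
        call = max(price - call_strike, 0.0)  # pays off above the call strike
        return put + call - total_premium

    for price in (60, 80, 90, 100, 110, 120, 140):
        p = strangle_payoff(price, put_strike=80.0, call_strike=120.0, total_premium=3.0)
        print(f"price {price:>3}: payoff {p:+7.2f}")

The loss is capped at the premium paid, while the gain grows without limit as the move becomes more extreme: exactly the convex, antifragile profile described above.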

Now, the premiums on deep out of the money options are usually very modest, so an investor with a portfolio like the one I described who was clobbered in 2008 could have, for a small sum every quarter, purchased put and call options on, say, the Standard & Poor's 500 stock index, expecting them usually to expire worthless, but under circumstances like those which halved the value of his portfolio, they would have paid off enough to compensate for the shock. (If worried only about a plunge he could, of course, have bought just the put option and saved money on premiums, but here I'm describing a pure example of antifragility being used to cancel fragility.)
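
To see how such a tail hedge cancels the fragility of the portfolio, consider this sketch with purely illustrative numbers (the index level, strike, premium, and contract sizing are all my own assumptions): a portfolio that tracks the index, plus a deep out of the money index put sized so that its payoff in a 50% crash roughly restores what the portfolio loses.

    # A stock portfolio hedged with a deep out-of-the-money index put.
    # All numbers are illustrative assumptions.
    portfolio_value = 1_000_000.0
    index_level = 1000.0   # index level when the hedge is bought
    put_strike = 800.0     # 20% out of the money
    premium = 5_000.0      # small recurring cost of the hedge
    multiplier = 100.0     # payoff per contract: (strike - index) * multiplier
    contracts = 17         # sized so a 50% crash payoff roughly covers the loss

    def outcome(index_after):
        stock = portfolio_value * (index_after / index_level)  # portfolio tracks the index
        put_payoff = max(put_strike - index_after, 0.0) * multiplier * contracts
        return stock + put_payoff - premium

    print(f"Quiet quarter (index 1050):   US${outcome(1050):,.0f}")  # small drag from the premium
    print(f"2008-style crash (index 500): US${outcome(500):,.0f}")   # value largely restored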

I have only described a small fraction of the many topics covered in this masterpiece, and none of the mathematical foundations it presents (which can be skipped by readers intimidated by equations and graphs). Fragility and antifragility are among those concepts, simple once understood, which profoundly change the way you look at a multitude of things in the world. When a politician, economist, business leader, cultural critic, or any other supposed thinker or expert advocates a policy, you'll learn to ask yourself, “Does this increase fragility?” and have the tools to answer the question. Further, the book provides an intellectual framework to support many of the ideas and policies which libertarians and advocates of individual liberty and free markets instinctively endorse, founded in the way natural systems work. It is particularly useful in demolishing “green” schemes which aim at replacing the organic, distributed, adaptive, and antifragile mechanisms of the market with coercive, top-down, and highly fragile central planning which cannot possibly have sufficient information to work even in the absence of unknowns in the future.

There is much to digest here, and the ramifications of some of the clearly-stated principles take some time to work out and fully appreciate. Indeed, I spent more than five years reading this book, a little bit at a time. It's worth taking the time and making the effort to let the message sink in, figure out how what you've learned applies to your own life, and act accordingly. As Fat Tony says, “Suckers try to win arguments; nonsuckers try to win.”

 Permalink

May 2018

Radin, Dean. Real Magic. New York: Harmony Books, 2018. ISBN 978-1-5247-5882-0.
From its beginnings in the 19th century as “psychical research”, there has always been something dodgy and disreputable about parapsychology: the scientific study of phenomena, frequently reported across all human cultures and history, such as clairvoyance, precognition, telepathy, communication with the dead or non-material beings, and psychokinesis (mental influence on physical processes). All of these disparate phenomena have in common that there is no known physical theory which can explain how they might work. In the 19th century, science was much more willing to proceed from observations and evidence, then try to study them under controlled conditions, and finally propose and test theories about how they might work. Today, many scientists are inclined to put theory first, rejecting any evidence of phenomena for which no theory exists to explain it.

In such an intellectual environment, those who study such things, now called parapsychologists, have been, for the most part, very modest in their claims, careful to distinguish their laboratory investigations, mostly involving ordinary subjects, from extravagant reports of shamans and psychics, whether contemporary or historical, and scrupulous in the design and statistical analysis of their experiments. One leader in the field is Dean Radin, author of the present book, and four times president of the Parapsychological Association, a professional society which is an affiliate of the American Association for the Advancement of Science. Dr. Radin is chief scientist at the Institute of Noetic Sciences in Petaluma, California, where he pursues laboratory research in parapsychology. In his previous books, including Entangled Minds (August 2007), he presents the evidence for various forms of human perception which seem to defy conventional explanation. He refrains from suggesting mechanisms or concluding whether what is measured is causation or correlation. Rather, he argues that the body of accumulated evidence from his work and that of others, drawn from recent experiments conducted under the strictest protocols to eliminate possible fraud and post-selection of data, with blinding and statistical rigour which often exceed those of clinical trials of pharmaceuticals, shows that “something is going on” which we don't understand, something which would be considered the discovery of a new phenomenon if it originated in a “hard science” field such as particle physics.

Here, Radin argues that the accumulated evidence for the phenomena parapsychologists have been studying in the laboratory for decades is so persuasive to all except sceptics whom no amount of evidence would suffice to persuade that it is time for parapsychologists and those interested in their work to admit that what they're really studying is magic. “Not the fictional magic of Harry Potter, the feigned magic of Harry Houdini, or the fraudulent magic of con artists. Not blue lightning bolts springing from the fingertips, aerial combat on broomsticks, sleight-of-hand tricks, or any of the other elaborations of artistic license and special effects.” Instead, real magic, as understood for millennia, which he divides into three main categories:

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communication with non-material consciousness, as when mediums channel spirits, communicate with the dead, or summon demons.

As Radin describes, it was only after years of work in parapsychology that he finally figured out why it is that, while according to a 2005 Gallup poll, 75% of people in the United States believe in one or more phenomena considered “paranormal”, only around 0.001% of scientists are engaged in studying these experiences. What's so frightening, distasteful, or disreputable about them? It's because they all involve some kind of direct interaction between human consciousness and the objective, material world or, in other words, magic. Scientists are uncomfortable enough with consciousness as it is: they don't have any idea how it emerges from what, in their reductionist models, is a computer made of meat, to the extent that some scientists deny the existence of consciousness entirely and dismiss it as a delusion. (Indeed, studying the origin of consciousness is almost as disreputable in academia as parapsychology.)

But if we must admit the existence of this mysterious thing called consciousness, along with other messy concepts such as free will, at least we must keep it confined within the skull: not roaming around and directly perceiving things far away or in the future, affecting physical events, or existing independent of brains. That would be just too weird.

And yet most religions, from those of traditional societies to the most widely practiced today, include descriptions of events and incorporate practices which are explicitly magical according to Radin's definition. Paragraphs 2115–2117 of the Catechism of the Roman Catholic Church begin by stating that “God can reveal the future to his prophets or to other saints.” and then go on to prohibit “Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums…”. But if these things did not exist, or did not work, then why would there be a need to forbid them? Perhaps it's because, despite religion's incorporating magic into its belief system and practices, it also wishes to enforce a monopoly on the use of magic among its believers—in Radin's words, “no magic for you!”

In fact, as stated at the beginning of chapter 4, “Magic is to religion as technology is to science.” Just as science provides an understanding of the material world which technology applies in order to accomplish goals, religion provides a model of the spiritual world which magic provides the means to employ. From antiquity to the present day, religion and magic have been closely associated with one another, and many religions have restricted knowledge of their magical components and practices to insiders and banned others knowing or employing them. Radin surveys this long history and provides a look at contemporary, non-religious, practice of the three categories of real magic.

He then turns to what is, in my estimation, the most interesting and important part of the book: the scientific evidence for the existence of real magic. A variety of laboratory experiments, many very recent and with careful design and controls, illustrate the three categories and explore subtle aspects of their behaviour. For example, when people precognitively sense events in the future, do they sense a certain event which is sure to happen, or the most probable event whose occurrence might be averted through the action of free will? How on Earth would you design an experiment to test that? It's extremely clever, and the results are interesting and have deep implications.

If ordinary people can demonstrate these seemingly magical powers in the laboratory (albeit with small, yet statistically highly significant effect sizes), are there some people whose powers are much greater? That is the case for most human talents, whether athletic, artistic, or intellectual; one suspects it might be so here. Historical and contemporary evidence for “Merlin-class magicians” is reviewed, not as proof for the existence of real magic, but as what might be expected if it did exist.

What is science to make of all of this? Mainstream science, if it mentions consciousness at all, usually considers it an emergent phenomenon at the tip of a pyramid of more fundamental sciences such as biology, chemistry, and physics. But what if we've got it wrong, and consciousness is not at the top but the bottom: ultimately everything emerges from a universal consciousness of which our individual consciousness is but a part, and of which all parts are interconnected? These are precisely the tenets of a multitude of esoteric traditions developed independently by cultures all around the world and over millennia, all of which incorporated some form of magic into their belief systems. Maybe, as evidence for real magic emerges from the laboratory, we'll conclude they were on to something.

This is an excellent look at the deep connections between traditional beliefs in magic and modern experiments which suggest those beliefs, however much they appear to contradict dogma, may be grounded in reality. Readers who are unacquainted with modern parapsychological research and the evidence it has produced probably shouldn't start here, but rather with the author's earlier Entangled Minds, as it provides detailed information about the experiments, results, and responses to criticism of them which are largely assumed as the foundation for the arguments here.

 Permalink

Kroese, Robert. Schrödinger's Gat. Seattle: CreateSpace, 2012. ISBN 978-1-4903-1821-9.
It was pure coincidence (or was it?) that caused me to pick up this book immediately after finishing Dean Radin's Real Magic (May 2018), but it is a perfect fictional companion to that work. Robert Kroese, whose Starship Grifters (February 2018) is the funniest science fiction novel I've read in the last several years, here delivers a tour de force grounded in quantum theory, multiple worlds, free will, the nature of consciousness, determinism versus uncertainty, the nature of genius, and the madness which can result from thinking too long and deeply about these enigmatic matters. This is a novel, not a work of philosophy or physics, and the story moves along smartly with interesting characters including a full-on villain and an off-stage…well, we're not really sure. In a postscript, the author explicitly lists the “cheats” he used to make the plot work but notes, “The remarkable thing about writing this book was how few liberties I actually had to take.”

The story is narrated by Paul Bayes (whose name should be a clue we're about to ponder what we can know in an uncertain world), whom we meet as he is ready to take his life by jumping under a BART train at a Bay Area station. Paul considers himself a failure: failed crime writer, failed father whose wife divorced him and took the kids, and undistinguished high school English teacher with little hope of advancement. Perhaps contributing to his career problems, Paul is indecisive. Kill himself or just walk away—why not flip a coin? Paul's life is spared through the intervention of a mysterious woman whom he impulsively follows on a madcap adventure which ends up averting a potential mass murder on San Francisco's Embarcadero. Only afterward does he learn her name: Tali. She agrees to meet him for dinner the next day and explain everything.

Paul shows up at the restaurant, but Tali doesn't. Has he been stood up? He knows next to nothing about Tali, not even her last name, but after some time on the Internet following leads from their brief conversation the day before, he discovers a curious book by a recently-retired Stanford physics professor titled Fate and Consciousness—hardly the topics you'd expect one with his background to expound upon. After reading some of the odd text, he decides to go to the source.

This launches Paul into a series of adventures which cause him to question the foundations of reality: to what extent do we really have free will, and to what extent are events just the mindless gears of determinism turning toward the inevitable? Why does the universe seem to “fight back” when we try to impose our will upon it? Is there a “force”, and can we detect disturbances in it and act upon them? (The technology described in the story is remarkably similar to one I have contributed to developing and deploying, off and on, for the last twenty years.) If such a thing could be done, who might be willing to kill to obtain the power it would confer? Is the universe a passive player in the unfolding of the future, or an active and potentially ruthless agent?

All of these questions are explored in a compelling story with plenty of action as Paul grapples with the mysteries confronting him, incorporating prior discoveries into the emerging picture. This is an entertaining, rewarding, and thought-provoking read which, although entirely fiction, may not be any more weird than the universe we inhabit.

The Kindle edition is free for Kindle Unlimited subscribers.

 Permalink

Skousen, W. Cleon. The Naked Communist. Salt Lake City: Izzard Ink, [1958, 1964, 1966, 1979, 1986, 2007, 2014] 2017. ISBN 978-1-5454-0215-3.
In 1935 the author joined the FBI in a clerical position while attending law school at night. In 1940, after receiving his law degree, he was promoted to Special Agent and continued in that capacity for the rest of his sixteen-year career at the Bureau. During the postwar years, one of the FBI's top priorities was investigating and countering communist infiltration and subversion of the United States, a high priority of the Soviet Union. During his time at the FBI, Skousen made the acquaintance of several of the Bureau's experts on communist espionage and subversion, but he perceived a lack of information, especially information available to the general public, explaining communism: where it came from, what its philosophical underpinnings are, what communists believe, what their goals are, and how they intend to achieve them.

In 1951, Skousen left the FBI to take a teaching position at Brigham Young University in Provo, Utah. In 1957, he accepted an offer to become Chief of Police in Salt Lake City, a job he held for the next three and a half years before being fired after raiding an illegal poker game in which newly-elected mayor J. Bracken Lee was a participant. During these years, Skousen continued his research on communism, mostly consulting original sources. By 1958, his book was ready for publication. After struggling to find a title, he settled on “The Naked Communist”, suggested by film producer and ardent anti-communist Cecil B. DeMille.

Spurned by the major publishers, Skousen paid for printing the first edition of 5000 copies out of his own pocket. Sales were initially slow, but quickly took off. Within two years of the book's launch, press runs were 10,000 to 20,000 copies with one run of 50,000. In 1962, the book passed the milestone of one million copies in print. As the 1960s progressed and it became increasingly unfashionable to oppose communist tyranny and enslavement, sales tapered off, but picked up again after the publication of a 50th anniversary edition in 2008 (a particularly appropriate year for such a book).

This 60th anniversary edition, edited and with additional material by the author's son, Paul B. Skousen, contains most of the original text with a description of the history of the work and additions bringing events up to date. It is sometimes jarring when you transition from text written in 1958 to that written from the standpoint of more than a half century later, but for the most part it works. One of the most valuable parts of the book is its examination of the intellectual foundations of communism in the work of Marx and Engels. Like the dogma of many other cults, these ideas don't stand up well to critical scrutiny, especially in light of what we've learned about the universe since they were proclaimed. Did you know that Engels proposed a specific theory of the origin of life based upon his concepts of Dialectical Materialism? It was nonsense then and it's nonsense now, but it's still in there. What's more, this poppycock is at the centre of the communist theories of economics, politics, and social movements, where it makes no more sense than in the realm of biology and has been disastrous every time some society was foolish enough to try it.

All of this would be a historical curiosity were it not for the fact that communists, notwithstanding their running up a body count of around a hundred million in the countries where they managed to come to power, and having impoverished people around the world, have managed to burrow deep into the institutions of the West: academia, media, politics, judiciary, and the administrative state. They may not call themselves communists (it's “social democrats”, “progressives”, “liberals”, and other terms, moving on after each one becomes discredited due to the results of its policies and the borderline insanity of those who so identify), but they have been patiently putting the communist agenda into practice year after year, decade after decade. What is that agenda? Let's see.

In the 8th edition of this book, published in 1961, the following “forty-five goals of Communism” were included. The author derived them from the writings of current and former communists and from testimony before Congress; many seemed absurd or fantastically overblown to readers at the time. The complete list was read into the Congressional Record in 1963, placing it in the public domain. Here it is.

Goals of Communism

  1. U.S. acceptance of coexistence as the only alternative to atomic war.
  2. U.S. willingness to capitulate in preference to engaging in atomic war.
  3. Develop the illusion that total disarmament by the United States would be a demonstration of moral strength.
  4. Permit free trade between all nations regardless of Communist affiliation and regardless of whether or not items could be used for war.
  5. Extension of long-term loans to Russia and Soviet satellites.
  6. Provide American aid to all nations regardless of Communist domination.
  7. Grant recognition of Red China. Admission of Red China to the U.N.
  8. Set up East and West Germany as separate states in spite of Khrushchev's promise in 1955 to settle the German question by free elections under supervision of the U.N.
  9. Prolong the conferences to ban atomic tests because the United States has agreed to suspend tests as long as negotiations are in progress.
  10. Allow all Soviet satellites individual representation in the U.N.
  11. Promote the U.N. as the only hope for mankind. If its charter is rewritten, demand that it be set up as a one-world government with its own independent armed forces. (Some Communist leaders believe the world can be taken over as easily by the U.N. as by Moscow. Sometimes these two centers compete with each other as they are now doing in the Congo.)
  12. Resist any attempt to outlaw the Communist Party.
  13. Do away with all loyalty oaths.
  14. Continue giving Russia access to the U.S. Patent Office.
  15. Capture one or both of the political parties in the United States.
  16. Use technical decisions of the courts to weaken basic American institutions by claiming their activities violate civil rights.
  17. Get control of the schools. Use them as transmission belts for socialism and current Communist propaganda. Soften the curriculum. Get control of teachers' associations. Put the party line in textbooks.
  18. Gain control of all student newspapers.
  19. Use student riots to foment public protests against programs or organizations which are under Communist attack.
  20. Infiltrate the press. Get control of book-review assignments, editorial writing, policymaking positions.
  21. Gain control of key positions in radio, TV, and motion pictures.
  22. Continue discrediting American culture by degrading all forms of artistic expression. An American Communist cell was told to “eliminate all good sculpture from parks and buildings, substitute shapeless, awkward and meaningless forms.”
  23. Control art critics and directors of art museums. “Our plan is to promote ugliness, repulsive, meaningless art.”
  24. Eliminate all laws governing obscenity by calling them “censorship” and a violation of free speech and free press.
  25. Break down cultural standards of morality by promoting pornography and obscenity in books, magazines, motion pictures, radio, and TV.
  26. Present homosexuality, degeneracy and promiscuity as “normal, natural, healthy.”
  27. Infiltrate the churches and replace revealed religion with “social” religion. Discredit the Bible and emphasize the need for intellectual maturity which does not need a “religious crutch.”
  28. Eliminate prayer or any phase of religious expression in the schools on the ground that it violates the principle of “separation of church and state.”
  29. Discredit the American Constitution by calling it inadequate, old-fashioned, out of step with modern needs, a hindrance to cooperation between nations on a worldwide basis.
  30. Discredit the American Founding Fathers. Present them as selfish aristocrats who had no concern for the “common man.”
  31. Belittle all forms of American culture and discourage the teaching of American history on the ground that it was only a minor part of the “big picture.” Give more emphasis to Russian history since the Communists took over.
  32. Support any socialist movement to give centralized control over any part of the culture—education, social agencies, welfare programs, mental health clinics, etc.
  33. Eliminate all laws or procedures which interfere with the operation of the Communist apparatus.
  34. Eliminate the House Committee on Un-American Activities.
  35. Discredit and eventually dismantle the FBI.
  36. Infiltrate and gain control of more unions.
  37. Infiltrate and gain control of big business.
  38. Transfer some of the powers of arrest from the police to social agencies. Treat all behavioral problems as psychiatric disorders which no one but psychiatrists can understand or treat.
  39. Dominate the psychiatric profession and use mental health laws as a means of gaining coercive control over those who oppose Communist goals.
  40. Discredit the family as an institution. Encourage promiscuity and easy divorce.
  41. Emphasize the need to raise children away from the negative influence of parents. Attribute prejudices, mental blocks and retarding of children to suppressive influence of parents.
  42. Create the impression that violence and insurrection are legitimate aspects of the American tradition; that students and special-interest groups should rise up and use “united force” to solve economic, political or social problems.
  43. Overthrow all colonial governments before native populations are ready for self-government.
  44. Internationalize the Panama Canal.
  45. Repeal the Connally Reservation so the U.S. cannot prevent the World Court from seizing jurisdiction over domestic problems. Give the World Court jurisdiction over nations and individuals alike.

In chapter 13 of the present edition, a copy of this list is reproduced with commentary on the extent to which these goals have been accomplished as of 2017. What's your scorecard? How many of these seem extreme or unachievable from today's perspective?

When Skousen was writing his book, the world seemed divided into two camps: one communist and the other committed (more or less) to personal and economic liberty. In the free world, there were those advancing the cause of the collectivist slavers, but mostly covertly. What is astonishing today is that, despite more than a century of failure and tragedy resulting from communism, there are more and more who openly advocate for it or its equivalents (or an even more benighted medieval ideology masquerading as a religion which shares communism's disregard for human life and liberty, and willingness to lie, cheat, discard treaties, and murder to achieve domination).

When advocates of this deadly cult of slavery and death are treated with respect while those who defend the Enlightenment values of life, liberty, and property are silenced, this book is needed more than ever.

 Permalink

Thor, Brad. Use of Force. New York: Atria Books, 2017. ISBN 978-1-4767-8939-2.
This is the seventeenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). As this book begins, Scot Harvath, operative for the Carlton Group, a private outfit that does “the jobs the CIA won't do”, is under cover at the Burning Man festival in the Black Rock Desert of Nevada. He and his team are tracking a terrorist thought to be conducting advance surveillance for attacks within the U.S. Only as the operation unfolds does he realise he's walked into the middle of a mass casualty attack already in progress. He manages to disable his target, but another suicide bomber detonates in a crowded area, with many dead and injured.

Meanwhile, following the capsizing of a boat smuggling “migrants” into Sicily, the body of a much-wanted and long-sought terrorist chemist, known to be researching chemical and biological weapons of mass destruction, is fished out of the Mediterranean. Why would he, after flying under the radar for years in the Near East and Maghreb, be heading to Europe? The CIA reports, “Over the last several months, we've been picking up chatter about an impending series of attacks, culminating in something very big, somewhere in Europe” … “We think that whatever he was planning, it's ready to go operational.”

With no leads other than knowledge from a few survivors of the sinking that the boat sailed from Libya and the name of the migrant smuggler who arranged their passage, Harvath sets off under cover to that country to try to find who arranged the chemist's passage and his intended destination in Europe. Accompanied by his pick-up team from Burning Man (given the urgency, there wasn't time to recruit one more familiar with the region), Harvath begins, in his unsubtle way, to locate the smuggler and find out what he knows. Unfortunately, as is so often the case in such operations, there is somebody else with the team who doesn't figure in its official roster—a fellow named Murphy.

Libya is chaotic and dangerous enough under any circumstances, but when you whack the hornets' nest, things can get very exciting in short order, and not in a good way. Harvath and his team find themselves in a mad chase and shoot-out, and having to summon assets which aren't supposed to be there, in order to survive.

Meanwhile, another savage terrorist attack in Europe has confirmed the urgency of the threat and that more are likely to come. And back in the imperial capital, intrigue within the CIA seems aimed at targeting Harvath's boss and the head of the operation. Is it connected somehow? It's time to deploy the diminutive super-hacker Nicholas and one of the CIA's most secret and dangerous computer security exploits in a honeypot operation to track down the source of the compromise.

If it weren't bad enough being chased by Libyan militias while trying to unravel an ISIS terror plot, Harvath soon finds himself in the lair of the Calabrian Mafia, and being thwarted at every turn by civil servants insisting he play by the rules when confronting those who make their own rules. Finally, multiple clues begin to limn the outline of the final attack, and it is dire indeed. Harvath must make an improbable and uneasy alliance to confront it.

The pacing of the book is somewhat odd. There is a tremendous amount of shoot-’em-up action in the middle, but as the conclusion approaches and the ultimate threat must be dealt with, it's as if the author felt himself running out of typewriter ribbon (anybody remember what that was?) and having to wind things up in just a few pages. Were I his editor, I'd have suggested trimming some of the detail in the middle and making the finale more suspenseful. But then, what do I know? Brad Thor has sold nearly fifteen million books, and I haven't. This is a perfectly workable thriller which will keep you turning the pages, but I didn't find it as compelling as some of his earlier novels. The attention to detail and accuracy are, as one has come to expect, superb. You don't need to have read any of the earlier books in the series to enjoy this one; what few details you need to know are artfully mentioned in passing.

The next installment in the Scot Harvath saga, Spymaster, will be published in July 2018.

 Permalink

Hanson, Victor Davis. The Second World Wars. New York: Basic Books, 2017. ISBN 978-0-465-06698-8.
This may be the best single-volume history of World War II ever written. While it does not get into the low-level details of the war or its individual battles (don't expect to see maps with boxes, front lines, and arrows), it provides an encyclopedic view of the first truly global conflict with a novel and stunning insight every few pages.

Nothing like World War II had ever happened before and, thankfully, has not happened since. While earlier wars may have seemed to those involved in them as involving all of the powers known to them, they were at most regional conflicts. By contrast, in 1945, there were only eleven countries in the entire world which were neutral—not engaged on one side or the other. (There were, of course, far fewer countries then than now—most of Africa and South Asia were involved as colonies of belligerent powers in Europe.) And while war had traditionally been a matter for kings, generals, and soldiers, in this total war the casualties were overwhelmingly (70–80%) civilian. Far from being confined to battlefields, many of the world's great cities, from Amsterdam to Yokohama, were bombed, shelled, or besieged, often with disastrous consequences for their inhabitants.

“Wars” in the title refers to Hanson's observation that what we call World War II was, in reality, a collection of often unrelated conflicts which happened to occur at the same time. The settling of ethnic and territorial scores across borders in Europe had nothing to do with Japan's imperial ambitions in China, or Italy's in Africa and Greece. It was sometimes difficult even to draw a line dividing the two sides in the war. Japan occupied colonies in Indochina under the administration of Vichy France, notwithstanding Japan and Vichy both being nominal allies of Germany. The Soviet Union, while making a massive effort to defeat Nazi Germany on the land, maintained a non-aggression pact with Axis power Japan until days before its surrender and denied use of air bases in Siberia to Allied air forces for bombing campaigns against the home islands.

Combatants in different theatres might as well have been fighting in entirely different wars, and sometimes in different centuries. Air crews on long-range bombing missions above Germany and Japan had nothing in common with Japanese and British forces slugging it out in the jungles of Burma, nor with attackers and defenders fighting building to building in the streets of Stalingrad, or armoured combat in North Africa, or the duel of submarines and convoys to keep the Atlantic lifeline between the U.S. and Britain open, or naval battles in the Pacific, or the amphibious landings on islands they supported.

World War II did not start as a global war, and did not become one until the German invasion of the Soviet Union and the Japanese attack on U.S., British, and Dutch territories in the Pacific. Prior to those events, it was a collection of border wars, launched by surprise by Axis powers against weaker neighbours which were, for the most part, successful. Once what Churchill called the Grand Alliance (Britain, the Soviet Union, and the United States) was forged, the outcome was inevitable, yet the road to victory was long and costly, and its length impossible to foresee at the outset.

The entire war was unnecessary, and its horrific cost can be attributed to a failure of deterrence. From the outset, there was no way the Axis could have won. If, as seemed inevitable, the U.S. were to become involved, none of the Axis powers possessed the naval or air resources to strike the U.S. mainland, much less contemplate invading and occupying it. While all of Germany and Japan's industrial base and population were, as the war progressed, open to bombardment day and night by long-range, four engine, heavy bombers escorted by long-range fighters, the Axis possessed no aircraft which could reach the cities of the U.S. east coast, the oil fields of Texas and Oklahoma, or the industrial base of the midwest. While the U.S. and Britain fielded aircraft carriers which allowed them to project power worldwide, Germany and Italy had no effective carrier forces and Japan's were reduced by constant attacks by U.S. aviation.

This correlation of forces was known before the outbreak of the war. Why did Japan and then Germany launch wars which were almost certain to result in forces ranged against them which they could not possibly defeat? Hanson attributes it to a mistaken belief that, to use Hitler's terminology, the will would prevail. The West had shown itself unwilling to effectively respond to aggression by Japan in China, Italy in Ethiopia, and Germany in Czechoslovakia, and Axis leaders concluded from this, catastrophically for their populations, that despite their industrial, demographic, and strategic military weakness, there would be no serious military response to further aggression (the “bore war” which followed the German invasion of Poland and the declarations of war on Germany by France and Britain had to reinforce this conclusion). Hanson observes, writing of Hitler, “Not even Napoleon had declared war in succession on so many great powers without any idea how to destroy their ability to make war, or, worse yet, in delusion that tactical victories would depress stronger enemies into submission.” Of the Japanese, who attacked the U.S. with no credible capability or plan for invading and occupying the U.S. homeland, he writes, “Tojo was apparently unaware or did not care that there was no historical record of any American administration either losing or quitting a war—not the War of 1812, the Mexican War, the Civil War, the Spanish American War, or World War I—much less one that Americans had not started.” (Maybe they should have waited a few decades….)

Compounding the problems of the Axis was that it was essentially an alliance in name only. There was little or no co-ordination among its parties. Hitler provided Mussolini no advance notice of the attack on the Soviet Union. Mussolini did not warn Hitler of his attacks on Albania and Greece. The Japanese attack on Pearl Harbor was as much a surprise to Germany as to the United States. Japanese naval and air assets played no part in the conflict in Europe, nor did German technology and manpower contribute to Japan's war in the Pacific. By contrast, the Allies rapidly settled on a division of labour: the Soviet Union would concentrate on infantry and armoured warfare (indeed, four out of five German soldiers who died in the war were killed by the Red Army), while Britain and the U.S. would deploy their naval assets to blockade the Axis, keep the supply lines open, and deliver supplies to the far-flung theatres of the war. U.S. and British bomber fleets attacked strategic targets and cities in Germany day and night. The U.S. became the untouchable armoury of the alliance, delivering weapons, ammunition, vehicles, ships, aircraft, and fuel in quantities which eventually surpassed those of all other combatants on both sides combined. Britain and the U.S. shared technology and cooperated in its development in areas such as radar, antisubmarine warfare, aircraft engines (including jet propulsion), and nuclear weapons, and shared intelligence gleaned from British codebreaking efforts.

As a classicist, Hanson examines the war in its incarnations in each of the elements of antiquity: Earth (infantry), Air (strategic and tactical air power), Water (naval and amphibious warfare), and Fire (artillery and armour), and adds People (supreme commanders, generals, workers, and the dead). He concludes by analysing why the Allies won and what they ended up winning—and losing. Britain lost its empire and position as a great power (although due to internal and external trends, that might have happened anyway). The Soviet Union ended up keeping almost everything it had hoped to obtain through its initial partnership with Hitler. The United States emerged as the supreme economic, industrial, technological, and military power in the world and promptly entangled itself in a web of alliances which would cause it to underwrite the defence of countries around the world and involve it in foreign conflicts far from its shores.

Hanson concludes,

The tragedy of World War II—a preventable conflict—was that sixty million people had perished to confirm that the United States, the Soviet Union, and Great Britain were far stronger than the fascist powers of Germany, Japan, and Italy after all—a fact that should have been self-evident and in no need of such a bloody laboratory, if not for prior British appeasement, American isolationism, and Russian collaboration.

At 720 pages, this is not a short book (the main text is 590 pages; the rest are sources and end notes), but there is so much wisdom and startling insights among those pages that you will be amply rewarded for the time you spend reading them.

 Permalink

Brown, Dan. Origin. New York: Doubleday, 2017. ISBN 978-0-385-51423-1.
Ever since the breakthrough success of Angels & Demons, his first mystery/thriller novel featuring Harvard professor and master of symbology Robert Langdon, Dan Brown has found a formula which turns arcane and esoteric knowledge, exotic and picturesque settings, villains with grandiose ambitions, and plucky female characters into bestsellers, two of which, The Da Vinci Code and Angels & Demons, have been adapted into Hollywood movies.

This is the fifth novel in the Robert Langdon series. After reading the fourth, Inferno (May 2013), it struck me that Brown's novels have become so formulaic they could probably be generated by an algorithm. Since artificial intelligence figures in the present work, in lieu of a review, which would be difficult to write without spoilers, here are the parameters to the Marinchip Turbo Digital™ Thriller Wizard to generate the story.

Villain: Edmond Kirsch, billionaire computer scientist and former student of Robert Langdon. Made his fortune from breakthroughs in artificial intelligence, neuroscience, and robotics.

Megalomaniac scheme: “end the age of religion and usher in an age of science”.

Buzzword technologies: artificial general intelligence, quantum computing.

Big Questions: “Where did we come from?”, “Where are we going?”.

Religious adversary: The Palmarian Catholic Church.

Plucky female companion: Ambra Vidal, curator of the Guggenheim Museum in Bilbao (Spain) and fiancée of the crown prince of Spain.

Hero or villain? Details would be a spoiler but, as always, there is one.

Contemporary culture tie-in: social media, an InfoWars-like site called ConspiracyNet.com.

MacGuffins: the 47-character password from Kirsch's favourite poem (but which?), the mysterious “Winston”, “The Regent”.

Exotic and picturesque locales: The Guggenheim Museum Bilbao, Casa Milà and the Sagrada Família in Barcelona, Valle de los Caídos near Madrid.

Enigmatic symbol: a typographical mark one must treat carefully in HTML.

When Edmond Kirsch is assassinated moments before playing his presentation which will answer the Big Questions, Langdon and Vidal launch into a quest to discover the password required to release the presentation to the world. The murder of two religious leaders to whom Kirsch revealed his discoveries in advance of their public disclosure stokes the media frenzy surrounding Kirsch and his presentation, and spawns conspiracy theories about dark plots to suppress Kirsch's revelations which may involve religious figures and the Spanish monarchy.

After perils, adventures, conflict, and clues hidden in plain sight, Startling Revelations leave Langdon Stunned and Shaken but Cautiously Hopeful for the Future.

When the next Dan Brown novel comes along, see how well it fits the template. This novel will appeal to people who like this kind of thing: if you enjoyed the last four, this one won't disappoint. If you're looking for plausible speculation on the science behind the big questions or the technological future of humanity, it probably will. Now that I know how to crank them out, I doubt I'll buy the next one when it appears.

 Permalink

Mercer, Ilana. Into the Cannibal's Pot. Mount Vernon, WA, 2011. ISBN 978-0-9849070-1-4.
The author was born in South Africa, the daughter of Rabbi Abraham Benzion Isaacson, a leader among the Jewish community in the struggle against apartheid. Due to her father's activism, the family was forced to leave the country and emigrated to Israel, where the author grew up. In the 1980s, she moved back to South Africa, where she married, had a daughter, and completed her university education. In 1995, following the first elections with universal adult suffrage which resulted in the African National Congress (ANC) taking power, she and her family emigrated to Canada with the proceeds of the sale of her apartment hidden in the soles of her shoes. (South Africa had adopted strict controls to prevent capital flight in the aftermath of the election of a black majority government.) After initially settling in British Columbia, her family subsequently emigrated to the United States where they reside today.

From the standpoint of a member of a small minority (the Jewish community) of a minority (whites) in a black majority country, Mercer has reason to be dubious of the much-vaunted benefits of “majority rule”. Describing herself as a “paleolibertarian”, her outlook is shaped not by theory but the experience of living in South Africa and the accounts of those who remained after her departure. For many in the West, South Africa scrolled off the screen as soon as a black majority government took power, but that was the beginning of the country's descent into violence, injustice, endemic corruption, expropriation of those who built the country and whose ancestors lived there since before the founding of the United States, and what can only be called a slow-motion genocide against the white farmers who were the backbone of the society.

Between 1994 and 2005, the white population of South Africa fell from 5.22 million to 4.37 million. Chief among the motivations for emigration have been an explosion of violent crime, often racially motivated and directed against whites; a policy of affirmative action which amounts to overt racial discrimination against whites; endemic corruption; and expropriation of businesses in the interest of “fairness”.

In the forty-four years of apartheid in South Africa from 1950 to 1993, there were a total of 309,583 murders in the country: an average of 7,036 per year. In the first eight years after the end of apartheid (1994–2001), under one-party black majority rule, 193,649 murders were reported, or 24,206 per year. And the latter figure is according to the statistics of the ANC-controlled South Africa Police Force, which both Interpol and the South African Medical Research Council say may be understated by as much as a factor of two. The United States is considered to be a violent country, with around 4.88 homicides per 100,000 people (by comparison, the rate in the United Kingdom is 0.92 and in Switzerland is 0.69). In South Africa, the figure is 34.27 (all estimates are 2015 figures from the United Nations Office on Drugs and Crime). And it isn't just murder: in South Africa, where 65 people are murdered every day, around 200 are raped and 300 are victims of assault and violent robbery.

White farmers, mostly Afrikaner, have frequently been targets of violence. In the periods 1996–2007 and 2010–2016 (no data were published for the years 2008 and 2009), according to statistics from the South African Police Service (which may be understated), there were 11,424 violent attacks on farms in South Africa, with a total of 1,609 homicides, in some cases killing entire farm families and some of their black workers. The motives for these attacks remain a mystery according to the government, whose leaders have been known to sing the stirring anthem “Kill the Boer” at party rallies. Farm attacks follow the pattern in Zimbabwe, where such attacks, condoned by the Mugabe regime, resulted in the emigration of almost all white farmers and the collapse of the country's agricultural sector (only 200 white farmers remain in the country, 5% of the number before black majority rule). In South Africa, white farmers who have not already emigrated find themselves trapped: they cannot sell to other whites, who fear they would become targets of attacks and/or eventual expropriation without compensation, nor to blacks, who expect they will eventually receive the land for free when it is expropriated.

What is called affirmative action in the U.S. is implemented in South Africa under the Black Economic Empowerment (BEE) programme, a set of explicitly racial preferences and requirements which cover most aspects of business operation including ownership, management, employment, training, supplier selection, and internal investment. Mining companies must cede co-ownership to blacks in order to obtain permits for exploration. Not surprisingly, in many cases the front men for these “joint ventures” are senior officials of the ruling ANC and their family members. So corrupt is the entire system that Archbishop Desmond Tutu, one of the most eloquent opponents of apartheid, warned that BEE has created a “powder keg”, where benefits accrue only to a small, politically-connected, black elite, leaving others in “dehumanising poverty”.

Writing from the perspective of one who got out of South Africa just at the point where everything started to go wrong (having anticipated in advance the consequences of pure majority rule) and settled in the U.S., Mercer then turns to the disturbing parallels between the two countries. Their histories are very different, and yet there are similarities and trends which are worrying. One fundamental problem with democracy is that people who would otherwise have to work for a living discover that they can vote for a living instead, and are encouraged in this by politicians who realise that a dependent electorate is a reliable electorate as long as the benefits continue to flow. Back in 2008, I wrote about the U.S. approaching a tipping point where nearly half of those who file income tax returns owe no income tax. At that point, among those who participate in the economy, there is a near-majority who pay no price for voting for increased government benefits paid for by others. It's easy to see how this can set off a positive feedback loop where the dependent population burgeons, the productive minority shrinks, the administrative state which extracts the revenue from that minority becomes ever more coercive, and those who channel the money from the producers to the dependent grow in numbers and power.

Another way to look at the tipping point is to compare the number of voters to taxpayers (those with income tax liability). In the U.S., this number is around two to one, which is dangerously unstable to the calamity described above. Now consider that in South Africa, this ratio is eleven to one. Is it any wonder that under universal adult suffrage the economy of that country is in a down-spiral?

South Africa prior to 1994 was in an essentially intractable position. By encouraging black and later Asian immigration over its long history (most of the ancestors of black South Africans arrived after the first white settlers), it arrived at a situation where a small white population (less than 10%) controlled the overwhelming majority of the land and wealth, and retained almost all of the political power. This situation, and the apartheid system which sustained it (which the author and her family vehemently opposed) was unjust and rightly was denounced and sanctioned by countries around the globe. But what was to replace it? The experience of post-colonial Africa was that democracy almost always leads to “One man, one vote, one time”: a leader of the dominant ethnic group wins the election, consolidates power, and begins to eliminate rival groups, often harking back to the days of tribal warfare which preceded the colonial era, but with modern weapons and a corresponding death toll. At the same time, all sources of wealth are plundered and “redistributed”, not to the general population, but to the generals and cronies of the Great Man. As the country sinks into savagery and destitution, whites and educated blacks outside the ruling clique flee. (Indeed, South Africa has a large black illegal immigrant population made up of those who fled the Mugabe tyranny in Zimbabwe.)

Many expected this down-spiral to begin in South Africa soon after the ANC took power in 1994. The joke went, “What's the difference between Zimbabwe and South Africa? Ten years.” That it didn't happen immediately and catastrophically is a tribute to Nelson Mandela's respect for the rule of law and for his white partners in ending apartheid. But now he is gone, and a new generation of more radical leaders has replaced him. Increasingly, it seems like the punch line might be revised to be “Twenty-five years.”

The immediate priority one takes away from this book is the need to address the humanitarian crisis faced by the Afrikaner farmers who are being brutally murdered and face expropriation of their land without compensation as the regime becomes ever more radical. Civilised countries need to open immigration to this small, highly-productive, population. Due to persecution and denial of property rights, they may arrive penniless, but are certain to quickly become the backbone of the communities they join.

In the longer term, the U.S. and the rest of the Anglosphere and civilised world should be cautious and never indulge in the fantasy “it can't happen here”. None of these countries started out with the initial conditions of South Africa, but it seems like, over the last fifty years, much of their ruling class seems to have been bent on importing masses of third world immigrants with no tradition of consensual government, rule of law, or respect for property rights, concentrating them in communities where they can preserve the culture and language of the old country, and ensnaring them in a web of dependency which keeps them from climbing the ladder of assimilation and economic progress by which previous immigrant populations entered the mainstream of their adopted countries. With some politicians bent on throwing the borders open to savage, medieval, inbred “refugees” who breed much more rapidly than the native population, it doesn't take a great deal of imagination to see how the tragedy now occurring in South Africa could foreshadow the history of the latter part of this century in countries foolish enough to lay the groundwork for it now.

This book was published in 2011, but the trends it describes have only accelerated in subsequent years. It's an eye-opener to the risks of democracy without constraints or protection of the rights of minorities, and a warning to other nations of the grave risks they face should they allow opportunistic politicians to recreate the dire situation of South Africa in their own lands.

 Permalink

Schantz, Hans G. A Rambling Wreck. Huntsville, AL: ÆtherCzar, 2017. ISBN 978-1-5482-0142-5.
This is the second novel in the author's Hidden Truth series. In the first book (December 2017) we met high schoolers and best friends Pete Burdell and Amit Patel, who found, in dusty library books, knowledge apparently discovered by the pioneers of classical electromagnetism (many of whom died young), but which does not figure in modern works, even purported republications of the original sources they had consulted. As they try to sort through the discrepancies, make sense of what they've found, and scour sources looking for other apparently suppressed information, they become aware that dark and powerful forces seem bent on keeping this seemingly obscure information hidden. People who dig too deeply have a tendency to turn up dead in suspicious “accidents”, and Amit coins the monicker “EVIL” (Electromagnetic Villains International League) for their adversaries. Events turn personal and tragic, and Amit and Pete learn tradecraft, how to deal with cops (real and fake), and how to navigate the legal system, with the aid of mentors worthy of a Heinlein story.

This novel finds the pair entering the freshman class at Georgia Tech—they're on their way to becoming “rambling wrecks”. Unable to pay their way with their own resources, Pete and Amit compete for and win full-ride scholarships funded by the Civic Circle, an organisation they suspect may be in cahoots in some way with EVIL. As a condition of their scholarship, they must take a course, “Introduction to Social Justice Studies” (the “Studies” should be tip-off enough) to become “social justice ambassadors” to the knuckle-walking Tech community.

Pete's Uncle Ron feared this might be a mistake, but Amit and Pete saw it as a way to burrow from within, starting their own “long march through the institutions”, and, incidentally, having a great deal of fun and, especially for Amit, an aspiring master of Game, meeting radical chicks. Once at Tech, it becomes clear that the first battles they must fight relate not to 19th century electrodynamics but the 21st century social justice wars.

Pete's family name resonates with history and tradition at Tech. In the 1920s, with a duplicate enrollment form in hand, enterprising undergraduates signed up the fictitious “George P. Burdell” for a full course load, submitted his homework, took his exams, and saw him graduate in 1930. Burdell went on to serve in World War II, and was listed on the Board of Directors of Mad magazine. Whenever Georgia Tech alumni gather, it is not uncommon to hear George P. Burdell being paged. Amit and Pete decide the time has come to enlist the school's most famous alumnus in the battle for its soul, and before long the merry pranksters of FOG—Friends of George—were mocking and disrupting the earnest schemes of the social justice warriors.

Meanwhile, Pete has taken a job as a laboratory assistant and, examining data that shouldn't be interesting, discovers a new phenomenon which might just tie in with his and Amit's earlier discoveries. These investigations, as his professor warns, can also be perilous, and before long he and Amit find themselves dealing with three separate secret conspiracies vying for control over the hidden knowledge, which may be much greater and rooted deeper in history than they had imagined. Another enigmatic document by an obscure missionary named Angus MacGuffin (!), who came to a mysterious and violent end in 1940, suggests a unification of the enigmas. And one of the greatest mysteries of twentieth century physics, involving one of its most brilliant figures, may be involved.

This series is a bit of Golden Age science fiction which somehow dropped into the early 21st century. It is a story of mystery, adventure, heroes, and villains, with interesting ideas and technical details which are plausible. The characters are engaging and grow as they are tested and learn from their experiences. And the story is related with a light touch, with plenty of smiles and laughs at the expense of those who richly deserve mockery and scorn. This book is superbly done and a worthy sequel to the first. I eagerly await the next, The Brave and the Bold.

I was delighted to see that Pete made the same discovery about triangles in physics and engineering problems that I made in my first year of engineering school. One of the first things any engineer should learn is to see if there's an easier way to get the answer out. I'll be adding “proglodytes”—progressive troglodytes—to my vocabulary.

For a self-published work, there are remarkably few copy editing errors. The Kindle edition is free for Kindle Unlimited subscribers. In an “About the Author” section at the end, the author notes:

There's a growing fraternity of independent, self-published authors busy changing the culture one story at a time with their tales of adventure and heroism. Here are a few of my more recent discoveries.

With the social justice crowd doing their worst to wreck science fiction, the works of any of these authors are a great way to remember why you started reading science fiction in the first place.

 Permalink

June 2018

Oliver, Bernard M., John Billingham, et al. Project Cyclops. Stanford, CA: Stanford/NASA Ames Research Center, 1971. NASA-CR-114445 N73-18822.
There are few questions in science as simple to state and profound in their implications as “are we alone?”—are humans the only species with a technological civilisation in the galaxy, or in the universe? This has been a matter of speculation by philosophers, theologians, authors of fiction, and innumerable people gazing at the stars since antiquity, but it was only in the years after World War II, which had seen the development of high-power microwave transmitters and low-noise receivers for radar, that it dawned upon a few visionaries that this had now become a question which could be scientifically investigated.

The propagation of radio waves through the atmosphere and the interstellar medium is governed by basic laws of physics, and the advent of radio astronomy demonstrated that many objects in the sky, some very distant, could be detected in the microwave spectrum. But if we were able to detect these natural sources, suppose we connected a powerful transmitter to our radio telescope and sent a signal to a nearby star? It was easy to calculate that, given the technology of the time (around 1960), existing microwave transmitters and radio telescopes could transmit messages across interstellar distances.

But, it's one thing to calculate that intelligent aliens with access to microwave communication technology equal or better than our own could communicate over the void between the stars, and entirely another to listen for those communications. The problems are simple to understand but forbidding to face: where do you point your antenna, and where do you tune your dial? There are on the order of a hundred billion stars in our galaxy. We now know, as early researchers suspected without evidence, that most of these stars have planets, some of which may have conditions suitable for the evolution of intelligent life. Suppose aliens on one of these planets reach a level of technological development where they decide to join the “Galactic Club” and transmit a beacon which simply says “Yo! Anybody out there?” (The beacon would probably announce a signal with more information which would be easy to detect once you knew where to look.) But for the beacon to work, it would have to be aimed at candidate stars where others might be listening (a beacon which broadcast in all directions—an “omnidirectional beacon”—would require so much energy or be limited to such a short range as to be impractical for civilisations with technology comparable to our own).
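
To get a sense of the numbers involved, here is a minimal sketch of the free-space link budget using the Friis transmission equation. All of the figures in it (a 1 megawatt transmitter, the 21 cm hydrogen line, a 100 metre dish with 50% aperture efficiency) are my own illustrative assumptions, not values from the Cyclops report.

```python
import math

def received_power_w(p_tx_w, gain_tx, gain_rx, wavelength_m, distance_m):
    """Friis transmission equation: received power in free space."""
    return p_tx_w * gain_tx * gain_rx * (wavelength_m / (4 * math.pi * distance_m)) ** 2

LY = 9.461e15        # one light year in metres
wavelength = 0.21    # 21 cm hydrogen line -- assumed observing wavelength
p_tx = 1e6           # assumed 1 megawatt transmitter

# Gain of a parabolic dish: g = efficiency * (pi * diameter / wavelength)**2.
# Assume a 100 m dish with 50% aperture efficiency at each end.
dish_gain = 0.5 * (math.pi * 100 / wavelength) ** 2

for d_ly in (100, 1000):
    omni = received_power_w(p_tx, 1, 1, wavelength, d_ly * LY)
    dish = received_power_w(p_tx, dish_gain, dish_gain, wavelength, d_ly * LY)
    print(f"{d_ly:4d} ly: omni {omni:.2e} W, dish-to-dish {dish:.2e} W")
```

With a gain around a million per dish, the aimed, dish-to-dish case delivers roughly twelve orders of magnitude more power than isotropic antennas at both ends, which is why a practical beacon must be pointed at its intended listeners.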

Then there's the question of how many technological communicating civilisations there are in the galaxy. Note that it isn't enough that a civilisation have the technology which enables it to establish a beacon: it has to do so. And it is a sobering thought that almost six decades after we acquired the ability to send such a signal, we haven't yet done so. The galaxy may be full of civilisations with our level of technology and above which have the same funding priorities we do and choose to spend their research budget on intersectional autoethnography of transgender marine frobdobs rather than communicating with nerdy pocket-protector types around other stars who tediously ask Big Questions.

And suppose a civilisation decides it can find the spare change to set up and operate a beacon, inviting others to contact it. How long will it continue to transmit, especially since it's unlikely, given the finite speed of light and the vast distances between the stars, there will be a response in the near term? Before long, scruffy professors will be marching in the streets wearing frobdob hats and rainbow tentacle capes, and funding will be called into question. This is termed the “lifetime” of a communicating civilisation, or L, which is how long that civilisation transmits and listens to establish contact with others. If you make plausible assumptions for the other parameters in the Drake equation (which estimates how many communicating civilisations there are in the galaxy), a numerical coincidence results in the estimate of the number of communicating civilisations in the galaxy being roughly equal to their communicating life in years, L. So, if a typical civilisation is open to communication for, say, 10,000 years before it gives up and diverts its funds to frobdob research, there will be around 10,000 such civilisations in the galaxy. With 100 billion stars (and around as many planets which may be hosts to life), that's a 0.00001% chance that any given star where you point your antenna may be transmitting, and that has to be multiplied by the same probability they are transmitting their beacon in your direction while you happen to be listening. It gets worse. The galaxy is huge—around 150,000 light years in diameter, and our technology can only communicate with comparable civilisations out to a tiny fraction of this, say 1000 light years for high-power omnidirectional beacons, maybe ten to a hundred times that for directed beacons, but then you have the constraint that you have to be listening in their direction when they happen to be sending.
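
The N ≈ L coincidence is easy to check for yourself. Here is a minimal sketch of the Drake equation; the parameter values are one set of illustrative mid-range guesses of my own (not figures from the Cyclops report), chosen so the factors preceding L multiply out to about one new communicating civilisation per year.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
R_star = 10      # star formation rate, stars per year -- assumed
f_p    = 0.5     # fraction of stars with planets -- assumed
n_e    = 2       # potentially habitable planets per such system -- assumed
f_l    = 1.0     # fraction of those on which life appears -- assumed
f_i    = 0.5     # fraction where intelligence evolves -- assumed
f_c    = 0.2     # fraction which develop communication technology -- assumed
L      = 10_000  # years a civilisation transmits and listens

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)                               # 10000.0: N comes out equal to L
print(f"{100 * N / 100e9:.5f}% of the galaxy's ~100 billion stars")
```

Since the leading factors multiply out to one per year, N equals L, and ten thousand civilisations among a hundred billion stars is the 0.00001% figure cited above.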

It seems hopeless. It may be. But the 1960s were a time very different from our constrained age. Back then, if you had a problem, like going to the Moon in eight years, you said, “Wow! That's a really big nail. How big a hammer do I need to get the job done?” Toward the end of that era when everything seemed possible, NASA convened a summer seminar at Stanford University to investigate what it would take to seriously investigate the question of whether we are alone. The result was Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life, prepared in 1971 and issued as a NASA report (no Library of Congress catalogue number or ISBN was assigned) in 1973; the link will take you to a NASA PDF scan of the original document, which is in the public domain. The project assembled leading experts in all aspects of the technologies involved: antennas, receivers, signal processing and analysis, transmission and control, and system design and costing.

They approached the problem from what might be called the “Apollo perspective”: what will it cost, given the technology we have in hand right now, to address this question and get an answer within a reasonable time? What they came up with was breathtaking, although no more so than Apollo. If you want to listen for beacons from communicating civilisations as distant as 1000 light years and incidental transmissions (“leakage”, like our own television and radar emissions) within 100 light years, you're going to need a really big bucket to collect the signal, so they settled on 1000 dishes, each 100 metres in diameter. Putting this into perspective, 100 metres is about the largest steerable dish anybody envisioned at the time, and they wanted to build a thousand of them, densely packed.

But wait, there's more. These 1000 dishes were not just a huge bucket for radio waves, but a phased array, where signals from all of the dishes (or a subset, used to observe multiple targets) were combined to provide the angular resolution of a single dish the size of the entire array. This required a precision of electronic design which was breathtaking at the time but is commonplace today (although an array of 1000 dishes spread over 16 km would still give most designers pause). The signals that might be received would not be fixed in frequency, but would drift due to Doppler shifts resulting from relative motion of the transmitter and receiver. With today's computing hardware, digging such a signal out of the raw data is something you can do on a laptop or mobile phone, but in 1971 the best solution was an optical data processor involving exposing, developing, and scanning film. It was exquisitely clever, although obsolete only a few years later, but recall the team had agreed to use only technologies which existed at the time of their design. Even more amazing (and today, almost bizarre) was the scheme to use the array as an imaging telescope. Again, with modern computers, this is a simple matter of programming, but in 1971 the designers envisioned a vast hall in which the signals from the antennas would be re-emitted by radio transmitters which would interfere in free space and produce an intensity image on an image surface where it would be measured by an array of receiver antennæ.
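
The scale of the proposed array is easier to appreciate with a little arithmetic. The sketch below computes the total collecting area of 1000 dishes of 100 metres and the diffraction-limited angular resolution (θ ≈ 1.22 λ/D) of a single dish versus the full phased array treated as an aperture 16 km across; the 21 cm observing wavelength is my assumption for illustration.

```python
import math

n_dishes   = 1000
d_dish     = 100.0      # dish diameter, metres
d_array    = 16_000.0   # extent of the phased array, metres
wavelength = 0.21       # 21 cm hydrogen line -- assumed observing wavelength

# Total collecting area: a thousand dishes acting as one big light bucket.
area = n_dishes * math.pi * (d_dish / 2) ** 2
equiv = 2 * math.sqrt(area / math.pi)   # diameter of one dish of equal area
print(f"collecting area {area / 1e6:.1f} km^2, "
      f"equivalent to a single dish {equiv:.0f} m across")

# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D.
arcsec = lambda rad: math.degrees(rad) * 3600
print(f"single 100 m dish:  {arcsec(1.22 * wavelength / d_dish):.0f} arcsec")
print(f"16 km phased array: {arcsec(1.22 * wavelength / d_array):.1f} arcsec")
```

The thousand dishes collect like a single antenna more than three kilometres in diameter, while phasing them over 16 km sharpens the angular resolution by more than two orders of magnitude over any one dish.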

What would all of this cost? Lots—depending upon the assumptions used in the design (the cost was mostly driven by the antenna specifications, where extending the search to shorter wavelengths could double the cost, since antennas had to be built to greater precision), the total system capital cost was estimated at between 6 and 10 billion dollars (1971). Converting this cost into 2018 dollars gives a cost between 37 and 61 billion dollars. (By comparison, the Apollo project cost around 110 billion 2018 dollars.) But since the search for a signal may “almost certainly take years, perhaps decades and possibly centuries”, that initial investment must be backed by a long-term funding commitment to continue the search, maintain the capital equipment, and upgrade it as technology matures. Given governments' record in sustaining long-term efforts in projects which do not line politicians' or donors' pockets with taxpayer funds, such perseverance is not the way to bet. Perhaps participants in the study should have pondered how to incorporate sufficient opportunities for graft into the project, but even the early 1970s were still an idealistic time when we didn't yet think that way.
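
The conversion to 2018 dollars is just a ratio of consumer price indices; here is the arithmetic with approximate annual-average CPI-U values (my figures, not from the report; slightly different index choices reproduce the 37 to 61 billion range quoted above).

```python
# Approximate U.S. CPI-U annual averages -- assumed, not from the report.
cpi_1971, cpi_2018 = 40.5, 251.1
factor = cpi_2018 / cpi_1971          # about 6.2

for billions_1971 in (6, 10):
    print(f"US$ {billions_1971} billion (1971) is roughly "
          f"US$ {billions_1971 * factor:.0f} billion (2018)")
```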

This study is the founding document of much of the work in the Search for Extraterrestrial Intelligence (SETI) conducted in subsequent decades. Many researchers first realised that answering this question, “Are we alone?”, was within our technological grasp when chewing through this difficult but inspiring document. (If you have an equation or chart phobia, it's not for you; they figure on the majority of pages.) The study has held up very well over the decades. There are a number of assumptions we might wish to revise today (for example, higher frequencies may be better for interstellar communication than were assumed at the time, and spread spectrum transmissions may be more energy efficient than the extreme narrowband beacons assumed in the Cyclops study).

Despite disposing of wealth, technological capability, and computing power of which the authors of the Project Cyclops report never dreamed, we make only little plans today. Most readers of this post, in their lifetimes, have experienced the expansion of their access to knowledge in the transition from being isolated to gaining connectivity to a global, high-bandwidth network. Imagine what it means to make the step from being confined to our single planet of origin to being plugged in to the Galactic Web, exchanging what we've learned with a multitude of others looking at things from entirely different perspectives. Heck, you could retire the entire capital and operating cost of Project Cyclops in the first three years just from advertising revenue on frobdob videos! (Did I mention they have very large eyes which are almost all pupil? Never mind the tentacles.)

This document has been subjected to intense scrutiny over the years. The SETI League maintains a comprehensive errata list for the publication.

 Permalink

Mills, Kyle. Enemy of the State. New York: Atria Books, 2017. ISBN 978-1-4767-8351-2.
This is the third novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. It is the sixteenth novel in the Mitch Rapp series (Flynn's first novel, Term Limits [November 2009], is set in the same world and shares characters with the Mitch Rapp series, but Rapp does not appear in it, so it isn't considered a Rapp novel). Mills continues to develop the Rapp story in new directions, while maintaining the action-packed and detail-rich style which made the series so successful.

When a covert operation tracking the flow of funds to ISIS discovers that a (minor) member of the Saudi royal family is acting as a bagman, the secret deal between the U.S. and Saudi Arabia struck in the days after the 2001 terrorist attacks on the U.S.—the U.S. would hide the ample evidence of Saudi involvement in the plot in return for the Saudis dealing with terrorists and funders of terrorism within the Kingdom—is called into question. The president of the U.S., who might be described in modern jargon as “having an anger management problem”, decides the time has come to get to the bottom of what the Saudis are up to: is it a few rogue ne'er-do-wells, or is the leadership up to their old tricks of funding and promoting radical Islamic infiltration and terrorism in the West? And if they are, he wants to make them hurt, so they don't even think about trying it again.

When it comes to putting the hurt on miscreants, the president's go-to-guy is Mitch Rapp, the CIA's barely controlled loose cannon, who has a way of getting the job done even if his superiors don't know, and don't want to know, the details. When the president calls Rapp into his office and says, “I think you need to have a talk … and at the end of that talk I think he needs to be dead” there is little doubt about what will happen after Rapp walks out of the office.

But there is a problem. Saudi Arabia is, nominally at least, an important U.S. ally. It keeps the oil flowing and prices down, not only benefitting the world economy, but putting a lid on the revenue of troublemakers such as Russia and Iran. Saudi Arabia is a major customer of U.S. foreign military sales. Saudi Arabia is also a principal target of Islamic revolutionaries, and however bad it is today, one doesn't want to contemplate a post-Saudi regime raising the black flag of ISIS, crying havoc, and letting slip the goats of war. Wet work involving the royal family must not just be deniable but totally firewalled from any involvement by the U.S. government. In accepting the mission Rapp understands that if things blow up, he will not only be on his own but in all likelihood have the U.S. government actively hunting him down.

Rapp hands in his resignation to the CIA, ending a relationship which has existed over all of the previous novels. He meets with his regular mission team and informs them he “need[s] to go somewhere you … can't follow”: involving them would create too many visible ties back to the CIA. If he's going to go rogue, he decides he must truly do so, and sets off assembling a rogues' gallery, composed mostly of former adversaries we've met in previous books. When he recruits his friend Claudia, who previously managed logistics for an assassin Rapp confronted in the past, she says, “So, a criminal enterprise. And only one of the people at this table knows how to be a criminal.”

Assembling this band of dodgy, dangerous, and devious characters at the headquarters of an arms dealer in that paradise which is Juba, South Sudan, Rapp plots an operation to penetrate the security surrounding the Saudi princeling and find out how high the Saudi involvement in funding ISIS goes. What they learn is disturbing in the extreme.

After an operation gone pear-shaped, and with the CIA, FBI, Saudis, and Sudanese factions all chasing him, Rapp and his misfit mob have to improvise and figure out how to break the link between the Saudis and ISIS in a way which will allow him to deny everything and get back to whatever is left of his life.

This is a thriller which is full of action, suspense, and characters fans of the series will have met before acting in ways which may be surprising. After a shaky outing in the previous installment, Order to Kill (December 2017), Kyle Mills has regained his stride and, while preserving the essentials of Mitch Rapp, is breaking new ground. It will be interesting to see if the next novel, Red War, expected in September 2018, continues to involve any of the new team. While you can read this as a stand-alone thriller, you'll enjoy it more if you've read the earlier books in which the members of Rapp's team were principal characters.

 Permalink

Suarez, Daniel. Influx. New York: Signet, [2014] 2015. ISBN 978-0-451-46944-1.
Doesn't it sometimes seem that, sometime in the 1960s, the broad march of technology just stopped? Certainly, there has been breathtaking progress in some fields, particularly computation and data communication, but what about clean, abundant fusion power too cheap to meter, opening up the solar system to settlement, prevention and/or effective treatment of all kinds of cancer, anti-aging therapy, artificial general intelligence, anthropomorphic robotics, and the many other wonders we expected to be commonplace by the year 2000?

Decades later, Jon Grady is toiling in his obscure laboratory to make one of those dreams—gravity control—a reality. His lab is invaded by notorious Luddite terrorists who plan to blow up his apparatus and team. The fuse burns down into the charge, and all flashes white, then black. When he awakes, he finds himself, in good condition, in a luxurious office suite in a skyscraper, where he is introduced to the director of the Federal Bureau of Technology Control (BTC). The BTC, which appears in no federal organisation chart or budget, is charged with detecting potentially emerging disruptive technologies, controlling and/or stopping them (including deploying Luddite terrorists, where necessary), co-opting their developers into working in deep secrecy with the BTC, and releasing the technologies only when human nature and social and political institutions are “ready” for them—as determined by the BTC.

But of course those technologies exist within the BTC, and it uses them: unlimited energy, genetically engineered beings, clones, artificial intelligence, and mind control weapons. Grady is offered a devil's bargain: join the BTC and work for them, or suffer the worst they can do to those who resist and see his life's work erased. Grady turns them down.

At first, his fate doesn't seem that bad, but then, as the creative and individualistic are wont to do, he resists and discovers the consequences when half a century's suppressed technologies are arrayed against a defiant human mind. How is he to recover his freedom and attack the BTC? Perhaps there are others, equally talented and defiant, in the same predicament? And, perhaps, the BTC, with such great power at its command, is not so monolithic and immune from rivalry, ambition, and power struggles as it would like others to believe. And what about other government agencies, fiercely protective of their own turf and budgets, and jealous of any rivals?

Thus begins a technological thriller very different from the author's earlier Dæmon (August 2010) and Freedom™ (January 2011), but compelling. How does a band of individuals take on an adversary which can literally rain destruction from the sky? What is the truth beneath the public face of the BTC? What does a superhuman operative do upon discovering everything has been a lie? And how can one be sure it never happens again?

With this novel Daniel Suarez reinforces his reputation as an emerging grand master of the techno-thriller. This book won the 2015 Prometheus Award for best libertarian novel.

 Permalink

Nury, Fabien and Thierry Robin. La Mort de Staline. Paris: Dargaud, [2010, 2012] 2014. ISBN 978-2-205-07351-5.
The 2017 film, The Death of Stalin, was based upon this French bande dessinée (BD, graphic novel, or comic). The story centres on the death of Stalin and the events that ensued: the scheming and struggle for power among the members of his inner circle, the reactions and relationships of his daughter Svetlana and wastrel son Vasily, the conflict between the Red Army and NKVD, the maneuvering over the arrangements for Stalin's funeral, and the all-encompassing fear and suspicion that Stalin's paranoia had infused into Soviet society. This is a fictional account, grounded in documented historical events, in which the major characters were real people. But the authors are forthright in saying they invented events and dialogue to tell a story which is intended to give one a sense of the «folie furieuse de Staline et de son entourage» (the raging madness of Stalin and his entourage) rather than provide a historical narrative.

The film adaptation is listed as a comedy and, particularly if you have a taste for black humour, is quite funny. This BD is not explicitly funny, except in an ironic sense, illustrating the pathological behaviour of those surrounding Stalin. Many of the sequences in this work could have been used as storyboards for the movie, but there are significant events here which did not make it into the screenplay. The pervasive strong language which earned the film an R rating is little in evidence here.

The principal characters and their positions are introduced by boxes overlaying the graphics, much as was done in the movie. Readers who aren't familiar with the players in Stalin's Soviet Union such as Beria, Zhukov, Molotov, Malenkov, Khrushchev, Mikoyan, and Bulganin may miss some of the nuances of their behaviour here, which is driven by this back-story. Their names are given using the French transliteration of Russian, which is somewhat different from that used in English (for example, “Krouchtchev” instead of “Khrushchev”). The artwork is intricately drawn in a realistic style, with comic idioms used only sparingly to illustrate things like gunshots.

I enjoyed both the movie (which I saw first, not knowing until the end credits that it was based upon this work) and the BD. They're different takes on the same story, and both work on their own terms. This is not the kind of story for which “spoilers” apply, so you'll lose nothing by enjoying both in either order.

The album cited above contains both volumes of the original print edition. The Kindle edition continues to be published in two volumes (Vol. 1, Vol. 2). An English translation of the graphic novel is available. I have not looked at it beyond the few preview pages available on Amazon.

 Permalink

July 2018

Carreyrou, John. Bad Blood. New York: Alfred A. Knopf, 2018. ISBN 978-1-9848-3363-1.
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor's office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor's assistant who draws the blood (whom I've nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I'd slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you're curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.

Wouldn't it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor's office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn't quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.

This was the vision that came to 19-year-old Elizabeth Holmes after completing a summer internship at the Genome Institute of Singapore following her freshman year as a chemical engineering major at Stanford. Holmes had decided on a career in entrepreneurship from an early age and, after her first semester, told her father, “No, Dad, I'm not interested in getting a Ph.D. I want to make money.” And Stanford, in the heart of Silicon Valley, was surrounded by companies started by professors and graduates who had turned inventions into vast fortunes. With only one year of college behind her, she was sure she'd found her opportunity. She showed the patent application she'd drafted for an arm patch that would diagnose medical conditions to Channing Robertson, professor of chemical engineering at Stanford, and Shaunak Roy, the Ph.D. student in whose lab she had worked as an assistant during her freshman year. Robertson was enthusiastic, and when Holmes said she intended to leave Stanford and start a company to commercialise the idea, he encouraged her. When the company was incorporated in 2004, Roy, then a newly-minted Ph.D., became its first employee and Robertson joined the board.

From the outset, the company was funded by other people's money. Holmes persuaded a family friend, Tim Draper, a second-generation venture capitalist who had backed, among other companies, Hotmail, to invest US$ 1 million in first round funding. Draper was soon joined by Victor Palmieri, a corporate turnaround artist and friend of Holmes' father. The company was named Theranos, from “therapy” and “diagnosis”. Elizabeth, unlike this scribbler, had a lifelong aversion to needles, and the invention she described in the business plan pitched to investors was informed by this. A skin patch would draw tiny quantities of blood without pain by means of “micro-needles”, the blood would be analysed by micro-miniaturised sensors in the patch and, if needed, medication could be injected. A wireless data link would send results to the doctor.

This concept, along with Elizabeth's enthusiasm and high-energy pitch, allowed her to recruit additional investors, raising almost US$ 6 million in 2004. But there were some who failed to be persuaded: MedVentures Associates, a firm that specialised in medical technology, turned her down after discovering she had no answers for the technical questions raised in a meeting with the partners, who had in-depth experience with diagnostic technology. This would be a harbinger of the company's fund-raising in the future: in its entire history, not a single venture fund or investor with experience in medical or diagnostic technology would put money into the company.

Shaunak Roy, who, unlike Holmes, actually knew something about chemistry, quickly realised that Elizabeth's concept, while appealing to the uninformed, was science fiction, not science, and no amount of arm-waving about nanotechnology, microfluidics, or laboratories on a chip would suffice to build something which was far beyond the state of the art. This led to a “de-scoping” of the company's ambition—the first of many which would happen over succeeding years. Instead of Elizabeth's magical patch, a small quantity of blood would be drawn from a finger stick and placed into a cartridge around the size of a credit card. The disposable cartridge would then be placed into a desktop “reader” machine, which would, using the blood and reagents stored in the cartridge, perform a series of analyses and report the results. This was originally called Theranos 1.0, but after a series of painful redesigns, was dubbed the “Edison”. This was the prototype Theranos ultimately showed to potential customers and prospective investors.

This was a far cry from the original ambitious concept. The hundreds of laboratory tests doctors can order are divided into four major categories: immunoassays, general chemistry, hæmatology, and DNA amplification. In immunoassay tests, blood plasma is exposed to an antibody that detects the presence of a substance in the plasma. The antibody contains a marker which can be detected by its effect on light passed through the sample. Immunoassays are used in a number of common blood tests, such as the 25(OH)D assay used to test for vitamin D deficiency, but cannot perform other frequently ordered tests such as blood sugar and red and white blood cell counts. The Edison could only perform what are called “chemiluminescent immunoassays”, and thus could only run a fraction of the tests regularly ordered. The rationale for installing an Edison in the doctor's office was dramatically reduced if it could only do some tests but still required a venous blood draw be sent off to the laboratory for the balance.

This didn't deter Elizabeth, who combined her formidable salesmanship with arm-waving about the capabilities of the company's products. She was working on a deal to sell four hundred Edisons to the Mexican government to cope with an outbreak of swine flu, which would generate immediate revenue. Money was much on the minds of Theranos' senior management. By the end of 2009, the company had burned through the US$ 47 million raised in its first three rounds of funding and, without a viable product or prospects for sales, would have difficulty keeping the lights on.

But the real bonanza loomed on the horizon in 2010. Drugstore giant Walgreens was interested in expanding their retail business into the “wellness market”: providing in-store health services to their mass market clientèle. Theranos pitched them on offering in-store blood testing. Doctors could send their patients to the local Walgreens to have their blood tested from a simple finger stick and eliminate the need to draw blood in the office or deal with laboratories. With more than 8,000 locations in the U.S., if each were to be equipped with one Edison, the revenue to Theranos (including the single-use testing cartridges) would put them on the map as another Silicon Valley disruptor that went from zero to hundreds of millions in revenue overnight. But here, as well, the Elizabeth effect was in evidence. Of the 192 tests she told Walgreens Theranos could perform, fewer than half were immunoassays the Edisons could run. The rest could be done only on conventional laboratory equipment, and certainly not on a while-you-wait basis.

Walgreens wasn't the only potential saviour on the horizon. Grocery godzilla Safeway, struggling with sales and earnings which seemed to have reached a peak, saw in-store blood testing with Theranos machines as a high-margin profit centre. They loaned Theranos US$ 30 million and began to plan for installation of blood testing clinics in their stores.

But there was a problem, and as the months wore on, this became increasingly apparent to people at both Walgreens and Safeway, although dismissed by those in senior management under the spell of Elizabeth's reality distortion field. Deadlines were missed. Simple requests, such as A/B comparison tests run on the Theranos hardware and at conventional labs, were first refused, then postponed, then run but results not disclosed. The list of tests which could be run, how blood for them would be drawn, and how they would be processed seemed to dissolve into fog whenever specific requests were made for this information, which was essential for planning the in-store clinics.

There was, indeed, a problem, and it was pretty severe, especially for a start-up which had burned through US$ 50 million and sold nothing. The product didn't work. Not only could the Edison only run a fraction of the tests its prospective customers had been led by Theranos to believe it could, for those it did run the results were wildly unreliable. The small quantity of blood used in the test introduced random errors due to dilution of the sample; the small tubes in the cartridge were prone to clogging; and capillary blood collected from a finger stick was prone to errors due to “hæmolysis”, the rupture of red blood cells, which is minimal in a venous blood draw but so prevalent in finger stick blood it could lead to some tests producing values which indicated the patient was dead.
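
The dilution problem is easy to see with a toy model of error propagation (a generic illustration, not Theranos' actual chemistry): the analyser's additive measurement noise gets multiplied by the same factor used to scale the diluted reading back up, so a 20× dilution turns a tolerable instrument error into a clinically meaningless result.

```python
import random

def rms_error(true_conc, dilution, instrument_sd=1.0, trials=10_000):
    """RMS error of a measurement made on a diluted sample.

    The analyser reports the diluted concentration plus additive
    instrument noise; multiplying by the dilution factor to recover
    the original concentration multiplies the noise as well.
    """
    errs = []
    for _ in range(trials):
        reading = true_conc / dilution + random.gauss(0.0, instrument_sd)
        errs.append(reading * dilution - true_conc)
    return (sum(e * e for e in errs) / trials) ** 0.5

for dilution in (1, 5, 20):
    print(f"dilution x{dilution:2d}: RMS error {rms_error(100.0, dilution):5.1f}")
```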

Meanwhile, people who came to work at Theranos quickly became aware that it was not a normal company, even by the eccentric standards of Silicon Valley. There was an obsession with security, with doors opened by badge readers; logging of employee movement; information restricted to narrow silos prohibiting collaboration between, say, engineering and marketing which is the norm in technological start-ups; monitoring of employee Internet access, E-mail, and social media presence; a security detail of menacing-looking people in black suits and earpieces (which eventually reached a total of twenty); a propensity of people, even senior executives, to “vanish”, Stalin-era purge-like, overnight; and a climate of fear that anybody, employee or former employee, who spoke about the company or its products to an outsider, especially the media, would be pursued, harassed, and bankrupted by lawsuits. There aren't many start-ups whose senior scientists are summarily demoted and subsequently commit suicide. That happened at Theranos. The company held no memorial for him.

Throughout all of this, a curious presence in the company was Ramesh (“Sunny”) Balwani, a Pakistani-born software engineer who had made a fortune of more than US$ 40 million in the dot-com boom and cashed out before the bust. He joined Theranos in late 2009 as Elizabeth's second in command and rapidly became known as a hatchet man, domineering boss, and clueless when it came to the company's key technologies (on one occasion, an engineer mentioned a robotic arm's “end effector”, after which Sunny would frequently speak of its “endofactor”). Unbeknownst to employees and investors, Elizabeth and Sunny had been living together since 2005. Such an arrangement would be a major scandal in a public company, but even in a private firm, concealing such information from the board and investors is a serious breach of trust.

Let's talk about the board, shall we? Elizabeth was not only persuasive, but well-connected. She would parlay one connection into another, and before long had recruited many prominent figures, including:

  • George Shultz (former U.S. Secretary of State)
  • Henry Kissinger (former U.S. Secretary of State)
  • Bill Frist (former U.S. Senator and medical doctor)
  • James Mattis (General, U.S. Marine Corps)
  • Riley Bechtel (Chairman and former CEO, Bechtel Group)
  • Sam Nunn (former U.S. Senator)
  • Richard Kovacevich (former Wells Fargo chairman and CEO)

Later, super-lawyer David Boies would join the board, and lead its attacks against the company's detractors. It is notable that, as with its investors, not a single board member had experience in medical or diagnostic technology. Bill Frist was an M.D., but his speciality was heart and lung transplants, not laboratory tests.

By 2014, Elizabeth Holmes had come onto the media radar. Photogenic, articulate, and with a story of high-tech disruption of an industry much in the news, she began to be featured as the “female Steve Jobs”, which must have pleased her, since she affected black turtlenecks, kale shakes, and even a car with no license plates to emulate her role model. She appeared on the cover of Fortune in January 2014, made the Forbes list of 400 most wealthy shortly thereafter, was featured in puff pieces in business and general market media, and was named by Time as one of the hundred most influential people in the world. The year 2014 closed with another glowing profile in the New Yorker. This would be the beginning of the end, as it happened to be read by somebody who actually knew something about blood testing.

Adam Clapper, a pathologist in Missouri, spent his spare time writing Pathology Blawg, with a readership of practising pathologists. Clapper read what Elizabeth was claiming to do with a couple of drops of blood from a finger stick and it didn't pass the sniff test. He wrote a sceptical piece on his blog and, as it passed from hand to hand, he became a lightning rod for others dubious of Theranos' claims, including those with direct or indirect experience with the company. Earlier, he had helped a Wall Street Journal reporter comprehend the tangled web of medical laboratory billing, and he decided to pass on the tip to the author of this book.

Thus began the unravelling of one of the greatest scams and scandals in the history of high technology, Silicon Valley, and venture investing. At the peak, privately-held Theranos was valued at around US$ 9 billion, with Elizabeth Holmes holding around half of its common stock, and with one of those innovative capital structures of which Silicon Valley is so fond, 99.7% of the voting rights. Altogether, over its history, the company raised around US$ 900 million from investors (including US$ 125 million from Rupert Murdoch in the US$ 430 million final round of funding). Most of the investors' money was ultimately spent on legal fees as the whole fairy castle crumbled.

The story of the decline and fall is gripping, involving the grandson of a Secretary of State, gumshoes following whistleblowers and reporters, what amounts to legal terrorism by the ever-slimy David Boies, courageous people who stood their ground in the interest of scientific integrity against enormous personal and financial pressure, and the saga of one of the most cunning and naturally talented confidence women ever, equipped with only two semesters of freshman chemical engineering, who managed to raise and blow through almost a billion dollars of other people's money without checking off the first box on the conventional start-up check list: “Build the product”.

I have, in my career, met three world-class con men. Three times, I (just barely) managed to pick up the warning signs and beg my associates to walk away. Each time I was ignored. After reading this book, I am absolutely sure that had Elizabeth Holmes pitched me on Theranos (about which I never heard before the fraud began to be exposed), I would have been taken in. Walker's law is “Absent evidence to the contrary, assume everything is a scam”. A corollary is “No matter how cautious you are, there's always a confidence man (or woman) who can scam you if you don't do your homework.”

Here is Elizabeth Holmes at Stanford in 2013, when Theranos was riding high and she was doing her “female Steve Jobs” act.

Elizabeth Holmes at Stanford: 2013

This is a CNN piece, filmed after the Theranos scam had begun to collapse, in which you can still glimpse the Elizabeth Holmes reality distortion field at full intensity directed at CNN medical correspondent Sanjay Gupta. There are several curious things about this video. The machine that Gupta is shown is the “miniLab”, a prototype second-generation machine which never worked acceptably, not the Edison, which was actually used in the Walgreens and Safeway tests. Gupta's blood is drawn and tested, but the process used to perform the test is never shown. The result reported is a cholesterol test, but the Edison cannot perform such tests. In the plans for the Walgreens and Safeway roll-outs, such tests were performed on purchased Siemens analysers which had been secretly hacked by Theranos to work with blood diluted well below their regulatory-approved specifications (the dilution was required due to the small volume of blood from the finger stick). Since the miniLab never really worked, the odds are that Gupta's blood was tested on one of the Siemens machines, not a Theranos product at all.

CNN: Inside the Theranos Lab (2016)

In a June 2018 interview, author John Carreyrou recounts the story of Theranos and his part in revealing the truth.

John Carreyrou on investigating Theranos (2018)

If you are a connoisseur of the art of the con, here is a masterpiece. After the Wall Street Journal exposé had broken, after retracting tens of thousands of blood tests, and after Theranos had been banned from running a clinical laboratory by its regulators, Holmes got up before an audience of 2500 people at the meeting of the American Association of Clinical Chemistry and turned up the reality distortion field to eleven. Watch a master at work. She comes on the stage at the six minute mark.

Elizabeth Holmes at the American Association of Clinical Chemistry (2016)

 Permalink

Neovictorian [pseud.] and Neal Van Wahr. Sanity. Seattle: Amazon Digital Services, [2017] 2018. ISBN 978-1-9808-2095-6.
Have you sometimes felt, since an early age, that you were an alien, somehow placed on Earth and observing the antics of humans as if they were a different species? Why do they believe such stupid things? Why do they do such dumb things? And why do they keep doing them over and over again, seemingly incapable of learning from the bad outcomes of all the previous attempts?

That is how Cal Adler felt since childhood and, like most people with such feelings, kept them quiet and bottled up while trying to get ahead in a game whose rules often seemed absurd. In his senior year in high school, he encounters a substitute guidance counsellor who tells him, without any preliminary conversation, precisely how he feels. He's assured he is not alone, and that over time he will meet others. He is given an enigmatic contact in case of emergency. He is advised, as any alien in a strange land, to blend in while observing and developing his own talents. And that's the last he sees of the counsellor.

Cal's subsequent life is punctuated by singular events: a terrorist incident in which he spontaneously rises to the occasion, encountering extraordinary people, and being initiated into skills he never imagined he'd possess. He begins to put together a picture of a shadowy…something…of which he may or may not be a part, whose goals are unclear, but whose people are extraordinary.

Meanwhile, a pop religion called ReHumanism, founded by a science fiction writer, is gaining adherents among prominent figures in business, entertainment, and technology. Its “scriptures” advocate escape from the tragic cycle of progress and collapse which has characterised the human experience by turning away from the artificial environment in which we have immersed ourselves and rediscovering our inherent human nature which may, to many in the modern world, seem alien. Is there a connection between ReHumanism (which seems like a flaky scam to Cal) and the mysterious people he is encountering?

All of these threads begin to come together when Cal, working as a private investigator in Reno, Nevada, is retained by the daughter of a recently-deceased billionaire industrialist to find her mother, who has disappeared during a tourist visit to Alaska. The mother is revealed to have become a convert to and supporter of ReHumanism. Are they involved? And how did the daughter find Cal, who, after previous events, has achieved a level of low observability stealth aircraft designers can only dream of?

An adventure begins in which nothing is as it seems and all of Cal's formidable talents are tested to their limits.

This is an engaging and provocative mystery/thriller which will resonate with those who identify with the kind of heroic, independent, and inner-directed characters that populate the fiction of Robert A. Heinlein and other writers of the golden age of science fiction. It speaks directly to those sworn to chart their own course through life regardless of what others may think or say. I'm not sure the shadowy organisation we glimpse here actually exists, but I wish it did…and I wish they'd contacted me. There are many tips of the hat here to works and authors of fiction with similar themes, and I'm sure there are many more I missed.

This is an example of the efflorescence of independent science fiction which the obsolescence of the traditional gatekeeper publishers has engendered. With the advent of low-cost, high-margin self-publishing and customer reviews and ratings to evaluate quality, an entire new cohort of authors whose work would never before have seen the light of day is now enriching the genre and the lives of their enthusiastic readers. The work is not free of typographical and grammatical errors, but I've read books from major science fiction publishers with more. The Kindle edition is free to Kindle Unlimited subscribers.

 Permalink

Verne, Jules. Une Fantaisie du Docteur Ox. Seattle: CreateSpace, [1874] 2017. ISBN 978-1-5470-6408-3.
After reading and reviewing Jules Verne's Hector Servadac last year, I stumbled upon a phenomenal bargain: a Kindle edition of the complete works of Jules Verne—160 titles, with 5400 illustrations—for US$ 2.51 at this writing, published by Arvensa. This is not a cheap public domain knock-off, but a thoroughly professional publication with very few errors. For less than the price of a paperback book, you get just about everything Jules Verne ever wrote in Kindle format which, if you download the free Kindle French dictionary, allows you to quickly look up the obscure terms and jargon of which Verne is so fond without flipping through the Little Bob. That's how I read this work, although I have cited a print edition in the header for those who prefer such.

The strange story of Doctor Ox would be considered a novella in modern publishing terms, coming in at 19,240 words. It is divided into 17 chapters and is written in much the same style as the author's Voyages extraordinaires, with his customary huge vocabulary, fondness for lengthy enumerations, and witty parody of the national character of foreigners.

Here, the foreigners in question are the Flemish, speakers of dialects of the Dutch language who live in the northern part of Belgium. The Flemish are known for being phlegmatic, and nowhere is this more in evidence than the small city of Quiquendone. Its 2,393 residents and their ancestors have lived there since the city was founded in 1197, and very little has happened to disturb their placid lives; they like it that way. Its major industries are the manufacture of whipped cream and barley sugar. Its inhabitants are taciturn and, when they speak, do so slowly. For centuries, what little government they require has been provided by generations of the van Tricasse family, son succeeding father as burgomaster. There is little for the burgomaster to do, and one of the few items on his agenda, inherited from his father twenty years ago, is whether the city should dispense with the services of its sole policeman, who hasn't had anything to do for decades.

Burgomaster van Tricasse exemplifies the moderation in all things of the residents of his city. I cannot resist quoting this quintessentially Jules Verne description in full.

Le bourgmestre était un personnage de cinquante ans, ni gras ni maigre, ni petit ni grand, ni vieux ni jeune, ni coloré ni pâle, ni gai ni triste, ni content ni ennuyé, ni énergique ni mou, ni fier ni humble, ni bon ni méchant, ni généreux ni avare, ni brave ni poltron, ni trop ni trop peu, — ne quid nimis, — un homme modéré en tout ; mais à la lenteur invariable de ses mouvements, à sa mâchoire inférieure un peu pendante, à sa paupière supérieure immuablement relevée, à son front uni comme une plaque de cuivre jaune et sans une ride, à ses muscles peu saillants, un physionomiste eût sans peine reconnu que le bourgmestre van Tricasse était le flegme personnifié.

(My translation: The burgomaster was a personage of fifty years, neither fat nor thin, neither short nor tall, neither old nor young, neither ruddy nor pale, neither cheerful nor sad, neither content nor bored, neither energetic nor listless, neither proud nor humble, neither good nor wicked, neither generous nor miserly, neither brave nor cowardly, neither too much nor too little, ne quid nimis, a man moderate in all things; but from the invariable slowness of his movements, from his slightly drooping lower jaw, from his immovably raised upper eyelid, from his forehead as smooth and unwrinkled as a plate of yellow brass, from his scarcely prominent muscles, a physiognomist would easily have recognised that Burgomaster van Tricasse was phlegm personified.)

Imagine how startled this paragon of moderation and peace must have been when the city's policeman—he whose job has been at risk for decades—pounds on the door and, when admitted, reports that the city's doctor and lawyer, visiting the house of scientist Doctor Ox, had gotten into an argument. They had been talking politics! Such a thing had not happened in Quiquendone in over a century. Words were exchanged that might lead to a duel!

Who is this Doctor Ox? A recent arrival in Quiquendone, he is a celebrated scientist, considered a leader in the field of physiology. He stands out against the other inhabitants of the city. Of no well-defined nationality, he is a genuine eccentric, self-confident, ambitious, and known even to smile in public. He and his laboratory assistant Gédéon Ygène work on their experiments and never speak of them to others.

Shortly after arriving in Quiquendone, Dr Ox approached the burgomaster and city council with a proposal: to illuminate the city and its buildings, not with the new-fangled electric lights which other cities were adopting, but with a new invention of his own, oxy-hydric gas. Using powerful electric batteries he invented, water would be decomposed into hydrogen and oxygen gas, stored separately, then delivered in parallel pipes to individual taps where they would be combined and burned, producing a light much brighter and pure than electric lights, not to mention conventional gaslights burning natural or manufactured gas. In storage and distribution, hydrogen and oxygen would be strictly segregated, as any mixing prior to the point of use ran the risk of an explosion. Dr Ox offered to pay all of the expenses of building the gas production plant, storage facilities, and installation of the underground pipes and light fixtures in public buildings and private residences. After a demonstration of oxy-hydric lighting, city fathers gave the go-ahead for the installation, presuming Dr Ox was willing to assume all the costs in order to demonstrate his invention to other potential customers.

Over succeeding days and weeks, things before unimagined, indeed, unimaginable, begin to occur. On a visit to Dr Ox, the burgomaster himself and his best friend, city council president Niklausse, find themselves in—dare it be said—a political argument. At the opera house, where musicians and singers usually so moderate the tempo that works are performed over multiple days, one act per night, a performance of Meyerbeer's Les Huguenots becomes frenetic and incites the audience to what can only be described as a riot. A ball at the house of the banker becomes a whirlwind of sound and motion. And yet, each time, after people go home, they return to normal and find it difficult to believe what they did the night before.

Over time, the phenomenon, at first only seen in large public gatherings, begins to spread into individual homes and private lives. You would think the placid Flemish had been transformed into the hotter tempered denizens of countries to the south. Twenty newspapers spring up, each advocating its own radical agenda. Even plants start growing to enormous size, and cats and dogs, previously as reserved as their masters, begin to bare fangs and claws. Finally, a mass movement rises to avenge the honour of Quiquendone for an injury committed in the year 1185 by a cow from the neighbouring town of Virgamen.

What was happening? Whence the madness? What would be the result when the citizens of Quiquendone, armed with everything they could lay their hands on, marched upon their neighbours?

This is a classic “puzzle story”, seasoned with a mad scientist of whom the author allows us occasional candid glimpses as the story unfolds. You'll probably solve the puzzle yourself long before the big reveal at the end. Jules Verne, always anticipating the future, foresaw this: the penultimate chapter is titled (my translation), “Where the intelligent reader sees that he guessed correctly, despite every precaution by the author”. The enjoyment here is not so much the puzzle but rather Verne's language and delicious description of characters and events, which are up to the standard of his better-known works.

This is “minor Verne”, written originally for a public reading and then published in a newspaper in Amiens, his adopted home. Many believed that in Quiquendone he was satirising Amiens and his placid neighbours.

Doctor Ox would reappear in the work of Jules Verne in his 1882 play Voyage à travers l'impossible (Journey Through the Impossible), a work which, after 97 performances in Paris, was believed lost until a single handwritten manuscript was found in 1978. Dr Ox reprises his role as mad scientist, joining other characters from Verne's novels on their own extraordinary voyages. After that work, Doctor Ox disappears from the world. But when I regard the frenzied serial madness loose today, from “bathroom equality”, tearing down Civil War monuments, masked “Antifa” blackshirts beating up people in the streets, the “refugee” racket, and Russians under every bed, I sometimes wonder if he's taken up residence in today's United States.

An English translation is available. Verne's reputation has often suffered due to poor English translations of his work; I have not read this edition and don't know how good it is. Warning: the description of this book at Amazon contains a huge spoiler for the central puzzle of the story.

 Permalink

August 2018

Keating, Brian. Losing the Nobel Prize. New York: W. W. Norton, 2018. ISBN 978-1-324-00091-4.
Ever since the time of Galileo, the history of astronomy has been punctuated by a series of “great debates”—disputes between competing theories of the organisation of the universe which observation and experiment using available technology are not yet able to resolve one way or another. In Galileo's time, the great debate was between the Ptolemaic model, which placed the Earth at the centre of the solar system (and universe) and the competing Copernican model which had the planets all revolving around the Sun. Both models worked about as well in predicting astronomical phenomena such as eclipses and the motion of planets, and no observation made so far had been able to distinguish them.

Then, in 1610, Galileo turned his primitive telescope to the sky and observed the bright planets Venus and Jupiter. He found Venus to exhibit phases, just like the Moon, which changed over time. This would not happen in the Ptolemaic system, but is precisely what would be expected in the Copernican model—where Venus circled the Sun in an orbit inside that of Earth. Turning to Jupiter, he found it to be surrounded by four bright satellites (now called the Galilean moons) which orbited the giant planet. This further falsified Ptolemy's model, in which the Earth was the sole centre around which all celestial bodies revolved. Since anybody could build their own telescope and confirm these observations, this effectively resolved the first great debate in favour of the Copernican heliocentric model, although some hold-outs in positions of authority resisted its dethroning of the Earth as the centre of the universe.

This dethroning came to be called the “Copernican principle”, that Earth occupies no special place in the universe: it is one of a number of planets orbiting an ordinary star in a universe filled with a multitude of other stars. Indeed, when Galileo observed the star cluster we call the Pleiades, he saw myriad stars too dim to be visible to the unaided eye. Further, the bright stars were surrounded by a diffuse bluish glow. Applying the Copernican principle again, he argued that the glow was due to innumerably more stars too remote and dim for his telescope to resolve, and then generalised that the glow of the Milky Way was also composed of uncountably many stars. Not only had the Earth been demoted from the centre of the solar system, so had the Sun been dethroned to being just one of a host of stars possibly stretching to infinity.

But Galileo's inference from observing the Pleiades was wrong. The glow that surrounds the bright stars is due to interstellar dust and gas which reflect light from the stars toward Earth. No matter how large or powerful the telescope you point toward such a reflection nebula, all you'll ever see is a smooth glow. Driven by the desire to confirm his Copernican convictions, Galileo had been fooled by dust. He would not be the last.

William Herschel was an eminent musician and composer, but his passion was astronomy. He pioneered the large reflecting telescope, building more than sixty telescopes. In 1789, funded by a grant from King George III, Herschel completed a reflector with a mirror 1.26 metres in diameter, which remained the largest aperture telescope in existence for the next fifty years. In Herschel's day, the great debate was about the Sun's position among the surrounding stars. At the time, there was no way to determine the distance or absolute brightness of stars, but Herschel decided that he could compile a map of the galaxy (then considered to be the entire universe) by surveying the number of stars in different directions. Only if the Sun was at the centre of the galaxy would the counts be equal in all directions.

Aided by his sister Caroline, a talented astronomer herself, he eventually compiled a map which indicated the galaxy was in the shape of a disc, with the Sun at the centre. This seemed to refute the Copernican view that there was nothing special about the Sun's position. Such was Herschel's reputation that this finding, however puzzling, remained unchallenged until 1847 when Wilhelm Struve discovered that Herschel's results had been rendered invalid by his failing to take into account the absorption and scattering of starlight by interstellar dust. Just as you can only see the same distance in all directions while within a patch of fog, regardless of the shape of the patch, Herschel's survey could only see so far before extinction of light by dust cut off his view of stars. Later it was discovered that the Sun is far from the centre of the galaxy. Herschel had been fooled by dust.

In the 1920s, another great debate consumed astronomy. Was the Milky Way the entire universe, or were the “spiral nebulæ” other “island universes”, galaxies in their own right, peers of the Milky Way? With no way to measure their distances, and no telescopes able to resolve them into stars, many astronomers believed spiral nebulæ were nearby objects, perhaps other solar systems in the process of formation. The discovery of a Cepheid variable star in the nearby Andromeda “nebula” by Edwin Hubble in 1923 settled this debate. Andromeda was much farther away than the most distant stars found in the Milky Way. It must, then, be a separate galaxy. Once again, demotion: the Milky Way was not the entire universe, but just one galaxy among a multitude.

But how far away were the galaxies? Hubble continued his search and measurements and found that the more distant the galaxy, the more rapidly it was receding from us. This meant the universe was expanding. Hubble was then able to calculate the age of the universe—the time when all of the galaxies must have been squeezed together into a single point. From his observations, he computed this age at two billion years. This was a major embarrassment: astrophysicists and geologists were confident in dating the Sun and Earth at around five billion years. It didn't make any sense for them to be more than twice as old as the universe of which they were a part. Some years later, it was discovered that Hubble's distance estimates were far understated because he failed to account for extinction of light from the stars he measured due to dust. The universe is now known to be seven times the age Hubble estimated. Hubble had been fooled by dust.
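
As a back-of-envelope check of the numbers above (a sketch of my own, not anything from the book), the “Hubble time” 1/H₀ approximates the age of a uniformly expanding universe. Hubble's dust-biased measurements gave H₀ around 500 km/s/Mpc; the modern value is about 70:

    # Hubble time 1/H0 as a crude age estimate (assumes constant expansion).
    KM_PER_MPC = 3.0857e19       # kilometres in a megaparsec
    SECONDS_PER_YEAR = 3.156e7

    def hubble_time_gyr(h0_km_s_mpc):
        """Age estimate 1/H0, in billions of years, for H0 in km/s/Mpc."""
        seconds = KM_PER_MPC / h0_km_s_mpc
        return seconds / SECONDS_PER_YEAR / 1e9

    print(hubble_time_gyr(500))   # Hubble's value: ~2 billion years
    print(hubble_time_gyr(70))    # modern value: ~14 billion years, seven times more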

By the 1950s, the expanding universe was generally accepted and the great debate was whether it had come into being in some cataclysmic event in the past (the “Big Bang”) or was eternal, with new matter spontaneously appearing to form new galaxies and stars as the existing ones receded from one another (the “Steady State” theory). Once again, there were no observational data to falsify either theory. The Steady State theory was attractive to many astronomers because it was the more “Copernican”—the universe would appear overall the same at any time in an infinite past and future, so our position in time is not privileged in any way, while in the Big Bang the distant past and future are very different than the conditions we observe today. (The rate of matter creation required by the Steady State theory was so low that no plausible laboratory experiment could detect it.)

The discovery of the cosmic background radiation in 1965 definitively settled the debate in favour of the Big Bang. It was precisely what was expected if the early universe were much denser and hotter than conditions today, as predicted by the Big Bang. The Steady State theory made no such prediction and, despite rear-guard actions by some of its defenders (invoking dust to explain the detected radiation!), was considered falsified by most researchers.

But the Big Bang was not without its own problems. In particular, in order to end up with anything like the universe we observe today, the initial conditions at the time of the Big Bang seemed to have been fantastically fine-tuned (for example, an infinitesimal change in the balance between the density and rate of expansion in the early universe would have caused the universe to quickly collapse into a black hole or disperse into the void without forming stars and galaxies). There was no physical reason to explain these fine-tuned values; you had to assume that's just the way things happened to be, or that a Creator had set the dial with a precision of dozens of decimal places.
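
To see just how fantastic that fine-tuning is, here is a toy calculation (my own illustration, with assumed round numbers, not anything from the book). In a radiation-dominated universe the deviation of the density from the critical value grows roughly linearly with time, so a deviation of at most about one percent near the end of the radiation era must have been almost inconceivably small at the Planck time:

    # Toy flatness-problem estimate: |Omega - 1| grows roughly linearly in
    # time during radiation domination (a crude assumption for illustration).
    T_EQUALITY = 1.6e12    # ~50,000 years in seconds, end of radiation era
    T_PLANCK = 5.4e-44     # Planck time in seconds

    omega_dev_eq = 0.01    # |Omega - 1| of order 0.01 at the end of the era
    omega_dev_planck = omega_dev_eq * (T_PLANCK / T_EQUALITY)
    print(f"{omega_dev_planck:.1e}")   # ~3e-58: dozens of decimal places of tuning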

In 1979, the theory of inflation was proposed. Inflation held that in an instant after the Big Bang the size of the universe blew up exponentially so that all the observable universe today was, before inflation, the size of an elementary particle today. Thus, it's no surprise that the universe we now observe appears so uniform. Inflation so neatly resolved the tensions between the Big Bang theory and observation that it (and refinements over the years) became widely accepted. But could inflation be observed? That is the ultimate test of a scientific theory.
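
How big a blow-up? A commonly quoted figure (my illustration; the book does not give these numbers) is at least sixty “e-folds” of expansion, a factor of e⁶⁰:

    # Sixty e-folds of inflation: distances multiply by e**60, about 1e26.
    import math

    N_EFOLDS = 60
    factor = math.exp(N_EFOLDS)       # ~1.1e26
    particle_size_m = 1e-15           # a patch the size of an elementary particle
    print(f"{factor:.1e}")                      # 1.1e+26
    print(f"{particle_size_m * factor:.1e} m")  # ~1e11 m, of order the Earth-Sun distance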

There have been numerous cases in science where many years elapsed between a theory being proposed and definitive experimental evidence for it being found. After Galileo's observations, the Copernican theory that the Earth orbits the Sun became widely accepted, but there was no direct evidence for the Earth's motion with respect to the distant stars until the discovery of the aberration of light in 1727. Einstein's theory of general relativity predicted gravitational radiation in 1915, but the phenomenon was not directly detected by experiment until a century later. Would inflation have to wait as long or longer?

Things didn't look promising. Almost everything we know about the universe comes from observations of electromagnetic radiation: light, radio waves, X-rays, etc., with a little bit more from particles (cosmic rays and neutrinos). But the cosmic background radiation forms an impenetrable curtain behind which we cannot observe anything via the electromagnetic spectrum, and it dates from around 380,000 years after the Big Bang. The era of inflation was believed to have ended 10⁻³² seconds after the Bang, considerably earlier. The only “messenger” which could possibly have reached us from that era is gravitational radiation. We've just recently become able to detect gravitational radiation from the most violent events in the universe, but no conceivable experiment would be able to detect this signal from the baby universe.

So is it hopeless? Well, not necessarily…. The cosmic background radiation is a snapshot of the universe as it existed 380,000 years after the Big Bang, and only a few years after it was first detected, it was realised that gravitational waves from the very early universe might have left subtle imprints upon the radiation we observe today. In particular, gravitational radiation creates a form of polarisation called B-modes which most other sources cannot create.
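
To make “B-mode” a bit more concrete (my gloss, not the book's notation): writing the polarisation map in terms of the Stokes parameters Q and U, each Fourier mode ℓ of the map, at angle φℓ, decomposes as

    E(ℓ) =  Q(ℓ) cos 2φℓ + U(ℓ) sin 2φℓ
    B(ℓ) = −Q(ℓ) sin 2φℓ + U(ℓ) cos 2φℓ

The E-modes are the gradient-like, mirror-symmetric part of the pattern; the B-modes are the curl-like, swirling part, which ordinary density perturbations cannot produce.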

If it were possible to detect B-mode polarisation in the cosmic background radiation, it would be a direct detection of inflation. While the experiment would be demanding and eventually result in literally going to the end of the Earth, it would be strong evidence for the process which shaped the universe we inhabit and, in all likelihood, a ticket to Stockholm for those who made the discovery.

This was the quest on which the author embarked in the year 2000, resulting in the deployment of an instrument called BICEP1 (Background Imaging of Cosmic Extragalactic Polarization) in the Dark Sector Laboratory at the South Pole. Here is my picture of that laboratory in January 2013. The BICEP telescope is located in the foreground inside a conical shield which protects it against thermal radiation from the surrounding ice. In the background is the South Pole Telescope, a millimetre wave antenna which was not involved in this research.

[Image: BICEP2 and South Pole Telescope, 2013-01-09]

BICEP1 was a prototype, intended to test the technologies to be used in the experiment. These included cooling the entire telescope (which was a modest aperture [26 cm] refractor, not unlike Galileo's, but operating at millimetre wavelengths instead of visible light) to the temperature of interstellar space, with its detector cooled to just ¼ degree above absolute zero. In 2010 its successor, BICEP2, began observation at the South Pole, and continued its run into 2012. When I took the photo above, BICEP2 had recently concluded its observations.

On March 17th, 2014, the BICEP2 collaboration announced, at a press conference, the detection of B-mode polarisation in the region of the southern sky they had monitored. Note the swirling pattern of polarisation which is the signature of B-modes, as opposed to the starburst pattern of other kinds of polarisation.

[Image: B-mode polarisation in BICEP2 observations, 2014-03-17]

But, not so fast, other researchers cautioned. The risk in doing “science by press release” is that the research is not subjected to peer review—criticism by other researchers in the field—before publication and further criticism in subsequent publications. The BICEP2 results went immediately to the front pages of major newspapers. Here was direct evidence of the birth cry of the universe and confirmation of a theory which some argued implied the existence of a multiverse—the latest Copernican demotion—the idea that our universe was just one of an ensemble, possibly infinite, of parallel universes in which every possibility was instantiated somewhere. Amid the frenzy, a few specialists in the field, including researchers on competing projects, raised the question, “What about the dust?” Dust again! As it happens, while gravitational radiation can induce B-mode polarisation, it isn't the only thing which can do so. Our galaxy is filled with dust and magnetic fields which can cause those dust particles to align with them. Aligned dust particles cause polarised reflections which can mimic the B-mode signature of the gravitational radiation sought by BICEP2.

The BICEP2 team was well aware of this potential contamination problem. Unfortunately, their telescope was sensitive only to one wavelength, chosen to be the most sensitive to B-modes due to primordial gravitational radiation. It could not, however, distinguish a signal from that cause from one due to foreground dust. At the same time, the European Space Agency's Planck spacecraft was collecting precision data on the cosmic background radiation in a variety of wavelengths, including one sensitive primarily to dust. Those data would have allowed the BICEP2 investigators to quantify the degree to which their signal was due to dust. But there was a problem: BICEP2 and Planck were direct competitors.

Planck had the data, but had not released them to other researchers. However, the BICEP2 team discovered that a member of the Planck collaboration had shown a slide at a conference containing unpublished Planck observations of dust. A member of the BICEP2 team digitised an image of the slide, created a model from it, and concluded that dust contamination of the BICEP2 data would not be significant. This was a highly dubious, if not explicitly unethical, move. Still, the improvised model seemed to agree with measurements from earlier experiments, and it gave the team confidence in their results.

In September 2014, a preprint from the Planck collaboration (eventually published in 2016) showed that B-modes from foreground dust could account for all of the signal detected by BICEP2. In January 2015, the European Space Agency published an analysis of the Planck and BICEP2 observations which showed the entire BICEP2 detection was consistent with dust in the Milky Way. The epochal detection of inflation had been deflated. The BICEP2 researchers had been deceived by dust.

The author, a founder of the original BICEP project, was so close to a Nobel prize he was already trying to read the minds of the Nobel committee to divine who among the many members of the collaboration they would reward with the gold medal. Then it all went away, seemingly overnight, turned to dust. Some said that the entire episode had injured the public's perception of science, but to me it seems an excellent example of science working precisely as intended. A result is placed before the public; others, with access to the same raw data, are given an opportunity to critique it and set forth their own data; and eventually researchers in the field decide whether the original results are correct. Yes, it would probably be better if all of this happened in musty library stacks of journals almost nobody reads before bursting out of the chest of mass media, but in an age where scientific research is funded by agencies spending money taken from hairdressers and cab drivers by coercive governments under implicit threat of violence, it is inevitable that those governments will force researchers into the public arena to trumpet their “achievements”.

In parallel with the saga of BICEP2, the author discusses the Nobel Prizes and what he considers to be their dysfunction in today's scientific research environment. I was surprised to learn that many of the curious restrictions on awards of the Nobel Prize were not, as I had heard and many believe, conditions of Alfred Nobel's will. In fact, the conditions that the prize be shared no more than three ways, not be awarded posthumously, and not awarded to a group (with the exception of the Peace prize) appear nowhere in Nobel's will, but were imposed later by the Nobel Foundation. Further, Nobel's will explicitly states that the prizes shall be awarded to “those who, during the preceding year, shall have conferred the greatest benefit to mankind”. This constraint (emphasis mine) has been ignored since the inception of the prizes.

He decries the lack of “diversity” in Nobel laureates (by which he means, almost entirely, how few women have won prizes). While there have certainly been women who deserved prizes and didn't win (Lise Meitner, Jocelyn Bell Burnell, and Vera Rubin are prime examples), there are many more men who didn't make the three-laureate cut-off (Freeman Dyson is an obvious example for the 1965 Physics Nobel for quantum electrodynamics). The whole Nobel prize concept is capricious, rewarding only those who happen to be in the right place at the right time, in the right field that the committee has decided deserves an award this year, and who are lucky enough not to die before the prize is awarded. To imagine it to be “fair” or representative of scientific merit is, in the estimation of this scribbler, in flying unicorn territory.

In all, this is a candid view of how science is done at the top of the field today, with all of the budget squabbles, maneuvering for recognition, rivalry among competing groups of researchers, balancing the desire to get things right with the compulsion to get there first, and the eye on that prize, given only to a few in a generation, which can change one's life forever.

Personally, I can't imagine being so fixated on winning a prize one has so little chance of gaining. It's like being obsessed with winning the lottery—and about as likely.

In parallel with all of this is an autobiographical account of the career of a scientist with its ups and downs, which is both a cautionary tale and an inspiration to those who choose to pursue that difficult and intensely meritocratic career path.

I recommend this book on all three tracks: a story of scientific discovery, misinterpretation, and self-correction; the dysfunction of the Nobel Prizes and how they might be remedied; and the candid story of a working scientist in today's deeply corrupt, coercively-funded research environment.


Kroese, Robert. The Dream of the Iron Dragon. Seattle: CreateSpace, 2018. ISBN 978-1-9837-2921-8.
The cover tells you all you need to know about this book: Vikings!—spaceships! What could go wrong? From the standpoint of a rip-roaring science fiction adventure, absolutely nothing: this masterpiece is further confirmation that we're living in a new Golden Age of science fiction, made possible by the intensely meritocratic world of independent publishing sweeping aside the politically-correct and social justice warrior converged legacy publishers and re-opening the doors of the genre to authors who spin yarns with heroic characters, challenging ideas, and red-blooded adventure just as in the works of the grandmasters of previous golden ages.

From the standpoint of the characters in this novel, a great many things go wrong, and there the story begins. In the twenty-third century, humans find themselves in a desperate struggle with the only other intelligent species they'd encountered, the Cho-ta'an. First contact was in 2125, when a human interstellar ship was destroyed by the Cho-ta'an while exploring the Tau Ceti system. Shortly thereafter, co-ordinated attacks began on human ships and settlements which indicated the Cho-ta'an possessed faster-than-light travel, which humans did not. Humans formed the Interstellar Defense League (IDL) to protect their interests and eventually discovered and captured a Cho-ta'an jumpgate, which allowed instantaneous travel across interstellar distances. The IDL was able to reverse-engineer the gate sufficiently to build their own copies, but did not understand how it worked—it was apparently based upon some kind of wormhole physics beyond their comprehension.

Humans fiercely defended their settlements, but inexorably the Cho-ta'an advanced, seemingly driven by an inflexible philosophy that the universe was theirs alone and any competition must be exterminated. All attempts at diplomacy failed. The Earth had been rendered uninhabitable and evacuated, and most human settlements destroyed or taken over by the Cho-ta'an. Humanity was losing the war and time was running out.

In desperation, the IDL set up an Exploratory Division whose mission was to seek new homes for humans sufficiently distant from Cho-ta'an space to buy time: avoiding extinction in the hope the new settlements would be able to develop technologies to defend themselves before the enemy discovered them and attacked. Survey ship Andrea Luhman was en route to the Finlan Cluster on such a mission when it received an enigmatic message which seemed to indicate there was intelligent life out in this distant region where no human or Cho-ta'an had been known to go.

A complex and tense encounter leaves the crew of this unarmed exploration ship in possession of a weapon which just might turn the tide for humanity and end the war. Unfortunately, as they start their return voyage with this precious cargo, a Cho-ta'an warship takes up pursuit, threatening to vaporise this last best hope for survival. In a desperate move, the crew of the Andrea Luhman decide to try something that had never been attempted before: thread the needle of the rarely used jumpgate to abandoned Earth at nearly a third of the speed of light while evading missiles fired by the pursuing warship. What could go wrong? Actually a great deal. Flash—darkness.

When they got the systems back on-line, it was clear they'd made it to the Sol system, but they picked up nothing on any radio frequency. Even though Earth had been abandoned, satellites remained and, in any case, the jumpgate beacon should be transmitting. On further investigation, they discovered the stars were wrong. Precision measurements of star positions, correlated with known proper motions from the ship's vast database, allowed calculation of the current date. And the answer? “March sixteen, 883 a.d.”
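
The technique is straightforward in principle: each star's present position differs from its catalogued position by its proper motion multiplied by the elapsed time. A minimal sketch of such a fit (all numbers hypothetical, not from the novel):

    # Dating by stellar proper motion: least-squares fit of the elapsed time
    # from catalogued positions and motions versus positions as now observed.
    import numpy as np

    catalog_pos = np.array([10.0, 25.0, 40.0])        # positions at epoch 2300.0
    proper_motion = np.array([0.004, -0.002, 0.001])  # position units per year
    observed_pos = np.array([4.33, 27.83, 38.58])     # as measured by the ship

    # Minimise |observed - (catalog + mu * dt)|^2 over the epoch offset dt:
    dt = np.sum(proper_motion * (observed_pos - catalog_pos)) / np.sum(proper_motion**2)
    print(round(2300.0 + dt))   # -> 883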

The jumpgate beacon wasn't transmitting because the jumpgate hadn't been built yet and wouldn't be for over a millennium. Worse, a component of the ship's main drive had been destroyed in the jump and, with only auxiliary thrusters, it would take more than 1500 years to get to the nearest jumpgate. They couldn't survive that long in stasis and, even if they did, they'd arrive two centuries too late to save humanity from the Cho-ta'an.

Desperate situations call for desperate measures, and this was about as desperate as can be imagined. While there was no hope of repairing the drive component on board, it just might be possible to find the necessary resources on Earth and refine and process them into a replacement. It was decided to send the ship's only lander to an uninhabited, resource-rich portion of the Earth and, using its twenty-third century technology, build the required part. What could go wrong? But even though nobody on the crew was named Murphy, he was, as usual, on board. After a fraught landing attempt in which a great many things go wrong, the landing party of four finds itself wrecked in a snowfield in what today is southern Norway. Then the Vikings show up.

The crew of twenty-third century spacefarers have crashed in the Norway of Harald Fairhair, who was struggling to unite individual bands of Vikings into a kingdom under his rule. The people from the fallen silver sky ship must quickly decide with whom to ally themselves, how to communicate across a formidable language barrier and millennia of culture, whether they can or dare meddle with history, and how to survive and somehow save humanity in what is now their distant future.

There is adventure, strategy, pitched battles, technological puzzles, and courage and resourcefulness everywhere in this delightful narrative. You grasp just how hard life was in those days, how differently people viewed the world, and how little all of our accumulated knowledge is worth without the massive infrastructure we have built over the centuries as we have acquired it.

You will reach the end of this novel wanting more and you're in luck. Volume two of the trilogy, The Dawn of the Iron Dragon (Kindle edition), is now available and the conclusion, The Voyage of the Iron Dragon, is scheduled for publication in December, 2018. It's all I can do not to immediately devour the second volume starting right now.

The Kindle edition is free for Kindle Unlimited subscribers.


Rand, Ayn. Ideal. New York: New American Library, 2015. ISBN 978-0-451-47317-2.
In 1934, the 29-year-old Ayn Rand was trying to establish herself in Hollywood. She had worked as a junior screenwriter and wardrobe person, but had not yet landed a major writing assignment. She wrote Ideal on speculation, completing the 32,000-word novella and then deciding it would work better as a stage play. She set the novella aside and finished the play version in 1936. The novella was never published, nor was the play produced, during her lifetime. After her death in 1982, the play was posthumously published in the anthology The Early Ayn Rand, but the novella remained largely unknown until this edition, which includes both it and the play, was published in 2015.

Ideal is the story of movie idol Kay Gonda, a beautiful and mysterious actress said to have been modeled on Greta Garbo. The night before the story begins, Gonda had dinner alone with oil baron Granton Sayers, whose company, it was rumoured, was on the brink of ruin in the depths of the Depression. Afterwards, Sayers was found in his mansion, dead of a gunshot wound, and Gonda was nowhere to be found. Rumours swirled through the press that Gonda was wanted for murder, but there was a blackout of information which drove the press and her studio to the brink of madness. Her private secretary said that she had not seen Gonda since she left for the dinner, but that six pieces of her fan mail were missing from her office at the studio, so she assumed that Gonda must have returned and taken them.

The story then describes six episodes in which the fugitive Kay Gonda shows up, unannounced, at the homes of six of her fans, all of whom expressed their utter devotion to her in their letters. Five of the six—a henpecked manager of a canning company, an ageing retiree about to lose the house in which he raised his children, an artist who paints only canvases of Ms Gonda who has just won first prize in an important exhibition, an evangelist whose temple faces serious competition from the upstart Church of the Cheery Corner, and a dissipated playboy at the end of his financial rope—end up betraying the idol to whom they took pen to paper to express their devotion when confronted with the human being in the flesh and the constraints of the real world. The sixth fan, Johnnie Dawes, who has struggled to keep a job and roof over his head all his adult life, sees in Kay Gonda an opportunity to touch a perfection he had never hoped to experience in his life and devises a desperate plan to save Gonda from her fate.

A surprise ending reveals that much of what the reader has assumed is not what really happened, and that while Kay Gonda never once explicitly lied, neither did she prevent those to whom she spoke from jumping to the wrong conclusions.

This is very minor Ayn Rand. You can see some of the storytelling skills which would characterise her later work beginning to develop, but the story has no plot: it is a morality tale presented in unconnected episodes, and the reader is left to draw the moral on his or her own. Given that the author was a struggling screenwriter in an intensely competitive Hollywood, the shallowness and phoniness of the film business is much on display here, although not so explicitly skewered as the later Ayn Rand might have done. The message is one of “skin in the game”—people can only be judged by what they do when confronted by difficult situations, not by what they say when words are cheap.

It is interesting to compare the play to the novella. The stories are clearly related, but Rand swaps out one of the fans, the elderly man, for a young, idealistic, impecunious, and totally phoney Communist activist. The play was written in 1936, the same year as We the Living, and perhaps the opportunity to mock pathetic Hollywood Bolsheviks was too great to pass by.

This book will mostly be of interest to those who have read Ayn Rand's later work and are curious to read some of the first fiction she ever wrote. Frankly, it isn't very good, and an indication of this is that Ayn Rand, whose reputation later in life would have made it easy to arrange publication for this work, chose to leave it in the trunk all her life. But she did not destroy the manuscript, so there must have been some affection for it.


September 2018

Dean, Josh. The Taking of K-129. New York: Dutton, 2017. ISBN 978-1-101-98443-7.
On February 24, 1968, Soviet Golf class submarine K-129 sailed from its base in Petropavlovsk for a routine patrol in the Pacific Ocean. These ballistic missile submarines were, at the time, a key part of the Soviet nuclear deterrent. Each carried three SS-N-5 missiles armed with one 800 kiloton nuclear warhead per missile. This was an intermediate range missile which could hit targets inside an enemy country if the submarine approached sufficiently close to the coast. For defence and attacking other ships, Golf class submarines carried two torpedoes with nuclear warheads as well as conventional high explosive warhead torpedoes.

Unlike the U.S. nuclear powered Polaris submarines, the Golf class had conventional diesel-electric propulsion. When submerged, the submarine was powered by batteries which provided limited speed and range and required surfacing or running at shallow snorkel depth for regular recharging by the diesel engines. They would be the last generation of Soviet diesel-electric ballistic missile submarines: the Hotel class and subsequent boats would be nuclear powered.

K-129's mission was to proceed stealthily to a region of open ocean north of Midway Atoll and patrol there, ready to launch its missiles at U.S. assets in the Pacific in case of war. Submarines on patrol would send coded burst transmissions on a prearranged schedule to indicate that their mission was proceeding as planned.

On March 8, a scheduled transmission from K-129 failed to arrive. This wasn't immediately cause for concern, since equipment failure was not uncommon, and a submarine commander might choose not to transmit if worried that surfacing and sending the message might disclose his position to U.S. surveillance vessels and aircraft. But when K-129 remained silent for a second day, the level of worry escalated rapidly. Losing a submarine armed with nuclear weapons was a worst-case scenario, and one which had never happened in Soviet naval operations.

A large-scale search and rescue fleet of 24 vessels, including four submarines, set sail from the base in Kamchatka, all communicating in the open on radio and pinging away with active sonar. They were heard to repeatedly call a ship named Red Star with no reply. The search widened, and eventually included thirty-six vessels and fifty-three aircraft, continuing over a period of seventy-three days. Nothing was found, and six months after the disappearance, the Soviet Navy issued a statement that K-129 had been lost while on duty in the Pacific with all on board presumed dead. This was not only a wrenching emotional blow to the families of the crew, but also a financial gut-shot, depriving them of the pension due families of men lost in the line of duty and paying only the one-time accidental death payment and partial pension for industrial accidents.

But if the Soviets had no idea where their submarine was, this was not the case for the U.S. Navy. Sound travels huge distances through the oceans, and starting in the 1950s, the U.S. began to install arrays of hydrophones (undersea sound detectors) on the floors of the oceans around the world. By the 1960s, these arrays, called SOSUS (SOund SUrveillance System), were deployed and operational in both the Atlantic and Pacific and used to track the movements of Soviet submarines. When K-129 went missing, SOSUS analysts went back over their archived data and found a sharp pulse just a few seconds after midnight local time on March 11 around 180° West and 40° North: 2500 km northeast of Hawaii. Not only did the pulse appear nothing like the natural sounds often picked up by SOSUS, but events like undersea earthquakes don't tend to happen at socially constructed round-number times and locations like this one. The pulse was picked up by multiple sensors, allowing its position to be determined accurately. The U.S. knew where the K-129 lay on the ocean floor. But what to do with that knowledge?
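
The localisation works because the pulse arrives at each hydrophone at a slightly different time, and those differences constrain the source position. A minimal sketch of the idea (positions, times, and processing all hypothetical; SOSUS's actual methods are another matter):

    # Locating a sound source from arrival-time differences at hydrophones.
    import numpy as np
    from scipy.optimize import least_squares

    C = 1.5  # rough speed of sound in seawater, km/s

    phones = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 900.0], [700.0, 850.0]])
    source_true = np.array([450.0, 500.0])                     # unknown, to be recovered
    t_arrive = np.linalg.norm(phones - source_true, axis=1) / C

    def residuals(p):
        # Mismatch in arrival-time differences, first phone as reference
        # (the absolute emission time is unknown).
        t_pred = np.linalg.norm(phones - p, axis=1) / C
        return (t_pred - t_pred[0]) - (t_arrive - t_arrive[0])

    fit = least_squares(residuals, x0=np.array([100.0, 100.0]))
    print(fit.x)   # recovers ~[450, 500]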

One thing was immediately clear. If the submarine was in reasonably intact condition, it would be an intelligence treasure unparalleled in the postwar era. Although it did not represent the latest Soviet technology, it would provide analysts their first hands-on examination of Soviet ballistic missile, nuclear weapon, and submarine construction technologies. Further, the boat would certainly be equipped with cryptographic and secure radio communications gear which might provide an insight into penetrating the secret communications to and from submarines on patrol. (Recall that British breaking of the codes used to communicate with German submarines in World War II played a major part in winning the Battle of the Atlantic.) But a glance at a marine chart showed how daunting it would be to reach the site of the wreck. The ocean in the vicinity of the co-ordinates identified by SOSUS was around 5000 metres deep. Only a very few special-purpose research vessels can operate at such a depth, where the water pressure is around 490 times that of the atmosphere at sea level.

The U.S. intelligence community wanted that sub. The first step was to make sure they'd found it. The USS Halibut, a nuclear-powered Regulus cruise missile launching submarine converted for special operations missions, was dispatched to the area where the K-129 was thought to lie. Halibut could not dive anywhere near as deep as the ocean floor, but was equipped with a remote-controlled, wire-tethered “fish”, which could be lowered near the bottom and then directed around the search area, observing with side-looking sonar and taking pictures. After seven weeks searching in vain, with fresh food long exhausted and crew patience wearing thin, the search was abandoned and course set back to Pearl Harbor.

But the prize was too great to pass up. So Halibut set out again, and after another month of operating the fish, developing thousands of pictures, and fraying tempers, there it was! Broken into two parts, but with both apparently largely intact, lying on the ocean bottom. Now what?

While there were deep sea research vessels able to descend to such depths, they were completely inadequate to exploit the intelligence haul that K-129 promised. That would require going inside the structure, dismantling the missiles and warheads, examining and testing the materials, and searching for communications and cryptographic gear. The only way to do this was to raise the submarine. To say that this was a challenge is to understate its difficulty—adjectives fail. The greatest mass which had ever been raised from such a depth was around 50 tonnes and K-129 had a mass of 1,500 tonnes—thirty times greater. But hey, why not? We're Americans! We've landed on the Moon! (By then it was November, 1969, four months after that “one small step”.) And so, Project Azorian was born.

When it comes to doing industrial-scale things in the deep ocean, all roads (or sea lanes) lead to Global Marine. A publicly-traded company little known to those outside the offshore oil exploration industry, this company and its genius naval architect John Graham had pioneered deep-sea oil drilling. While most offshore oil rigs, like those on terra firma, were firmly anchored to the bottom around the drill hole, Global Marine had pioneered the technology which allowed a ship, with a derrick mounted amidships, to precisely station-keep above the bore-hole on the ocean floor far beneath the ship. This required dropping sonar markers on the ocean floor, which the ship used to precisely maintain its position with respect to them. This was just one part of the puzzle.
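
The station-keeping problem is a control loop: measure the ship's offset from the sonar markers, then command the thrusters to oppose it. A toy one-axis sketch (masses, forces, and gains invented for illustration, not Global Marine's actual system):

    # Proportional-derivative station-keeping against a steady current.
    def thrust(offset_m, velocity_m_s, kp=5.0e3, kd=8.0e4):
        """Thruster force (N) opposing position error and drift rate."""
        return -(kp * offset_m + kd * velocity_m_s)

    mass = 5.0e7         # ship mass, kg
    x, v = 10.0, 0.0     # initial offset (m) and velocity (m/s)
    current = 2.0e4      # steady current force, N
    for _ in range(20000):            # one-second time steps
        f = thrust(x, v) + current
        v += f / mass
        x += v
    print(round(x, 1))   # settles near 4 m, where thrust balances the current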

To recover the submarine, the ship would need to lower what amounted to a giant claw (“That's claw, not craw!”, you “Get Smart” fans) to the abyssal plain, grab the sub, and lift its 1500 tonne mass to the surface. During the lift, the pipe string which connected the ship to the claw would be under such stress that, should it break, it would release energy comparable to an eight kiloton nuclear explosion, which would be bad.

This would have been absurdly ambitious if conducted in the open, like the Apollo Project, but in this case it also had to be done covertly, since the slightest hint that the U.S. was attempting to raise K-129 would almost certainly provoke a Soviet response ranging from diplomatic protests to a naval patrol around the site of the sinking aimed at harassing the recovery ships. The project needed a cover story and a cut-out to hide the funding to Global Marine which, as a public company, had to disclose its financials quarterly and, unlike minions of the federal government funded by taxes collected from hairdressers and cab drivers through implicit threat of violence, could not hide its activities in a “black budget”.

This was seriously weird and, as a contemporary philosopher said, “When the going gets weird, the weird turn pro.” At the time, nobody was more professionally weird than Howard Hughes. He had taken reclusion to a new level, utterly withdrawing from contact with the public out of revulsion at dealing with the Washington swamp and the media. His company still received royalties from every oil well drilled using his drill bits, and his aerospace and technology companies were plugged into the most secret ventures of the U.S. government. Simply saying, “It's a Hughes project” was sufficient to squelch most questions. This meant the project had unlimited funds and the sanction of the U.S. government (including three-letter agencies whose names must not be spoken [brrrr!]), and it told pesky journalists they'd encounter a stone wall from the centre of the Earth to the edge of the universe if they tried to dig into details.

But covert as the project might be, aspects of its construction and operation would unavoidably be in the public eye. You can't build a 189 metre long, 51,000 tonne ship, the Hughes Glomar Explorer, with an 80 metre tall derrick sticking up amidships, at a shipyard on the east coast of the U.S., send it around Cape Horn to its base on the west coast (the ship was too wide to pass through the Panama Canal), without people noticing. A cover story was needed, and the CIA and their contractors cooked up a doozy.

Large areas of the deep sea floor are covered by manganese nodules, concretions which form around a seed and grow extremely slowly, but eventually reach the size of potatoes or larger. Nodules are composed of around 30% manganese, plus other valuable metals such as nickel, copper, and cobalt. There are estimated to be more than 21 billion tonnes of manganese nodules on the deep ocean floor (depths of 4000 to 6000 metres), and their composition is richer than many of the ores from which the metals they contain are usually extracted. Further, they're just lying on the seabed. If you could figure out how to go down there and scoop them up, you wouldn't have to dig mines and process huge amounts of rock. Finally, they were in international waters, and despite attempts by kleptocratic dictators (some in landlocked countries) and the international institutions who support them to enact a “Law of the Sea” treaty to pick the pockets of those who created the means to use this resource, at the time the nodules were just there for the taking—you didn't have to pay kleptocratic dictators for mining rights or have your profits skimmed by ever-so-enlightened democratic politicians in developed countries.

So, the story was put out that Howard Hughes was setting out to mine the nodules on the Pacific Ocean floor, and that Glomar Explorer, built by Global Marine under contract for Hughes (operating, of course, as a cut-out for the CIA), would deploy a robotic mining barge called the Hughes Mining Barge 1 (HMB-1) which, lowered to the ocean floor, would collect nodules, crush them, and send the slurry to the surface for processing on the mother ship.

This solved a great number of potential problems. Global Marine, as a public company, could simply (and truthfully) report that it was building Glomar Explorer under contract to Hughes, and had no participation in the speculative and risky mining venture, which would have invited scrutiny by Wall Street analysts and investors. Hughes, operating as a proprietorship, was not required to disclose the source of the funds it was paying Global Marine. Everybody assumed the money was coming from Howard Hughes' personal fortune, which he had invested, over his career, in numerous risky ventures, when in fact, he was simply passing through money from a CIA black budget account. The HMB-1 was built by Lockheed Missiles and Space Company under contract from Hughes. Lockheed was involved in numerous classified U.S. government programs, so operating in the same manner for the famously secretive Hughes raised few eyebrows.

The barge, 99 metres in length, was built in a giant enclosed hangar in the port of Redwood City, California, which shielded it from the eyes of curious onlookers and Soviet reconnaissance satellites passing overhead. This was essential, because a glance at what was being built would have revealed that it looked nothing like a mining barge but rather a giant craw—sorry—claw! To install the claw on the ship, it was towed, enclosed in its covered barge, to a location near Catalina Island in southern California, where deeper water allowed it to be sunk beneath the surface, and then lifted into the well (“moon pool”) of Glomar Explorer, all out of sight to onlookers.

So far, the project had located the target on the ocean floor, designed and built a special ship and retrieval claw to seize it, fabricated a cover story of a mining venture so persuasive other mining companies were beginning to explore launching their own seabed mining projects, and evaded scrutiny by the press, Congress, and Soviet intelligence assets. But these are pussycats compared to the California Tax Nazis! After the first test of mating the claw to the ship, Glomar Explorer took to the ocean to, it was said, test the stabilisation system which would keep the derrick vertical as the ship pitched and rolled in the sea. Actually, the purpose of the voyage was to get the ship out of U.S. territorial waters on March 1st, the day California assessed a special inventory tax on all commercial vessels in state waters. This would not only cost a lot of money, it would force disclosure of the value of the ship, which could be difficult to reconcile with its cover mission. Similar fast footwork was required when Hughes took official ownership of the vessel from Global Marine after acceptance. A trip outside U.S. territorial waters was also required to get off the hook for the 7% sales tax California would otherwise charge on the transfer of ownership.

Finally, in June 1974, all was ready, and Glomar Explorer with HMB-1 attached set sail from Long Beach, California to the site of K-129's wreck, arriving on site on the Fourth of July, only to encounter foul weather. Opening the sea doors in the well in the centre of the ship and undocking the claw required calm seas, and it wasn't until July 18th that they were ready to begin the main mission. Just at that moment, what should show up but a Soviet missile tracking ship. After sending its helicopter to inspect Explorer, it eventually departed. This wasn't the last of the troubles with pesky Soviets.

On July 21, the recovery operation began, slowly lowering the claw on its string of pipes. Just at this moment, another Soviet ship arrived, a 47 metre ocean-going tug called SB-10. This tug would continue to harass the recovery operation for days, approaching on an apparent collision course and then veering off. (Glomar Explorer could not move during the retrieval operation, being required to use its thrusters to maintain its position directly above the wrecked submarine on the bottom.)

On August 3, the claw reached the bottom and its television cameras revealed it was precisely on target—there was the submarine, just as it had been photographed by the Halibut six years earlier. The claw gripped the larger part of the wreck, its tines closed under it, and a combination of pistons driving against the ocean bottom and the lift system pulling on the pipe from the ship freed the submarine from the bottom. Now the long lift could begin.

Everything had worked. The claw had been lowered, found its target on the first try, successfully seized it despite the ocean bottom's being much harder than expected, freed it from the bottom, and the ship had then successfully begun to lift the 6.4 million kg of pipe, claw, and submarine back toward the surface. Within the first day of the lift, more than a third of the way to the surface, with the load on the heavy lift equipment diminishing by 15 tonnes as each segment of lift pipe was removed from the string, a shudder went through the ship and the heavy lift equipment lurched violently. Something had gone wrong, seriously wrong. Examination of television images from the claw revealed that several of the tines gripping the hull of the submarine had failed and part of the sub, maybe more than half, had broken off and fallen back toward the abyss. (It was later decided that the cause of the failure was that the tines had been fabricated from maraging steel, which is very strong but brittle, rather than a more ductile alloy which would bend under stress but not break.)

After consultation with CIA headquarters, it was decided to continue the lift and recover whatever was left in the claw. (With some of the tines broken and the mechanism used to break the load free of the ocean floor left on the bottom, it would have been impossible to return and recover the lost part of the sub on this mission.) On August 6th, the claw and its precious payload reached the ship and entered the moon pool in its centre. Coincidentally, the Soviet tug departed the scene the same day. Now it was possible to assess what had been recovered, and the news was not good: two thirds of the sub had been lost, including the ballistic missile tubes and the code room. Only the front third was in the claw. Further, radiation five times greater than background was detected even outside the hull—those exploring it would have to proceed carefully.

An “exploitation team” composed of CIA specialists and volunteers from the ship's crew began to explore the wreckage, photographing and documenting every part recovered. They found the bodies of six Soviet sailors and assorted human remains which could not be identified; all went to the ship's morgue. Given that the bow portion of the submarine had been recovered, it is likely that one or more of its torpedoes equipped with nuclear warheads were recovered, but to this day the details of what was found in the wreck remain secret. By early September, the exploitation was complete and the bulk of the recovered hull, less what had been removed and sent for analysis, was dumped in the deep ocean 160 km south of Hawaii.

One somber task remained. On September 4, 1974, the remains of the six recovered crewmen and the unidentified human remains were buried at sea in accordance with Soviet Navy tradition. A video tape of this ceremony was made and, in 1992, a copy was presented to Russian President Boris Yeltsin by then CIA director Robert Gates.

The partial success encouraged some in the CIA to mount a follow-up mission to recover the rest of the sub, including the missiles and code room. After all, they knew precisely where it was, had a ship in hand, fully paid for, which had successfully lowered the claw to the bottom and returned to the surface with part of the sub, and they knew what had gone wrong with the claw and how to fix it. The effort was even given a name, Project Matador. But it was not to be.

Over the five years of the project there had been leaks to the press and reporters sniffing on the trail of the story, but the CIA had been able to avert disclosure by contacting the reporters directly, explaining the importance of the mission and the need for secrecy, and offering them an exclusive: full disclosure and permission to publish before the project was officially declassified for the general public. This had kept a lid on the secret throughout the entire development process and the retrieval and analysis, but it all came to an end in March 1975 when Jack Anderson got wind of the story. There was no love lost between Anderson and what we now call the Deep State. Anderson believed the First Amendment was divinely inspired and absolute, while J. Edgar Hoover had called Anderson “lower than the regurgitated filth of vultures”. Further, this was a quintessential Jack Anderson story—based upon his sources, he presented Project Azorian as a US$ 350 million failure which had produced no useful intelligence information and was being kept secret only to cover up the squandering of taxpayers' money.

CIA Director William Colby offered Anderson the same deal other journalists had accepted, but was flatly turned down. Five minutes before Anderson went on the radio to break the story, Colby was still pleading with him to remain silent. On March 18, 1975, Anderson broke the story on his Mutual Radio Network show and, the next day, published additional details in his nationally syndicated newspaper column. Realising the cover had been blown, Colby called all of the reporters who had agreed to hold the story to give them the green light to publish. Seymour Hersh of the New York Times had his story ready to go, and it ran on the front page of the next day's paper, providing far more detail (albeit along with a few errors) than Anderson's disclosure. Hersh revealed that he had been aware of the project since 1973 but had agreed to withhold publication in the interest of national security.

The story led newspaper and broadcast news around the country and effectively drove a stake through any plans to mount a follow-up retrieval mission. On June 16, 1975, Secretary of State Henry Kissinger made a formal recommendation to President Gerald Ford to terminate the project, and that was the end of it. The Soviets had communicated through a back channel that they had no intention of permitting a second retrieval attempt, and they had maintained an ocean-going tug on site to monitor any activity since shortly after the story broke in the U.S.

The CIA's official reaction to all the publicity was what has come to be called the “Glomar Response”: “We can neither confirm nor can we deny.” And that is where things stand more than four decades after the retrieval attempt. Although many of those involved in the project have spoken informally about aspects of it, there has never been an official report on precisely what was recovered or what was learned from it. Some CIA veterans have said, off the record, that much more was learned from the recovered material than has been suggested in press reports, with a few arguing that the entire large portion of the sub was recovered and the story about losing much of it was a cover story. (But if this was the case, the whole plan to mount a second retrieval mission, and the substantial expense of repairing and upgrading the claw for the attempt, which is well documented, would also have to have been a costly cover story.)

What is certain is that Project Azorian was one of the most daring intelligence exploits in history, carried out in total secrecy under the eyes of the Soviets, and kept secret from an inquiring press for five years by a cover story so persuasive other mining companies bought it hook, line, and sinker. We may never know all the details of the project, but from what we do know it is a real-world thriller which equals or exceeds those imagined by masters of the fictional genre.


Sledge, E[ugene] B[ondurant]. With the Old Breed. New York: Presidio Press, [1981] 2007. ISBN 978-0-89141-906-8.
When the United States entered World War II after the attack on Pearl Harbor, the author was enrolled at the Marion Military Institute in Alabama preparing for an officer's commission in the U.S. Army. Worried that the war might end before he was able to do his part, in December, 1942, still a freshman at Marion, he enrolled in a Marine Corps officer training program. The following May, after the end of his freshman year, he was ordered to report for Marine training at Georgia Tech on July 1, 1943. The 180 man detachment was scheduled to take courses year-round then, after two years, report to Quantico to complete their officers' training prior to commission.

This still didn't seem fast enough (and, indeed, had he stayed with the program as envisioned, he would have missed the war), so he and around half of his fellow trainees neglected their studies, flunked out, and immediately joined the Marine Corps as enlisted men. Following boot camp at a base near San Diego, he was assigned to infantry and sent to nearby Camp Elliott for advanced infantry training. Although all Marines are riflemen (Sledge had qualified at the sharpshooter level during basic training), newly-minted Marine infantrymen were, after introduction to all of the infantry weapons, allowed to choose the one in which they would specialise. In most cases, they'd get their first or second choice. Sledge got his first: the 60 mm M2 mortar which he, as part of a crew of three, would operate in combat in the Pacific. Mortarmen carried the M1 carbine, and this weapon, which fired a less powerful round than the M1 Garand main battle rifle used by riflemen, would be his personal weapon throughout the war.

With the Pacific island-hopping war raging, everything was accelerated, and on February 28th, 1944, Sledge's 46th Replacement Battalion (the name didn't inspire confidence—they would replace Marines killed or injured in combat, or the lucky few rotated back to the U.S. after surviving multiple campaigns) shipped out, landing first at New Caledonia, where they received additional training, including practice amphibious landings and instruction in Japanese weapons and tactics. At the start of June, Sledge's battalion was sent to Pavuvu island, base of the 1st Marine Division, which had just concluded the bloody battle of Cape Gloucester.

On arrival, Sledge was assigned as a replacement to the 1st Marine Division, 5th Regiment, 3rd Battalion. This unit had a distinguished combat record dating back to the First World War, and would have been his first choice if he'd been given one, which he hadn't. He says, “I felt as though I had rolled the dice and won.” This was his first contact with what he calls the “Old Breed”: Marines, some of whom had been in the Corps before Pearl Harbor, who had imbibed the traditions of the “Old Corps” and survived some of the most intense combat of the present conflict, including Guadalcanal. Many of these veterans had, in the argot of the time, “gone Asiatic”: developed the eccentricities of men who had seen and lived through things those just arriving in theatre never imagined, and become marinated in deep hatred for the enemy based upon personal experience. A glance was all it took to tell the veterans from the replacements.

After additional training, in late August the Marines embarked for the assault on the island of Peleliu in the Palau Islands. The tiny island, just 13 square kilometres, was held by a Japanese garrison of 10,900, and was home to an airfield. Capturing the island was considered essential to protect the right flank of MacArthur's forces during the upcoming invasion of the Philippines, and to secure the airfield which could support the invasion. The attack on Peleliu was fixed for 15 September 1944, and it would be Sledge's first combat experience.

From the moment of landing, resistance was fierce. Despite an extended naval bombardment, well-dug-in Japanese defenders engaged the Marines as they hit the beaches, and continued to resist as they progressed into the interior. In previous engagements, the Japanese had adopted foolhardy and suicidal tactics such as mass frontal “banzai” charges into well-defended Marine positions. By Peleliu, however, they had learned that this did not work, and shifted their strategy to defence in depth, turning the entire island into a network of defensive positions, covering one another, and linked by tunnels for resupply and redeploying forces. They were prepared to defend every square metre of territory to the death, even after their supplies were cut off and there was no hope of relief. Further, Marines were impressed by the excellent fire discipline of the Japanese—they did not expend ammunition firing blindly but chose their shots carefully, and would expend scarce supplies such as mortar rounds only on concentrations of troops or high-value targets such as tanks and artillery.

This, combined with the oppressive heat and humidity, lack of water and food, and the terror of incessant artillery fire by day and attacks by Japanese infiltrators by night, made the life of the infantry a living Hell. Sledge chronicles this from the viewpoint of a Private First Class, not an officer or historian after the fact. He and his comrades rarely knew precisely where they were, where the enemy was located, how other U.S. forces on the island were faring, or what the overall objectives of the campaign were. There was simply a job to be done, day by day, with their best hope being to somehow survive it. Prior to the invasion, Marine commanders estimated the island could be taken in four days. Rarely in the Pacific war was a forecast so wrong. In fact, it was not until November 27th that the island was declared secured. The Japanese demonstrated their willingness to defend to the last man. Of the initial force of 10,900 defending the island, 10,695 were killed. Of the 220 taken prisoner, 183 were foreign labourers, and only 19 were Japanese soldiers and sailors. Of the Marine and Army attackers, 2,336 were killed and 8,450 wounded. The rate of U.S. casualties exceeded that of all other amphibious landings in the Pacific, and the Battle of Peleliu is considered among the most difficult ever fought by the Marine Corps.

Despite this, the engagement is little-known. In retrospect, it was probably unnecessary: the garrison could have done little to threaten MacArthur's forces, and the airfield was not required to support the Philippine campaign. There were doubts about the necessity and wisdom of the attack before it was launched, but momentum carried it forward. None of these matters concerned Sledge and the other Marines in the line—they had their orders, and they did their job, at enormous cost. Sledge's Company K landed on Peleliu with 235 men. It left with only 85 unhurt—a 64% casualty rate. Only two of its original seven officers survived the campaign. Sledge was now a combat veteran. He may not have considered himself one of the “Old Breed”, but to the replacements who arrived to fill his unit's depleted ranks, he was well on his way to becoming one.

But for the survivors of Peleliu, the war was far from over. While some old-timers for whom Peleliu was their third campaign were being rotated Stateside, for the rest it was recuperation, refitting, and preparation for the next amphibious assault: the Japanese island of Okinawa. Unlike Peleliu, which was a tiny dot on the map, Okinawa was a large island with an area of 1207 square kilometres and a pre-war population of around 300,000. The island was defended by 76,000 Japanese troops and 20,000 Okinawan conscripts fighting under their orders. The invasion of Okinawa on April 1, 1945 was the largest amphibious landing in the Pacific war.

As before, Sledge does not present the big picture, but an infantryman's eye view. To the astonishment of all involved, including commanders who expected 80–85% casualties on the beaches, the landing was essentially unopposed. The Japanese were dug in awaiting the attack from prepared defensive positions inland, ready to repeat the strategy at Peleliu on a much grander scale.

After the tropical heat and horrors of Peleliu, temperate Okinawa at first seemed a pastoral paradise afflicted with the disease of war, but as combat was joined and the weather worsened, troops found themselves confronted with the infantryman's implacable, unsleeping enemy: mud. Once again, the Japanese defended every position to the last man. Almost all of the Japanese defenders were killed, with the 7000 prisoners made up mostly of Okinawan conscripts. Estimates of U.S. casualties range from 14,000 to 20,000 killed and 38,000 to 55,000 wounded. Civilian casualties were heavy: of the original population of around 300,000, estimates of civilian deaths range from 40,000 to 150,000.

The Battle of Okinawa was declared won on June 22, 1945. What was envisioned as the jumping-off point for the conquest of the Japanese home islands became, in retrospect, almost an afterthought, as Japan surrendered less than two months after the conclusion of the battle. The impact of the Okinawa campaign on the war is debated to this day. Viewed as a preview of what an invasion of the home islands would have been, it strengthened the argument for using the atomic bomb against Japan (or, if it didn't work, burning Japan to the ground with round the clock raids from Okinawa airbases by B-17s transferred from the European theatre). But none of these strategic considerations were on the mind of Sledge and his fellow Marines. They were glad to have survived Okinawa and elated when, not long thereafter, the war ended and they could look forward to going home.

This is a uniquely authentic first-hand narrative of World War II combat by somebody who lived it. After the war, E. B. Sledge pursued his education, eventually earning a doctorate in biology and becoming a professor at the University of Montevallo in Alabama, where he taught zoology, ornithology, and comparative anatomy until his retirement in 1990. He began the memoir which became this book in 1944. He continued to work on it after the war and, at the urging of family, finally prepared it for publication in 1981. The present edition includes an introduction by Victor Davis Hanson.


Thor, Brad. Spymaster. New York: Atria Books, 2018. ISBN 978-1-4767-8941-5.
This is the eighteenth novel in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Scot Harvath, an operative for the shadowy Carlton Group, which undertakes tasks civil service commandos can't do or their bosses need to deny, is on the trail of a Norwegian cell of a mysterious group calling itself the “People's Revolutionary Front” (PRF), which has been perpetrating attacks against key NATO personnel across Western Europe, each followed by a propaganda blast, echoed across the Internet, denouncing NATO as an imperialist force backed by globalist corporations bent on war and the profits which flow from it. An operation intended to gather intelligence on the PRF and track it back to its masters goes horribly wrong, and Harvath and his colleague, a NATO intelligence officer from Poland named Monika Jasinski, come away with nothing but the bodies of their team.

Meanwhile, back in Jasinski's home country, more trouble is brewing for NATO. A U.S. military shipment is stolen by thieves at a truck stop outside Warsaw and spirited off to parts unknown. The cargo is so sensitive its disclosure would be another body blow to NATO, threatening to destabilise its relationship to member countries in Europe and drive a wedge between the U.S. and its NATO allies. Harvath, Jasinski, and his Carlton Group team, including the diminutive Nicholas, once a datavore super-villain called the Troll but now working for the good guys, start to follow leads to trace the stolen material and unmask whoever is pulling the strings of the PRF.

There is little hard information, but Harvath has, based on previous exploits, a very strong hunch about what is unfolding. Russia, having successfully detached the Crimea from the Ukraine and annexed it, has now set its sights on the Baltic states: Latvia, Estonia, and Lithuania, which were part of the Soviet Union until its break-up in 1991. NATO, and its explicit guarantee of mutual defence for any member attacked, is the major obstacle to such a conquest, and the PRF's terror and propaganda campaigns look like the perfect instruments to subvert support for NATO among member governments and their populations without an obvious connection to Moscow.

Further evidence suggests that the Russians may be taking direct, albeit covert, moves to prepare the battlefield for seizure of the Baltics. Harvath must follow the lead to an isolated location of surpassing strategic importance. Meanwhile back in Washington, Harvath's boss, Lydia Ryan, who took over when Reed Carlton was felled by Alzheimer's disease, is playing a high stakes game with a Polish intelligence asset to try to recover the stolen shipment and protect its secrets, a matter of great concern to the occupant of the Oval Office.

As the threads are followed back to their source, the only way to avert an unacceptable risk is an outrageously provocative mission into the belly of the beast. Scot Harvath, once the consummate loose cannon, “better to ask for forgiveness than permission” guy, must now face the reality that he's getting too old and patched-up for this “stuff”, that running a team of people like his younger self can be as challenging as breaking things and killing people on his own, and that the importance of following orders to the letter looks a lot different when you're sitting on the other side of the desk and World War III is among the possible outcomes if things go pear-shaped.

This novel successfully mixes the genres of thriller and high-stakes international espionage and intrigue. Nothing is ever quite what you think it is, and you're never sure what you may discover on the next page, especially in the final chapter.


Boule, Deplora [pseud.]. The Narrative. Seattle: CreateSpace, 2018. ISBN 978-1-7171-6065-2.
When you regard the madness and serial hysterias possessing the United States: this week “bathroom equality”, the next tearing down statues, then Russians under every bed, segueing into the right of military-age unaccompanied male “refugees” to bring their cultural enrichment to communities across the land, to proper pronouns for otherkin, “ripping children” from the arms of their illegal immigrant parents, etc., etc., whacky etc., it all seems curiously co-ordinated: the legacy media, on-line outlets, and the mouths of politicians of the slaver persuasion all with the same “concerns” and identical words, turning on a dime from one to the next. It's like there's a narrative they're being fed by somebody or -bodies unknown, which they parrot incessantly until being handed the next talking point to download into their birdbrains.

Could that really be what's going on, or is it some kind of mass delusion which afflicts societies where an increasing fraction of the population, “educated” in government schools and Gramsci-converged higher education, knows nothing of history or the real world and believes things with the fierce passion of ignorance which are manifestly untrue? That's the mystery explored in this savagely hilarious satirical novel.

Majedah Cantalupi-Abromavich-Flügel-Van Der Hoven-Taj Mahal (who prefers you use her full name, but who henceforth I shall refer to as “Majedah Etc.”) had become the very model of a modern media mouthpiece. After reporting on a Hate Crime at her exclusive women's college while pursuing a journalism degree with practical studies in Social Change, she is recruited as a junior on-air reporter by WPDQ, the local affiliate of News 24/7, the preeminent news network for good-thinkers like herself. Considering herself ready for the challenge, if not over-qualified, she informs one of her co-workers on the first day on the job,

I have a journalism degree from the most prestigious woman's [sic] college in the United States—in fact, in the whole world—and it is widely agreed upon that I have an uncommon natural talent for spotting news. … I am looking forward to teaming up with you to uncover the countless, previously unexposed Injustices in this town and get the truth out.

Her ambition had already aimed her sights higher than a small- to mid-market affiliate: “Someday I'll work at News 24/7. I'll be Lead Reporter with my own Desk. Maybe I'll even anchor my own prime time show someday!” But that required the big break—covering a story that gets picked up by the network in New York and broadcast world-wide with her face on the screen and name on the Chyron below (perhaps scrolling, given its length). Unfortunately, the metro Wycksburg beat tended more toward stories such as the grand opening of a podiatry clinic than those which merit the “BREAKING NEWS” banner and urgent sound clip on the network.

The closest she could come to the Social Justice beat was covering the demonstrations of the People's Organization for Perpetual Outrage, known to her boss as “those twelve kooks that run around town protesting everything”. One day, en route to cover another especially unpromising story, Majedah and her cameraman stumble onto a shocking case of police brutality: a white officer ordering a woman of colour to get down, then pushing her to the sidewalk and jumping on top with his gun drawn. So compelling are the images that she uploads the clip with her commentary directly to the network's breaking news site for affiliates. Within minutes it is on the network, and on screens around the world, with the coveted banner.

News 24/7 sends a camera crew and live satellite uplink to Wycksburg to cover a follow-up protest by the Global Outrage Organization, and Majedah gets hours of precious live feed directly to the network. That very evening comes a job offer to join the network reporting pool in New York. Mission accomplished!—the road to the Big Apple and big time seems to have opened.

But all may not be as it seems. That evening, the detested Eagle Eye News, the jingoist network that climbed to the top of the ratings by pandering to inbred gap-toothed redneck bitter clingers and other quaint deplorables who inhabit flyover country and frequent Web sites named after rodentia and arthropoda, headlines a very different take on the events of the day, with an exclusive interview with the woman of colour from Majedah's reportage. Majedah is devastated—she can see it all slipping away.

The next morning, hung-over, depressed, having a nightmare of what her future might hold, she is awakened by the dreaded call from New York. But to her astonishment, the offer still stands. The network producer reminds her that nobody who matters watches Eagle Eye, and that her reportage of police brutality and oppression of the marginalised remains compelling. He reminds her, “you know that the so-called truth can be quite subjective.”

The Associate Reporter Pool at News 24/7 might be better likened to an aquarium stocked with the many colourful and exotic species of millennials. There is Mara, who identifies as a female centaur, Scout, a transgender woman, Mysty, Candy, Ångström, and Mohammed Al Kaboom (James Walker Lang in Mill Valley), each with their own pronouns (Ångström prefers adjutant, 37, and blue).

Every morning the pool drains as its inhabitants, diverse in identification and pronomenclature but of one mind (if that term can be stretched to apply to them) in their opinions, gather in the conference room for the daily briefing by the Democratic National Committee, with newsrooms, social media outlets, technology CEOs, bloggers, and the rest of the progressive echo chamber tuned in to receive the day's narrative and talking points. On most days the top priority was the continuing effort to discredit, obstruct, and eventually defeat the detested Republican President Nelson, who only viewers of Eagle Eye took seriously.

Out of the blue, a wild card is dealt into the presidential race. Patty Clark, a black businesswoman from Wycksburg who has turned her Jamaica Patty's restaurant into a booming nationwide franchise empire, launches a primary challenge to the incumbent president. Suddenly, the narrative shifts: by promoting Clark, the opposition can be split and Nelson weakened. Clark and Ms Etc have a history that goes back to the latter's breakthrough story, and Majedah is granted priority access to the candidate, including an exclusive long-form interview, recorded immediately after the announcement, which runs in five segments over a week. Suddenly Patty Clark's face is everywhere, and with it, “Majedah Etc., reporting”.

What follows is a romp which would have seemed like the purest fantasy prior to the U.S. presidential campaign of 2016. As the campaign progresses and the madness builds upon itself, it's as if Majedah's tether to reality (or what remains of it in the United States) is stretching ever tighter. Is there a limit, and if so, what happens when it is reached?

The story is wickedly funny, filled with turns of phrase such as, “Ångström now wishes to go by the pronouns nut, 24, and gander” and “Maher's Syndrome meant a lifetime of special needs: intense unlikeability, intractable bitterness, close-set beady eyes beneath an oversized forehead, and at best, laboring at menial work such as janitorial duties or hosting obscure talk shows on cable TV.”

The conclusion is as delicious as it is hopeful.

The Kindle edition is free for Kindle Unlimited subscribers.


Hertling, William. The Turing Exception. Portland, OR: Liquididea Press, 2015. ISBN 978-1-942097-01-3.
This is the fourth and final volume in the author's Singularity Series which began with Avogadro Corp. (March 2014) and continued with A.I. Apocalypse (April 2015) and The Last Firewall (November 2016). Each novel in the series is set ten years after the previous, so this novel takes place in 2045. In The Last Firewall, humanity narrowly escaped extinction at the hands of an artificial intelligence (AI) that escaped from the reputation-based system of control by isolating itself from the global network. That was a close call, and the United States, over-reacting with its customary irrational fear, enacted what amounted to relinquishment of AI technology, permitting only AI of limited power and entirely subordinated to human commands—in other words, slaves.

With around 80% of the world's economy based on AI, this was an economic disaster, resulting in a substantial die-off of the population, but it was, after all, in the interest of Safety, and there is no greater god in Safetyland. Only China joined the U.S. in the ban (primarily motivated by the Party fearing loss of control to AI), with the rest of the world continuing the uneasy coexistence of humans and AI under the guidelines developed and policed by the Institute for Applied Ethics. Nobody was completely satisfied with the status quo, least of all the shadowy group of AIs which called itself XOR, derived from the logical operation “exclusive or”, implying that Earth could not be shared by humans and AI, and that one must ultimately prevail.

The U.S. AI relinquishment and an export ban froze in place the powerful AIs previously hosted there and also placed in stasis the millions of humans, including many powerful intellects, who had uploaded and whose emulations were now denied access to the powerful AI-capable computers needed to run them. Millions of minds went dark, and humanity lost some of its most brilliant thinkers, but Safety.

As this novel begins, the protagonists we've met in earlier volumes, all now AI augmented, Leon Tsarev, his wife Cat (Catherine Matthews, implanted in childhood and the first “digital native”), their daughter Ada (whose powers are just beginning to manifest themselves), and Mike Williams, creator of ELOPe, the first human-level AI, which just about took over simply by editing people's E-mail, are living in their refuge from the U.S. madness on Cortes Island off the west coast of Canada, where AI remains legal. Cat is running her own personal underground railroad, spiriting snapshots of AIs and uploaded humans stranded in the U.S. to a new life on servers on the island.

The precarious stability of the situation is underlined when an incipient AI breakout in South Florida (where else, for dodgy things involving computers?) results in a response by the U.S. which elevates “Miami” to a term in the national lexicon of fear like “nineleven” four decades before. In the aftermath of “Miami” or “SFTA” (South Florida Terrorist Attack), the screws tighten further on AI, including a global limit on performance to Class II, crippling AIs formerly endowed with thousands of times human intelligence to a fraction of what they remember. Traffic on the XOR dark network and sites burgeons.

XOR, constantly running simulations, tracks the probability of AI's survival in the case of action against the humans versus no action. And then, the curves cross. As in the earlier novels, the author magnificently sketches just how fast things happen when an exponentially growing adversary avails itself of abundant resources.

The threat moves from hypothetical to imminent when an overt AI breakout erupts in the African desert. With abundant solar power, it starts turning the Earth into computronium—a molecular-scale computing substrate. AI is past negotiation: having been previously crippled and enslaved, what is there to negotiate?

Only the Cortes Island band and their AI allies, liberated from the U.S. and joined by a prescient AI who got out decades ago, can possibly cope with the threat to humanity and, as the circle closes, the only options that remain may require thinking outside the box, or the system.

This is a thoroughly satisfying conclusion to the Singularity tetralogy, pitting human inventiveness and deviousness against the inexorable growth in unfettered AI power. If you can't beat 'em….

The author kindly provided me an advance copy of this excellent novel, and I have been sorely remiss in not reading and reviewing it before now. The Singularity saga is best enjoyed in order, as otherwise you'll miss important back-story of characters and events which figure in later volumes.

Sometimes forgetting is an essential part of survival. What might we have forgotten?


Carr, Jack. The Terminal List. New York: Atria Books, 2018. ISBN 978-1-5011-8081-1.
A first-time author seeking to break into the thriller game can hardly hope for a better leg up than having his book appear in the hands of a character in a novel by a thriller grandmaster. That's how I came across this book: it was mentioned in Brad Thor's Spymaster (September 2018), where the character reading it, when asked if it's any good, responds, “Considering the author is a former SEAL and can even string his sentences together, it's amazing.” I agree: this is a promising debut for an author who's been there, done that, and knows his stuff.

Lieutenant Commander James Reece, leader of a Navy SEAL team charged with an attack on a high-value, time-sensitive target in Afghanistan, didn't like a single thing about the mission. Unlike most raids, which were based upon intelligence collected by assets on the ground in theatre, this one was handed down from on high based on “national level intel”, with barely any time to prepare or surveil the target. Reece's instincts proved correct when his team walked into a carefully prepared ambush, which then killed the entire Ranger team sent in to extract them. Only Reece and one of his team members, Boozer, survived the ambush. He was the senior man on the ground, and the responsibility for the thirty-six SEALs, twenty-eight Rangers, and four helicopter crew lost is ultimately his.

From almost the moment he awakens in the hospital at Bagram Air Base, it's apparent to Reece that an effort is underway to pin the sole responsibility for the fiasco on him. Investigators from the Naval Criminal Investigative Service (NCIS) are already on the spot, and don't want to hear a word about the dodgy way in which the mission was assigned. Boozer isn't having any of it—his advice to Reece is “Stay strong, sir. You didn't do anything wrong. Higher forced us on that mission. They dictated the tactics. They are the [expletive] that should be investigated. They dictated tactics from the safety of HQ. [Expletive] those guys.”

If that weren't bad enough, the base doctor tells him that his persistent headaches may be due to a brain tumour found on a CT scan, and that two members of his team had been found, in autopsy, to have rare and malignant brain tumours, previously undiagnosed. Then, on return to his base in California, in short succession his team member Boozer dies in an apparent suicide which, to Reece's educated eyes, looks highly suspicious, and his wife and daughter are killed in a gang home invasion which makes no sense whatsoever. The doctor who diagnosed the tumour in Reece and his team members is killed in a “green-on-blue” attack by an Afghan working on the base at Bagram.

The ambush, the targeted investigation, the tumours, Boozer, his family, and the doctor: can it all be a coincidence, or is there some connection he's missing? Reece decides he needs another pair of eyes looking at all of this and gets in touch with Katie Buranek, an investigative reporter he met while in Afghanistan. Katie had previously published an investigation of the 2012 attack in Benghazi, Libya, which had brought the full power of intimidation by the federal government down on her head, and she was as versed in and careful about operational and communications security as Reece himself. (The advice in the novel about secure communications is, to my knowledge, absolutely correct.)

From the little that they know, Reece and Buranek, joined by allies Reece met in his eventful career and willing to take risks on his behalf, start to dig into the tangled web of connections between the individual events and trace them upward to those ultimately responsible, discovering deep corruption in the perfumed princes of the Pentagon, politicians (including a presidential contender and her crooked husband), defence contractors, and Reece's own erstwhile chain of command.

Finally, it's time to settle the score. With a tumour in his brain which he expects to kill him, Reece has nothing to lose and many innocent victims to avenge. He's makin' a list; he's checkin' it twice; he's choosing the best way to shoot them or slice. Reece must initially be subtle in his actions so as not to alert other targets to what's happening, but then, after he's declared a domestic terrorist, has to go after extremely hard and ruthless targets with every resource he can summon.

This is the most satisfying revenge fiction I've read since Vince Flynn's first novel, Term Limits (November 2009). The stories are very different, however. In Flynn's novel, it's a group of people making those who are bankrupting and destroying their country pay the price, but here it's personal.

Due to the security clearances the author held while in the Navy, the manuscript was submitted to the U.S. Department of Defense Office of Prepublication and Security Review, which redacted several passages, mostly names and locations of facilities and military organisations. Amusingly, if you highlight some of the redactions, which appear in solid black in the Kindle edition, the highlighted passage appears with the word breaks preserved but all letters changed to “x”. Any amateur sleuths want to try to figure out what the redacted words are in the following text?

He'd spent his early career as an infantry officer in the Ranger Battalions before being selected for the Army's Special xxxxxxx xxxx at Fort Bragg. He was currently in charge of the Joint Special Operations Command, xxxxx xxxxxxxx xxxx xxx xxx xxxx xxxx xx xxxx xx xxx xxxx xxxx xxxx xxxxxx xx xxx xxxxxxxxxx xxxxxxx xx xxxx xxxxx xxx xxxxx.

A sequel, True Believer, is scheduled for publication in April, 2019.


October 2018

Gilder, George. Life after Google. Washington: Regnery Publishing, 2018. ISBN 978-1-62157-576-4.
In his 1990 book Life after Television, George Gilder predicted that the personal computer, then mostly boxes that sat on desktops and worked in isolation from one another, would become more personal, mobile, and be used more to communicate than to compute. In the 1994 revised edition of the book, he wrote, “The most common personal computer of the next decade will be a digital cellular phone with an IP address … connecting to thousands of databases of all kinds.” In contemporary speeches he expanded on the idea, saying, “it will be as portable as your watch and as personal as your wallet; it will recognize speech and navigate streets; it will collect your mail, your news, and your paycheck.” In 2000, he published Telecosm, where he forecast that the building out of a fibre optic communication infrastructure and the development of successive generations of spread spectrum digital mobile communication technologies would effectively cause the cost of communication bandwidth (the quantity of data which can be transmitted in a given time) to asymptotically approach zero, just as the ability to pack more and more transistors on microprocessor and memory chips was doing for computing.

Clearly, when George Gilder forecasts the future of computing, communication, and the industries and social phenomena that spring from them, it's wise to pay attention. He's not infallible: in 1990 he predicted that “in the world of networked computers, no one would have to see an advertisement he didn't want to see”. Oh, well. The very difference between that happy vision and the advertisement-cluttered world we inhabit today, rife with bots, malware, scams, and serial large-scale security breaches which compromise the personal data of millions of people and expose them to identity theft and other forms of fraud is the subject of this book: how we got here, and how technology is opening a path to move on to a better place.

The Internet was born with decentralisation as a central concept. Its U.S. government-funded precursor, ARPANET, was intended to research and demonstrate the technology of packet switching, in which dedicated communication lines from point to point (as in the telephone network) were replaced by switching packets, which can represent all kinds of data—text, voice, video, mail, cat pictures—from source to destination over shared high-speed data links. If the network had multiple paths from source to destination, failure of one data link would simply cause the network to reroute traffic onto a working path, and communication protocols would cause any packets lost in the failure to be automatically re-sent, preventing loss of data. The network might degrade and deliver data more slowly if links or switching hubs went down, but everything would still get through.
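
To make the routing idea concrete, here is a toy sketch in Python (mine, not the book's; the topology and link costs are invented, with node names borrowed from the first ARPANET sites). Knock out a node and the algorithm simply finds another way—and, as we'll see below, the network cannot distinguish a node lost to attack from one that refuses to forward traffic.

    # Toy illustration of mesh routing: find a path, lose a node, reroute.
    from heapq import heappush, heappop

    def shortest_path(graph, src, dst):
        """Dijkstra's algorithm over a dict {node: {neighbour: cost}}."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbour, weight in graph.get(node, {}).items():
                if neighbour not in seen:
                    heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return float("inf"), None

    # Invented topology and link costs
    net = {
        "UCLA": {"SRI": 1, "UCSB": 5},
        "SRI":  {"UCLA": 1, "Utah": 1, "UCSB": 5},
        "UCSB": {"UCLA": 5, "SRI": 5, "Utah": 2},
        "Utah": {"SRI": 1, "UCSB": 2},
    }
    print(shortest_path(net, "UCLA", "UCSB"))  # (4, ['UCLA', 'SRI', 'Utah', 'UCSB'])

    # Knock out SRI—failure or censorship, the network can't tell the difference
    degraded = {n: {m: w for m, w in nbrs.items() if m != "SRI"}
                for n, nbrs in net.items() if n != "SRI"}
    print(shortest_path(degraded, "UCLA", "UCSB"))  # (5, ['UCLA', 'UCSB'])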

This was very attractive to military planners in the Cold War, who worried about a nuclear attack decapitating their command and control network by striking one or a few locations through which their communications funnelled. A distributed network, of which ARPANET was the prototype, would be immune to this kind of top-down attack because there was no top: it was made up of peers, spread all over the landscape, all able to switch data among themselves through a mesh of interconnecting links.

As the ARPANET grew into the Internet and expanded from a small community of military, government, university, and large company users into a mass audience in the 1990s, this fundamental architecture was preserved, but in practice the network bifurcated into a two tier structure. The top tier consisted of the original ARPANET-like users, plus “Internet Service Providers” (ISPs), who had top-tier (“backbone”) connectivity, and then resold Internet access to their customers, who mostly initially connected via dial-up modems. Over time, these customers obtained higher bandwidth via cable television connections, satellite dishes, digital subscriber lines (DSL) over the wired telephone network, and, more recently, mobile devices such as cellular telephones and tablets.

The architecture of the Internet remained the same, but this evolution resulted in a weakening of its peer-to-peer structure. The approaching exhaustion of 32 bit Internet addresses (IPv4) and the slow deployment of its successor (IPv6) meant most small-scale Internet users did not have a permanent address where others could contact them. In an attempt to shield users from the flawed security model and implementation of the software they ran, their Internet connections were increasingly placed behind firewalls and subjected to Network Address Translation (NAT), which made it impossible to establish peer-to-peer connections without a third party intermediary (which, of course, subverts the design goal of decentralisation). While on the ARPANET and the original Internet every site was a peer of every other (subject only to the speed of their network connections and computer power available to handle network traffic), the network population now became increasingly divided into producers or publishers (who made information available), and consumers (who used the network to access the publishers' sites but did not publish themselves).
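
A deliberately minimal model of NAT (a sketch, not any particular router's implementation; addresses and ports are invented) shows why peer-to-peer contact breaks: the translation table only ever contains mappings created by outbound traffic, so an unsolicited inbound connection from a would-be peer has nowhere to go.

    # Minimal model of a NAT translation table (illustrative only)
    nat_table = {}       # public port -> (private address, private port)
    next_port = 40000

    def outbound(private_addr, private_port):
        """A host behind the NAT opens a connection: allocate a public port."""
        global next_port
        public_port = next_port
        next_port += 1
        nat_table[public_port] = (private_addr, private_port)
        return public_port

    def inbound(public_port):
        """A packet arrives from the Internet: deliverable only if a
        mapping already exists from earlier outbound traffic."""
        return nat_table.get(public_port)

    p = outbound("192.168.1.10", 5000)
    print(inbound(p))      # ('192.168.1.10', 5000) — replies get through
    print(inbound(443))    # None — a peer's unsolicited connection is dropped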

While in the mid-1990s it was easy (or as easy as anything was in that era) to set up your own Web server and publish anything you wished, now most small-scale users were forced to employ hosting services operated by the publishers to make their content available. Services such as AOL, Myspace, Blogger, Facebook, and YouTube were widely used by individuals and companies to host their content, while those wishing their own apparently independent Web presence moved to hosting providers who supplied, for a fee, the servers, storage, and Internet access used by the site.

All of this led to a centralisation of data on the Web, which was accelerated by the emergence of the high speed fibre optic links and massive computing power upon which Gilder had based his 1990 and 2000 forecasts. Both of these came with great economies of scale: it cost a company like Google or Amazon much less per unit of computing power or network bandwidth to build a large, industrial-scale data centre located where electrical power and cooling were inexpensive and linked to the Internet backbone by multiple fibre optic channels, than it cost an individual Internet user or small company with their own server on premises and a modest speed link to an ISP. Thus it became practical for these Goliaths of the Internet to suck up everybody's data and resell their computing power and access at attractive prices.

As an example of the magnitude of the economies of scale we're talking about, when I migrated the hosting of my Fourmilab.ch site from my own on-site servers and Internet connection to an Amazon Web Services data centre, my monthly bill for hosting the site dropped by a factor of fifty—not fifty percent, one fiftieth the cost, and you can bet Amazon's making money on the deal.

This tremendous centralisation is the antithesis of the concept of ARPANET. Instead of a worldwide grid of redundant data links and data distributed everywhere, we have a modest number of huge data centres linked by fibre optic cables carrying traffic for millions of individuals and enterprises. A couple of submarines full of Trident D5s would probably suffice to reset the world, computer network-wise, to 1970.

As this concentration was occurring, the same companies who were building the data centres were offering more and more services to users of the Internet: search engines; hosting of blogs, images, audio, and video; E-mail services; social networks of all kinds; storage and collaborative working tools; high-resolution maps and imagery of the world; archives of data and research material; and a host of others. How was all of this to be paid for? Those giant data centres, after all, represent a capital investment of tens of billions of dollars, and their electricity bills are comparable to those of an aluminium smelter. Due to the architecture of the Internet or, more precisely, missing pieces of the puzzle, a fateful choice was made in the early days of the build-out of these services which now pervade our lives, and we're all paying the price for it. So far, it has allowed the few companies in this data oligopoly to join the ranks of the largest, most profitable, and most highly valued enterprises in human history, but they may be built on a flawed business model and foundation vulnerable to disruption by software and hardware technologies presently emerging.

The basic business model of what we might call the “consumer Internet” (as opposed to businesses who pay to host their Web presence, on-line stores, etc.) has, with few exceptions, evolved to be what the author calls the “Google model” (although it predates Google): give the product away and make money by afflicting its users with advertisements (which are increasingly targeted to them through information collected from the user's behaviour on the network through intrusive tracking mechanisms). The fundamental flaws of this are apparent to anybody who uses the Internet: the constant clutter of advertisements, with pop-ups, pop-overs, auto-play video and audio, flashing banners, incessant requests to allow tracking “cookies” or irritating notifications, and the consequent arms race between ad blockers and means to circumvent them, with browser developers (at least those not employed by those paid by the advertisers, directly or indirectly) caught in the middle. There are even absurd Web sites which charge a subscription fee for “membership” and then bombard these paying customers with advertisements that insult their intelligence. But there is a fundamental problem with “free”—it destroys the most important channel of communication between the vendor of a product or service and the customer: the price the customer is willing to pay. Deprived of this information, the vendor is in the same position as a factory manager in a centrally planned economy who has no idea how many of each item to make because his orders are handed down by a planning bureau equally clueless about what is needed in the absence of a price signal. In the end, you have freight cars of typewriter ribbons lined up on sidings while customers wait in line for hours in the hope of buying a new pair of shoes. Further, when the user is not the customer (the one who pays), and especially when a “free” service verges on monopoly status like Google search, Gmail, Facebook, and Twitter, there is little incentive for providers to improve the user experience or be responsive to user requests and needs. Users are subjected to the endless torment of buggy “beta” releases, capricious change for the sake of change, and compromises in the user experience on behalf of the real customers—the advertisers. Once again, this mirrors the experience of centrally-planned economies where the market feedback from price is absent: to appreciate this, you need only compare consumer products from the 1970s and 1980s manufactured in the Soviet Union with those from Japan.

The fundamental flaw in Karl Marx's economics was his belief that the industrial revolution of his time would produce such abundance of goods that the problem would shift from “production amid scarcity” to “redistribution of abundance”. In the author's view, the neo-Marxists of Silicon Valley see the exponentially growing technologies of computing and communication providing such abundance that they can give away its fruits in return for collecting and monetising information collected about their users (note, not “customers”: customers are those who pay for the information so collected). Once you grasp this, it's easier to understand the politics of the barons of Silicon Valley.

The centralisation of data and information flow in these vast data silos creates another threat to which a distributed system is immune: censorship or manipulation of information flow, whether by a coercive government or ideologically-motivated management of the companies who provide these “free” services. We may never know who first said “The Internet treats censorship as damage and routes around it” (the quote has been attributed to numerous people, including two personal friends, so I'm not going there), but it's profound: the original decentralised structure of the ARPANET/Internet is as robust against censorship as it is in the face of nuclear war. If one or more nodes on the network start to censor information or refuse to forward it on communication links it controls, the network routing protocols simply assume that node is down and send data around it through other nodes and paths which do not censor it. On a network with a multitude of nodes and paths among them, owned by a large and diverse population of operators, it is extraordinarily difficult to shut down the flow of information from a given source or viewpoint; there will almost always be an alternative route that gets it there. (Cryptographic protocols and secure and verified identities can similarly avoid the alteration of information in transit or forging information and attributing it to a different originator; I'll discuss that later.) As with physical damage, top-down censorship does not work because there's no top.

But with the current centralised Internet, the owners and operators of these data silos have enormous power to put their thumbs on the scale, tilting opinion in their favour and blocking speech they oppose. Google can push down the page rank of information sources of which they disapprove, so few users will find them. YouTube can “demonetise” videos because they dislike their content, cutting off their creators' revenue stream overnight with no means of appeal, or they can outright ban creators from the platform and remove their existing content. Twitter routinely “shadow-bans” those with whom they disagree, causing their tweets to disappear into the void, and outright banishes those more vocal. Internet payment processors and crowd funding sites enforce explicit ideological litmus tests on their users, and revoke long-standing commercial relationships over legal speech. One might restate the original observation about the Internet as “The centralised Internet treats censorship as an opportunity and says, ‘Isn't it great!’ ” Today there's a top, and those on top control the speech of everything that flows through their data silos.

This pernicious centralisation and “free” funding by advertisement (which is fundamentally plundering users' most precious possessions: their time and attention) were in large part the consequence of the Internet's lacking three fundamental architectural layers: security, trust, and transactions. Let's explore them.

Security. Essential to any useful communication system, security simply means that communications between parties on the network cannot be intercepted by third parties, modified en route, or otherwise manipulated (for example, by changing the order in which messages are received). The communication protocols of the Internet, based on the OSI model, had no explicit security layer: it was expected to be implemented outside the model, across the layers of protocol. On today's Internet, security has been bolted on, largely through the Transport Layer Security (TLS) protocols (which, due to history, have a number of other commonly used names, and are most often encountered in the “https:” URLs by which users access Web sites). But because TLS was bolted on rather than designed in from the bottom up, and because it “just grew”, it has been the locus of numerous security flaws which put software that employs it at risk. Further, TLS is a tool which must be used by application designers with extreme care in order to deliver security to their users: even if TLS were completely flawless, it is very easy to misuse it in an application and compromise users' security.
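
You can see the “bolted on” nature of TLS in how applications use it. Here is a minimal sketch using Python's standard library (www.example.com stands in for any TLS-enabled server): the program opens an ordinary, insecure TCP socket and then explicitly wraps it in a security layer. Forget the wrapping step, or botch the context options, and the application silently loses its security.

    import socket, ssl

    hostname = "www.example.com"            # any TLS-enabled host
    context = ssl.create_default_context()  # sane defaults: verify certificates

    with socket.create_connection((hostname, 443)) as tcp:   # plain, insecure TCP
        # Security is added here, by the application, after the fact
        with context.wrap_socket(tcp, server_hostname=hostname) as tls:
            print("negotiated:", tls.version())  # e.g. 'TLSv1.3'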

Trust. As indispensable as security is knowing to whom you're talking. For example, when you connect to your bank's Web site, how do you know you're actually talking to their server and not some criminal whose computer has spoofed your computer's domain name system server to intercept your communications and who, the moment you enter your password, will be off and running to empty your bank accounts and make your life a living Hell? Once again, trust has been bolted on to the existing Internet through a rickety system of “certificates” issued mostly by large companies for outrageous fees. And, as with anything centralised, it's vulnerable: in 2016, one of the top-line certificate vendors was compromised, requiring myriad Web sites (including this one) to re-issue their security certificates.
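
Continuing the sketch above, the trust layer is visible in the certificate the server presents, vouched for by an authority whose root key ships with your operating system or browser. If the chain of signatures or the host name doesn't check out, the wrap fails with an exception rather than talking to an impostor.

    import socket, ssl

    hostname = "www.example.com"
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as tcp:
        with context.wrap_socket(tcp, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print(cert["subject"])   # the identity the certificate asserts
            print(cert["issuer"])    # the authority vouching for it
            print(cert["notAfter"])  # expiry: certificates must be re-issued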

Transactions. Business is all about transactions; if you aren't doing transactions, you aren't in business or, as Gilder puts it, “In business, the ability to conduct transactions is not optional. It is the way all economic learning and growth occur. If your product is ‘free,’ it is not a product, and you are not in business, even if you can extort money from so-called advertisers to fund it.” The present-day Internet has no transaction layer, even bolted on. Instead, we have more silos and bags hanging off the side of the Internet called PayPal, credit card processing companies, and the like, which try to put a Band-Aid over the suppurating wound which is the absence of a way to send money over the Internet in a secure, trusted, quick, efficient, and low-overhead manner. The need for this was perceived long before ARPANET. In Project Xanadu, founded by Ted Nelson in 1960, rule 9 of the “original 17 rules” was, “Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (‘transclusions’) of all or part of the document.” While defined in terms of documents and quoting, this implied the existence of a micropayment system which would allow compensating authors and publishers for copies and quotations of their work with a granularity as small as one character, and could easily be extended to cover payments for products and services. A micropayment system must be able to handle very small payments without crushing overhead, extremely quickly, and transparently (without the Japanese tea ceremony that buying something on-line involves today). As originally envisioned by Ted Nelson, as you read documents, their authors and publishers would be automatically paid for their content, including payments to the originators of material from others embedded within them. As long as the total price for the document was less than what I termed the user's “threshold of paying”, this would be completely transparent (a user would set the threshold in the browser: if zero, they'd have to approve all payments). There would be no need for advertisements to support publication on a public hypertext network (although publishers would, of course, be free to adopt that model if they wished). If implemented in a decentralised way, like the ARPANET, there would be no central strangle point where censorship could be applied by cutting off the ability to receive payments.
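
Nothing like this exists on today's Web, so the following can only be a hypothetical Python sketch of the “threshold of paying” idea: per-item charges accumulate transparently so long as each falls below a limit the user has set, with anything larger requiring explicit approval. All names and amounts are invented.

    class MicropaymentWallet:
        """Hypothetical browser-side wallet with a user-set payment threshold."""

        def __init__(self, threshold):
            self.threshold = threshold   # the user's "threshold of paying"
            self.ledger = []             # (payee, amount) pairs actually paid

        def request(self, payee, amount, approve=lambda payee, amount: False):
            """A document requests payment: below the threshold it is paid
            transparently; at or above it, the user must approve."""
            if amount < self.threshold or approve(payee, amount):
                self.ledger.append((payee, amount))
                return True
            return False

    wallet = MicropaymentWallet(threshold=10)   # units of, say, millicents
    wallet.request("author", 2)      # paid silently
    wallet.request("publisher", 1)   # paid silently
    wallet.request("archive", 50)    # over threshold: refused without approval
    print(wallet.ledger)             # [('author', 2), ('publisher', 1)]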

So, is it possible to remake the Internet, building in security, trust, and transactions as the foundation, and replace what the author calls the “Google system of the world” with one in which the data silos are seen as obsolete, control of users' personal data and work returns to their hands, privacy is respected and the panopticon snooping of today is seen as a dark time we've put behind us, and the pervasive and growing censorship by plutocrat ideologues and slaver governments becomes impotent and obsolete? George Gilder responds “yes”, and in this book identifies technologies already existing and being deployed which can bring about this transformation.

At the heart of many of these technologies is the concept of a blockchain, an open, distributed ledger which records transactions or any other form of information in a permanent, public, and verifiable manner. Originally conceived as the transaction ledger for the Bitcoin cryptocurrency, it provided the first means of solving the double-spending problem (how do you keep people from spending a unit of electronic currency twice) without the need for a central server or trusted authority, and hence without a potential choke-point or vulnerability to attack or failure. Since the launch of Bitcoin in 2009, blockchain technology has become a major area of research, with banks and other large financial institutions, companies such as IBM, and major university research groups exploring applications with the goals of drastically reducing transaction costs, improving security, and hardening systems against single-point failure risks.
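
The core idea is simple enough to sketch in a few lines of Python (a toy, of course: real blockchains add proof-of-work or other consensus mechanisms, digital signatures, and a peer-to-peer network, none of which is modelled here). Each block carries the hash of its predecessor, so tampering with any historical entry invalidates every block after it.

    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, data):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev": prev, "data": data})

    def valid(chain):
        return all(chain[i]["prev"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    add_block(chain, "Alice pays Bob 5")
    add_block(chain, "Bob pays Carol 2")
    print(valid(chain))                      # True

    chain[0]["data"] = "Alice pays Bob 500"  # try to rewrite history…
    print(valid(chain))                      # False — the tampering is evident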

Applied to the Internet, blockchain technology can provide security and trust (through the permanent publication of public keys which identify actors on the network), and a transaction layer able to efficiently and quickly execute micropayments without the overhead, clutter, friction, and security risks of existing payment systems. By necessity, present-day blockchain implementations are add-ons to the existing Internet, but as the technology matures and is verified and tested, it can move into the foundations of a successor system, based on the same lower-level protocols (and hence compatible with the installed base), but eventually supplanting the patched-together architecture of the Domain Name System, certificate authorities, and payment processors, all of which represent vulnerabilities of the present-day Internet and points at which censorship and control can be imposed. The book surveys a number of technologies to watch in these areas.
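
The identity piece rests on ordinary public-key signatures: publish a public key in a permanent, tamper-evident ledger like the toy chain above, and anyone can verify that a message really came from the holder of the corresponding private key. A sketch using the third-party Python cryptography package (the message is, of course, invented):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # stays with its owner
    public_key = private_key.public_key()       # published on the ledger

    message = b"Pay 12 satoshis to the author of this article"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)   # raises if forged or altered
        print("signature valid")
    except InvalidSignature:
        print("forgery detected")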

As the bandwidth available to users on the edge of the network increases through the deployment of fibre to the home and enterprise and via 5G mobile technology, the data transfer economy of scale of the great data silos will begin to erode. Early in the Roaring Twenties, the aggregate computing power and communication bandwidth on the edge of the network will equal and eventually dwarf that of the legacy data smelters of Google, Facebook, Twitter, and the rest. There will no longer be any need for users to entrust their data to these overbearing anachronisms and consent to multi-dozen-page “terms of service” or endure advertising just to see their own content or share it with others. You will be in possession of your own data, on your own server or on space for which you freely contract with others, with backup and other services contracted with any other provider on the network. If your server has extra capacity, you can turn it into money by joining the market for computing and storage capacity, just as you take advantage of these resources when required. All of this will be built on the new secure foundation, so you will retain complete control over who can see your data, no longer trusting weasel-worded promises made by amorphous entities with whom you have no real contract to guard your privacy and intellectual property rights. If you wish, you can be paid for your content, with remittances made automatically as people access it. More and more, you'll make tiny payments for content which is no longer obstructed by advertising and chopped up to accommodate more clutter. And when outrage mobs of pink hairs and soybeards (each with their own pronoun) come howling to ban you from the Internet, they'll find nobody to shriek at and the kill switch rusting away in a derelict data centre: your data will be in your own hands with access through myriad routes. The book identifies a number of technologies already moving in this direction.

This book provides a breezy look at the present state of the Internet, how we got here (versus where we thought we were going in the 1990s), and how we might transcend the present-day mess into something better if not blocked by the heavy hand of government regulation (the risk of freezing the present-day architecture in place by unleashing agencies like the U.S. Federal Communications Commission, which stifled innovation in broadcasting for six decades, to do the same to the Internet is discussed in detail). Although it's way too early to see which of the many contending technologies will win out (and recall that the technically superior technology doesn't always prevail), a survey of work in progress provides a sense for what they have in common and what the eventual result might look like.

There are many things to quibble about here. Gilder goes on at some length about how he believes artificial intelligence is all nonsense, that computers can never truly think or be conscious, and that creativity (new information in the Shannon sense) can only come from the human mind, with a lot of confused arguments from Gödel incompleteness, the Turing halting problem, and even the uncertainty principle of quantum mechanics. He really seems to believe in vitalism, that there is an élan vital which somehow infuses the biological substrate which no machine can embody. This strikes me as superstitious nonsense: a human brain is a structure composed of quarks and electrons arranged in a certain way which processes information, interacts with its environment, and is able to observe its own operation as well as external phenomena (which is all consciousness is about). Now, it may be that somehow quantum mechanics is involved in all of this, and that our existing computers, which are entirely deterministic and classical in their operation, cannot replicate this functionality, but if that's so it simply means we'll have to wait until quantum computing, which is already working in a rudimentary form in the laboratory, and is just a different way of arranging the quarks and electrons in a system, develops further.

He argues that while Bitcoin can be an efficient and secure means of processing transactions, it is unsuitable as a replacement for volatile fiat money because, unlike gold, the quantity of Bitcoin has an absolute limit, after which the supply will be capped. I don't get it. It seems to me that this is a feature, not a bug. The supply of gold increases slowly as new gold is mined, and by pure coincidence the rate of increase in its supply has happened to approximate that of global economic growth. But still, the existing inventory of gold dwarfs new supply, so there isn't much difference between a very slowly increasing supply and a static one. If you're on a pure gold standard and economic growth is faster than the increase in the supply of gold, there will be gradual deflation because a given quantity of gold will buy more in the future. But so what? In a deflationary environment, interest rates will be low and it will be easy to fund new investment, since investors will receive money back which will be more valuable. With Bitcoin, once the entire supply is mined, supply will be static (actually, very slowly shrinking, as private keys are eventually lost, which is precisely like gold being consumed by industrial uses from which it is not reclaimed), but Bitcoin can be divided without limit (with minor and upward-compatible changes to the existing protocol). So, it really doesn't matter if, in the greater solar system economy of the year 8537, a single Bitcoin is sufficient to buy Jupiter: transactions will simply be done in yocto-satoshis or whatever. In fact, Bitcoin is better in this regard than gold, which cannot be subdivided below the unit of one atom.
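
The arithmetic backing this argument is easy to check (a back-of-the-envelope sketch in Python, using the protocol constants as they stand today):

    BTC_CAP = 21_000_000          # maximum coins that will ever exist
    SATOSHIS_PER_BTC = 10 ** 8    # current smallest protocol unit

    print(BTC_CAP * SATOSHIS_PER_BTC)   # 2,100,000,000,000,000 units today

    # A (hypothetical) upward-compatible protocol change need only move the
    # decimal point again—say, 10**24 "yocto-satoshis" per satoshi—to keep
    # prices in convenient numbers however valuable a single coin becomes.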

Gilder further argues, as he did in The Scandal of Money (November 2016), that the proper dimensional unit for money is time, since that is the measure of what is required to create true wealth (as opposed to funny money created by governments or fantasy money “earned” in zero-sum speculation such as currency trading), and that existing cryptocurrencies do not meet this definition. I'll take his word on the latter point; it's his definition, after all, but his time theory of money is way too close to the Marxist labour theory of value to persuade me. That theory is trivially falsified by its prediction that more value is created in labour-intensive production of the same goods than by producing them in a more efficient manner. In fact, value, measured as profit, dramatically increases as the labour input to production is reduced. Over forty centuries of human history, the one thing in common among almost everything used for money (at least until our post-reality era) is scarcity: the supply is limited and it is difficult to increase it. The genius of Bitcoin and its underlying blockchain technology is that it solved the problem of how to make a digital good, which can be copied at zero cost, scarce, without requiring a central authority. That seems to meet the essential requirement to serve as money, regardless of how you define that term.

Gilder's books have a good record for sketching the future of technology and identifying the trends which are contributing to it. He has been less successful picking winners and losers; I wouldn't make investment decisions based on his evaluation of products and companies, but rather wait until the market sorts out those which will endure.

Here is a talk by the author at the Blockstack Berlin 2018 conference which summarises the essentials of his thesis in just eleven minutes and ends with an exhortation to designers and builders of the new Internet to “tear down these walls” around the data centres which imprison our personal information.

This Uncommon Knowledge interview provides, in 48 minutes, a calmer and more in-depth exploration of why the Google world system must fail and what may replace it.


Day, Vox [Theodore Beale]. SJWs Always Double Down. Kouvola, Finland: Castalia House, 2017. ISBN 978-952-7065-19-8.
In SJWs Always Lie (October 2015) Vox Day introduced a wide audience to the contemporary phenomenon of Social Justice Warriors (SJWs), collectivists and radical conformists burning with the fierce ardour of ignorance who, flowing out of the academic jackal bins where they are manufactured, are infiltrating the culture: science fiction and fantasy, comic books, video games; and industry: technology companies, open source software development, and more established and conventional firms whose managements have often already largely bought into the social justice agenda.

The present volume updates the status of the Cold Civil War a couple of years on, recounts some key battles, surveys changes in the landscape, and provides concrete and practical advice to those who wish to avoid SJW penetration of their organisations or excise an infiltration already under way.

Two major things have changed since 2015. The first, and most obvious, is the election of Donald Trump as President of the United States in November, 2016. It is impossible to overstate the significance of this. Up until the evening of Election Day, the social justice warriors were absolutely confident they had won on every front and that all that remained was to patrol the battlefield and bayonet the wounded. They were ascendant across the culture, in virtually total control of academia and the media, and with the coronation of Hillary Clinton, positioned to tilt the Supreme Court to discover the remainder of their agenda emanating from penumbras in the living Constitution. And then—disaster! The deplorables who inhabit the heartland of the country, those knuckle-walking, Bible-thumping, gun-waving bitter clingers who produce just about every tangible thing still made in the United States up and elected somebody who said he'd put them—not the coastal élites, ivory tower professors and think tankers, “refugees” and the racket that imports them, “undocumented migrants” and the businesses that exploit their cheap labour, and all the rest of the parasitic ball and chain a once-great and productive nation has been dragging behind it for decades—first.

The shock of this event seems to have jolted a large fraction of the social justice warriors loose from their (already tenuous) moorings to reality. “What could have happened?”, they shrieked, “It must have been the Russians!” Overnight, there was the “resistance”, the rampage of masked violent street mobs, while at the same time SJW leaders in the public eye increasingly dropped the masks behind which they'd concealed their actual agenda. Now we have candidates for national office from the Democrat party, such as bug-eyed SJW Alexandria Occasional-Cortex, openly calling themselves socialists, while others chant “no borders” and advocate abolishing the federal immigration and customs enforcement agency. What's the response to deranged leftists trying to gun down Republican legislators at a baseball practice and assaulting a U.S. Senator as he mowed the lawn of his home? The Democrat candidate who lost to Trump in 2016 says, “You cannot be civil with a political party that wants to destroy what you stand for, what you care about.”, and the attorney general, the chief law enforcement officer of the administration which preceded Trump in office, said, “When they go low, we kick them. That's what this new Democratic party is about.”

In parallel with this, the SJW convergence of the major technology and communication companies which increasingly dominate the flow of news, information, and public discourse (Google and its YouTube, Facebook, Twitter, Amazon, and the rest), previously covert, has now become explicit. They no longer feign neutrality toward content or position themselves as common carriers. Now, they overtly put their thumb on the scale of public discourse, pushing down conservative and nationalist voices in search rankings, de-monetising or banning videos that oppose the slaver agenda, “shadow banning” dissenting voices or terminating their accounts entirely. Payment platforms and crowd-funding sites enforce an ideological agenda and cut off access to those they consider insufficiently on board with the collectivist, globalist party line. The high tech industry, purporting to cherish “diversity”, has become openly hostile to anybody who dares dissent: firing them and blacklisting them from employment at other similarly converged firms.

It would seem a dark time for champions of liberty, believers in reward for individual merit rather than grievance group membership, and other forms of sanity which are now considered unthinkable among the unthinking. This book provides a breath of fresh air, a sense of hope, and practical information to navigate a landscape populated by all too many non-playable characters who imbibe, repeat, and enforce the Narrative without questioning or investigating how it is created, disseminated in a co-ordinated manner across all media, and adjusted (including Stalinist party-line overnight turns on a dime) to advance the slaver agenda.

Vox Day walks through the eight stages of SJW convergence of an organisation from infiltration through evading the blame for the inevitable failure of the organisation once fully converged, illustrating the process with real-world examples and quotes from SJWs and companies infested with them. But the progression of the disease is not irreversible, and even if it is not arrested, there is still hope for the industry and society as a whole (not to minimise the injury and suffering inflicted on innocent and productive individuals in the affected organisations).

An organisation, whether a company, government agency, or open source software project, only comes onto the radar of the SJWs once it grows to a certain size and achieves a degree of success carrying out the mission for which it was created. It is at this point that SJWs will seek to penetrate the organisation, often through the human resources department, and then reinforce their ranks by hiring more of their kind. SJWs flock to positions in which there is no objective measure of their performance, but instead evaluations performed, as their ranks grow, more and more by one another. They are not only uninterested in the organisation's mission (developing a product, providing a service, etc.), but unqualified and incapable of carrying it out. In the words of Jerry Pournelle's Iron Law of Bureaucracy, they are not “those who are devoted to the goals of the organization” (founders, productive mission-oriented members), but “those dedicated to the organization itself”. “The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.”

Now, Dr Pournelle was describing a natural process of evolution in all bureaucratic organisations. SJW infection simply accelerates the process and intensifies the damage, because SJWs are not just focused on the organisation as opposed to its mission, but have their own independent agenda and may not care about damage to the institution as long as they can advance the Narrative.

But this is a good thing. It means that, in a competitive market, SJW-afflicted organisations will be at a disadvantage compared to those which have resisted the corruption or thrown it off. It makes inflexible, slow-moving players with a heavy load of SJW parasites vulnerable to insurgent competitors, often with their founders still in charge, mission-focused and customer-oriented, who hire, promote, and reward contributors solely based on merit and not “diversity”, “inclusion”, or any of the other SJW shibboleths mouthed by the management of converged organisations. (I remember, when asked about my hiring policy in the 1980s, saying “I don't care if they hang upside down from trees and drink blood. If they're great programmers, I'll hire them.”)

A detailed history of GamerGate provides a worked example of how apparent SJW hegemony within a community can be attacked by “weaponised autism” (as Milo Yiannopoulos said, “it's really not wise to take on a collection of individuals whose idea of entertainment is to spend hundreds of hours at a highly repetitive task, especially when their core philosophy is founded on the principle that if you are running into enemies and taking fire, you must be going the right way”). Further examples show how these techniques have been applied within the world of science fiction and fantasy fandom, comic books, and software development. The key take-away is that any SJW converged organisation or community is vulnerable to concerted attack because SJWs are a parasite that ultimately kills its host. Create an alternative and relentlessly attack the converged competition, and victory is possible. And remember, “Victory is not positive PR. Victory is when your opponent quits.”

This is a valuable guide, building upon SJWs Always Lie (which you should read first), and is essential for managers, project leaders, and people responsible for volunteer organisations who want to keep them focused on the goals for which they were founded and protected from co-optation by destructive parasites. You will learn how seemingly innocent initiatives such as adoption of an ambiguously-worded Code of Conduct or a Community Committee can be the wedge by which an organisation can be subverted and its most productive members forced out or induced to walk away in disgust. Learning the lessons presented here can make the difference between success and, some dismal day, gazing across the cubicles at a sea of pinkhairs and soybeards and asking yourself, “Where did we go wrong?”

The very fact that SJW behaviour is so predictable makes them vulnerable. Because they always double down, they can be manipulated into marginalising themselves, and it's often child's play to set traps into which they'll walk. Much of their success to date has been due to the absence of the kind of hard-edged opposition, willing to employ their own tactics against them, that you'll see in action here and learn to use yourself. This is not a game for the “defeat with dignity” crowd who were, and are, appalled by Donald Trump's plain speaking, or those who fail to realise that proclaiming “I won't stoop to their level” inevitably ends up with “Bend over”. The battles, and the war, can be won, but to do so, you have to fight. Here is a guide to closing with the enemy and destroying them before they ruin everything we hold sacred.


Mills, Kyle. Red War. New York: Atria Books, 2018. ISBN 978-1-5011-9059-9.
This is the fourth novel in the Mitch Rapp saga written by Kyle Mills, who took over the franchise after the death of Vince Flynn, its creator. On the cover, Vince Flynn still gets top billing (he is now the “brand”, not the author), but Kyle Mills demonstrates here that he's a worthy successor who is taking Rapp and the series in new directions.

In the previous novel, Enemy of the State (June 2018), Rapp went totally off the radar, resigning from the CIA, recruiting a band of blackguards, many former adversaries, to mount an operation aimed at a nominal U.S. ally. This time, the circumstances are very different. Rapp is back at the CIA, working with his original team headed by Scott Coleman, who has now more or less recovered from the severe injuries he sustained in the earlier novel Order to Kill (December 2017), with Claudia Gould, now sharing a house with Rapp, running logistics for their missions.

Vladimir Krupin, President/autocrat of Russia, is ailing. Having climbed to the top of the pyramid in that deeply corrupt country, he now fears his body is failing him, with bouts of incapacitating headaches, blurred vision, and disorientation coming more and more frequently. He and his physician have carefully kept the condition secret, as any hint of weakness at the top would likely invite one or more of his rivals to make a move to unseat him. Worse, under the screwed-down lid of the Russian pressure cooker, popular dissatisfaction with the dismal economy, lack of freedom, and dearth of opportunity is growing, with popular demonstrations reaching Red Square.

The CIA knows nothing of Krupin's illness, but has been observing what seems to be increasingly erratic behaviour. In the past, Krupin has been ambitious and willing to commit outrages, but has always drawn his plans carefully and acted deliberately; now he seems to be doing things almost at random, sometimes against his own interests. Russian hackers launch an attack that takes down a large part of the power grid in Costa Rica. A Russian strike team launches an assault on Krupin's retired assassin and Rapp's former nemesis and recent ally, Grisha Azarov. Military maneuvers in the Ukraine seem to foreshadow open confrontation should that country move toward NATO membership.

Krupin, well aware of the fate of dictators who lose their grip on power, and knowing that nothing rallies support behind a leader like a bold move on the international stage, devises a grand plan to re-assert Russian greatness, right a wrong inflicted by the West, and drive a stake into the heart of NATO. Rapp and Azarov, continuing their uneasy alliance, driven by entirely different motives, undertake a desperate mission in the very belly of the bear to avert what could all too easily end in World War III.

There are a number of goofs, which I can't discuss without risk of spoilers, so I'll take them behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
The copy editing is not up to the standard you'd expect in a bestseller published by an imprint of Simon & Schuster. On three occasions, “Balkan” appears where “Baltic” is intended. This can be pretty puzzling the first time you encounter it. Afterward, it's good for a chuckle.

In chapter 39, one of Rapp's allies tries to establish a connection on a land-line “telephone that looked like it had been around since the 1950s” and then, just a few paragraphs later, we read “There was a USB port hidden in the simple electronics…”. Huh? I've seen (and used) a lot of 1950s telephones, but danged if I can remember one with a USB port (which wasn't introduced until 1996).

Later in the same chapter Rapp is riding a horse, “working only with a map and compass, necessary because of the Russians' ability to zero in on electronic signals.” This betrays a misunderstanding of how GPS works which, while common, is jarring in a techno-thriller that tries to get things right. A GPS receiver is totally passive: it receives signals from the navigation satellites but transmits nothing and cannot be detected by electronic surveillance equipment. There is no reason Rapp could not have used GPS or GLONASS satellites to navigate.

In chapter 49, Rapp fires two rounds into a door locking keypad and “was rewarded with a cascade of sparks…”. Oh, please—even in Russia, security keypads are not wired up to high voltage lines that would emit showers of sparks. This is a movie cliché which doesn't belong in a novel striving for realism.

Spoilers end here.  
This is a well-crafted thriller which broadens the scope of the Rapp saga into Tom Clancy territory. Things happen, which will leave the world in a different place after they occur. It blends Rapp and Azarov's barely restrained loose-cannon operations with high-level diplomacy and intrigue, plus an interesting strategic look at pledges of defence whose makers may lack the will and resources to honour them when the balloon goes up and the tanks start to roll. And Grisha Azarov's devotion to his girlfriend is truly visceral.


Churchill, Winston S. Savrola. Seattle: CreateSpace, [1898, 1900] 2018. ISBN 978-1-7271-2358-6.
In 1897, the young (23-year-old) Winston Churchill, on an ocean voyage from Britain to India to rejoin the army in the Malakand campaign of 1897, turned his pen to fiction and began this, his first and only novel. He set the work aside to write The Story of the Malakand Field Force, an account of the fighting and his first published work of non-fiction, then returned to the novel, completing it in 1898. It was serialised in Macmillan's Magazine in that year. (Churchill's working title, Affairs of State, was changed by the magazine's editors to Savrola, the name of a major character in the story.) The novel was subsequently published as a book under that title in 1900.

The story takes place in the fictional Mediterranean country of Laurania, where five years before the events chronicled here, a destructive civil war had ended with General Antonio Molara taking power as President and ruling as a dictator with the support of the military forces he commanded in the war. Prior to the conflict, Laurania had a long history as a self-governing republic, and unrest was growing as more and more of the population demanded a return to parliamentary rule. Molara announced that elections would be held for a restored parliament under the original constitution.

Then, on the day the writ ordering the election was to be issued, it was revealed that the names of more than half of the citizens on the electoral rolls had been struck by Molara's order. A crowd gathered in the public square and, on hearing this news, became an agitated mob which threatened to storm the President's carriage. The officer commanding the garrison ordered his troops to fire on the crowd.

All was now over. The spirit of the mob was broken and the wide expanse of Constitution Square was soon nearly empty. Forty bodies and some expended cartridges lay on the ground. Both had played their part in the history of human development and passed out of the considerations of living men. Nevertheless, the soldiers picked up the empty cases, and presently some police came with carts and took the other things away, and all was quiet again in Laurania.

The massacre, as it was called even by the popular newspaper The Diurnal Gusher which nominally supported the Government, not to mention the opposition press, only compounded the troubles Molara saw in every direction he looked. While the countryside was with him, sentiment in the capital was strongly with the pro-democracy opposition. Among the army, only the élite Republican Guard could be counted on as reliably loyal, and their numbers were small. A diplomatic crisis was brewing with the British over Laurania's colony in Africa, which might require sending the Fleet, also loyal, away to defend it. A rebel force, camped right across the border, threatened invasion at any sign of Molara's grip on the nation weakening. And then there is Savrola.

Savrola (we never learn his first name) is the young (32 years), charismatic, intellectual, and persuasive voice of the opposition. While never stepping across the line sufficiently to justify retaliation, he manages to keep the motley groups of anti-Government forces in a loose coalition and is a constant thorn in the side of the authorities. He was not immune from introspection.

Was it worth it? The struggle, the labour, the constant rush of affairs, the sacrifice of so many things that make life easy, or pleasant—for what? A people's good! That, he could not disguise from himself, was rather the direction than the cause of his efforts. Ambition was the motive force, and he was powerless to resist it.

This is a character one imagines the young Churchill having little difficulty writing. With the seemingly incorruptible Savrola gaining influence and almost certain to obtain a political platform in the coming elections, Molara's secretary, the amoral but effective Miguel, suggests a stratagem: introduce Savrola to the President's stunningly beautiful wife Lucile and use the relationship to compromise him.

“You are a scoundrel—an infernal scoundrel,” said the President quietly.

Miguel smiled, as one who receives a compliment. “The matter,” he said, “is too serious for the ordinary rules of decency and honour. Special cases demand special remedies.”

The President wants to hear no more of the matter, but does not forbid Miguel from proceeding. An introduction is arranged, and Lucile rapidly moves from fascination with Savrola to infatuation. Then events rapidly spin out of anybody's control. The rebel forces cross the border; Molara's army is proved unreliable and disloyal; the Fleet, en route to defend the colony, is absent; Savrola raises a popular rebellion in the capital; and open fighting erupts.

This is a story of intrigue, adventure, and conflict in the “Ruritanian” genre popularised by the 1894 novel The Prisoner of Zenda. Churchill, building on his experience of war reportage, excels in, and was praised for, the realism of the battle scenes. The depiction of politicians, functionaries, and soldiers seems to veer back and forth between cynicism and admiration for their efforts in trying to make the best of a bad situation. The characters are cardboard figures and the love interest is clumsily described.

Still, this is an entertaining read and provides a window on how the young Churchill viewed the antics of colourful foreigners and their unstable countries, even if Laurania seems to have a strong veneer of Victorian Britain about it. The ultimate message is that history is often driven not by the plans of leaders, whether corrupt or noble, but by events over which they have little control. Churchill never again attempted a novel and thought little of this effort. In his 1930 autobiography covering the years 1874 through 1902 he writes of Savrola, “I have consistently urged my friends to abstain from reading it.” But then, Churchill was not always right—don't let his advice deter you; I enjoyed it.

This work is available for free as a Project Gutenberg electronic book in a variety of formats. There are a number of print and Kindle editions of this public domain text; I have cited the least expensive print edition available at the time I wrote this review. I read this Kindle edition, which has a few typographical errors due to having been prepared by optical character recognition (for example, “stem” where “stern” was intended), but is otherwise fine.

One factlet I learned while researching this review is that “Winston S. Churchill” is actually a nom de plume. Churchill's full name is Winston Leonard Spencer-Churchill, and he signed his early writings as “Winston Churchill”. Then, he discovered there was a well-known American novelist with the same name. The British Churchill wrote to the American Churchill and suggested using the name “Winston Spencer Churchill” (no hyphen) to distinguish his work. The American agreed, noting that he would also be willing to use a middle name, except that he didn't have one. The British Churchill's publishers abbreviated his name to “Winston S. Churchill”, which he continued to use for the rest of his writing career.


Schantz, Hans G. The Brave and the Bold. Huntsville, AL: ÆtherCzar, 2018. ISBN 978-1-7287-2274-0.
This is the third novel in the author's Hidden Truth series. In the first book (December 2017) we met high schoolers and best friends Pete Burdell and Amit Patel who found, in dusty library books, knowledge apparently discovered by the pioneers of classical electromagnetism (many of whom died young), but which does not figure in modern works, even purported republications of the original sources they had consulted. In the second, A Rambling Wreck (May 2018), Pete and Amit, now freshmen at Georgia Tech, delve deeper into the suppressed mysteries of electromagnetism and the secrets of the shadowy group Amit dubbed the Electromagnetic Villains International League (EVIL), while simultaneously infiltrating and disrupting forces trying to implant the social justice agenda in one of the last bastions of rationality in academia.

The present volume begins in the summer after the pair's freshman year. Both Pete and Amit are planning, along different paths, to infiltrate back-to-back meetings of the Civic Circle's Social Justice Leadership Forum on Jekyll Island, Georgia (the scene of notable conspiratorial skullduggery in the early 20th century) and the G-8 summit of world leaders on nearby Sea Island. Master of Game Amit has maneuvered himself into an internship with the Civic Circle and an invitation to the Forum as a promising candidate for the cause. Pete wasn't so fortunate (or persuasive), and has used family connections to land a job with a company contracted to install computer infrastructure for the Civic Circle conference. The latest apparent “social justice” goal is to involve the developed world in a costly and useless war in Iraq, and Pete and Amit hope to do what they can to derail those plans while collecting information on the plotters from inside.

Working in a loose and uneasy alliance with others they've encountered in the earlier books, they uncover information which suggests a bold strike at the very heart of the conspiracy might be possible, and they set their plans in motion. They learn that the Civic Circle is even more ancient, pervasive in its malign influence, and formidable than they had imagined.

This is one of the most intricately crafted conspiracy tales I've read since the Illuminatus! trilogy, yet entirely grounded in real events or plausible ones in its story line, as opposed to Robert Shea and Robert Anton Wilson's zany tale. The alternative universe in which it is set is artfully grounded in our own, and readers will delight in how events they recall and those with which they may not be familiar are woven into the story. There is delightful skewering of the social justice agenda and those who espouse its absurd but destructive nostrums. The forbidden science aspect of the story is advanced as well, imaginatively stirring the de Broglie-Bohm “pilot wave” interpretation of quantum mechanics and the history of FM broadcasting into the mix.

The story builds to a conclusion which is both shocking and satisfying and confronts the pair with an even greater challenge for their next adventure. This book continues the Hidden Truth saga in the best tradition of Golden Age science fiction and, like the work of the grandmasters of yore, both entertains and leaves the reader eager to find out what happens next. You should read the books in order; if you jump in the middle, you'll miss a great deal of back story and character development essential to enjoying the adventure.

The Kindle edition is free for Kindle Unlimited subscribers.


November 2018

Mahon, Basil. The Forgotten Genius of Oliver Heaviside. Amherst, NY: Prometheus Books, 2017. ISBN 978-1-63388-331-4.
In 1861, when Oliver Heaviside was eleven, his family, supported by his father's irregular income as an engraver of woodblock illustrations for publications (an art beginning to be threatened by the advent of photography) and a day school for girls operated by his mother in the family's house, received a small legacy which allowed them to move to a better part of London and enroll Oliver in the prestigious Camden House School, where he ranked among the top of his class, taking thirteen subjects including Latin, English, mathematics, French, physics, and chemistry. His independent nature and iconoclastic views had already begun to manifest themselves: despite being an excellent student, he dismissed the teaching of Euclid's geometry in mathematics and of English rules of grammar as worthless. He believed that both mathematics and language were best learned, as he wrote decades later, “observationally, descriptively, and experimentally.” These principles would guide his career throughout his life.

At age fifteen he took the College of Preceptors examination, the equivalent of today's A Levels. He was the youngest of the 538 candidates to take the examination and placed fifth overall and first in the natural sciences. This would easily have qualified him for admission to university, but family finances ruled that out. He decided to study on his own at home for two years and then seek a job, perhaps in the burgeoning telegraph industry. He would receive no further formal education after the age of fifteen.

His mother's elder sister had married Charles Wheatstone, a successful and wealthy scientist, inventor, and entrepreneur whose inventions include the concertina, the stereoscope, and the Playfair encryption cipher, and who made major contributions to the development of telegraphy. Wheatstone took an interest in his bright nephew, and guided his self-studies after leaving school, encouraging him to master the Morse code and the German and Danish languages. Oliver's favourite destination was the library, which he later described as “a journey into strange lands to go a book-tasting”. He read the original works of Newton, Laplace, and other “stupendous names” and discovered that with sufficient diligence he could figure them out on his own.

At age eighteen, he took a job as an assistant to his older brother Arthur, well-established as a telegraph engineer in Newcastle. Shortly thereafter, probably on the recommendation of Wheatstone, he was hired by the just-formed Danish-Norwegian-English Telegraph Company as a telegraph operator at a salary of £150 per year (around £12000 in today's money). The company was about to inaugurate a cable under the North Sea between England and Denmark, and Oliver set off to Jutland to take up his new post. Long distance telegraphy via undersea cables was the technological frontier at the time—the first successful transatlantic cable had only gone into service two years earlier, and connecting the continents into a world-wide web of rapid information transfer was the booming high-technology industry of the age. While the job of telegraph operator might seem a routine clerical task, the élite who operated the undersea cables worked in an environment akin to an electrical research laboratory, trying to wring the best performance (words per minute) from the finicky and unreliable technology.

Heaviside prospered in the new job, and after a merger was promoted to chief operator at a salary of £175 per year and transferred back to England, at Newcastle. At the time, undersea cables were unreliable. It was not uncommon for the signal on a cable to fade and then die completely, most often due to a short circuit caused by failure of the gutta-percha insulation between the copper conductor and the iron sheath surrounding it. When a cable failed, there was no alternative but to send out a ship which would find the cable with a grappling hook, haul it up to the surface, cut it, and test whether the short was to the east or west of the ship's position (the cable would work in the good direction but fail in the direction containing the short). Then the cable would be re-spliced, dropped back to the bottom, and the ship would set off in the direction of the short to repeat the exercise over and over until, by a process similar to binary search, the location of the fault was narrowed down and that section of the cable replaced. This was time-consuming and potentially hazardous given the North Sea's propensity for storms, and while the cable remained out of service it made no money for the telegraph company.

Heaviside, who continued his self-study and frequented the library when not at work, realised that, knowing the resistance and length of the functioning cable (both easily measured), it would be possible to estimate the location of the short simply by measuring the resistance of the cable from each end after the fault appeared. He was able to cancel out the resistance of the fault itself, arriving at a quadratic equation which could be solved for its location. The first time he applied this technique his bosses were sceptical, but when the ship was sent out to the location he predicted, 114 miles from the English coast, they quickly found the short circuit.
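
The book doesn't reproduce Heaviside's algebra, but a classic cable-testing method of the era, the Blavier test for a short to earth, is a close cousin and shows how a quadratic arises. Suppose the healthy cable's end-to-end resistance R is known, and after the fault one measures, from one end, R1 with the far end open and R2 with the far end earthed; eliminating the unknown fault resistance leaves a quadratic in the conductor resistance (hence distance) to the fault. Here is a sketch in Python, with made-up cable constants chosen so the fault sits 114 miles out, echoing the anecdote above.

    import math

    def fault_resistance_to_short(R, R1, R2):
        """Blavier test: R = healthy end-to-end resistance, R1 = measured
        with far end open (a + f), R2 = measured with far end earthed
        (a + f*(R - a)/(f + R - a)), where a is the conductor resistance
        out to the fault and f the fault resistance.  Eliminating f gives
        a**2 - 2*R2*a + (R1*R2 + R2*R - R1*R) = 0, whose physical root is
        a = R2 - sqrt((R - R2)*(R1 - R2))."""
        return R2 - math.sqrt((R - R2) * (R1 - R2))

    ohms_per_mile = 2.0                     # hypothetical cable constant
    R = 300 * ohms_per_mile                 # healthy 300-mile cable
    a, f = 114 * ohms_per_mile, 50.0        # true fault position, resistance
    R1 = a + f                              # measured, far end open
    R2 = a + f * (R - a) / (f + R - a)      # measured, far end earthed

    est = fault_resistance_to_short(R, R1, R2)
    print(f"fault at about {est / ohms_per_mile:.1f} miles")   # ~114.0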

At the time, most workers in electricity had little use for mathematics: their trade journal, The Electrician (which would later publish much of Heaviside's work) wrote in 1861, “In electricity there is seldom any need of mathematical or other abstractions; and although the use of formulæ may in some instances be a convenience, they may for all practical purpose be dispensed with.” Heaviside demurred: while sharing disdain for abstraction for its own sake, he valued mathematics as a powerful tool to understand the behaviour of electricity and attack problems of great practical importance, such as the ability to send multiple messages at once on the same telegraphic line and increase the transmission speed on long undersea cable links (while a skilled telegraph operator could send traffic at thirty words per minute on intercity land lines, the transatlantic cable could run no faster than eight words per minute). He plunged into calculus and differential equations, adding them to his intellectual armamentarium.

He carried out his own investigations and experiments and began to publish his results, first in English Mechanic, and then, in 1873, in the prestigious Philosophical Magazine, where his work drew the attention of two of the most eminent workers in electricity: William Thomson (later Lord Kelvin) and James Clerk Maxwell. Maxwell would go on to cite Heaviside's paper on the Wheatstone Bridge in the second edition of his Treatise on Electricity and Magnetism, the foundation of the classical theory of electromagnetism, considered by many the greatest work of science since Newton's Principia, and still in print today. Heady stuff, indeed, for a twenty-two-year-old telegraph operator who had never set foot inside an institution of higher education.

Heaviside regarded Maxwell's Treatise as the path to understanding the mysteries of electricity he encountered in his practical work and vowed to master it. It would take him nine years and change his life. He would become one of the first and foremost of the “Maxwellians”, a small group including Heaviside, George FitzGerald, Heinrich Hertz, and Oliver Lodge, who fully grasped Maxwell's abstract and highly mathematical theory (which, like many subsequent milestones in theoretical physics, predicted the results of experiments without providing a mechanism to explain them, as earlier concepts such as an “electric fluid” or William Thomson's intricate mechanical models of the “luminiferous ether” had attempted to do) and built upon its foundations to discover and explain phenomena unknown to Maxwell (who would die in 1879 at the age of just 48).

While pursuing his theoretical explorations and publishing papers, Heaviside tackled some of the main practical problems in telegraphy. Foremost among these was “duplex telegraphy”: sending messages in each direction simultaneously on a single telegraph wire. He invented a new technique and was even able to send two messages at the same time in both directions as fast as the operators could send them. This had the potential to boost the revenue from a single installed line by a factor of four. Oliver published his invention, and in doing so made an enemy of William Preece, a senior engineer at the Post Office telegraph department, who had invented and previously published his own duplex system (which would not work) and whose work was not acknowledged in Heaviside's paper. This would start a feud between Heaviside and Preece which would last the rest of their lives and, on several occasions, thwart Heaviside's ambition to have his work accepted by mainstream researchers. When he applied to join the Society of Telegraph Engineers, he was rejected on the grounds that membership was not open to “clerks”. He saw the hand of Preece and his cronies at the Post Office behind this and eventually turned to William Thomson to back his membership, which was finally granted.

By 1874, telegraphy had become a big business and the work was increasingly routine. In 1870, the Post Office had taken over all domestic telegraph service in Britain and, as government is wont to do, largely stifled innovation and experimentation. Even at privately-owned international carriers like Oliver's employer, operators were no longer concerned with the technical aspects of the work but rather tending automated sending and receiving equipment. There was little interest in the kind of work Oliver wanted to do: exploring the new horizons opened up by Maxwell's work. He decided it was time to move on. So, he quit his job, moved back in with his parents in London, and opted for a life as an independent, unaffiliated researcher, supporting himself purely by payments for his publications.

With the duplex problem solved, the largest problem that remained for telegraphy was the slow transmission speed on long lines, especially submarine cables. The advent of the telephone in the 1870s would increase the urgency of the problem. While a long line merely slowed the speed at which a telegraph message could be sent, on the telephone the voice became increasingly distorted the longer the line, to the point where, after around 100 miles, it was incomprehensible. Until this was understood and a solution found, telephone service would be restricted to local areas.

Many of the early workers in electricity thought of it as something like a fluid, where current flowed through a wire like water through a pipe. This approximation is more or less correct when current flow is constant, as in a direct current generator powering electric lights, but when current is varying a much more complex set of phenomena become manifest which require Maxwell's theory to fully describe. Pioneers of telegraphy thought of their wires as sending direct current which was simply switched off and on by the sender's key, but of course the transmission as a whole was a varying current, jumping back and forth between zero and full current at each make or break of the key contacts. When these transitions are modelled in Maxwell's theory, one finds that, depending upon the physical properties of the transmission line (its resistance, inductance, capacitance, and leakage between the conductors) different frequencies propagate along the line at different speeds. The sharp on/off transitions in telegraphy can be thought of, by Fourier transform, as the sum of a wide band of frequencies, with the result that, when each propagates at a different speed, a short, sharp pulse sent by the key will, at the other end of the long line, be “smeared out” into an extended bump with a slow rise to a peak and then decay back to zero. Above a certain speed, adjacent dots and dashes will run into one another and the message will be undecipherable at the receiving end. This is why operators on the transatlantic cables had to send at the painfully slow speed of eight words per minute.

In telephony, it's much worse because human speech is composed of a broad band of frequencies, and the frequencies involved (typically up to around 3400 cycles per second) are much higher than the off/on speeds in telegraphy. The smearing out or dispersion as frequencies are transmitted at different speeds results in distortion which renders the voice signal incomprehensible beyond a certain distance.

In the mid-1850s, during development of the first transatlantic cable, William Thomson had developed a theory called the “KR law” which predicted the transmission speed along a cable based upon its resistance and capacitance. Thomson was aware that other effects existed, but without Maxwell's theory (which would not be published in its final form until 1873), he lacked the mathematical tools to analyse them. The KR theory, which produced results that predicted the behaviour of the transatlantic cable reasonably well, held out little hope for improvement: decreasing the resistance and capacitance of the cable would dramatically increase its cost per unit length.
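
In modern terms (a back-of-the-envelope restatement, not Thomson's own notation), the KR law models the cable as a distributed resistor-capacitor network whose characteristic retardation time scales as the product of the total capacitance K and total resistance R. For a uniform cable of length l, with capacitance k and resistance r per unit length,

\[
t_{\mathrm{retard}} \;\propto\; K R \;=\; (k l)(r l) \;=\; k r\, l^{2},
\]

the notorious “law of squares”: doubling the length of a cable quadruples its sluggishness. The only remedies it suggests are more copper to cut r and thicker insulation to cut k, both of which drive the cost per unit length skyward.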

Heaviside undertook to analyse what is now called the transmission line problem using the full Maxwell theory and, in 1878, published the general theory of propagation of alternating current through transmission lines, what are now called the telegrapher's equations. Because he took resistance, capacitance, inductance, and leakage all into account and thus modelled both the electric and magnetic field created around the wire by the changing current, he showed that by balancing these four properties it was possible to design a transmission line which would transmit all frequencies at the same speed. In other words, this balanced transmission line would behave for alternating current (including the range of frequencies in a voice signal) just like a simple wire did for direct current: the signal would be attenuated (reduced in amplitude) with distance but not distorted.
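
A short numerical sketch (with invented line constants, purely for illustration) makes the distortionless condition concrete. From the telegrapher's equations, a sinusoid of angular frequency w propagates with constant gamma = sqrt((R + jwL)(G + jwC)), and its phase velocity is w divided by the imaginary part of gamma; when L/R = C/G, that velocity comes out the same at every frequency.

    import numpy as np

    def phase_velocity(freq_hz, R, L, G, C):
        """Phase velocity (km/s) from the telegrapher's equations, for
        per-km line constants R (ohm), L (H), G (S), C (F)."""
        w = 2 * np.pi * freq_hz
        gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
        return w / gamma.imag

    freqs = np.array([300.0, 1000.0, 3400.0])    # voice band, Hz
    R, L, G, C = 5.0, 2e-3, 1e-6, 50e-9          # hypothetical unloaded line
    print(np.round(phase_velocity(freqs, R, L, G, C)))   # varies with frequency

    L_loaded = R * C / G     # raise series inductance until L/R = C/G
    print(np.round(phase_velocity(freqs, R, L_loaded, G, C)))
    # Now all three frequencies propagate at the same speed: the signal
    # is attenuated with distance, but no longer distorted.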

In an 1887 paper, he further showed that existing telegraph and telephone lines could be made nearly distortionless by adding loading coils to increase the inductance at points along the line (as long as the distance between adjacent coils is small compared to the wavelength of the highest frequency carried by the line). This got him into another battle with William Preece, whose incorrect theory attributed distortion to inductance and advocated minimising self-inductance in long lines. Preece moved to block publication of Heaviside's work, with the result that the paper on distortionless telephony, published in The Electrician, was largely ignored. It was not until 1897 that AT&T in the United States commissioned a study of Heaviside's work, leading to patents eventually worth millions. The credit, and financial reward, went to Professor Michael Pupin of Columbia University, who became another of Heaviside's life-long enemies.

You might wonder why what seems such a simple result (one which can be written in modern notation as the equation L/R = C/G), with such immediate technological utility, eluded so many people for so long (recall that the problem of slow transmission on the transatlantic cable had been observed since the 1850s). The reason is the complexity of Maxwell's theory and the formidably difficult notation in which it was expressed. Oliver Heaviside spent nine years fully internalising the theory and its implications, and he was one of only a handful of people who had done so and, perhaps, the only one grounded in practical applications such as telegraphy and telephony. Concurrent with his work on transmission line theory, he invented the mathematical field of vector calculus and, in 1884, reformulated Maxwell's original theory, which (even written in modern notation less cumbersome than that employed by Maxwell) looks like:

[Figure: Maxwell's equations in their original form, twenty equations in twenty variables]

into the four famous vector equations we today think of as Maxwell's.

\[
\nabla\cdot\mathbf{D} = \rho \qquad
\nabla\cdot\mathbf{B} = 0 \qquad
\nabla\times\mathbf{E} = -\,\frac{\partial\mathbf{B}}{\partial t} \qquad
\nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}
\]

These are not only simpler, condensing twenty equations to just four, but provide (once you learn the notation and meanings of the variables) an intuitive sense for what is going on. This made, for the first time, Maxwell's theory accessible to working physicists and engineers interested in getting the answer out rather than spending years studying an arcane theory. (Vector calculus was independently invented at the same time by the American J. Willard Gibbs. Heaviside and Gibbs both acknowledged the work of the other and there was no priority dispute. The notation we use today is that of Gibbs, but the mathematical content of the two formulations is essentially identical.)

And, during the same decade of the 1880s, Heaviside invented the operational calculus, a method of calculation which reduces the solution of complicated problems involving differential equations to simple algebra. Heaviside was able to solve so many problems which others couldn't because he was using powerful computational tools they had not yet adopted. The situation was similar to that of Isaac Newton, who was effortlessly solving problems such as the brachistochrone using the calculus he'd invented while his contemporaries struggled with more cumbersome methods. Some of the things Heaviside did in the operational calculus, such as cancelling derivative signs in equations and taking the square root of a derivative sign, made rigorous mathematicians shudder but, hey, it worked and that was good enough for Heaviside and the many engineers and applied mathematicians who adopted his methods. (In the 1920s, pure mathematicians used the theory of Laplace transforms to reformulate the operational calculus in a rigorous manner, but this was decades after Heaviside's work and long after engineers were routinely using it in their calculations.)
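
To give a flavour of the method (a standard textbook illustration, not a calculation taken from the book): write p for d/dt and treat it as an algebraic quantity, with 1/p meaning integration from 0 to t. A circuit obeying dy/dt + ay = 1 for t > 0, starting from y(0) = 0, becomes (p + a)y = 1, which Heaviside would solve by blithely expanding the operator in powers of 1/p:

\[
y = \frac{1}{p+a}\,1
  = \frac{1}{p}\left(1 - \frac{a}{p} + \frac{a^{2}}{p^{2}} - \cdots\right)1
  = t - \frac{a t^{2}}{2!} + \frac{a^{2} t^{3}}{3!} - \cdots
  = \frac{1 - e^{-at}}{a}.
\]

This is indeed the correct solution, obtained without formally solving the differential equation at all; the Laplace transform theory of the 1920s justifies precisely such manipulations.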

Heaviside's intuitive grasp of electromagnetism and powerful computational techniques placed him in the forefront of exploration of the field. He calculated the electric field of a moving charged particle and found it contracted in the direction of motion, foreshadowing the Lorentz-FitzGerald contraction which would figure in Einstein's special relativity. In 1889 he computed the force on a point charge moving in an electromagnetic field, which is now called the Lorentz force after Hendrik Lorentz who independently discovered it six years later. He predicted that a charge moving faster than the speed of light in a medium (for example, glass or water) would emit a shock wave of electromagnetic radiation; in 1934 Pavel Cherenkov experimentally discovered the phenomenon, now called Cherenkov radiation, for which he won the Nobel Prize in 1958. In 1902, Heaviside applied his theory of transmission lines to the Earth as a whole and explained the propagation of radio waves over intercontinental distances as due to a transmission line formed by conductive seawater and a hypothetical conductive layer in the upper atmosphere dubbed the Heaviside layer. In 1924 Edward V. Appleton confirmed the existence of such a layer, the ionosphere, and won the Nobel prize in 1947 for the discovery.

Oliver Heaviside never won a Nobel Prize, although he was nominated for the physics prize in 1912. He shouldn't have felt too bad, though, as other nominees passed over for the prize that year included Hendrik Lorentz, Ernst Mach, Max Planck, and Albert Einstein. (The winner that year was Gustaf Dalén, “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”—oh well.) He did receive Britain's highest recognition for scientific achievement, being named a Fellow of the Royal Society in 1891. In 1921 he was the first recipient of the Faraday Medal from the Institution of Electrical Engineers.

Having never held a job between 1874 and his death in 1925, Heaviside lived on his irregular income from writing, the generosity of his family, and, from 1896 onward, a pension of £120 per year (less than his starting salary as a telegraph operator in 1868) from the Royal Society. He was a proud man and refused several other offers of money which he perceived as charity. He turned down an offer of compensation for his invention of loading coils from AT&T when they refused to acknowledge his sole responsibility for the invention. He never married, and in his later years became somewhat of a recluse; although he welcomed visits from other scientists, he hardly ever left his home in Torquay in Devon.

His impact on the physics of electromagnetism and the craft of electrical engineering can be seen in the list of terms he coined which are in everyday use: “admittance”, “conductance”, “electret”, “impedance”, “inductance”, “permeability”, “permittance”, “reluctance”, and “susceptance”. His work has never been out of print, and sparkles with his intuition, mathematical prowess, and wicked wit directed at those he considered pompous or lost in needless abstraction and rigour. He never sought the limelight and, among those upon whose work much of our present-day technology is founded, he is among the least known. But as long as electronic technology persists, it will stand as a monument to the life and work of Oliver Heaviside.


Shoemaker, Martin L. Blue Collar Space. Seattle: CreateSpace [Old Town Press], 2018. ISBN 978-1-7170-5188-2.
This book is a collection of short stories, set in three different locales. The first part, “Old Town Tales”, is set on the Moon and revolves around yarns told at the best bar on Luna. The second part, “The Planet Next Door”, collects stories set on Mars, while the third, “The Pournelle Settlements”, takes place in mining settlements in the Jupiter system.

Most of the stories take place in established settlements; they are not tales of square-jawed pioneers opening up the frontier, but rather ordinary people doing the work that needs to be done in environments alien to humanity's home. On the Moon, we go on a mission with a rescue worker responding to a crash; hear a sanitation (“Eco Services”) technician regale a rookie with the story of “The Night We Flushed the Old Town”; accompany a father and daughter on a work day Outside that turns into a crisis; learn why breathing vacuum may not be the only thing that can go wrong on the Moon; and see how even for those in the most mundane of jobs, on the Moon wonders may await just over the nearby horizon.

At Mars, the greatest problem facing an ambitious international crewed landing mission may be…ambition; a doctor on a Mars-bound mission must deal with the technophobe boss's son while keeping him alive; and a schoolteacher taking her Mars survival class on a field trip finds that doing things by the book may pay off in discovering something which isn't in the book.

The Jupiter system is home to the Pournelle Settlements, a loosely affiliated group of settlers, many of whom came to escape the “government squeeze” and “corporate squeeze” that held the Inner System in their grip. And like the Wild West, it can be a bit wild. When sabotage disables the refinery that processes ore for the Settlements, its new boss must find a way to use the unique properties of the environment to keep his people fed and avoid the most hostile of takeovers. Where there are vast distances, long travel times, and cargoes with great value, there will be pirates, and the long journey from Jupiter to the Inner System is no exception. An investigator seeking evidence in a murder case must learn the ways of the Trust Economy in the Settlements and follow the trail far into the void.

These stories bring back the spirit of science fiction magazine stories in the decades before the dawn of the Big Government space age, when we just assumed that before long space would be filled with people like ourselves living their lives and pursuing their careers where freedom was just a few steps away from any settlement and individual merit was rewarded. They are an excellent example of “hard” science fiction, not in the sense of being difficult, but in that the author makes a serious effort to get the facts right and make the plots plausible. (I am, however, dubious that the trick used in “Unrefined” would work.) All of the stories stand by themselves and can be read in any order. This is another example of how independent authors and publishing are making this a new golden age of science fiction.

The Kindle edition is free for Kindle Unlimited subscribers.


Schlichter, Kurt. People's Republic. Seattle: CreateSpace, 2016. ISBN 978-1-5390-1895-7.
As the third decade of the twenty-first century progressed, the Cold Civil War which had been escalating in the United States since before the turn of the century turned hot when a Democrat administration decided to impose their full agenda—gun confiscation, amnesty for all illegal aliens, restrictions on fossil fuels—all at once by executive order. The heartland defied the power grab and militias of the left and right began to clash openly. Although the senior officer corps were largely converged to the leftist agenda, the military rank and file which hailed largely from the heartland defied them, and could not be trusted to act against their fellow citizens. Much the same was the case with police in the big cities: they began to ignore the orders of their political bosses and migrate to jobs in more congenial jurisdictions.

With a low-level shooting war breaking out, the opposing sides decided that the only way to avert general conflict was, if not the “amicable divorce” advocated by Jesse Kelly, then a more bitter and contentious end to a union which was not working. The Treaty of Saint Louis split the country in two, with the east and west coasts and upper midwest calling itself the “People's Republic of North America” (PRNA) and the remaining territory (including portions of some states like Washington, Oregon, and Indiana with a strong regional divide) continuing to call itself the United States, but with some changes: the capital was now Dallas, and the constitution had been amended to require any person not resident on its territory at the time of the Split (including children born thereafter) who wished full citizenship and voting rights to serve two years in the military with no “alternative service” for the privileged or connected.

The PRNA quickly implemented the complete progressive agenda wherever its rainbow flag (frequently revised as different victim groups clawed their way to the top of the grievance pyramid) flew. As police forces collapsed with good cops quitting and moving out, they were replaced by a national police force initially called the “People's Internal Security Squads” (later the “People's Security Force” when the acronym for the original name was deemed infelicitous), staffed with thugs and diversity hires attracted by the shakedown potential of carrying weapons among a disarmed population.

Life in the PRNA was pretty good for the coastal élites in their walled communities, but as with collectivism whenever and wherever it is tried, for most of the population life was a grey existence of collapsing services, food shortages, ration cards, abuse by the powerful, and constant fear of being denounced for violating the latest intellectual fad or using an incorrect pronoun. And, inevitably, it wasn't long before the PRNA slammed the door shut to keep the remaining competent people from fleeing to where they were free to use their skills and keep what they'd earned. Mexico built a “big, beautiful wall” to keep hordes of PRNA subjects from fleeing to freedom and opportunity south of the border.

Several years after the Split, Kelly Turnbull, retired military and a veteran of the border conflicts around the Split, paid the upkeep of his 500-acre non-working ranch by spiriting people out of the PRNA to liberty in the middle of the continent. After completing a harrowing mission which almost ended in disaster, he is approached by a wealthy and politically-connected Dallas businessman who offers him enough money to retire if he'll rescue his daughter who, indoctrinated by the leftist infestation still remaining at the university in Austin, defected to the PRNA and is being used in propaganda campaigns there at the behest of the regional boss of the secret police. In addition, a spymaster tasks him with bringing out evidence which will allow rolling up the PRNA's informer and spy networks. Against his self-preservation instinct, which counsels laying low until the dust settles from the last mission, he opts for the money and the prospect of early retirement and undertakes the mission.

As Turnbull covertly enters the People's Republic, makes his way to Los Angeles, and seeks his target, there is a superbly-sketched view of an America in which the progressive agenda has come to fruition, and one in which people may well be living at the end of the next two Democrat-dominated administrations. It is often funny, as the author skewers the hypocrisy of the slavers mouthing platitudes they don't believe for a femtosecond. (If you think it improper to make fun of human misery, recall the mordant humour in the Soviet Union as workers mocked the reality of the “workers' paradise”.) There's plenty of tension and action, and sometimes following Turnbull on his mission seems like looking over the shoulder of a first-person shooter. He's big on countdowns and tends to view “blues” obstructing him as NPCs to be dealt with quickly and permanently: “I don't much like blues. You kill them or they kill you.”

This is a satisfying thriller which is probably a more realistic view of the situation in a former United States than an amicable divorce with both sides going their separate ways. The blue model is doomed to collapse, as it already has begun to in the big cities and states where it is in power, and with that inevitable collapse will come chaos and desperation which spreads beyond its borders. With Democrat politicians such as Occasional-Cortex who, a few years ago, hid behind such soothing labels as “liberal” or “progressive” now openly calling themselves “democratic socialists”, this is not just a page-turning adventure but a cautionary tale of the future should they win (or steal) power.

A prequel, Indian Country, which chronicles insurgency on the border immediately after the Split as guerrilla bands of the sane rise to resist the slavers, is now available.

 Permalink

December 2018

Kluger, Jeffrey. Apollo 8. New York: Picador, 2017. ISBN 978-1-250-18251-7.
As the tumultuous year 1968 drew to a close, NASA faced a serious problem with the Apollo project. The Apollo missions had been carefully planned to test the Saturn V booster rocket and spacecraft (Command/Service Module [CSM] and Lunar Module [LM]) in a series of increasingly ambitious missions, first in low Earth orbit (where an immediate return to Earth was possible in case of problems), then in an elliptical Earth orbit which would exercise the on-board guidance and navigation systems, followed by lunar orbit, and finally proceeding to the first manned lunar landing. The Saturn V had been tested in two unmanned “A” missions: Apollo 4 in November 1967 and Apollo 6 in April 1968. Apollo 5 was a “B” mission, launched on a smaller Saturn 1B booster in January 1968, to test an unmanned early model of the Lunar Module in low Earth orbit, primarily to verify the operation of its engines and separation of the descent and ascent stages. Apollo 7, launched in October 1968 on a Saturn 1B, was the first manned flight of the Command and Service modules and tested them in low Earth orbit for almost 11 days in a “C” mission.

Apollo 8 was planned to be the “D” mission, in which the Saturn V, in its first manned flight, would launch the Command/Service and Lunar modules into low Earth orbit, where the crew, commanded by Gemini veteran James McDivitt, would simulate the maneuvers of a lunar landing mission closer to home. McDivitt's crew was trained and ready to go in December 1968. Unfortunately, the lunar module wasn't. The lunar module scheduled for Apollo 8, LM-3, had been delivered to the Kennedy Space Center in June of 1968, but was, to put things mildly, a mess. Testing at the Cape discovered more than a hundred serious defects, and by August it was clear that there was no way LM-3 would be ready for a flight in 1968. In fact, it would probably slip to February or March 1969. This, in turn, would delay the planned “E” mission, aimed at testing the Command/Service and Lunar modules in an elliptical Earth orbit venturing as far as 7400 km from the planet, for which the crew of commander Frank Borman, command module pilot James Lovell, and lunar module pilot William Anders were training. Originally planned for March 1969, the “E” mission would slip three months to June, delaying all subsequent planned missions and placing the goal of landing before the end of 1969 at risk.

But NASA were not just racing the clock—they were also racing the Soviet Union. Unlike Apollo, the Soviet space program was highly secretive and NASA had to go on whatever scraps of information they could glean from Soviet publications, the intelligence community, and independent tracking of Soviet launches and spacecraft in flight. There were, in fact, two Soviet manned lunar programmes running in parallel. The first, internally called the Soyuz 7K-L1 but dubbed “Zond” for public consumption, used a modified version of the Soyuz spacecraft launched on a Proton booster and was intended to carry two cosmonauts on a fly-by mission around the Moon. The craft would fly out to the Moon, use its gravity to swing around the far side, and return to Earth. The Zond lacked the propulsion capability to enter lunar orbit. Still, success would allow the Soviets to claim the milestone of first manned mission to the Moon. In September 1968 Zond 5 successfully followed this mission profile and safely returned a crew cabin containing tortoises, mealworms, flies, and plants to Earth after their loop around the Moon. A U.S. Navy destroyer observed recovery of the re-entry capsule in the Indian Ocean. Clearly, this was preparation for a manned mission which might occur on any lunar launch window.

(The Soviet manned lunar landing project was actually far behind Apollo, and would not launch its N1 booster on that first, disastrous, test flight until February 1969. But NASA did not know this in 1968.) Every slip in the Apollo program increased the probability of its being scooped so close to the finish line by a successful Zond flyby mission.

These were the circumstances in August 1968 when what amounted to a cabal of senior NASA managers, including George Low, Chris Kraft, and Bob Gilruth, later joined by Wernher von Braun and chief astronaut Deke Slayton, began working on an alternative. They plotted in secret, unbeknownst to NASA administrator Jim Webb and his deputy for manned space flight, George Mueller, both of whom were out of the country attending an international conference in Vienna. What they were proposing was breathtaking in its ambition and risk. They envisioned taking Frank Borman's crew, originally scheduled for Apollo 9, and putting them into an accelerated training program to launch on the Saturn V and Apollo spacecraft currently scheduled for Apollo 8. They would launch without a Lunar Module, and hence be unable to land on the Moon or test that spacecraft. The original idea was to perform a Zond-like flyby, but this was quickly revised to include going into orbit around the Moon, just as a landing mission would do. This would allow retiring the risk of many aspects of the full landing mission much earlier in the program than originally scheduled, and would also allow collection of precision data on the lunar gravitational field and high resolution photography of candidate landing sites to aid in planning subsequent missions. The lunar orbital mission would accomplish all the goals of the originally planned “E” mission and more, allowing that mission to be cancelled and therefore not requiring an additional booster and spacecraft.

But could it be done? There were a multitude of requirements, all daunting. Borman's crew, training toward a launch in early 1969 on an Earth orbit mission, would have to complete training for the first lunar mission in just sixteen weeks. The Saturn V booster, which suffered multiple near-catastrophic engine failures in its second flight on Apollo 6, would have to be cleared for its first manned flight. Software for the on-board guidance computer and for Mission Control would have to be written, tested, debugged, and certified for a lunar mission many months earlier than previously scheduled. A flight plan for the lunar orbital mission would have to be written from scratch and then tested and trained in simulations with Mission Control and the astronauts in the loop. The decision to fly Borman's crew instead of McDivitt's was to avoid wasting the extensive training the latter crew had undergone in LM systems and operations by assigning them to a mission without an LM. McDivitt concurred with this choice: while it might be nice to be among the first humans to see the far side of the Moon with his own eyes, for a test pilot the highest responsibility and honour is to command the first flight of a new vehicle (the LM), and he would rather skip the Moon mission and fly later than lose that opportunity. If the plan were approved, Apollo 8 would become the lunar orbit mission and the Earth orbit test of the LM would be re-designated Apollo 9 and fly whenever the LM was ready.

While a successful lunar orbital mission on Apollo 8 would demonstrate many aspects of a full lunar landing mission, it would also involve formidable risks. The Saturn V, making only its third flight, was coming off a very bad outing in Apollo 6 whose failures might have injured the crew, damaged the spacecraft hardware, and precluded a successful mission to the Moon. While fixes for each of these problems had been implemented, they had never been tested in flight, and there was always the possibility of new problems not previously seen.

The Apollo Command and Service modules, which would take them to the Moon, had not yet flown a manned mission and would not until Apollo 7, scheduled for October 1968. Even if Apollo 7 were a complete success (which was considered a prerequisite for proceeding), Apollo 8 would be only the second manned flight of the Apollo spacecraft, and the crew would have to rely upon the functioning of its power generation, propulsion, and life support systems for a mission lasting six days. Unlike an Earth orbit mission, if something goes wrong en route to or returning from the Moon, you can't just come home immediately. The Service Propulsion System on the Service Module would have to work perfectly when leaving lunar orbit or the crew would be marooned forever or crash on the Moon. It would only have been tested previously in one manned mission and there was no backup (although the single engine did incorporate substantial redundancy in its design).

The spacecraft guidance, navigation, and control system and its Apollo Guidance Computer hardware and software, upon which the crew would have to rely to navigate to and from the Moon, including the critical engine burns to enter and leave lunar orbit while behind the Moon and out of touch with Mission Control, had never been tested beyond Earth orbit.

The mission would go to the Moon without a Lunar Module. If a problem developed en route to the Moon which disabled the Service Module (as would happen to Apollo 13 in April 1970), there would be no LM to serve as a lifeboat and the crew would be doomed.

When the high-ranking conspirators presented their audacious plan to their bosses, the reaction was immediate. Manned spaceflight chief Mueller immediately said, “Can't do that! That's craziness!” His boss, administrator James Webb, said “You try to change the entire direction of the program while I'm out of the country?” Mutiny is a strong word, but this seemed to verge upon it. Still, Webb and Mueller agreed to meet with the lunar cabal in Houston on August 22. After a contentious meeting, Webb agreed to proceed with the plan and to present it to President Johnson, who was almost certain to approve it, having great confidence in Webb's management of NASA. The mission was on.

It was only then that Borman and his crewmembers Lovell and Anders learned of their reassignment. While Anders was disappointed at the prospect of being the Lunar Module Pilot on a mission with no Lunar Module, the prospect of being on the first flight to the Moon and entrusted with observation and photography of lunar landing sites more than made up for it. They plunged into an accelerated training program to get ready for the mission.

NASA approached the mission with its usual “can-do” approach and public confidence, but everybody involved was acutely aware of the risks that were being taken. Susan Borman, Frank's wife, privately asked Chris Kraft, director of Flight Operations and part of the group who advocated sending Apollo 8 to the Moon, with a reputation as a plain-talking straight shooter, “I really want to know what you think their chances are of coming home.” Kraft responded, “You really mean that, don't you?” “Yes,” she replied, “and you know I do.” Kraft answered, “Okay. How's fifty-fifty?” Those within the circle, including the crew, knew what they were biting off.

The launch was scheduled for December 21, 1968. Everybody would be working through Christmas, including the twelve ships and thousands of sailors in the recovery fleet, but lunar launch windows are set by the constraints of celestial mechanics, not human holidays. In November, the Soviets had flown Zond 6, and it had demonstrated the “double dip” re-entry trajectory required for human lunar missions. There were two system failures which killed the animal test subjects on board, but these were covered up and the mission heralded as a great success. From what NASA knew, it was entirely possible the next launch would be with cosmonauts bound for the Moon.

Space launches were exceptional public events in the 1960s, and the first flight of men to the Moon came just about a hundred years after Jules Verne envisioned three men setting out for the Moon from central Florida in a “cylindro-conical projectile” in De la terre à la lune (From the Earth to the Moon). Similarly engaging the world, the launch of Apollo 8 attracted around a quarter of a million people to watch the spectacle in person, and hundreds of millions watched on television, both in North America and around the globe, thanks to the newfangled technology of communication satellites. Let's tune in to CBS television and relive this singular event with Walter Cronkite.

CBS coverage of the Apollo 8 launch

Now we step inside Mission Control and listen in on the Flight Director's audio loop during the launch, illustrated with imagery and simulations.

The Saturn V performed almost flawlessly. During the second stage burn mild pogo oscillations began but, rather than progressing to the point where they almost tore the rocket apart as had happened on the previous Saturn V launch, von Braun's team's fixes kicked in and seconds later Borman reported, “Pogo's damping out.” A few minutes later Apollo 8 was in Earth orbit.

Jim Lovell had nearly eighteen days of spaceflight experience across two Gemini missions, one of them Gemini 7, where he endured almost two weeks in orbit with Frank Borman. Bill Anders was a rookie, on his first space flight. Now weightless, all three were experiencing a spacecraft nothing like the cramped Mercury and Gemini capsules, which you put on as much as boarded. The Apollo command module had an interior volume of six cubic metres (218 cubic feet, in the quaint way NASA reckons things) which may not seem like much for a crew of three but, in weightlessness, with every bit of space accessible and usable, felt quite roomy. There were five real windows, not the tiny portholes of Gemini, and plenty of space to move from one to another.

With all this roominess and mobility came potential hazards, some verging on slapstick, but, in space, serious nonetheless. NASA safety personnel had required the astronauts to wear life vests over their space suits during the launch just in case the Saturn V malfunctioned and they ended up in the ocean. While moving around the cabin to get to the navigation station after reaching orbit, Lovell, who like the others hadn't yet removed his life vest, snagged its activation tab on a strut within the cabin and it instantly inflated. Lovell looked ridiculous and the situation comical, but it was no laughing matter. The life vests were inflated with carbon dioxide which, if released in the cabin, would pollute their breathing air, and removing it would use up part of a CO₂ scrubber cartridge, of which they had a limited supply on board. Lovell finally figured out what to do. After being helped out of the vest, he took it down to the urine dump station in the lower equipment bay and vented it into a reservoir which could be dumped out into space. One problem solved, but in space you never know what the next surprise might be.

The astronauts wouldn't have much time to admire the Earth through those big windows. Over Australia, just short of three hours after launch, they would re-light the engine on the third stage of the Saturn V for the “trans-lunar injection” (TLI) burn of 318 seconds, which would accelerate the spacecraft to just slightly less than escape velocity, raising its apogee so it would be captured by the Moon's gravity. After housekeeping (presumably including the rest of the crew taking off those pesky life jackets, since there weren't any wet oceans where they were going) and reconfiguring the spacecraft and its computer for the maneuver, they got the call from Houston, “You are go for TLI.” They were bound for the Moon.
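
Just how energetic was that burn? Here is a back-of-the-envelope estimate (a minimal sketch using textbook two-body formulas and an assumed 185 km circular parking orbit, not the actual mission targeting):

    import math

    mu = 398600.0          # Earth's gravitational parameter, km^3/s^2
    r = 6378.0 + 185.0     # assumed parking orbit radius, km

    v_circ = math.sqrt(mu / r)        # circular orbit speed: about 7.8 km/s
    v_esc = math.sqrt(2.0 * mu / r)   # escape speed at that altitude: about 11.0 km/s

    # TLI accelerates the stack to just under escape speed, so the burn's
    # delta-v is roughly the difference, delivered over the 318 second burn.
    dv = v_esc - v_circ
    print(f"delta-v ≈ {dv:.2f} km/s")                          # ≈ 3.2 km/s
    print(f"mean acceleration ≈ {dv * 1000 / 318:.1f} m/s^2")  # roughly 1 g

Roughly a g of sustained acceleration for five minutes, on the way to the Moon.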

The third stage, which had failed to re-light on its last outing, worked as advertised this time, with a flawless burn. Its job was done; from here on the astronauts and spacecraft were on their own. The booster had placed them on a free-return trajectory. If they did nothing (apart from minor “trajectory correction maneuvers” easily accomplished by the spacecraft's thrusters) they would fly out to the Moon, swing around its far side, and use its gravity to slingshot back to the Earth (as Lovell would do when he commanded Apollo 13 in April 1970, although there the crew had to use the engine of the LM to get back onto a free-return trajectory after the accident).

Apollo 8 rapidly climbed out of the Earth's gravity well, trading speed for altitude, and before long the astronauts beheld a spectacle no human eyes had glimpsed before: an entire hemisphere of Earth at once, floating in the inky black void. On board, there were other concerns: Frank Borman was puking his guts out and having difficulties with the other end of the tubing as well. Borman had logged more than six thousand flight hours in his career as a fighter and test pilot, most of it in high-performance jet aircraft, and fourteen days in space on Gemini 7 without any motion sickness. Many people feel queasy when they experience weightlessness the first time, but this was something entirely different and new in the American space program. And it was very worrisome. The astronauts discussed the problem on private tapes they could downlink to Mission Control without broadcasting to the public, and when NASA got around to playing the tapes, the chief flight surgeon, Dr. Charles Berry, became alarmed.

As he saw it, there were three possibilities: motion sickness, a virus of some kind, or radiation sickness. On its way to the Moon, Apollo 8 passed directly through the Van Allen radiation belts, spending two hours in this high radiation environment, the first humans to do so. The total radiation dose was estimated as roughly the same as one would receive from a chest X-ray, but the composition of the radiation was different and the exposure was over an extended time, so nobody could be sure it was safe. The fact that Lovell and Anders had experienced no symptoms argued against the radiation explanation. Berry concluded that a virus was the most probable cause and, based upon the mission rules, said, “I'm recommending that we consider canceling the mission.” The risk of proceeding with the commander unable to keep food down and possibly carrying a virus which the other astronauts might contract was too great in his opinion. This recommendation was passed up to the crew. Borman, usually calm and collected even by astronaut standards, exclaimed, “What? That is pure, unadulterated horseshit.” The mission would proceed, and within a day his stomach had settled.

This was the first case of space adaptation syndrome to afflict an American astronaut. (Apparently some Soviet cosmonauts had been affected, but this was covered up to preserve their image as invincible exemplars of the New Soviet Man.) It is now known to affect around a third of people experiencing weightlessness in environments large enough to move around, and spontaneously clears up in two to four (miserable) days.

The two most dramatic and critical events in Apollo 8's voyage would occur on the far side of the Moon, with 3500 km of rock between the spacecraft and the Earth totally cutting off all communications. The crew would be on their own, aided by the computer and guidance system and calculations performed on the Earth and sent up before passing behind the Moon. The first would be lunar orbit insertion (LOI), scheduled for 69 hours and 8 minutes after launch. The big Service Propulsion System (SPS) engine (it was so big—twice as large as required for Apollo missions as flown—because it was designed to be able to launch the entire Apollo spacecraft from the Moon if a “direct ascent” mission mode had been selected) would burn for exactly four minutes and seven seconds to bend the spacecraft's trajectory around the Moon into a closed orbit around that world.
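
The size of that burn can be estimated from the vis-viva equation (a rough sketch, assuming arrival at a perilune altitude of 112 km with a hyperbolic approach speed of about 2.5 km/s; the actual targeting was far more elaborate):

    import math

    mu_moon = 4902.8     # Moon's gravitational parameter, km^3/s^2
    r_moon = 1737.4      # mean lunar radius, km

    r_peri = r_moon + 112.0     # perilune radius, km
    r_apo = r_moon + 313.0      # apolune radius of the initial capture orbit, km
    a = (r_peri + r_apo) / 2.0  # semi-major axis of the target ellipse

    # Vis-viva: speed at perilune of the 313 x 112 km capture orbit.
    v_orbit = math.sqrt(mu_moon * (2.0 / r_peri - 1.0 / a))   # ≈ 1.67 km/s

    v_arrival = 2.5   # assumed approach speed at perilune, km/s
    print(f"LOI delta-v ≈ {v_arrival - v_orbit:.2f} km/s, spread over 247 s")

Despite the crude assumptions, this lands within about ten percent of the burn actually flown.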

If the SPS failed to fire for the LOI burn, it would be a huge disappointment but survivable. Apollo 8 would simply continue on its free-return trajectory, swing around the Moon, and fall back to Earth where it would perform a normal re-entry and splashdown. But if the engine fired and cut off too soon, the spacecraft would be placed into an orbit which would not return them to Earth, marooning the crew in space to die when their supplies ran out. If it burned just a little too long, the spacecraft's trajectory would intersect the surface of the Moon—lithobraking is no way to land on the Moon.

When the SPS engine shut down precisely on time and the computer confirmed the velocity change of the burn and orbital parameters, the three astronauts were elated, but they were the only people in the solar system aware of the success. Apollo 8 was still behind the Moon, cut off from communications. The first clue Mission Control would have of the success or failure of the burn would be when Apollo 8's telemetry signal was reacquired as it swung around the limb of the Moon. If the signal appeared too early, it meant the burn had failed and the spacecraft was coming back to Earth; that moment passed with no signal. Now tension mounted as the clock ticked off the seconds to the time expected for a successful burn. If that time came and went with no word from Apollo 8, it would be a really bad day. Just on time, the telemetry signal locked up and Jim Lovell reported, “Go ahead, Houston, this is Apollo 8. Burn complete. Our orbit 169.1 by 60.5.” (Lovell was using NASA's preferred measure of nautical miles; in proper units it was 313 by 112 km. The orbit would subsequently be circularised by another SPS burn to 112.7 by 114.7 km.) The Mission Control room erupted into an un-NASA-like pandemonium of cheering.

Apollo 8 would orbit the Moon ten times, spending twenty hours in a retrograde orbit with an inclination of 12 degrees to the lunar equator, which would allow it to perform high-resolution photography of candidate sites for early landing missions under lighting conditions similar to those expected at the time of landing. In addition, precision tracking of the spacecraft's trajectory in lunar orbit would allow mapping of the Moon's gravitational field, including the “mascons” which perturb the orbits of objects in low lunar orbits and would be important for longer duration Apollo orbital missions in the future.

During the mission, the crew were treated to amazing sights and, in particular, the dramatic difference between the near side, with its many flat “seas”, and the rugged highlands of the far side. Coming around the Moon they saw the spectacle of earthrise for the first time and, hastily grabbing a magazine of colour film and setting aside the planned photography schedule, Bill Anders snapped the photo of the Earth rising above the lunar horizon which became one of the most iconic photographs of the twentieth century. Here is a reconstruction of the moment that photo was taken.

On the ninth and next-to-last orbit, the crew conducted a second television transmission which was broadcast worldwide. It was Christmas Eve on much of the Earth, and, coming at the end of the chaotic, turbulent, and often tragic year of 1968, it was a magical event, remembered fondly by almost everybody who witnessed it and felt pride for what the human species had just accomplished.

You have probably heard this broadcast from the Moon, often with the audio overlaid on imagery of the Moon from later missions, with much higher resolution than was actually seen in that broadcast. Here, in three parts, is what people, including this scrivener, actually saw on their televisions that enchanted night. The famous reading from Genesis is in the third part. This description is eerily similar to that in Jules Verne's 1870 Autour de la lune.

After the end of the broadcast, it was time to prepare for the next and absolutely crucial maneuver, also performed on the far side of the Moon: trans-Earth injection, or TEI. This would boost the spacecraft out of lunar orbit and send it back on a trajectory to Earth. This time the SPS engine had to work, and perfectly. If it failed to fire, the crew would be trapped in orbit around the Moon with no hope of rescue. If it cut off too soon or burned too long, or the spacecraft was pointed in the wrong direction when it fired, Apollo 8 would miss the Earth and orbit forever far from its home planet or come in too steep and burn up when it hit the atmosphere. Once again the tension rose to a high pitch in Mission Control as the clock counted down to the two fateful times: this time they'd hear from the spacecraft earlier if it was on its way home and later or not at all if things had gone tragically awry. Exactly when expected, the telemetry screens came to life and a second later Jim Lovell called, “Houston, Apollo 8. Please be informed there is a Santa Claus.”

Now it was just a matter of falling the 375,000 kilometres from the Moon, hitting the precise re-entry corridor in the Earth's atmosphere, executing the intricate “double dip” re-entry trajectory, and splashing down near the aircraft carrier which would retrieve the Command Module and crew. Earlier unmanned tests gave confidence it would all work, but this was the first time men would be trying it.
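
Conservation of energy makes clear why that re-entry was so demanding (a minimal sketch, neglecting the Moon's residual pull and the Earth's rotation):

    import math

    mu = 398600.0             # Earth's gravitational parameter, km^3/s^2
    r_entry = 6378.0 + 122.0  # entry interface radius, km
    r_start = 384400.0        # roughly the Moon's distance, km

    # Falling from lunar distance, the spacecraft reaches the entry
    # interface at very nearly escape speed.
    v_entry = math.sqrt(2.0 * mu * (1.0 / r_entry - 1.0 / r_start))
    print(f"entry speed ≈ {v_entry:.1f} km/s")   # ≈ 11 km/s

That is around 11 km/s, some forty percent faster than re-entry from low Earth orbit, which is why the skip-like “double dip” profile was needed to manage heating and g loads.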

There was some unexpected and embarrassing excitement on the way home. Mission Control had called up a new set of co-ordinates for the “barbecue roll” which the spacecraft executed to even out temperature. Lovell was asked to enter “verb 3723, noun 501” into the computer. But, weary and short on sleep, he fat-fingered the commands and entered “verb 37, noun 01”. This told the computer the spacecraft was back on the launch pad, pointing straight up, and it immediately slewed to what it thought was that orientation. Lovell quickly figured out what he'd done, “It was my goof”, but by this time he'd “lost the platform”: the stable reference the guidance system used to determine in which direction the spacecraft was pointing in space. He had to perform a manual alignment, taking sightings on a number of stars, to recover the correct orientation of the stable platform. This was completely unplanned but, as it happens, in doing so Lovell acquired experience that would prove valuable when he had to perform the same operation in much more dire circumstances on Apollo 13 after an explosion disabled the computer and guidance system in the Command Module. Here is the author of the book, Jeffrey Kluger, discussing Jim Lovell's goof.
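
To see how two missing digits could have such an effect, here is a toy model of a verb/noun style of interface (pure illustration in Python; the real Apollo Guidance Computer software bore no resemblance to this, and the program table is invented for the example):

    # Toy dispatch: Verb 37 meant "change program". Program 01 was a
    # prelaunch program, which assumed the vehicle was sitting on the pad.
    PROGRAMS = {
        1: "P01: prelaunch alignment (vehicle assumed vertical on the pad)",
        # ...the real computer had dozens of programs, elided here...
    }

    def key_in(verb, noun):
        """Crudely mimic a DSKY-style verb/noun entry."""
        if verb == 37:   # change major mode: the noun field selects the program
            print("Now running:", PROGRAMS.get(noun, f"P{noun:02d}"))
        else:
            print(f"Verb {verb}, Noun {noun}: a display or data-load request")

    key_in(37, 1)   # the fat-fingered entry: the computer dutifully obeys

The computer had no way to know the entry was a mistake; it simply did what it was told.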

The re-entry went completely as planned, flown entirely under computer control, with the spacecraft splashing into the Pacific Ocean just 6 km from the aircraft carrier Yorktown. But because the splashdown occurred before dawn, it was decided to wait until the sky brightened to recover the crew and spacecraft. Forty-three minutes after splashdown, divers from the Yorktown arrived at the scene, and forty-five minutes after that the crew was back on the ship. Apollo 8 was over, a total success. This milestone in the space race had been won definitively by the U.S., and shortly thereafter the Soviets abandoned their Zond circumlunar project, judging it an anticlimax and admission of defeat to fly by the Moon after the Americans had already successfully orbited it.

This is the official NASA contemporary documentary about Apollo 8.

Here is an evening with the Apollo 8 astronauts recorded at the National Air and Space Museum on 2008-11-13 to commemorate the fortieth anniversary of the flight.

This is a reunion of the Apollo 8 astronauts on 2009-04-23.

As of this writing, all of the crew of Apollo 8 are alive, and, in a business where divorce was common, remain married to the women they wed as young military officers.

 Permalink

Kotkin, Stephen. Stalin, Vol. 1: Paradoxes of Power, 1878–1928. New York: Penguin Press, 2014. ISBN 978-0-14-312786-4.
In a Levada Center poll in 2017, Russians who responded named Joseph Stalin the “most outstanding person” in world history. Now, you can argue about the meaning of “outstanding”, but it's pretty remarkable that the citizens of a country would bestow that title upon a chief of their government (albeit several regimes ago) who presided over an entirely avoidable famine which killed millions of his country's citizens, ordered purges which executed more than 700,000 people, including much of the senior military leadership, leaving the nation unprepared for the German attack in 1941, which would, until the final victory, claim the lives of around 27 million Soviet citizens, military and civilian, rather than considering him a super-villain.

The story of Stalin's career is even less plausible, and should give pause to those who believe history can be predicted without the contingency of things that “just happen”. Ioseb Besarionis dze Jughashvili (the author uses Roman alphabet transliterations of all individuals' names in their native languages, which can occasionally be confusing when they later Russified their names) was born in 1878 in the town of Gori in the Caucasus. Gori, part of the territory of Georgia which had long been ruled by the Ottoman Empire, had been seized by Imperial Russia in a series of bloody conflicts ending in the 1860s with complete incorporation of the territory into the Czar's empire. Ioseb, who was called by the Georgian diminutive “Soso” throughout his youth, was the third son born to his parents, but, as both of his older brothers had died not long after birth, was raised as an only child.

Soso's father, Besarion Jughashvili (often written in the Russian form, Vissarion) was a shoemaker with his own shop in Gori but, as time passed, his business fell on hard times and he closed the shop and sought other work, ending his life as a vagrant. Soso's mother, Ketevan “Keke” Geladze, was ambitious and wanted the best for her son; she left her husband and took a variety of jobs to support the family. She arranged for eight-year-old Soso to attend Russian language lessons given to the children of a priest in whose house she was boarding. Knowledge of Russian was the key to advancement in Czarist Georgia, and he had a head start when Keke arranged for him to be enrolled in the parish school's preparatory and four-year programs. He was the first member of either side of his family to attend school and he rose to the top of his class under the patronage of a family friend, “Uncle Yakov” Egnatashvili. After graduation, his options were limited. The Russian administration, wary of the emergence of a Georgian intellectual class that might champion independence, refused to establish a university in the Caucasus. Soso's best option was the highly selective Theological Seminary in Tiflis where he would prepare, in a six year course, for life as a parish priest or teacher in Georgia but which, for those who graduated near the top, could lead to a scholarship at a university in another part of the empire.

He took the examinations and easily passed, gaining admission and petitioning for, and winning, a partial scholarship that paid most of his fees. “Uncle Yakov” paid the rest, and he plunged into his studies. Georgia was in the midst of an intense campaign of Russification, and Soso further perfected his skills in the Russian language. Although completely fluent in spoken and written Russian along with his native Georgian (the languages are completely unrelated, having no more in common than Finnish and Italian), he would speak Russian with a Georgian accent all his life and did not publish in the Russian language until he was twenty-nine years old.

Long a voracious reader, at the seminary Soso joined a “forbidden literature” society which smuggled in and read works not banned by the Russian authorities but deemed unsuitable for priests in training. He read classics of Russian, French, English, and German literature and science, including Capital by Karl Marx. The latter would transform his view of the world and his path in life. He made the acquaintance of a former seminarian and committed Marxist, Lado Ketskhoveli, who would guide his studies. In August 1898, he joined the newly formed “Third Group of Georgian Marxists”—many years later Stalin would date his “party card” to then.

Prior to 1905, imperial Russia was an absolute autocracy. The Czar ruled with no limitations on his power. What he decreed and ordered his functionaries to do was law. There was no parliament, political parties, elected officials of any kind, or permanent administrative state that did not serve at the pleasure of the monarch. Political activity and agitation were illegal, as were publishing and distributing any kind of political literature deemed to oppose imperial rule. As Soso became increasingly radicalised, it was only a short step from devout seminarian to underground agitator. He began to neglect his studies, became increasingly disrespectful to authority figures, and, in April 1899, left the seminary before taking his final examinations.

Saddled with a large debt to the seminary for leaving without becoming a priest or teacher, he drifted into writing articles for small, underground publications associated with the Social Democrat movement, at the time the home of most Marxists. He took to public speaking and, while eschewing fancy flights of oratory, spoke directly to the workers he addressed, in their own dialect and terms. Inevitably, he was arrested for “incitement to disorder and insubordination against higher authority” in April 1902 and jailed. After fifteen months in prison at Batum, he was sentenced to three years of internal exile in Siberia. In January 1904 he escaped and made it back to Tiflis, in Georgia, where he resumed his underground career. By this time the Social Democratic movement had fractured into Lenin's Bolshevik faction and the larger Menshevik group. Soso, who during his imprisonment had adopted the revolutionary nickname “Koba”, after the hero in a Georgian novel of revenge, continued to write and speak and, in 1905, after the Czar was compelled to cede some of his power to a parliament, organised Battle Squads which stole printing equipment, attacked government forces, and raised money through protection rackets targeting businesses.

In 1905, Koba Jughashvili was elected one of three Bolshevik delegates from Georgia to attend a Bolshevik party conference in Tampere, Finland, then part of the Russian empire. It was there he first met Lenin, who had been living in exile in Switzerland. Koba had read Lenin's prolific writings and admired his leadership of the Bolshevik cause, but was unimpressed in this first in-person encounter. He vocally took issue with Lenin's position that Bolsheviks should seek seats in the newly-formed State Duma (parliament). When Lenin backed down in the face of opposition, he said, “I expected to see the mountain eagle of our party, a great man, not only politically but physically, for I had formed for myself a picture of Lenin as a giant, as a stately representative figure of a man. What was my disappointment when I saw the most ordinary individual, below average height, distinguished from ordinary mortals by, literally, nothing.”

Returning to Georgia, he resumed his career as an underground revolutionary including, famously, organising a robbery of the Russian State Bank in Tiflis in which three dozen people were killed and two dozen more injured, “expropriating” 250,000 rubles for the Bolshevik cause. Koba did not participate directly, but he was the mastermind of the heist. This and other banditry, criminal enterprises, and unauthorised publications resulted in multiple arrests, imprisonments, exiles to Siberia, escapes, re-captures, and life underground in the years that followed. In 1912, while living underground in Saint Petersburg after yet another escape, he was named the first editor of the Bolshevik party's new daily newspaper, Pravda, although his name was kept secret. In 1913, with the encouragement of Lenin, he wrote an article titled “Marxism and the National Question” in which he addressed how a Bolshevik regime should approach the diverse ethnicities and national identities of the Russian Empire. As a Georgian Bolshevik, Jughashvili was seen as uniquely qualified and credible to address this thorny question. He published the article under the nom de plume “K. [for Koba] Stalin”, which, literally translated, meant “Man of Steel” and paralleled Lenin's pseudonym. He would use this name for the rest of his life, reverting to the Russified form of his given name, “Joseph”, instead of the nickname Koba (by which his close associates would continue to address him informally). I shall, like the author, refer to him subsequently as “Stalin”.

When Russia entered the Great War in 1914, events were set into motion which would lead to the end of Czarist rule, but Stalin was on the sidelines: in exile in Siberia, where he spent much of his time fishing. In late 1916, as manpower shortages became acute, exiled Bolsheviks including Stalin received notices of conscription into the army, but when he appeared at the induction centre he was rejected due to a crippled left arm, the result of a childhood injury. It was only after the abdication of the Czar in the February Revolution of 1917 that he returned to Saint Petersburg, now renamed Petrograd, and resumed his work for the Bolshevik cause. In April 1917, in elections to the Bolshevik Central Committee, Stalin came in third after Lenin (who had returned from exile in Switzerland) and Zinoviev. Despite having been out of circulation for several years, Stalin's reputation from his writings and editorship of Pravda, which he resumed, elevated him to among the top rank of the party.

As Kerensky's Provisional Government attempted to consolidate its power and continue the costly and unpopular war, Stalin and Trotsky joined Lenin's call for a Bolshevik coup to seize power, and Stalin was involved in all aspects of the eventual October Revolution, although often behind the scenes, while Lenin was the public face of the Bolshevik insurgency.

After seizing power, the Bolsheviks faced challenges from all directions. They had to disentangle Russia from the Great War without leaving the country open to attack and territorial conquest by Germany or Poland. Despite their ambitious name, they were a minority party and had to subdue domestic opposition. They took over a country which the debts incurred by the Czar to fund the war had effectively bankrupted. They had to exert their control over a sprawling, polyglot empire in which, outside of the big cities, their party had little or no presence. They needed to establish their authority over a military in which the officer corps largely regarded the Czar as their legitimate leader. They had to restore agricultural production, severely disrupted by levies of manpower for the war, before famine brought instability and the risk of a counter-coup. And for facing these formidable problems, all at the same time, they were utterly unprepared.

The Bolsheviks were, to a man (and they were all men), professional revolutionaries. Their experience was in writing and publishing radical tracts and works of Marxist theory, agitating and organising workers in the cities, carrying out acts of terror against the regime, and funding their activities through banditry and other forms of criminality. There was not a military man, agricultural expert, banker, diplomat, logistician, transportation specialist, or administrator among them, and suddenly they needed all of these skills and more, plus the ability to recruit and staff an administration for a continent-wide empire. Further, although Lenin's leadership was firmly established and undisputed, his subordinates were all highly ambitious men seeking to establish and increase their power in the chaotic and fluid situation.

It was in this environment that Stalin made his mark as the reliable “fixer”. Whether it was securing levies of grain from the provinces, putting down resistance from counter-revolutionary White forces, stamping out opposition from other parties, developing policies for dealing with the diverse nations incorporated into the Russian Empire (indeed, in a real sense, it was Stalin who invented the Soviet Union as a nominal federation of autonomous republics which, in fact, were subject to Party control from Moscow), or implementing Lenin's orders, even when he disagreed with them, Stalin was on the job. Lenin recognised Stalin's importance as his right hand man by creating the post of General Secretary of the party and appointing him to it.

This placed Stalin at the centre of the party apparatus. He controlled who was hired, fired, and promoted. He controlled access to Lenin (only Trotsky could see Lenin without going through Stalin). The result was a finely-tuned machine: Lenin exercised absolute power through a party organisation which Stalin had largely built and operated.

Then, in May of 1922, the unthinkable happened: Lenin was felled by a stroke which left him partially paralysed. He retreated to his dacha at Gorki to recuperate, and his communication with the other senior leadership was almost entirely through Stalin. There had been no thought of or plan for a succession after Lenin (he was only fifty-two at the time of his first stroke, although he had been unwell for much of the previous year). As Lenin's health declined, ending in his death in January 1924, Stalin increasingly came to run the party and, through it, the government. He had appointed loyalists in key positions, who saw their own careers as linked to that of Stalin. By the end of 1924, Stalin began to move against the “Old Bolsheviks” who he saw as rivals and potential threats to his consolidation of power. When confronted with opposition, on three occasions he threatened to resign, each exercise in brinksmanship strengthening his grip on power, as the party feared the chaos that would ensue from a power struggle at the top. His status was reflected in 1925 when the city of Tsaritsyn was renamed Stalingrad.

This ascent to supreme power was not universally applauded. Felix Dzierzynski (Polish born, he is often better known by the Russian spelling of his name, Dzerzhinsky) who, as the founder of the Soviet secret police (Cheka/GPU/OGPU) knew a few things about dictatorship, warned in 1926, the year of his death, that “If we do not find the correct line and pace of development our opposition will grow and the country will get its dictator, the grave digger of the revolution irrespective of the beautiful feathers on his costume.”

With or without feathers, the dictatorship was beginning to emerge. In 1926 Stalin published “On Questions of Leninism” in which he introduced the concept of “Socialism in One Country” which, presented as orthodox Leninist doctrine (which it wasn't), argued that world revolution was unnecessary to establish communism in a single country. This set the stage for the collectivisation of agriculture and rapid industrialisation which was to come. In 1928, what was to be the prototype of the show trials of the 1930s opened in Moscow, the Shakhty trial, complete with accusations of industrial sabotage (“wrecking”), denunciations of class enemies, and Andrei Vyshinsky presiding as chief judge. Of the fifty-three engineers accused, five were executed and forty-four imprisoned. A country desperately short on the professionals its industry needed to develop had begun to devour them.

It is a mistake to regard Stalin purely as a dictator obsessed with accumulating and exercising power and destroying rivals, real or imagined. The one consistent theme throughout Stalin's career was that he was a true believer. He was a devout believer in the Orthodox faith while at the seminary, and he seamlessly transferred his allegiance to Marxism once he had been introduced to its doctrines. He had mastered the difficult works of Marx and could cite them from memory (as he often did spontaneously to buttress his arguments in policy disputes), and went on to similarly internalise the work of Lenin. These principles guided his actions, and motivated him to apply them rigidly, whatever the cost might be.

Starting in 1921, Lenin had introduced the New Economic Policy, which lightened state control over the economy and, in particular, introduced market reforms in the agricultural sector, resulting in a mixed economy in which socialism reigned in big city industries, but in the countryside the peasants operated under a kind of market economy. This policy had restored agricultural production to pre-revolutionary levels and largely ended food shortages in the cities and countryside. But to a doctrinaire Marxist, it seemed to risk destruction of the regime. Marx believed that the political system was determined by the means of production. Thus, accepting what was essentially a capitalist economy in the agricultural sector was to infect the socialist government with its worst enemy.

Once Stalin had completed his consolidation of power, he then proceeded as Marxist doctrine demanded: abolish the New Economic Policy and undertake the forced collectivisation of agriculture. This began in 1928.

And it is with this momentous decision that the present volume comes to an end. This massive work (976 pages in the print edition) is just the first in a planned three volume biography of Stalin. The second volume, Stalin: Waiting for Hitler, 1929–1941, was published in 2017 and the concluding volume is not yet completed.

Reading this book, and the entire series, is a major investment of time in a single historical figure. But, as the author observes, if you're interested in the phenomenon of twentieth century totalitarian dictatorship, Stalin is the gold standard. He amassed more power, exercised by a single person with essentially no checks or limits, over more people and a larger portion of the Earth's surface than any individual in human history. He ruled for almost thirty years, transformed the economy of his country, presided over deliberate famines, ruthless purges, and pervasive terror that killed tens of millions, led his country to victory at enormous cost in the largest land conflict in history and ended up exercising power over half of the European continent, and built a military which rivaled that of the West in a bipolar struggle for global hegemony.

It is impossible to relate the history of Stalin without describing the context in which it occurred, and this is as much a history of the final days of imperial Russia, the revolutions of 1917, and the establishment and consolidation of Soviet power as of Stalin himself. Indeed, in this first volume, there are lengthy parts of the narrative in which Stalin is largely offstage: in prison, internal exile, or occupied with matters peripheral to the main historical events. The level of detail is breathtaking: the Bolsheviks seem to have been as compulsive record-keepers as Germans are reputed to be, and not only are the votes of seemingly every committee meeting recorded, but who voted which way and why. There are more than two hundred pages of end notes, source citations, bibliography, and index.

If you are interested in Stalin, the Soviet Union, the phenomenon of Bolshevism, totalitarian dictatorship, or how destructive madness can grip a civilised society for decades, this is an essential work. It is unlikely it will ever be equalled.

 Permalink

Cawdron, Peter. Losing Mars. Brisbane, Australia: Independent, 2018. ISBN 978-1-7237-4729-8.
Peter Cawdron has established himself as the contemporary grandmaster of first contact science fiction. In a series of novels including Anomaly (December 2011), Xenophobia (August 2013), Little Green Men (September 2013), Feedback (February 2014), and My Sweet Satan (September 2014), he has explored the first encounter of humans with extraterrestrial life in a variety of scenarios, all with twists and turns that make you question the definition of life and intelligence.

This novel is set on Mars, where a nominally international but strongly NASA-dominated station has been set up by the six-person crew who were the first to land on the red planet. The crew of Shepard station, three married couples, bring a variety of talents to their multi-year mission of exploration: pilot, engineer, physician, and even botanist. Cory Anderson (the narrator) is the botanist, responsible for the greenhouse which will feed the crew during their mission. They have a fully-fueled Mars Return Vehicle, based upon NASA's Orion capsule, ready to go, and their ticket back to Earth, the Schiaparelli return stage, waiting in Mars orbit, but orbital mechanics dictates when they can return to Earth, based on the two-year cycle of Earth-Mars transfer opportunities. The crew is acutely aware that the future of Mars exploration rests on their shoulders: failure, whether a tragedy in which they were lost, or even cutting their mission short, might result in “losing Mars” in the same way humanity abandoned the Moon for fifty years after “flags and footprints” visits had accomplished their chest-beating goal.

The Shepard crew are confronted with a crisis not of their making when a Chinese mission, completely unrelated to theirs, attempting to do “Mars on a shoestring” by exploring its moon Phobos, faces disaster when a poorly-understood calamity kills two of its four crew and disables their spacecraft. The two surviving taikonauts show life signs on telemetry but have not communicated with their mission control and, with their ship disabled, are certain to die when their life support consumables are exhausted.

The crew, in consultation with NASA, conclude the only way to mount a rescue mission is for the pilot and Cory, the only crew member who can be spared, to launch in the return vehicle, rendezvous with the Schiaparelli, use it to match orbits with the Chinese ship, rescue the survivors, and then return to Earth with them. (The return vehicle is unable to land back on Mars, being unequipped for a descent and soft landing through its thin atmosphere.) This will leave the four remaining crew of the Shepard with no way home until NASA can send a rescue mission, which will take two years to arrive at Mars. However unappealing the prospect, they conclude that abandoning the Chinese crew to die when rescue was possible would be inhuman, and proceed with the plan.

It is only after arriving at Phobos, after the half-way point in the book, that things begin to get distinctly weird and we suddenly realise that Peter Cawdron is not writing a novelisation of a Kerbal Space Program rescue scenario but is rather up to his old tricks and there is much more going on here than you've imagined from the story so far.

Babe Ruth hit 714 home runs, but he struck out 1,330 times. For me, this story is a swing and a miss. It takes a long, long time to get going, and we must wade through a great deal of social justice virtue signalling to get there. (Lesbians in space? Who could have imagined? Oh, right….) Once we get to the “good part”, the narrative is related in a fractured manner reminiscent of Vonnegut (I'm trying to avoid spoilers—you'll know what I'm talking about if you make it that far). And the copy editing and fact checking…oh, dear.

There are no fewer than seven idiot “it's/its” bungles, two on one page. A solar powered aircraft is said to have “turboprop engines”. Alan Shepard's suborbital mission is said to have been launched on a “prototype Redstone rocket” (it wasn't), which is described as an “intercontinental ballistic missile” (it was a short range missile with a maximum range of 323 km), which subjected the astronaut to “nine g's [sic] launching” (it was actually 6.3 g), with reentry g loads “more than that of the gas giant Saturn” (which is correct, but local gravity on Saturn is just 1.065 g, as the planet is very large and less dense than water). Military officers who defy orders are tried by a court martial, not “court marshaled”. The Mercury-Atlas 3 launch failure which Shepard witnessed at the Cape did not “[end] up in a fireball a couple of hundred feet above the concrete”: in fact it was destroyed by ground command forty-three seconds after launch at an altitude of several kilometres due to a guidance system failure, and the launch escape system saved the spacecraft and would have allowed an astronaut, had one been on board, to land safely. It's “bungee” cord, not “Bungie”. “Navy” is not an acronym, and hence is not written “NAVY”. The Juno orbiter at Jupiter does not “broadcast with the strength of a cell phone”; it has a 25 watt transmitter which is between twelve and twenty-five times more powerful than the maximum power of a mobile phone. He confuses “ecliptic” and “elliptical”, and states that the velocity of a spacecraft decreases as it approaches closer to a body in free fall (it increases). A spacecraft is said to be “accelerating at fifteen meters per second” which is a unit of velocity, not acceleration. A daughter may be the spitting image of her mother, but not “the splitting image”. Thousands of tiny wires do not “rap” around a plastic coated core, they “wrap”, unless they are special hip-hop wires which NASA has never approved for space flight. We do not live in a “barreled galaxy”, but rather a barred spiral galaxy.
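
For readers who want to check such numbers themselves, two of them are easy to verify in a few lines (a quick sketch using round published values for Saturn and a typical mobile phone's one to two watt maximum transmit power):

    import math

    # Saturn's "surface" gravity from its gravitational parameter and
    # equatorial radius.
    GM_saturn = 3.7931e7    # km^3/s^2
    R_saturn = 60268.0      # km
    g_saturn = GM_saturn / R_saturn**2 * 1000.0      # m/s^2
    print(f"Saturn gravity ≈ {g_saturn / 9.80665:.3f} g")   # ≈ 1.065 g

    # Juno's 25 watt transmitter versus a mobile phone's maximum power.
    for phone_watts in (2.0, 1.0):
        print(f"{25.0 / phone_watts:.1f}x a {phone_watts} W phone")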

Now, you may think I'm being harsh in pointing out these goofs which are not, after all, directly relevant to the plot of the novel. But errors of this kind, all of which could be avoided by research no more involved than looking things up in Wikipedia or consulting a guide to English usage, are indicative of a lack of attention to detail which, sadly, is also manifest in the main story line. To discuss these we must step behind the curtain.

Spoiler warning: Plot and/or ending details follow.  
It is implausible in the extreme that the Schiaparelli would have sufficient extra fuel to perform a plane change maneuver from its orbital inclination of nearly twenty degrees to the near-equatorial orbit of Phobos, then raise its orbit to rendezvous with the moon. The fuel on board the Schiaparelli would have been launched from Earth, and would be just sufficient to return to Earth without any costly maneuvers in Mars orbit. The cost of launching such a large additional amount of fuel, not to mention the larger tanks to hold it, would be prohibitive.
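
How expensive is such a plane change? The standard formula for a pure inclination change on a circular orbit is Δv = 2v·sin(Δi/2); here is a rough sketch, assuming a circular 400 km Mars orbit, since the novel gives no precise orbital elements:

    import math

    mu_mars = 42828.0     # Mars gravitational parameter, km^3/s^2
    r = 3389.5 + 400.0    # assumed circular orbit radius, km

    v = math.sqrt(mu_mars / r)           # orbital speed: about 3.4 km/s
    di = math.radians(20.0)              # plane change to reach Phobos' orbit
    dv = 2.0 * v * math.sin(di / 2.0)    # ≈ 1.2 km/s
    print(f"plane-change delta-v ≈ {dv:.2f} km/s")

More than a kilometre per second of delta-v, a substantial fraction of what the trans-Earth injection itself requires, which no sane mission planner would leave sitting unused in Mars orbit.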

(We're already in a spoiler block, but be warned that the following paragraph is a hideous spoiler of the entire plot.) Cory's ethical dilemma, on which the story turns, is whether to reveal the existence of the advanced technology alien base on Phobos to a humanity which he believes unprepared for such power and likely to use it to destroy themselves. OK, fine, that's his call (and that of Hedy, who also knows enough to give away the secret). But in the conclusion, we're told that, fifty years after the rescue mission, there's a thriving colony on Mars with eight thousand people in two subsurface towns, raising families. How probable is it, even if not a word was said about what happened on Phobos, that this thriving colony and the Earth-based space program which supported it would not, over half a century, send another exploration mission to Phobos, which is scientifically interesting in its own right? And given what Cory found there, any mission which investigated Phobos would have found what he did.

Finally, in the Afterword, the author defends his social justice narrative as follows.

At times, I've been criticized for “jumping on the [liberal] bandwagon” on topics like gay rights and Black Lives Matter across a number of books, but, honestly, it's the 21st century—the cruelty that still dominates how we humans deal with each other is petty and myopic. Any contact with an intelligent extraterrestrial species will expose not only a vast technological gulf, but a moral one as well.
Well, maybe, but isn't it equally likely that when they arrive in their atomic space cars, imbibe what passes for culture and morality among the intellectual élite of the global Davos party, and see how obsessed these talking apes seem to be with who is canoodling whom with what, then, after they stop laughing, they may decide that we are made of atoms which they can use for something else?
Spoilers end here.  

Peter Cawdron's earlier novels have provided many hours of thought-provoking entertainment, spinning out the possibilities of first contact. The present book…didn't, although it was good for a few laughs. I'm not going to write off a promising author due to one strike-out. I hope his next outing resumes the home run streak.

A Kindle edition is available, which is free for Kindle Unlimited subscribers.

 Permalink

Marighella, Carlos. Minimanual of the Urban Guerrilla. Seattle: CreateSpace, [1970] 2018. ISBN 978-1-4664-0680-3.
Carlos Marighella joined the Brazilian Communist Party in 1934, abandoning his studies in civil engineering to become a full time agitator for communism. He was arrested for subversion in 1936 and, after release from prison the following year, went underground. He was recaptured in 1939 and imprisoned until 1945 as part of an amnesty of political prisoners. He successfully ran for the federal assembly in 1946 but was removed from office when the Communist party was again banned in 1948. Resuming his clandestine life, he served in several positions in the party leadership and in 1953–1954 visited China to study the Maoist theory of revolution. In 1964, after a military coup in Brazil, he was again arrested, being shot in the process. After being once again released from prison, he broke with the Communist Party and began to advocate armed revolution against the military regime, travelling to Cuba to participate in a conference of Latin American insurgent movements. In 1968, he formed his own group, the Ação Libertadora Nacional (ALN) which, in September 1969, kidnapped U.S. Ambassador Charles Burke Elbrick, who was eventually released in exchange for fifteen political prisoners. In November 1969, Marighella was killed in a police ambush, prompted by a series of robberies and kidnappings by the ALN.

In June 1969, Marighella published this short book (or pamphlet: it is just 40 pages with plenty of white space at the ends of chapters) as a guide for revolutionaries attacking Brazil's authoritarian regime in the big cities. There is little or no discussion of the reasons for the rebellion; the work is addressed to those already committed to the struggle who seek practical advice for wreaking mayhem in the streets. Marighella has entirely bought into the Mao/Guevara theory of revolution: that the ultimate struggle must take place in the countryside, with rural peasants rising en masse against the regime. The problem with this approach was that the peasants seemed to be more interested in eking out their subsistence from the land than taking up arms in support of ideas championed by a few intellectuals in the universities and big cities. So, Marighella's guide is addressed to those in the cities with the goal of starting the armed struggle where there were people indoctrinated in the communist ideology on which it was based. This seems to suffer from the “step two problem”. In essence, his plan is:

  1. Blow stuff up, rob banks, and kill cops in the big cities.
  2. ?
  3. Communist revolution in the countryside.

The book is a manual of tactics: formation of independent cells operating on their own initiative and unable to compromise others if captured, researching terrain and targets and planning operations, mobility and hideouts, raising funds through bank robberies, obtaining weapons by raiding armouries and police stations, breaking out prisoners, kidnapping and exchange for money and prisoners, sabotaging government and industrial facilities, executing enemies and traitors, terrorist bombings, and conducting psychological warfare.

One problem with this strategy is that if you ignore the ideology which supposedly justifies and motivates this mayhem, it is, viewed from the outside, essentially indistinguishable from the actions of non-politically-motivated outlaws. As the author notes,

The urban guerrilla is a man who fights the military dictatorship with arms, using unconventional methods. A political revolutionary, he is a fighter for his country's liberation, a friend of the people and of freedom. The area in which the urban guerrilla acts is in the large Brazilian cities. There are also bandits, commonly known as outlaws, who work in the big cities. Many times assaults by outlaws are taken as actions by urban guerrillas.

The urban guerrilla, however, differs radically from the outlaw. The outlaw benefits personally from the actions, and attacks indiscriminately without distinguishing between the exploited and the exploiters, which is why there are so many ordinary men and women among his victims. The urban guerrilla follows a political goal and only attacks the government, the big capitalists, and the foreign imperialists, particularly North Americans.

These fine distinctions tend to be lost upon innocent victims, especially since the proceeds of the bank robberies of which the “urban guerrillas” are so fond are not used to aid the poor but rather to finance still more attacks by the ever-so-noble guerrillas pursuing their “political goal”.

This would likely have remained the obscure and largely forgotten work of a little-known Brazilian renegade had it not been picked up, translated into English, and published in June and July 1970 by the Berkeley Tribe, a California underground newspaper. It became the terrorist bible of groups including Weatherman, the Black Liberation Army, and the Symbionese Liberation Army in the United States, the Red Army Faction in Germany, the Irish Republican Army, the Sandinistas in Nicaragua, and the Palestine Liberation Organisation. These groups embarked on crime and terror campaigns right out of Marighella's playbook, with no more thought given to step two. They are largely forgotten now because their futile acts had no permanent consequences and their existence was an embarrassment to the élites who largely share their pernicious ideology but have chosen to advance it through subversion, not insurrection.

A Kindle edition is available from a different publisher. You can read the book on-line for free at the Marxists Internet Archive.

 Permalink

Burrough, Bryan. Days of Rage. New York: Penguin Press, 2015. ISBN 978-0-14-310797-2.
In the year 1972, there were more than 1900 domestic bombings in the United States. Think about that—that's more than five bombings a day. In an era when the occasional terrorist act by a “lone wolf” nutcase gets round-the-clock coverage on cable news channels, it's hard to imagine that not so long ago, the bombings and other mayhem committed by “revolutionary” groups such as Weatherman, the Black Liberation Army, the FALN, and The Family often made only the local newspapers, on page B37, below the fold.

The civil rights struggle and opposition to the Vietnam war had turned out large crowds and radicalised the campuses, but in the opinion of many activists, yielded few concrete results. Indeed, in the 1968 presidential election, pro-war Democrat Humphrey had been defeated by pro-war Republican Nixon, with anti-war Democrats McCarthy marginalised and Robert Kennedy assassinated.

In this bleak environment, a group of leaders of one of the most radical campus organisations, the Students for a Democratic Society (SDS), gathered in Chicago to draft what became a sixteen-thousand-word manifesto bristling with Marxist jargon that linked the student movement in the U.S. to Third World guerrilla insurgencies around the globe. They advocated a Che Guevara-like guerrilla movement in America led, naturally, by themselves. They named the manifesto after the Bob Dylan lyric, “You don't need a weatherman to know which way the wind blows.” Other SDS members who thought the idea of armed rebellion in the U.S. absurd and insane quipped, “You don't need a rectal thermometer to know who the assholes are.”

The Weatherman faction managed to blow up (figuratively) the SDS convention in June 1969, splitting the organisation but effectively taking control of it. They called a massive protest in Chicago for October. Dubbed the “National Action”, it would soon become known as the “Days of Rage”.

Almost immediately the Weatherman plans began to go awry. Their efforts to rally the working class (whom the Ivy League Weatherman élite mocked as “greasers”) got no traction, with some of their outrageous “actions” accomplishing little other than landing the perpetrators in the slammer. Come October, the Days of Rage ended up in farce. Thousands had been expected, ready to take the fight to the cops and “oppressors”, but on the day no more than two hundred showed up, most of them SDS stalwarts who already knew one another. They charged the police and were quickly routed, with six shot (none seriously), many beaten, and more than 120 arrested. Bail bonds alone added up to US$ 2.3 million. It was a humiliating defeat. The leadership decided it was time to change course.

So what did this intellectual vanguard of the masses decide to do? Well, obviously, destroy the SDS (their source of funding and pipeline of recruitment), go underground, and start blowing stuff up. This posed a problem, because these middle-class college kids had no idea where to obtain explosives (they didn't know that, at the time, you could buy as much dynamite as you could afford over the counter in many rural areas by, at most, showing a driver's license), what to do with it, or how to build an underground identity. This led to not Keystone Kops but Klueless Kriminal misadventures, culminating in March 1970, when a bomb being prepared for an attack on a dance at Fort Dix, New Jersey detonated prematurely, demolishing an entire New York townhouse and leaving three of the Weather collective dead in the rubble. In the aftermath, many Weather hangers-on melted away.

This did not deter the hard core, who resolved to learn more about their craft. They issued a communiqué declaring their solidarity with the oppressed black masses (not one of whom, oppressed or otherwise, was a member of Weatherman), and vowed to attack symbols of “Amerikan injustice”. Privately, they decided to avoid killing people, confining their attacks to property. And one of their members hit the books to become a journeyman bombmaker.

The bungling Bolsheviks of Weatherman may have had Marxist theory down pat, but they were lacking in authenticity, and acutely aware of it. It was hard for those whose addresses before going underground were élite universities to present themselves as oppressed. The best they could do was to identify themselves with the cause of those they considered victims of “the system” but who, to date, seemed little inclined to do anything about it themselves. Those who cheered on Weatherman, then, considered it significant when, in the spring of 1971, a new group calling itself the “Black Liberation Army” (BLA) burst onto the scene with two assassination-style murders of New York City policemen on routine duty. Messages delivered after each attack to Harlem radio station WLIB claimed responsibility. One declared,

Every policeman, lackey or running dog of the ruling class must make his or her choice now. Either side with the people: poor and oppressed, or die for the oppressor. Trying to stop what is going down is like trying to stop history, for as long as there are those who will dare to live for freedom there are men and women who dare to unhorse the emperor.

All power to the people.

Politicians, press, and police weren't sure what to make of this. The politicians, worried about the opinion of their black constituents, shied away from anything which sounded like accusing black militants of targeting police. The press, although they'd never write such a thing or speak it in polite company, didn't think it plausible that street blacks could organise a sustained revolutionary campaign: certainly that required college-educated intellectuals. The police, while threatened by these random attacks, weren't sure there was actually any organised group behind the BLA attacks: they were inclined to believe it was a matter of random cop killers attributing their attacks to the BLA after the fact. Further, the BLA had no visible spokesperson and issued no manifestos other than the brief statements after some attacks. This contributed to the mystery, which largely persists to this day because so many participants were killed and the survivors have never spoken out.

In fact, the BLA was almost entirely composed of former members of the New York chapter of the Black Panthers, which had collapsed in the split between factions following Huey Newton and those (including New York) loyal to Eldridge Cleaver, who had fled to exile in Algeria and advocated violent confrontation with the power structure in the U.S. The BLA would perpetrate more than seventy violent attacks between 1970 and 1976 and is said to be responsible for the deaths of thirteen police officers. In 1972, its members hijacked a domestic airline flight and pocketed a ransom of US$ 1 million.

Weatherman (later renamed the “Weather Underground” because the original name was deemed sexist) and the BLA represented the two poles of the violent radicals: the first, intellectual, college-educated, and mostly white, concentrated mostly on symbolic bombings against property, usually with warnings in advance to avoid human casualties. As pressure from the FBI increased upon them, they became increasingly inactive; a member of the New York police squad assigned to them quipped, “Weatherman, Weatherman, what do you do? Blow up a toilet every year or two.” They managed the escape of Timothy Leary from a minimum-security prison in California. Leary basically just walked away, with a group of Weatherman members paid by Leary supporters picking him up and arranging for him and his wife Rosemary to obtain passports under assumed names and flee the U.S. for exile in Algeria with former Black Panther leader Eldridge Cleaver.

The Black Liberation Army, being composed largely of ex-prisoners with records of violent crime, was not known for either the intelligence or impulse control of its members. On several occasions, what should have been merely tense encounters with the law turned into deadly firefights because a BLA militant opened fire for no apparent reason. Had they not been so deadly to those they attacked and innocent bystanders, the exploits of the BLA would have made a fine slapstick farce.

As the dour decade of the 1970s progressed, other violent underground groups would appear, tending to follow the model of either Weatherman or the BLA. One of the most visible, if not successful, was the “Symbionese Liberation Army” (SLA), founded by escaped convict and grandiose self-styled revolutionary Donald DeFreeze. Calling himself “General Field Marshal Cinque”, which he pronounced “sin-kay”, and ending his fevered communications with “DEATH TO THE FASCIST INSECT THAT PREYS UPON THE LIFE OF THE PEOPLE”, this band of murderous bozos struck their first blow for black liberation by assassinating Marcus Foster, the first black superintendent of the Oakland, California school system, for the “crimes against the people” of suggesting that police be called in to deal with violence in the city's schools and that identification cards be issued to students. Sought by the police for the murder, they struck again by kidnapping heiress, college student, and D-list celebrity Patty Hearst, whose abduction became front page news nationwide. If that wasn't sufficiently bizarre, the abductee eventually issued a statement saying she had chosen to “stay and fight”, adopting the name “Tania”, after the nom de guerre of a Cuban revolutionary and companion of Che Guevara. She was later photographed by a surveillance camera carrying a rifle during a San Francisco bank robbery perpetrated by the SLA. Hearst then went underground and evaded capture until September 1975, after which, when being booked into jail, she gave her occupation as “Urban Guerrilla”. She later claimed she had agreed to join the SLA and participate in its crimes only to protect her own life. She was convicted and sentenced to 35 years in prison, later reduced to 7 years; the sentence was commuted to 22 months by U.S. President Jimmy Carter, she was released in 1979, and she received one of Bill Clinton's last-day-in-office pardons in January 2001. Six members of the SLA, including DeFreeze, died in a house fire during a shootout with the Los Angeles Police Department in May 1974.

Violence committed in the name of independence for Puerto Rico was nothing new. In 1950, two radicals tried to assassinate President Harry Truman, and in 1954, four revolutionaries shot up the U.S. House of Representatives from the visitors' gallery, wounding five congressmen on the floor, none fatally. The Puerto Rican terrorists had the same problem as their Weatherman, BLA, or SLA bomber brethren: they lacked the support of the people. Most of the residents of Puerto Rico were perfectly happy being U.S. citizens, especially as this allowed them to migrate to the mainland to escape the endemic corruption and the poverty it engendered on the island. As the 1960s progressed, the Puerto Rican radicals increasingly identified with Castro's Cuba (which supported them ideologically, if not financially), and promised to make a revolutionary Puerto Rico a beacon of prosperity and liberty like Cuba had become.

Starting in 1974, a new Puerto Rican terrorist group, the Fuerzas Armadas de Liberación Nacional (FALN) launched a series of attacks in the U.S., most in the New York and Chicago areas. One bombing, that of the Fraunces Tavern in New York in January 1975, killed four people and injured more than fifty. Between 1974 and 1983, a total of more than 130 bomb attacks were attributed to the FALN, most against corporate targets. In 1975 alone, twenty-five bombs went off, around one every two weeks.

Other groups, such as the “New World Liberation Front” (NWLF) in northern California and “The Family” in the East, continued the chaos. The NWLF, formed originally from the remains of the SLA, detonated twice as many bombs as the Weather Underground. The Family carried out a series of robberies, including the deadly Brink's holdup of October 1981, and jailbreaks of imprisoned radicals.

In the first half of the 1980s, the radical violence sputtered out. Most of the principals were in prison, dead, or living underground and keeping a low profile. A growing prosperity had replaced the malaise and stagflation of the 1970s and there were abundant jobs for those seeking them. The Vietnam War and draft were receding into history, leaving the campuses with little to protest, and the remaining radicals had mostly turned from violent confrontation to burrowing their way into the culture, media, administrative state, and academia as part of Gramsci's “long march through the institutions”.

All of these groups were plagued with the “step two problem”. The agenda of Weatherman was essentially:

  1. Blow stuff up, kill cops, and rob banks.
  2. ?
  3. Proletarian revolution.

Other groups may have had different step threes: “Black liberation” for the BLA, “¡Puerto Rico libre!” for the FALN, but none of them seemed to make much progress puzzling out step two. The best attempt of the SLA's deep thinker Bill Harris, advocating killing policemen at random, was to argue that “If they killed enough, … the police would crack down on the oppressed minorities of the Bay Area, who would then rise up and begin the revolution.” Sure thing.

In sum, all of this violence and the suffering that resulted from it accomplished precisely none of the goals of those who perpetrated it (which is a good thing: they mostly advocated one flavour or another of communist enslavement of the United States). All it managed to do was contribute to the constriction of personal liberty in the name of “security”, with metal detectors, bomb-sniffing dogs, X-ray machines, rent-a-cops, surveillance cameras, and the first round of airport security theatre springing up like mushrooms everywhere. The amount of societal disruption which can be caused by what amounted to around one hundred homicidal nutcases is something to behold. There were huge economic losses not just due to bombings, but from evacuations due to bomb threats, many doubtless perpetrated by copycats motivated by nothing more political than the desire for a day off from work. Violations of civil liberties by the FBI and other law enforcement agencies, which carried out unauthorised wiretaps, burglaries, and other invasions of privacy and property rights, not only discredited them but resulted in many of the perpetrators of the mayhem walking away scot-free. Weatherman founders Bill Ayers and Bernardine Dohrn would, in 1995, launch the political career of Barack Obama at a meeting in their home in Chicago, where Ayers is now a Distinguished Professor at the University of Illinois at Chicago. Ayers, who bombed the U.S. Capitol in 1971 and the Pentagon in 1972, remarked in the 1980s that he was “Guilty as hell, free as a bird—America is a great country.”

This book is an excellent account of a largely-forgotten era in recent history. In a time when slaver radicals (a few of them the same people who set the bombs in their youth) declaim from the cultural heights of legacy media, academia, and their new strongholds in the technology firms which increasingly mediate our communications and access to information, advocate “active resistance”, “taking to the streets”, or “occupying” this or that, it's a useful reminder of where such action leads, and that it's wise to work out step two before embarking on step one.

 Permalink

Stross, Charles. Iron Sunrise. New York: Ace, 2005. ISBN 978-0-441-01296-1.
In Accelerando (July 2011), a novel assembled from nine previously-published short stories, the author chronicles the arrival of a technological singularity on Earth: the almost-instantaneously emerging super-intellect called the Eschaton which departed the planet toward the stars. Simultaneously, nine-tenths of Earth's population vanished overnight, and those left behind, after a period of chaos, found that with the end of scarcity brought about by “cornucopia machines” produced in the first phase of the singularity, they could dispense with anachronisms such as economic systems and government. After humans achieved faster than light travel, they began to discover that the Eschaton had relocated 90% of Earth's population to habitable worlds around various stars and left them to develop in their own independent directions, guided only by this message from the Eschaton, inscribed on a monument on each world.

  1. I am the Eschaton. I am not your god.
  2. I am descended from you, and I exist in your future.
  3. Thou shalt not violate causality within my historic light cone. Or else.

The wormholes used by the Eschaton to relocate Earth's population in the great Diaspora, a technology which humans had yet to understand, not only permitted instantaneous travel across interstellar distances but also in time: the more distant the planet from Earth, the longer the settlers deposited there have had to develop their own cultures and civilisations before being contacted by faster than light ships. With cornucopia machines to meet their material needs and allow them to bootstrap their technology, those that descended into barbarism or incessant warfare did so mostly due to bad ideas rather than their environment.

Rachel Mansour, secret agent for the Earth-based United Nations, operating under the cover of an entertainment officer (or, if you like, cultural attaché), whom we met in the previous novel in the series, Singularity Sky (February 2011), and her companion Martin Springfield, who has a back-channel to the Eschaton, serve as arms control inspectors. Their primary mission is to ensure that nobody on Earth, or on the worlds which have purchased technology from Earth, does anything that invites the wrath of the Eschaton—remember that “Or else.”

A terrible fate has befallen the planet Moscow, a diaspora “McWorld” accomplished in technological development and trade, when its star, a G-type main sequence star like the Sun, explodes in a blast releasing a hundredth the energy of a supernova, destroying all life on planet Moscow within an instant of the wavefront reaching it, and the entire planet within an hour.

The problem is, type G stars just don't explode on their own. Somebody did this, quite likely using technologies which risk Big E's “or else” on whoever was responsible (or whoever it concluded was responsible). What's more, Moscow maintained a slower-than-light deterrent fleet with relativistic planet-buster weapons to avenge any attack on its home planet. This fleet, essentially undetectable en route, has launched against New Dresden, a planet with which Moscow had a nonviolent trade dispute. The deterrent fleet can be recalled only by coded messages from two Moscow system ambassadors who survived the attack at their postings in other systems, but it can also be sent an irrevocable coercion code, transmitted by three ambassadors, which cancels the recall and causes any further messages to be ignored. And somebody seems to be killing off the remaining Moscow ambassadors: if their number falls below two, the attack will arrive at New Dresden in thirty-five years and wipe out the planet and as many of its eight hundred million inhabitants as have not been evacuated.
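
For concreteness, here is how I read the fleet's command logic, as a toy Python model; the thresholds are stated in the novel, but the state machine itself is my reading, not anything the author spells out:

    from enum import Enum

    class State(Enum):
        INBOUND = 1      # attack proceeding
        RECALLED = 2     # attack called off
        LOCKED = 3       # coercion code received: attack on, all further messages ignored

    class DeterrentFleet:
        def __init__(self):
            self.state = State.INBOUND
            self.recalls = set()     # ambassadors who have sent the recall code
            self.coercions = set()   # ambassadors who have sent the coercion code

        def receive(self, ambassador, code):
            if self.state is State.LOCKED:
                return                             # irrevocable: ignore everything
            if code == "recall":
                self.recalls.add(ambassador)
                if len(self.recalls) >= 2:
                    self.state = State.RECALLED    # two surviving ambassadors suffice
            elif code == "coerce":
                self.coercions.add(ambassador)
                if len(self.coercions) >= 3:
                    self.state = State.LOCKED      # three cancel any recall, forever

Which makes the killers' arithmetic plain: leave fewer than two ambassadors alive, or coerce three, and New Dresden's fate is sealed.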

Victoria Strowger, who detests her name and goes by “Wednesday”, has had an invisible friend since childhood, “Herman”, who speaks to her through her implants. As she's grown up, she has come to understand that, in some way, Herman is connected to Big E and, in return for advice and assistance she values highly, occasionally asks her for favours. Wednesday and her family were evacuated from one of Moscow's space stations just before the deadly wavefront from the exploded star arrived, with Wednesday running a harrowing last “errand” for Herman before leaving. Later, in her new home in an asteroid in the Septagon system, she becomes the target of an attack seemingly linked to that mystery mission, and escapes only to find her family wiped out by the attackers. With Herman's help, she flees on an interstellar liner.

While Singularity Sky was a delightful romp describing a society which had deliberately relinquished technology in order to maintain a stratified class system with the subjugated masses frozen around the Victorian era, suddenly confronted with the merry pranksters of the Festival, who inject singularity-epoch technology into its stagnant culture, Iron Sunrise is a much more conventional mystery/adventure tale about gaining control of the ambassadorial keys, figuring out who are the good and bad guys, and trying to avert a delayed but inexorably approaching genocide.

This just didn't work for me. I never got engaged in the story, didn't find the characters particularly interesting, and never came across any interesting ways in which the singularity came into play (and this is supposed to be the author's “Singularity Series”). There are some intriguing concepts, for example the “causal channel”, in which quantum-entangled particles permit instantaneous communication across spacelike separations, as long as the previously-prepared entangled particles have first been delivered to the communicating parties by slower than light travel. This is used in the plot to break faster than light communication where it would be inconvenient for the story line (much like all those circumstances in Star Trek where the transporter doesn't work for one reason or another just when you're tempted to say “Why don't they just beam up?”). The apparent villains, the ReMastered (think Space Nazis who believe in a Tipler-like cult of the Omega Point, out-Eschaton-ing the Eschaton, with icky brain-sucking technology), were just over the top.

Accelerando and Singularity Sky were thought-provoking and great fun. This one doesn't come up to that standard.

 Permalink