Friday, September 5, 2014

Reading List: The South Pole

Amundsen, Roald. The South Pole. New York: Cooper Square Press, [1913] 2001. ISBN 978-0-8154-1127-7.
In modern warfare, it has been observed that “generals win battles, but logisticians win wars.” So it is with planning an exploration mission to a remote destination where no human has ever set foot, and the truths are as valid for polar exploration in the early 20th century as they will be for missions to Mars in the 21st. On December 14th, 1911, Roald Amundsen and his five-man southern party reached the South Pole after a trek from the camp on the Ross Ice Shelf where they had passed the previous southern winter, preparing for an assault on the pole as early as the weather would permit. By over-wintering, they would be able to depart southward well before a ship would be able to land an expedition, since a ship would have to wait until the sea ice dispersed sufficiently to make a landing.

Amundsen's plan was built around what space mission architects call “in-situ resource utilisation” and “depots”, as well as “propulsion staging”. This allowed for a very lightweight push to the pole, both in terms of the amount of supplies which had to be landed by their ship, the Fram, and in the size of the polar party and the loading of their sledges. Upon arriving in Antarctica, Amundsen's party immediately began to hunt the abundant seals near the coast. More than two hundred seals were killed, processed, and stored for later use. (Since the temperature on the Ross Ice Shelf and the Antarctic interior never rises above freezing, the seal meat would keep indefinitely.) Then parties were sent out in the months remaining before the arrival of winter in 1911 to establish depots at every degree of latitude between the base camp and 82° south. These depots contained caches of seal meat for the men and dogs and kerosene for melting snow for water and cooking food. The depot-laying journeys familiarised the explorers with driving teams of dogs and operating in the Antarctic environment.

Amundsen had chosen dogs to pull his sledges. While his rival to be first at the pole, Robert Falcon Scott, experimented with ponies, motorised sledges, and man-hauling, Amundsen relied upon the experience of the indigenous peoples of the Arctic, for whom dogs were the proven solution. Dogs reproduced and matured sufficiently quickly that attrition could be made up by puppies born during the expedition; they could be fed on seal meat, which could be obtained locally; and if a dog team were to fall into a crevasse (as was inevitable when crossing uncharted terrain), the dogs could be hauled out, none the worse for wear, by the drivers of other sledges. None of this was true of ponies or motorised sledges.

Further, Amundsen adopted a strategy which can best be described as “dog eat dog”. On the journey to the pole, he started with 52 dogs. Seven of these had died from exhaustion or other causes before the ascent to the polar plateau. (Dogs who died were butchered and fed to the other dogs. Greenland sled dogs, being only slightly removed from wolves, had no hesitation in devouring their erstwhile comrades.) Once reaching the plateau, 27 dogs were slaughtered, their meat divided between the surviving dogs and the five men. Only 18 dogs would proceed to the pole. Dog carcasses were cached for use on the return journey.

Beyond the depots, the polar party had to carry everything required for the trip, but knowing the depots would be available for the return allowed them to travel lightly. After reaching the pole, they remained for three days to verify their position, sent out parties to ensure they had encircled the pole's position, and built a cairn to commemorate their achievement. Amundsen left a letter which he requested Captain Scott deliver to King Haakon VII of Norway should Amundsen's party be lost on its return to base. (Sadly, that was the fate which awaited Scott, who arrived at the pole on January 17th, 1912, only to find the Amundsen expedition's cairn there.)

This book is Roald Amundsen's contemporary memoir of the expedition. Originally published in two volumes, the present work includes both. Appendices describe the ship, the Fram, and scientific investigations in meteorology, geology, astronomy, and oceanography conducted during the expedition. Amundsen's account is as matter-of-fact as the memoirs of some astronauts, but a wry humour comes through when he discusses sled dogs with wills of their own and the foibles of humans cooped up in a small cabin in an alien environment during a night which lasts for months. He evinces great respect for his colleagues and competitors in polar exploration, particularly Scott and Shackleton, and worries whether his own approach to reaching the pole would be proved superior to theirs. At the time the book was published, the tragic fate of Scott's expedition was not known.

Today, we might not think of polar exploration as science, but a century ago it was as central to the scientific endeavour as robotic exploration of Mars is today. Here was an entire continent, known only in sketchy detail around its coast, with only a few expeditions into the interior. When Amundsen's party set out on their march to the pole, they had no idea whether they would encounter mountain ranges along the way and, if so, whether they could find a way over or around them. They took careful geographic and meteorological observations along their trek (as well as oceanographical measurements on the trip to Antarctica and back), and these provided some of the first data points toward understanding weather in the southern hemisphere.

In Norway, Amundsen was hailed as a hero, but it is clear from this narrative that he never considered himself such. He wrote:

I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

This work is in the public domain, and there are numerous editions of it available, in print and in electronic form, many from independent publishers. The independent publishers, for the most part, did not distinguish themselves in their respect for this work. Many of their editions were produced by running an optical character recognition program over a print copy of the book, then assembling the result with minimal copy-editing. Some (including the one I was foolish enough to buy) omit all of the diagrams, maps, and charts from the original book, which renders parts of the text incomprehensible. The paperback edition cited above, while expensive, is a facsimile of the original 1913 two-volume English translation of Amundsen's work, including all of the illustrations. I know of no presently available electronic edition of comparable quality which includes all of the material in the original book. Be careful—if you follow the link to the paperback edition, you'll see a Kindle edition listed, but this is from a different publisher, is rife with errors, and includes none of the illustrations. I made the mistake of buying it, assuming it was the same as the highly praised paperback. It isn't; don't be fooled.

Posted at 15:17 Permalink

Friday, August 29, 2014

Reading List: The Man Who Changed Everything

Mahon, Basil. The Man Who Changed Everything. Chichester, UK: John Wiley & Sons, 2003. ISBN 978-0-470-86171-4.
In the 19th century, science in general and physics in particular grew up, assuming their modern form which is still recognisable today. At the start of the century, the word “scientist” was not yet in use, and the natural philosophers of the time were often amateurs. University research in the sciences, particularly in Britain, was rare. Those working in the sciences were often occupied by cataloguing natural phenomena, and apart from Newton's monumental achievements, few sought mathematical laws to explain newly discovered physical phenomena such as electricity and magnetism.

One person, James Clerk Maxwell, was largely responsible for creating the way modern science is done and the way we think about theories of physics, while simultaneously restoring Britain's standing in physics relative to work on the Continent. He also created an institution which would continue to do important work from the time of his early death until the present day. While every physicist and electrical engineer knows of Maxwell and his work, he is largely unknown to the general public, and even those aware of his seminal work in electromagnetism may be unaware of the extent to which his footprints are found all over the edifice of 19th century physics.

Maxwell was born in 1831 to a Scottish lawyer, John Clerk, and his wife Frances Cay. Clerk subsequently inherited a country estate, and added “Maxwell” to his name in honour of the noble relatives from whom he inherited it. His son's first name, then, was “James” and his surname “Clerk Maxwell”: this is why his full name is always used instead of “James Maxwell”. From childhood, James was curious about everything he encountered, and instead of asking “Why?” over and over like many children, he drove his parents to distraction with “What's the go o' that?”. His father did not consider science a suitable occupation for his son and tried to direct him toward the law, but James's curiosity did not extend to legal tomes and he concentrated on topics that interested him. He published his first scientific paper, on curves with more than two foci, at the age of 14. He pursued his scientific education first at the University of Edinburgh and later at Cambridge, where he graduated in 1854 with a degree in mathematics. He came in second in the prestigious Tripos examination, earning the title of Second Wrangler.

Maxwell was now free to begin his independent research, and he turned to the problem of human colour vision. It had been established that colour vision worked by detecting the mixture of three primary colours, but Maxwell was the first to discover that these primaries were red, green, and blue, and that by mixing them in the correct proportion, white would be produced. This was a matter to which Maxwell would return repeatedly during his life.

In 1856 he accepted an appointment as a full professor and department head at Marischal College in Aberdeen, Scotland. In 1857, the topic for the prestigious Adams Prize was the nature of the rings of Saturn. Maxwell's submission was a tour de force which proved that the rings could be neither solid nor liquid, and hence had to be made of an enormous number of individually orbiting bodies. Maxwell was awarded the prize, the significance of which was magnified by the fact that his was the only submission: all of the others who aspired to solve the problem had abandoned it as too difficult.

Maxwell's next post was at King's College London, where he investigated the properties of gases and strengthened the evidence for the molecular theory. It was here that he first undertook to explain the relationship between electricity and magnetism which had been discovered by Michael Faraday. Working in the old style of physics, he constructed, as a thought experiment, an intricate mechanical model which might explain the lines of force that Faraday had introduced but which many scientists dismissed as mystical mumbo-jumbo. Maxwell believed the alternative, action at a distance without any intermediate mechanism, was wrong, and was able, with his model, to explain the phenomenon of rotation of the plane of polarisation of light by a magnetic field, which Faraday had discovered. While at King's College, to demonstrate his theory of colour vision, he took and displayed the first colour photograph.

Maxwell's greatest scientific achievement came while he was living the life of a country gentleman at his estate, Glenlair. In his textbook, A Treatise on Electricity and Magnetism, he presented his famous equations which showed that electricity and magnetism were two aspects of the same phenomenon. This was the first of the great unifications of physical laws which have continued to the present day. But that isn't all they showed. The speed of light appeared as a conversion factor between the units of electricity and magnetism, and the equations allowed solutions of waves oscillating between an electric and magnetic field which could propagate through empty space at the speed of light. It was compelling to deduce that light was just such an electromagnetic wave, and that waves of other frequencies outside the visual range must exist. Thus was laid the foundation of wireless communication, X-rays, and gamma rays.

The speed of light is a constant in Maxwell's equations, not depending upon the motion of the observer. This appears to conflict with Newton's laws of mechanics, and it was not until Einstein's 1905 paper on special relativity that the mystery was resolved. In essence, faced with a dispute between Newton and Maxwell, Einstein decided to bet on Maxwell, and he chose wisely.

Finally, when you look at Maxwell's equations (in their modern form, using the notation of vector calculus), they appear lopsided. While they unify electricity and magnetism, the symmetry is imperfect: a moving electric charge generates a magnetic field, but there is no magnetic charge which, when moved, generates an electric field. Such a charge would be a magnetic monopole, and despite extensive experimental searches, none has ever been found. The existence of monopoles would make Maxwell's equations even more beautiful, but sometimes nature doesn't care about that. By all evidence to date, Maxwell got it right.
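For readers who want to see the lopsidedness for themselves, here is a sketch of the equations in the modern vector form referred to above (SI units; the compact notation is due to Oliver Heaviside, not Maxwell himself, who wrote them out in components):

\[
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad
\nabla\cdot\mathbf{B} = 0 \qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t} \qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}
\]

Electric charge density ρ and current density J appear as sources, but there are no corresponding magnetic source terms: ∇·B is identically zero. That is the missing symmetry, and the absent magnetic monopole. In empty space the equations combine into a wave equation whose propagation speed is c = 1/√(μ₀ε₀), which is how the speed of light emerges from purely electric and magnetic measurements.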

In 1871 Maxwell came out of retirement to accept a professorship at Cambridge and found the Cavendish Laboratory, which would focus on experimental science and elevate Cambridge to world-class status in the field. To date, 29 Nobel Prizes have been awarded for work done at the Cavendish.

Maxwell's theoretical and experimental work on heat and gases revealed discrepancies which were not explained until the development of quantum theory in the 20th century. His suggestion of Maxwell's demon posed a deep puzzle in the foundations of thermodynamics whose resolution, a century later, revealed the deep connections between information theory and statistical mechanics. His practical work on automatic governors for steam engines foreshadowed what we now call control theory. He played a key part in the development of the units we use for electrical quantities.

By all accounts Maxwell was a modest, generous, and well-mannered man. He wrote whimsical poetry, discussed a multitude of topics (although he had little interest in politics), was an enthusiastic horseman and athlete (he would swim in the sea off Scotland in the winter), and was happily married, with his wife Katherine an active participant in his experiments. All his life, he supported general education in science, founding a working men's college in Cambridge and lecturing at such colleges throughout his career.

Maxwell lived only 48 years—he died in 1879 of the same cancer which had killed his mother when he was only eight years old. When he fell ill, he was engaged in a variety of research while presiding over the Cavendish Laboratory. We shall never know what he might have done had he been granted another two decades.

Apart from the significant achievements Maxwell made in a wide variety of fields, he changed the way physicists look at, describe, and think about natural phenomena. After using a mental model to explore electromagnetism, he discarded it in favour of a mathematical description of its behaviour. There is no theory behind Maxwell's equations: the equations are the theory. To the extent they produce the correct results when experimental conditions are plugged in, and predict new phenomena which are subsequently confirmed by experiment, they are valuable. If they err, they should be supplanted by something more precise. But they say nothing about what is really going on—they only seek to model what happens when you do experiments. Today we are so accustomed to working with theories of this kind (quantum mechanics, special and general relativity, the standard model of particle physics) that we don't think much about it, but it was revolutionary in Maxwell's time. His mathematical approach, like Newton's, eschewed explanation in favour of prediction: “We have no idea how it works, but here's what will happen if you do this experiment.” This is perhaps Maxwell's greatest legacy.

This is an excellent scientific biography of Maxwell which also gives the reader a sense of the man. He was such a quintessentially normal person that there aren't a lot of amusing anecdotes to relate. He loved life, loved his work, cherished his friends, and discovered the scientific foundations of the technologies which allow you to read this. In the Kindle edition, at least as read on an iPad, the text appears in a curious, spidery, almost vintage font in which periods are difficult to distinguish from commas. Numbers sometimes have spurious spaces embedded within them, and the index cites page numbers from the print edition, which are useless since the Kindle edition does not include real page numbers.

Posted at 23:45 Permalink

Thursday, August 21, 2014

Reading List: Savage Continent

Lowe, Keith. Savage Continent. New York: Picador, [2012] 2013. ISBN 978-1-250-03356-7.
On May 8th, 1945, World War II in Europe formally ended when the Allies accepted the unconditional surrender of Germany. In popular myth, especially among those too young to have lived through the war and its aftermath, the defeat of Italy and Germany ushered in, at least in Western Europe not occupied by Soviet troops, a period of rebuilding and rapid economic growth, spurred by the Marshall Plan. The French refer to the three decades from 1945 to 1975 as Les Trente Glorieuses. But that isn't what actually happened, as this book documents in detail. Few books cover the immediate aftermath of the war, or concentrate exclusively upon that chaotic period. The author has gone to great lengths to explore little-known conflicts and to sort out conflicting accounts of events which are still disputed today by descendants of those involved.

The devastation wreaked upon cities where the conflict raged was extreme. In Germany, Berlin, Hanover, Duisburg, Dortmund, and Cologne lost more than half their habitable buildings, with the figure rising to 70% in the latter city. From Stalingrad to Warsaw to Caen in France, destruction was general with survivors living in the rubble. The transportation infrastructure was almost completely obliterated, along with services such as water, gas, electricity, and sanitation. The industrial plant was wiped out, and along with it the hope of employment. This was the state of affairs in May 1945, and the Marshall Plan did not begin to deliver assistance to Western Europe until three years later, in April 1948. Those three years were grim, and compounded by score-settling, revenge, political instability, and multitudes of displaced people returning to areas with no infrastructure to support them.

And this was in Western Europe. As is the case with just about everything regarding World War II in Europe, the further east you go, the worse things get. In the Soviet Union, 70,000 villages were destroyed, along with 32,000 factories. The redrawing of borders, particularly those of Poland and Germany, set the stage for a paroxysm of ethnic cleansing and mass migration as Poles were expelled from territory now incorporated into the Soviet Union and Germans from the western part of Poland. Reprisals against those accused of collaboration with the enemy were widespread, with murder not uncommon. Thirst for revenge extended to the innocent, including children fathered by soldiers of occupying armies.

The end of the War did not mean an end to the wars. As the author writes, “The Second World War was therefore not only a traditional conflict for territory: it was simultaneously a war of race, and a war of ideology, and was interlaced with half a dozen civil wars fought for purely local reasons.” Defeat of Germany did nothing to bring these other conflicts to an end. Guerrilla wars continued in the Baltic states annexed by the Soviet Union as partisans resisted the invader. An all-out civil war between communists and anti-communists erupted in Greece and was ended only through British and American aid to the anti-communists. Communist agitation escalated to violence in Italy and France. And country after country in Eastern Europe came under Soviet domination as puppet regimes were installed through coups, subversion, or rigged elections.

When reading a detailed history of a period most historians ignore, one finds oneself exclaiming over and over, “I didn't know that!”, and that is certainly the case here. This was a dark period, and no group seemed immune from regrettable acts, including Jews liberated from Nazi death camps and slave labourers freed as the Allies advanced: both sometimes took their revenge upon German civilians. As the author demonstrates, the aftermath of this period still simmers beneath the surface among the people involved—it has become part of the identity of ethnic groups which will outlive any person who actually remembers the events of the immediate postwar period.

In addition to providing an enlightening look at this neglected period, the events in the years following 1945 have much to teach us about those playing out today around the globe. We are seeing long-simmering ethnic and religious strife boil into open conflict as soon as the system is perturbed enough to knock the lid off the kettle. Borders drawn by politicians mean little when people's identity is defined by ancestry or faith, and memories are very long, measured sometimes in centuries. Even after a cataclysmic conflict which levels cities and reduces populations to near-medieval levels of subsistence, many people do not long for peace but instead seek revenge. Economic growth and prosperity can, indeed, change the attitude of societies and allow for alliances among former enemies (imagine how odd the phrase “Paris-Berlin axis”, heard today in discussions of the European Union, would have sounded in 1946), but the results of a protracted conflict can prevent the emergence of the very prosperity which might allow consigning it to the past.

Posted at 22:53 Permalink

Tuesday, August 12, 2014

Reading List: Black List

Thor, Brad. Black List. New York: Pocket Books, 2012. ISBN 978-1-4391-9302-0.
This is the twelfth in the author's Scot Harvath series, which began with The Lions of Lucerne (October 2010). Brad Thor has remarked in interviews that he strives to write thrillers which anticipate headlines which will break after their publication, and with this novel he hits a grand slam.

Scot Harvath is ambushed in Paris by professional killers who murder a member of his team. After narrowly escaping, he goes to ground and covertly travels to a remote region of Basque country where he has trusted friends. When he is attacked there as well, again by trained killers, he must conclude that the internal security of his employer, the Carlton Group, has probably been breached, perhaps from inside.

Meanwhile, his employer, Reed Carlton, is attacked at his secure compound by an assault team and barely escapes with his life. When Carlton tries to use his back channels to contact members of his organisation, they all appear to have gone dark. To Carlton, a career spook with tradecraft flowing in his veins, this indicates his entire organisation has been wiped out, for no apparent motive and by perpetrators unknown.

Harvath, Carlton, and the infovore dwarf Nicholas, operating independently, must begin to pick up the pieces to figure out what is going on, while staying under the radar of a pervasive surveillance state which employs every technological means to track them down and target them for summary extra-judicial elimination.

If you pick up this book and read it today, you might think it's based upon the revelations of Edward Snowden about the abuses of the NSA conducting warrantless surveillance on U.S. citizens. But it was published in 2012, a full year before the first of Snowden's disclosures. The picture of the total information awareness state here is, if anything, more benign than what we now know to be the case in reality. What is different is that when Harvath, Carlton, and Nicholas get to the bottom of the mystery, the reaction in high places is what one would hope for in a constitutional republic, as opposed to the “USA! USA! USA!” cheerleading or silence which has greeted the exposure of abuses by the NSA on the part of all too many people.

This is a prophetic thriller which demonstrates how the smallest compromises of privacy (credit card transactions, telephone call metadata, license plate readers, facial recognition, Web site accesses, search engine queries, and the like) can be woven into a dossier on any person of interest which makes going dark to the snooper state equivalent to living technologically in 1950. This is not just a cautionary tale for individuals who wish to preserve a wall of privacy between themselves and the state, but also a challenge for writers of thrillers. Just as mobile telephones would have wrecked the plots of innumerable mystery and suspense stories written before their existence, the emergence of the panopticon state will make it difficult for thriller writers to have both their heroes and villains operating in the dark. I am sure the author will rise to this challenge.

Posted at 23:30 Permalink

Thursday, July 31, 2014

Reading List: Conversations with My Agent (and Set Up, Joke, Set Up, Joke)

Long, Rob. Conversations with My Agent (and Set Up, Joke, Set Up, Joke). London: Bloomsbury Publishing, [1996, 2005] 2014. ISBN 978-1-4088-5583-6.
Hollywood is a strange place, where the normal rules of business, economics, and personal and professional relationships seem to have been suspended. When he arrived in Hollywood in 1930, P. G. Wodehouse found the customs and antics of its denizens so bizarre that he parodied them in a series of hilarious stories. After a year in Hollywood, he'd had enough and never returned. When Rob Long arrived in Hollywood to attend UCLA film school, the television industry was on the threshold of a technology-driven change which would remake it and forever put an end to the domination by three large networks which had existed since its inception. The advent of cable and, later, direct to home satellite broadcasting eliminated the terrestrial bandwidth constraints which had made establishing a television outlet forbiddingly expensive and, at the same time, side-stepped many of the regulatory constraints which forbade “edgy” content on broadcast channels. Long began his television career as a screenwriter for Cheers in 1990, and became an executive producer of the show in 1992. After the end of Cheers, he created and produced other television projects, including Sullivan & Son, which is currently on the air.

Television ratings measure both “rating points” (the absolute number of television sets tuned into the program) and “share points” (the fraction of television sets in use at the time which are viewing the program). In the era of Cheers, a typical episode might have a rating equivalent to more than 22 million viewers and a share of 32%, meaning it pulled in around one third of all television viewers in its time slot. The proliferation of channels makes it unlikely any show will achieve numbers like this again. The extremely popular 24 attracted between 9 and 14 million viewers in its eight seasons, and the highly critically regarded Mad Men never topped a mean viewership of 2.7 million in its best season.
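To make the distinction concrete, take a worked example with round, hypothetical numbers (mine, not the author's). Suppose there are 95 million television households, 70 million of which have a set in use during a given time slot, and a show draws 22.4 million of them:

\[
\text{rating} = \frac{22.4\ \text{million}}{95\ \text{million}} \approx 23.6\%, \qquad
\text{share} = \frac{22.4\ \text{million}}{70\ \text{million}} = 32\%
\]

The rating measures reach against the total potential audience (and so converts directly to an absolute number of viewers), while the share measures how the show fared against everything else on the air at that moment.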

It was into this new world of diminishing viewership expectations but voracious thirst for content to fill all the new channels that the author launched his post-Cheers career. The present volume collects two books originally published independently, Conversations with My Agent from 1998, and 2005's Set Up, Joke, Set Up, Joke, written as Hollywood's perestroika was well advanced. The volumes fit together almost seamlessly, and many readers will barely notice the transition.

This is a very funny book, but there is also a great deal of wisdom about the ways of Hollywood, how television projects are created, pitched to a studio, marketed to a network, and the tortuous process leading from concept to script to pilot to series and, all too often, to cancellation. The book is written as a screenplay, complete with scene descriptions, directions, dialogue, transitions, and sound effect call-outs. Most of the scenes are indeed conversations between the author and his agent in various circumstances, but we also get to be a fly on the wall at story pitches, meetings with the network, casting, shooting an episode, focus group testing, and many other milestones in the life cycle of a situation comedy. The circumstances are fictional, but are clearly informed by real-life experience. Anybody contemplating a career in Hollywood, especially as a television screenwriter, would be insane not to read this book. You'll laugh a lot, but also learn something on almost every page.

The reader will also begin to appreciate the curious ways of Hollywood business, what the author calls “HIPE”: the Hollywood Inversion Principle of Economics. “The HIPE, as it will come to be known, postulates that every commonly understood, standard business practice of the outside world has its counterpart in the entertainment industry. Only it's backwards.” And anybody who thinks accounting is not a creative profession has never had experience with a Hollywood project. The culture of the entertainment business is also on display—an intricate pecking order involving writers, producers, actors, agents, studio and network executives, and “below the line” specialists such as camera operators and editors, all of whom have to read the trade papers to know who's up and who's not.

This book provides an insider's perspective on the strange way television programs come to be. In a way, it resembles some aspects of venture capital: most projects come to nothing, and most of those which are funded fail, losing the entire investment. But the few which succeed can generate sufficient money to cover all the losses and still yield a large return. One television show that runs for five years, producing solid ratings and 100+ episodes to go into syndication, can set up its writers and producers for life and cover the studio's losses on all of the dogs and cats.

Posted at 22:15 Permalink

Wednesday, July 30, 2014

Reading List: Robert A. Heinlein: In Dialogue with His Century. Vol 1

Patterson, William H., Jr. Robert A. Heinlein: In Dialogue with His Century, Vol. 1. New York: Tor Books, 2010. ISBN 978-0-765-31960-9.
Robert Heinlein came from a family who had been present in America before there were the United States, and whose members had served in all of the wars of the Republic. Despite being thin and frail, with dodgy eyesight, he managed to be appointed to the U.S. Naval Academy where, despite demerits for being a hellion, he graduated and was commissioned as a naval officer. He was on track to a naval career when felled by tuberculosis (which was, in the 1930s, a potential death sentence, with the possibility of recurrence at any time in later life).

Heinlein had written while in the Navy, but after his forced medical retirement, turned his attention to writing science fiction for pulp magazines, and after receiving a cheque for US$ 70 for his first short story, “Life-Line”, he exclaimed, “How long has this racket been going on? And why didn't anybody tell me about it sooner?” Heinlein always viewed writing as a business, and kept a thermometer on which he charted his revenue toward paying off the mortgage on his house.

While Heinlein fit in very well with the Navy, and might have been, absent medical problems, a significant commander in the fleet in World War II, he was also, at heart, a bohemian, with a soul almost orthogonal to military tradition and discipline. His first marriage was a fling with a woman who introduced him to physical delights of which he was unaware. That ended quickly, and then he married Leslyn, who was his muse, copy-editor, and business manager in a marriage which persisted throughout World War II, when both were involved in war work. That work drove Leslyn into insanity and alcoholism, and they divorced in 1947.

It was Robert Heinlein who vaulted science fiction from the ghetto of the pulp magazines to the “slicks” such as Collier's and the Saturday Evening Post. This was due to a technological transition in the publishing industry which is comparable to that presently underway in the migration from print to electronic publishing. Rationing of paper during World War II helped to create the “pocket book” or paperback publishing industry. After the end of the war, these new entrants in the publishing market saw a major opportunity in publishing anthologies of stories previously published in the pulps. The pulp publishers viewed this as an existential threat—who would buy a pulp magazine if, for almost the same price, one could buy a collection of the best stories from the last decade in all of those magazines?

Heinlein found his fiction entrapped in this struggle. While today, when you sell a story to a magazine in the U.S., you usually only sell “First North American serial rights”, in the 1930s and 1940s, authors sold all rights, and it was up to the publisher to release their rights for republication of a work in an anthology or adaptation into a screenplay. This is parallel to the contemporary battle between traditional publishers and independent publishing platforms, which have become the heart of science fiction.

Heinlein was complex. While an exemplary naval officer, he was also a nudist, married three times, and interested in the esoteric (he was a close associate of Jack Parsons and L. Ron Hubbard). He was an enthusiastic supporter of Upton Sinclair's EPIC movement and of the “Social Credit” agenda.

This authorised biography, with major contributions from Heinlein's widow, Virginia, chronicles the master storyteller's life in his first forty years—until he found, or created, an audience receptive to the tales of wonder he spun. If you've read all of Heinlein's fiction, it may be difficult to imagine how much of it was based in Heinlein's own life. If you thought Heinlein's later novels were weird, appreciate how the master was weird before you were born.

I had the privilege of meeting Robert and Virginia Heinlein in 1984. I shall always cherish that moment.

Posted at 02:27 Permalink

Sunday, July 27, 2014

Reading List: The Guns of August

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988, 1994] 2004. ISBN 978-0-345-47609-8.
One hundred years ago the world was on the brink of a cataclysmic confrontation which would cause casualties numbered in the tens of millions, destroy the pre-existing international order, depose royalty and dissolve empires, and plant the seeds for tyrannical regimes and future conflicts with an even more horrific toll in human suffering. It is not exaggeration to speak of World War I as the pivotal event of the 20th century, since so much that followed can be viewed as sequelæ which can be traced directly to that conflict.

It is thus important to understand how that war came to be, and how, in the first month after its outbreak, the expectations of all parties to the conflict, arrived at through the most exhaustive study by military and political élites, were proven completely wrong, and what was expected to be a short, conclusive war turned instead into a protracted blood-letting which would continue for more than four years of largely static warfare. This magnificent book, which covers the events leading to the war and the first month after its outbreak, provides a highly readable narrative history of the period, with insight both into the grand folly of war plans drawn up in isolation and followed mechanically even after abundant evidence of their faults had caused tragedy, and into how contingency (chance, and the decisions of fallible human beings in positions of authority) can tilt the balance of history.

The author is not an academic historian, and she writes for a popular audience. This has caused some to sniff at her work but, as she noted, Herodotus, Thucydides, Gibbon, and Macaulay did not have Ph.D.s. She immerses the reader in the world before the war, beginning with the 1910 funeral in London of Edward VII, where nine monarchs rode in the cortège, most of whose nations would be at war four years hence. The system of alliances is described in detail, as are the mobilisation plans of the future combatants, all of which would contribute to the fatal instability of the system to even a small perturbation.

Germany, France, Russia, and Austria-Hungary had all drawn up detailed mobilisation plans for assembling, deploying, and operating their conscript armies in the event of war. (Britain, with an all-volunteer regular army which was tiny by continental standards, had no pre-defined mobilisation plan.) As you might expect, Germany's plan was the most detailed, specifying railroad schedules and the composition of individual trains. The important thing to keep in mind about these plans is that, together, they created a powerful first-mover advantage. If Russia began to mobilise, and Germany hesitated in its own mobilisation in the hope of defusing the conflict, Germany might be at a great disadvantage if Russia gained even a few days' head start in assembling its forces. This meant there was a powerful incentive to issue the mobilisation order first, and a compelling reason for an adversary to begin his own mobilisation as soon as news of it became known.

Compounding this instability were alliances which compelled parties to them to come to the assistance of others. France had no direct interest in the conflict between Germany and Austria-Hungary and Russia in the Balkans, but it had an alliance with Russia, and was pulled into the conflict. When France began to mobilise, Germany activated its own mobilisation and the Schlieffen plan to invade France through Belgium. Once the Germans violated the neutrality of Belgium, Britain's guarantee of that neutrality required (after the customary ambiguity and dithering) a declaration of war against Germany, and the stage was set for a general war in Europe.

The focus here is on the initial phase of the war, when Germany, France, and Russia were all following their pre-war plans, each initially expecting a swift conquest of its opponents: the Battle of the Frontiers, which occupied most of the month of August 1914. An afterword covers the First Battle of the Marne, where the German offensive on the Western front was halted and the stage was set for the static trench warfare which was to ensue. At the conclusion of that battle, all of the shining pre-war plans were in tatters, many commanders were disgraced or cashiered, and lessons had been learned through the tragedy “by which God teaches the law to kings” (p. 275).

A century later, the lessons of the outbreak of World War I could not be more relevant. On the eve of the war, many believed that the interconnection of the soon-to-be belligerents through trade was such that war was unthinkable, as it would quickly impoverish them. Today, the world is even more connected, and yet there are conflicts all around the margins, with alliances spanning the globe. Unlike 1914, when the world was largely dominated by great powers, now there are rogue states, non-state actors, movements dominated by religion, and neo-barbarism and piracy loose upon the stage, and some of these may lay their hands on weapons whose destructive power dwarfs that of 1914–1918. This book, published more than fifty years ago, about a conflict a century old, could not be more timely.

Posted at 22:49 Permalink

Thursday, July 24, 2014

Floating Point Benchmark: Lua Language Added

I have posted an update to my trigonometry-intense floating point benchmark which adds Lua to the list of languages in which the benchmark is implemented. A new release of the benchmark collection including Lua is now available for downloading.

Lua was developed with the intention of being a small-footprint scripting language which could be easily embedded in applications. Despite this design goal, which it has achieved superbly, being widely adopted as the means of extensibility for numerous games and applications, it is a remarkably sophisticated language, with support for floating point arithmetic, complex data structures, object-oriented programming, and functional programming. It is a modern realisation of what I attempted to achieve with Atlast in 1990, but with a syntax which most programmers will find familiar and a completely memory-safe architecture (unless compromised by user extensions). If I were developing an application for which I needed scripting or user extensibility, Lua would be my tool of choice, and in porting the benchmark to the language I encountered no problems whatsoever—indeed, it worked the first time.
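To give a sense of the language's flavour, here is a minimal sketch (illustrative only, not code from the benchmark) showing two of the features mentioned above: tables serving as objects via metatables, and functions as first-class values.

    -- Tables double as objects: Vec is both a table of methods and,
    -- via __index, the metatable for vector instances.
    local Vec = {}
    Vec.__index = Vec

    function Vec.new(x, y)
        return setmetatable({x = x, y = y}, Vec)
    end

    function Vec:length()
        return math.sqrt(self.x * self.x + self.y * self.y)
    end

    -- Functions are first-class: map applies f to each element of a list.
    local function map(f, list)
        local result = {}
        for i, v in ipairs(list) do
            result[i] = f(v)
        end
        return result
    end

    local lengths = map(function(v) return v:length() end,
                        {Vec.new(3, 4), Vec.new(5, 12)})
    print(lengths[1], lengths[2])   -- prints 5 and 13

Everything here runs unmodified under both the reference interpreter and LuaJIT.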

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language              Relative Time   Details
C                          1           GCC 3.2.3 -O3, Linux
Visual Basic .NET          0.866       All optimisations, Windows XP
FORTRAN                    1.008       GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal                     1.027       Free Pascal 2.2.0 -O3, Linux
                           1.077       GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Java                       1.121       Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6             1.132       All optimisations, Windows XP
Haskell                    1.223       GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Ada                        1.401       GNAT/GCC 3.4.4 -O3, Linux
Go                         1.481       Go version go1.1.1 linux/amd64, Linux
Simula                     2.099       GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                        2.515       LuaJIT 2.0.3, Linux
                          22.7         Lua 5.2.3, Linux
Python                     2.633       PyPy 2.2.1 (Python 2.7.3), Linux
                          30.0         Python 2.7.6, Linux
Erlang                     3.663       Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                           9.335       Byte code (BEAM), Linux
ALGOL 60                   3.951       MARST 2.7, GCC 4.8.1 -O3, Linux
Lisp                       7.41        GNU Common Lisp 2.6.7, Compiled, Linux
                          19.8         GNU Common Lisp 2.6.7, Interpreted
Smalltalk                  7.59        GNU Smalltalk 2.3.5, Linux
Forth                      9.92        Gforth 0.7.0, Linux
COBOL                     12.5         Micro Focus Visual COBOL 2010, Windows 7
                          46.3         Fixed decimal instead of computational-2
Algol 68                  15.2         Algol 68 Genie 2.4.1 -O3, Linux
Perl                      23.6         Perl v5.8.0, Linux
Ruby                      26.1         Ruby 1.8.3, Linux
JavaScript                27.6         Opera 8.0, Linux
                          39.1         Internet Explorer 6.0.2900, Windows XP
                          46.9         Mozilla Firefox 1.0.6, Linux
QBasic                   148.3         MS-DOS QBasic 1.1, Windows XP Console

The performance of the reference implementation of Lua is comparable to other scripting languages which compile to and execute byte-codes, such as Perl, Python, and Ruby. Raw CPU performance is rarely important in a scripting language, as it is mostly used as “glue” to invoke facilities of the host application which run at native code speed.

Update: I have added results in the above table for the benchmark run under the LuaJIT just-in-time compiler for Lua, generating code for the x86_64 architecture. This runs almost ten times faster than the standard implementation of Lua and is comparable with other compiled languages. The benchmark ran without any modifications on LuaJIT.

I have also added results from running the Python benchmark under the PyPy just-in-time compiler for Python. Again, there was a dramatic speed increase, vaulting Python into the ranks of compiled languages. Since the Python benchmark had last been run with the standard implementation of Python in 2006, I re-ran it on Python 2.7.6 and found it substantially slower relative to C on an x86_64 architecture. I do not know whether this is due to better performance of the C code, worse performance of Python, or changes in machine architecture compared to the 32-bit system on which the benchmark was run in 2006. (2014-07-26 22:25 UTC)

Posted at 22:32 Permalink

Wednesday, July 16, 2014

Atlast 2.0 (64-bit) Released

I have just posted the first update to Atlast since 2007. Atlast is a FORTH-like language toolkit intended to make it easy to open the internal facilities of applications to users, especially on embedded platforms with limited computing and memory resources.

Like FORTH, Atlast provides low-level access to the memory architecture of the machine on which it runs, and is sensitive to the length of data objects. The 1.x releases of Atlast assume integers and pointers are 32-bit quantities and floating point numbers are 64-bit, occupying two stack items. These assumptions no longer hold when building programs in native mode on 64-bit systems: integers, pointers, and floating point values are all 64 bits.

Release 2.0 of Atlast is a dedicated 64-bit implementation of the language. If you are developing on a 64-bit platform and are confident you will only target such platforms, it provides a simpler architecture (no need for double-word operations for floating point), a larger address space, and wider integers. This comes at the cost of loss of source code compatibility with the 32-bit 1.x releases, particularly for floating point code. If your target platform is a 32-bit system and your development machine is 64-bit, it's best to use version 1.2 (which is functionally identical to 2.0), cross-compiled as 32-bit code. If you don't use floating point or do low-level memory twiddling, it's likely your programs will work on both 32- and 64-bit versions.

Although Atlast includes comprehensive pointer and stack limit checking, it is not memory-safe, and consequently I do not encourage its use in modern applications. When it was originally developed in the late 1980s, its ability to fit in a small memory footprint was of surpassing importance. With the extravagant memory and compute power of contemporary machines, this is less important and other scripting languages which are both entirely safe and less obscure in syntax will usually be preferable. Still, some people working with embedded systems very close to the hardware continue to find Atlast useful, and this release updates it for 64-bit architectures.

The distribution archive has been re-organised in 2.0, collecting the regression test, examples from the user manual, and benchmarks in subdirectories. An implementation of my floating point benchmark is included among the examples.

Posted at 23:56 Permalink

Saturday, June 28, 2014

Reading List: The Case for Space Solar Power

Mankins, John C. The Case for Space Solar Power. Houston: Virginia Edition, 2014. ISBN 978-0-9913370-0-2.
As world population continues to grow and people in the developing world improve their standard of living toward the level of residents of industrialised nations, demand for energy will increase enormously. Even taking into account anticipated progress in energy conservation and forecasts that world population will reach a mid-century peak and then stabilise, the demand for electricity alone is forecast to quadruple in the century from 2000 to 2100. If electric vehicles shift a substantial part of the energy consumed for transportation from hydrocarbon fuels to electricity, the demand for electric power will be greater still.

Providing this electricity in an affordable, sustainable way is a tremendous challenge. Most electricity today is produced by burning fuels such as coal, natural gas, and petroleum; by nuclear fission reactors; and by hydroelectric power generated by dams. Quadrupling electric power generation by any of these means poses serious problems. Fossil fuels may be subject to depletion, pose environmental consequences both in extraction and release of combustion products into the atmosphere, and are distributed unevenly around the world, leading to geopolitical tensions between have and have-not countries. Uranium fission is a technology with few environmental drawbacks, but operating it in a safe manner is very demanding and requires continuous vigilance over the decades-long lifespan of a power station. Further, the risk exists that nuclear material can be diverted for weapons use, especially if nuclear power stations proliferate into areas which are politically unstable. Hydroelectric power is clean, generally reliable (except in the case of extreme droughts), and inexhaustible, but unfortunately most rivers which are suitable for its generation have already been dammed, and potential projects which might be developed are insufficient to meet the demand.

Well, what about those “sustainable energy” projects the environmentalists are always babbling about: solar panels, eagle shredders (wind turbines), and the like? They do generate energy without fuel, but they are not the solution to the problem. In order to understand why, we need to look into the nature of the market for electricity, which is segmented into two components, even though the current flows through the same wires. The first is “base load” power. The demand for electricity varies during the day, from day to day, and seasonally (for example, electricity for air conditioning peaks during the mid-day hours of summer). The base load is the electricity demand which is always present, regardless of these changes in demand. If you look at a long-term plot of electricity demand and draw a line through the troughs in the curve, everything below that line is base load power and everything above it is “peak” power. Base load power is typically provided by the sources discussed in the previous paragraph: hydrocarbon, nuclear, and hydroelectric. Because there is a continuous demand for the power they generate, these plants are designed to run non-stop (with excess capacity to cover stand-downs for maintenance), and may be complicated to start up or shut down. In Switzerland, for example, 56% of base load power is produced from hydroelectric plants and 39% from nuclear fission reactors.

The balance of electrical demand, peak power, is usually generated by smaller power plants which can be brought on-line and shut down quickly as demand varies. Peaking plants sell their power onto the grid at prices substantially higher than base load plants, which compensates for their less efficient operation and higher capital costs for intermittent operation. In Switzerland, most peak energy is generated by thermal plants which can burn either natural gas or oil.

Now the problem with “alternative energy” sources such as solar panels and windmills becomes apparent: they produce neither base load nor peak power. Solar panels produce electricity only during the day, and when the Sun is not obscured by clouds. Windmills, obviously, only generate when the wind is blowing. Since there is no way to efficiently store large quantities of energy (all existing storage technologies raise the cost of electricity to uneconomic levels), these technologies cannot be used for base load power, since they cannot be relied upon to continuously furnish power to the grid. Neither can they be used for peak power generation, since the times at which they are producing power may not coincide with times of peak demand. That isn't to say these energy sources cannot be useful. For example, solar panels on the roofs of buildings in the American southwest make a tremendous amount of sense since they tend to produce power at precisely the times the demand for air conditioning is greatest. This can smooth out, but not replace, the need for peak power generation on the grid.

If we wish to dramatically expand electricity generation without relying on fossil fuels for base load power, there are remarkably few potential technologies. Geothermal power is reliable and inexpensive, but is only available in a limited number of areas and cannot come close to meeting the demand. Nuclear fission, especially with modern, modular designs, is feasible, but faces formidable opposition from the fear-based community. If nuclear fusion ever becomes practical, we will have a limitless, mostly clean energy source, but after sixty years of research we are still decades away from an operational power plant, and it is entirely possible the entire effort may fail. The liquid fluoride thorium reactor, a technology demonstrated in the 1960s, could provide centuries of energy without the nuclear waste or weapons diversion risks of uranium-based nuclear power, but even if it were developed to industrial scale it's still a “nuclear reactor” and can be expected to stimulate the same hysteria as existing nuclear technology.

This book explores an entirely different alternative. Think about it: once you get above the Earth's atmosphere and sufficiently far from the Earth to avoid its shadow, the Sun provides a steady 1.368 kilowatts per square metre, and will continue to do so, non-stop, for billions of years into the future (actually, the Sun is gradually brightening, so on the scale of hundreds of millions of years this figure will increase). If this energy could be harvested and delivered efficiently to Earth, the electricity needs of a global technological civilisation could be met with a negligible impact on the Earth's environment. With present-day photovoltaic cells, we can convert 40% of incident sunlight to electricity, and wireless power transmission in the microwave band (to which the Earth's atmosphere is transparent, even in the presence of clouds and precipitation) has been demonstrated at 40% efficiency, with 60% end-to-end efficiency expected for future systems.
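To get a feel for the scale these figures imply, here is a back-of-the-envelope calculation (my arithmetic, not a figure from the book). For a plant delivering 2 gigawatts to the grid (the size of the full-scale design discussed below), with 40% photovoltaic conversion and 60% end-to-end transmission efficiency, the collector area required is

\[
A = \frac{2\times 10^{9}\ \mathrm{W}}{0.40 \times 0.60 \times 1368\ \mathrm{W/m^{2}}}
  \approx 6.1\times 10^{6}\ \mathrm{m^{2}},
\]

about six square kilometres of collector, a square roughly 2.5 km on a side.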

Thus, no scientific breakthrough of any kind is required to harvest abundant solar energy which presently streams past the Earth and deliver it to receiving stations on the ground which feed it into the power grid. Since the solar power satellites would generate energy 99.5% of the time (with short outages when passing through the Earth's shadow near the equinoxes, at which time another satellite at a different longitude could pick up the load), this would be base load power, with no fuel source required. It's “just a matter of engineering” to calculate what would be required to build the collector satellite, launch it into geostationary orbit (where it would stay above the same point on Earth), and build the receiver station on the ground to collect the energy beamed down by the satellite. Then, given a proposed design, one can calculate the capital cost to bring such a system into production, its operating cost, the price of power it would deliver to the grid, and the time to recover the investment in the system.

Solar power satellites are not a new idea. In 1968, Peter Glaser published a description of a system with photovoltaic electricity generation and microwave power transmission to an antenna on Earth; in 1973 he was granted U.S. patent 3,781,647 for the system. In the 1970s NASA and the Department of Energy conducted a detailed study of the concept, publishing a reference design in 1979 which envisioned a platform in geostationary orbit with solar arrays measuring 5 by 25 kilometres, requiring a monstrous space shuttle with a payload of 250 metric tons and space factories to assemble the platforms. The design was entirely conventional, using much the same technologies as were later used in the International Space Station (ISS) (but for a structure twenty times its size). Given that the ISS has a cost estimated at US$ 150 billion, NASA's 1979 estimate that a complete, operational solar power satellite system comprising 60 power generation platforms and Earth-based infrastructure would cost (in 2014 dollars) between US$ 2.9 and 8.7 trillion might be considered optimistic. Back then, a trillion dollars was a lot of money, and this study pretty much put an end to serious consideration of solar power satellites in the U.S. for almost two decades. In the late 1990s, NASA, realising that much progress had been made in many of the enabling technologies for space solar power, commissioned a “Fresh Look Study”, which concluded that the state of the art was still insufficiently advanced to make power satellites economically feasible.

In this book, the author, after a 25-year career at NASA, recounts the history of solar power satellites to date and presents a radically new design, SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), which he argues is congruent with 21st century manufacturing technology. There are two fundamental reasons previous cost estimates for solar power satellites have come up with such forbidding figures. First, space hardware is hideously expensive to develop and manufacture. Measured in US$ per kilogram, a laptop computer is around $200/kg, a Boeing 747 $1400/kg, and a smart phone $1800/kg. By comparison, the Space Shuttle Orbiter cost $86,000/kg and the International Space Station around $110,000/kg. Most of the exorbitant cost of space hardware has little to do with the space environment, but is due to its being essentially hand-built in small numbers, and thus never having the benefit of moving down the learning curve as a product is put into mass production nor of automation in manufacturing (which isn't cost-effective when you're only making a few of a product). Second, once you've paid that enormous cost per kilogram for the space hardware, you have to launch it from the Earth into space and transport it to the orbit in which it will operate. For communication satellites which, like solar power satellites, operate in geostationary orbit, current launchers cost around US$ 50,000 per kilogram delivered there. New entrants into the market may substantially reduce this cost, but without a breakthrough such as full reusability of the launcher, it will stay at an elevated level.

SPS-ALPHA tackles the high cost of space hardware by adopting a “hyper modular” design, in which the power satellite is composed of huge numbers of identical modules of just eight different types. Each of these modules is on a scale which permits prototypes to be fabricated in facilities no more sophisticated than university laboratories and light enough they fall into the “smallsat” category, permitting inexpensive tests in the space environment as required. A production power satellite, designed to deliver 2 gigawatts of electricity to Earth, will have almost four hundred thousand of each of three types of these modules, assembled in space by 4,888 robot arm modules, using more than two million interconnect modules. These are numbers where mass production economies kick in: once the module design has been tested and certified you can put it out for bids for serial production. And a factory which invests in making these modules inexpensively can be assured of follow-on business if the initial power satellite is a success, since there will be a demand for dozens or hundreds more once its practicality is demonstrated. None of these modules is remotely as complicated as an iPhone, and once made in comparable quantities they shouldn't cost any more. What would an iPhone cost if they only made five of them?

Modularity also requires the design to be distributed and redundant. There is no single-point failure mode in the system. The propulsion and attitude control module is replicated 200 times in the full design. As modules fail, for whatever cause, they will have minimal impact on the performance of the satellite and can be swapped out as part of routine maintenance. The author estimates that, on an ongoing basis, around 3% of modules will be replaced per year.

The problem of launch cost is addressed indirectly by the modular design. Since no module masses more than 600 kg (the propulsion module) and none of the others exceed 100 kg, they do not require a heavy lift launcher. Modules can simply be apportioned out among a large number of flights of the most economical launchers available. Construction of a full scale solar power satellite will require between 500 and 1000 launches per year of a launcher with a capacity in the 10 to 20 metric ton range. This dwarfs the entire global launch industry, and will provide motivation to fund the development of new, reusable, launcher designs and the volume of business to push their cost down the learning curve, with a goal of reducing cost for launch to low Earth orbit to US$ 300–500 per kilogram. Note that the SpaceX Falcon Heavy, under development with a projected first flight in 2015, already is priced around US$ 1000/kg without reusability of the three core stages which is expected to be introduced in the future.
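Taking mid-range values from these figures, a rough sense of the launch campaign's scale (my arithmetic, for illustration only):

\[
750\ \text{launches/year} \times 15\,000\ \mathrm{kg} \approx 1.1\times 10^{7}\ \mathrm{kg/year},
\]

which, at the hoped-for US$ 300–500 per kilogram to low Earth orbit, works out to roughly US$ 3.4–5.6 billion per year in launch costs. At anything like today's prices the launch bill alone would be an order of magnitude or two higher, which is why pushing launch costs down the learning curve is central to the whole case.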

The author lays out five “Design Reference Missions” which progress from small-scale tests of a few modules in low Earth orbit to a full production power satellite delivering 2 gigawatts to the electrical grid. He estimates a cost of around US$ 5 billion for the pilot plant demonstrator and US$ 20 billion for the first full-scale power satellite. This is not a small sum of money, but it is comparable to the approximately US$ 26 billion cost of the Three Gorges Dam in China. Once power satellites start to come on line, each feeding power into the grid with no cost for fuel and modest maintenance expenses (comparable to those for a hydroelectric dam), the initial investment does not take long to be recovered. Further, the power satellite effort will bootstrap the infrastructure for routine, inexpensive access to space, and the power satellite modules can also be used in other space applications (for example, very high power communication satellites).

The most frequently raised objection when power satellites are mentioned is fear that they could be used as a “death ray”. This is, quite simply, nonsense. The microwave power beam arriving at the Earth's surface will have an intensity of between 10 and 20% of summer sunlight, so a mirror reflecting the Sun would be a more effective death ray. Extensive tests were done to determine whether the beam would affect birds, insects, and aircraft flying through it, and all concluded there was no risk. A power satellite which beamed down its power with a laser could be weaponised, but nobody is proposing that, since it would have problems with atmospheric conditions and cost more than microwave transmission.

This book provides a comprehensive examination of the history of the concept of solar power from space, the various designs proposed over the years and the studies conducted of them, and an in-depth presentation of the technology and economic rationale for the SPS-ALPHA system. It presents an energy future which is very different from that which most people envision, provides a way to bring the benefits of electrification to developing regions without any environmental consequences whatever, and ensures a secure supply of electricity for the foreseeable future.

This is a rewarding but rather tedious read. Perhaps it's due to the author's 25 years at NASA, but the text is cluttered with acronyms—there are fourteen pages of them defined in a glossary at the end of the book—and busy charts, some of which are difficult to read as reproduced in the Kindle edition. Copy editing is so-so: I noted 28 errors, and I wasn't especially looking for them. The index in the Kindle edition cites page numbers from the print edition, which are useless because the electronic edition does not contain real page numbers.

Posted at 18:24 Permalink