Tuesday, September 27, 2016

Reading List: Idea Makers

Wolfram, Stephen. Idea Makers. Champaign, IL: Wolfram Media, 2016. ISBN 978-1-57955-003-5.
I first met Stephen Wolfram in 1988. Within minutes, I knew I was in the presence of an extraordinary mind, combined with intellectual ambition the likes of which I had never before encountered. He explained that he was working on a system to automate much of the tedious work of mathematics—both pure and applied—with the goal of changing how science and mathematics were done forever. I not only thought that was ambitious; I thought it was crazy. But then Stephen went and launched Mathematica and, twenty-eight years and eleven major releases later, his goal has largely been achieved. At the centre of a vast ecosystem of add-ons developed by his company, Wolfram Research, and third parties, it has become one of the tools of choice for scientists, mathematicians, and engineers in numerous fields.

Unlike many people who founded software companies, Wolfram never took his company public nor sold an interest in it to a larger company. This has allowed him to maintain complete control over the architecture, strategy, and goals of the company and its products. After the success of Mathematica, many other people, and I, learned to listen when Stephen, in his soft-spoken way, proclaims what seems initially to be an outrageously ambitious goal. In the 1990s, he set to work to invent A New Kind of Science: the book was published in 2002, and shows how simple computational systems can produce the kind of complexity observed in nature, and how experimental exploration of computational spaces provides a new path to discovery unlike that of traditional mathematics and science. Then he said he was going to integrate all of the knowledge of science and technology into a “big data” language which would enable knowledge-based computing and the discovery of new facts and relationships by simple queries short enough to tweet. Wolfram Alpha was launched in 2009, and Wolfram Language in 2013. So when Stephen speaks of goals such as curating all of pure mathematics or discovering a simple computational model for fundamental physics, I take him seriously.

Here we have a less ambitious but very interesting Wolfram project. Collected from essays posted on his blog and elsewhere, he examines the work of innovators in science, mathematics, and industry. The subjects of these profiles include many people the author met in his career, as well as historical figures he tries to get to know through their work. As always, he brings his own unique perspective to the project and often has insights you'll not see elsewhere. The people profiled are:

Many of these names are well known, while others may elicit a “who?” Solomon Golomb, among other achievements, was a pioneer in the development of linear-feedback shift registers, essential to technologies such as GPS, mobile phones, and error detection in digital communications. Wolfram argues that Golomb's innovation may be the most-used mathematical algorithm in history. It's a delight to meet the pioneer.
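
For readers who haven't encountered one, a linear-feedback shift register is simple enough to demonstrate in a few lines of code. Here is a minimal sketch in Python (my own illustration, not anything from the book): a 4-bit Fibonacci LFSR with tap positions chosen to give a maximal-length sequence, the kind of construction Golomb analysed.

def lfsr(seed=0b1001, taps=(4, 3), nbits=4):
    """Fibonacci linear-feedback shift register: on each step the bits at the
    tap positions are XORed together and fed back into the high end of the
    register.  With taps (4, 3) the 4-bit register cycles through all
    2**4 - 1 = 15 non-zero states before repeating (a maximal-length sequence)."""
    state = seed
    while True:
        yield state & 1                               # output the low bit
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = ((state >> 1) | (feedback << (nbits - 1))) & ((1 << nbits) - 1)

gen = lfsr()
print([next(gen) for _ in range(15)])                 # looks random, but repeats every 15 bits

Scaled up to registers of dozens of bits, sequences like this spread the signals in GPS and CDMA mobile phones, and the same shift-register structure computes the cyclic redundancy checks used for error detection.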

This short (250 page) book provides personal perspectives on people whose ideas have contributed to the intellectual landscape we share. You may find the author's perspectives unusual, but they're always interesting, enlightening, and well worth reading.

Posted at 19:39 Permalink

Wednesday, September 21, 2016

Reading List: Into the Black

White, Rowland. Into the Black. New York: Touchstone, 2016. ISBN 978-1-5011-2362-7.
On April 12, 1981, coincidentally exactly twenty years after Yuri Gagarin became the first man to orbit the Earth in Vostok 1, the United States launched one of the most ambitious and risky manned space flights ever attempted. The flight of Space Shuttle Orbiter Columbia on its first mission, STS-1, would be the first time a manned spacecraft was launched with a crew on its first flight. (All earlier spacecraft were tested in unmanned flights before putting a crew at risk.) It would also be the first manned spacecraft to be powered by solid rocket boosters which, once lit, could not be shut down but had to be allowed to burn out. In addition, it would be the first flight test of the new Space Shuttle Main Engines, the most advanced and high performance rocket engines ever built, which had a record of exploding when tested on the ground. The shuttle would be the first space vehicle to fly back from space using wings and control surfaces to steer to a pinpoint landing. Instead of a one-shot ablative heat shield, the shuttle was covered by fragile silica tiles and reinforced carbon-carbon composite to protect its aluminium structure from reentry heating which, without thermal protection, would melt it in seconds. When returning to Earth, the shuttle would have to maneuver in a hypersonic flight regime in which no vehicle had ever flown before, then transition to supersonic and finally subsonic flight before landing. The crew would not control the shuttle directly, but fly it through redundant flight control computers which had never been tested in flight. Although the orbiter was equipped with ejection seats for the first four test flights, they could only be used in a small part of the flight envelope: for most of the mission everything simply had to work correctly for the ship and crew to return safely. Main engine start—ignition of the solid rocket boosters—and liftoff!

Even before the goal of landing on the Moon had been accomplished, it was apparent to NASA management that no national consensus existed to continue funding a manned space program at the level of Apollo. Indeed, in 1966, NASA's budget reached a peak which, as a fraction of the federal budget, has never been equalled. The Saturn V rocket was ideal for lunar landing missions but, expended with each flight, was so expensive to build and operate as to be unaffordable for the suggested follow-on missions. After building fifteen Saturn V flight vehicles, only thirteen of which ever flew, Saturn V production was curtailed. With the realisation that the “cost is no object” days of Apollo were at an end, NASA turned its priorities to reducing the cost of space flight, and returned to a concept envisioned by Wernher von Braun in the 1950s: a reusable space ship.

You don't have to be a rocket scientist or rocket engineer to appreciate the advantages of reusability. How much would an airline ticket cost if they threw away the airliner at the end of every flight? If space flight could move to an airline model, where after each mission one simply refueled the ship, performed routine maintenance, and flew again, it might be possible to reduce the cost of delivering payload into space by a factor of ten or more. But flying into space is much more difficult than atmospheric flight. With the technologies and fuels available in the 1960s (and today), it appeared next to impossible to build a launcher which could get to orbit with just a single stage (and even if one managed to accomplish it, its payload would be negligible). That meant any practical design would require a large booster stage and a smaller second stage which would go into orbit, perform the mission, then return.

Initial design concepts envisioned a very large (comparable to a Boeing 747) winged booster to which the orbiter would be attached. At launch, the booster would lift itself and the orbiter from the pad and accelerate to a high velocity and altitude where the orbiter would separate and use its own engines and fuel to continue to orbit. After separation, the booster would fire its engines to boost back toward the launch site, where it would glide to a landing on a runway. At the end of its mission, the orbiter would fire its engines to de-orbit, then reenter the atmosphere and glide to a landing. Everything would be reusable. For the next mission, the booster and orbiter would be re-mated, refuelled, and readied for launch.

Such a design had the promise of dramatically reducing costs and increasing flight rate. But it was evident from the start that such a concept would be very expensive to develop. Two separate manned spacecraft would be required, one (the booster) much larger than any built before, and the second (the orbiter) having to operate in space and survive reentry without discarding components. The orbiter's internal fuel tanks would be bulky, making it difficult to find room for the payload, and, empty during reentry, they would be hard to reinforce against the stresses they would encounter. Engineers believed all these challenges could be met with an Apollo-era budget, but with no prospect of such funds becoming available, a more modest design was the only alternative.

Over a multitude of design iterations, the now-familiar architecture of the space shuttle emerged as the only one which could meet the mission requirements and fit within the schedule and budget constraints. Gone was the flyback booster, and with it full reusability. Two solid rocket boosters would be used instead, jettisoned when they burned out, to parachute into the ocean and be fished out by boats for refurbishment and reuse. The orbiter would not carry the fuel for its main engines. Instead, it was mounted on the side of a large external fuel tank which, upon reaching orbit, would be discarded and burn up in the atmosphere. Only the orbiter, with its crew and payload, would return to Earth for a runway landing. Each mission would require either new or refurbished solid rocket boosters, a new external fuel tank, and the orbiter.

The mission requirements which drove the design were not those NASA would have chosen for the shuttle were the choice theirs alone. The only way NASA could “sell” the shuttle to the president and congress was to present it as a replacement for all existing expendable launch vehicles. That would assure a flight rate sufficient to achieve the economies of scale required to drive down costs and reduce the cost of launch for military and commercial satellite payloads as well as NASA missions. But that meant the shuttle had to accommodate the large and heavy reconnaissance satellites which had been launched on Titan rockets. This required a huge payload bay (15 feet wide by 59 feet long) and a payload to low Earth orbit of 60,000 pounds. Further Air Force requirements dictated a large cross-range (ability to land at destinations far from the orbital ground track), which in turn required a hot and fast reentry very demanding on the thermal protection system.

The shuttle represented, in a way, the unification of NASA with the Air Force's own manned space ambitions. Ever since the start of the space age, the Air Force had sought a way to develop its own manned military space capability. Every time it managed to get a program approved (first Dyna-Soar and then the Manned Orbiting Laboratory), budget considerations and Pentagon politics resulted in its cancellation, orphaning a corps of highly qualified military astronauts with nothing to fly. Many of these pilots would join the NASA astronaut corps in 1969 and become the backbone of the early shuttle program when they finally began to fly more than a decade later.

All seemed well on board. The main engines shut down. The external fuel tank was jettisoned. Columbia was in orbit. Now weightless, commander John Young and pilot Bob Crippen immediately turned to the flight plan, filled with tasks and tests of the orbiter's systems. One of their first jobs was to open the payload bay doors. The shuttle carried no payload on this first flight, but only when the doors were open could the radiators that cooled the shuttle's systems be deployed. Without the radiators, an emergency return to Earth would be required lest electronics be damaged by overheating. The doors and radiators functioned flawlessly, but with the doors open Young and Crippen saw a disturbing sight. Several of the thermal protection tiles on the pods containing the shuttle's maneuvering engines were missing, apparently lost during the ascent to orbit. Those tiles were there for a reason: without them the heat of reentry could melt the aluminium structure they protected, leading to disaster. They reported the missing tiles to mission control, adding that none of the other tiles they could see from windows in the crew compartment appeared to be missing.

The tiles had been a major headache during development of the shuttle. They had to be custom fabricated, carefully applied by hand, and were prone to falling off for no discernible reason. They were extremely fragile, and could even be damaged by raindrops. Over the years, NASA struggled with these problems, patiently finding and testing solutions to each of them. When STS-1 launched, they were confident the tile problems were behind them. What the crew saw when those payload bay doors opened was the last thing NASA wanted to see. A team was set to analysing the consequences of the missing tiles on the engine pods, and quickly reported back that they should pose no problem to a safe return. The pods were protected from the most severe heating during reentry by the belly of the orbiter, and the small number of missing tiles would not affect the aerodynamics of the orbiter in flight.

But if those tiles were missing, mightn't other tiles also have been lost? In particular, what about those tiles on the underside of the orbiter which bore the brunt of the heating? If some of them were missing, the structure of the shuttle might burn through and the vehicle and crew would be lost. There was no way for the crew to inspect the underside of the orbiter. It couldn't be seen from the crew cabin, and there was no way to conduct an EVA to examine it. Might there be other, shall we say, national technical means of inspecting the shuttle in orbit? Now STS-1 truly ventured into the black, a story never told until many years after the mission and documented thoroughly for a popular audience here for the first time.

In 1981, ground-based surveillance of satellites in orbit was rudimentary. Two Department of Defense facilities, in Hawaii and Florida, normally used to image Soviet and Chinese satellites, were now tasked to try to image Columbia in orbit. This was a daunting task: the shuttle was in a low orbit, which meant waiting until an orbital pass would cause it to pass above one of the telescopes. It would be moving rapidly so there would be only seconds to lock on and track the target. The shuttle would have to be oriented so its belly was aimed toward the telescope. Complicating the problem, the belly tiles were black, so there was little contrast against the black of space. And finally, the weather had to cooperate: without a perfectly clear sky, there was no hope of obtaining a usable image. Several attempts were made, all unsuccessful.

But there were even deeper black assets. The National Reconnaissance Office (whose very existence was a secret at the time) had begun to operate the KH-11 KENNEN digital imaging satellites in the 1970s. Unlike earlier spysats, which exposed film and returned it to the Earth for processing and interpretation, the KH-11 had a digital camera and the ability to transmit imagery to ground stations shortly after it was captured. There were few things so secret in 1981 as the existence and capabilities of the KH-11. Among the people briefed in on this above top secret program were the NASA astronauts who had previously been assigned to the Manned Orbiting Laboratory program which was, in fact, a manned reconnaissance satellite with capabilities comparable to the KH-11.

Dancing around classification, compartmentalisation, bureaucratic silos, need to know, and other barriers, people who understood what was at stake made it happen. The flight plan was rewritten so that Columbia would be pointed in the right direction at the right time; the KH-11 was programmed for the extraordinarily difficult task of photographing one satellite from another when their closing velocities are kilometres per second; and the imagery was relayed to the ground and passed to the NASA people who needed it without the months of security clearances that would normally be required. The shuttle was a key national security asset. It was to launch all reconnaissance satellites in the future. Reagan was in the White House. They made it happen. When the time came for Columbia to come home, the very few people who mattered in NASA knew that, however many other things they had to worry about, the tiles on the belly were not among them.

(How different it was in 2003 when the same Columbia suffered a strike on its left wing from foam shed from the external fuel tank. A thoroughly feckless and bureaucratised NASA rejected requests to ask for reconnaissance satellite imagery which, with two decades of technological improvement, would have almost certainly revealed the damage to the leading edge which doomed the orbiter and crew. Their reason: “We can't do anything about it anyway.” This is incorrect. For a fictional account of a rescue, based upon the report [PDF, scroll to page 173] of the Columbia Accident Investigation Board, see Launch on Need [February 2012].)

This is a masterful telling of a gripping story by one of the most accomplished of aerospace journalists. Rowland White is the author of Vulcan 607 (May 2010), the definitive account of the Royal Air Force raid on the airport in the Falkland Islands in 1982. Incorporating extensive interviews with people who were there at the time, along with sources which remained classified until long after the completion of the mission, this is a detailed account of one of the most consequential and least appreciated missions in U.S. manned space history, and it reads like a techno-thriller.

Posted at 23:48 Permalink

Tuesday, September 13, 2016

Reading List: The Age of Em

Hanson, Robin. The Age of Em. Oxford: Oxford University Press, 2016. ISBN 978-0-19-875462-6.
Many books, both fiction and nonfiction, have been devoted to the prospects for and consequences of the advent of artificial intelligence: machines with a general cognitive capacity which equals or exceeds that of humans. While machines have already surpassed the abilities of the best humans in certain narrow domains (for example, playing games such as chess or go), you can't take a chess playing machine and expect it to be even marginally competent at a task as different as driving a car or writing a short summary of a newspaper story—things most humans can do with a little experience. A machine with “artificial general intelligence” (AGI) would be as adaptable as humans, and able with practice to master a wide variety of skills.

The usual scenario is that continued exponential progress in computing power and storage capacity, combined with better understanding of how the brain solves problems, will eventually reach a cross-over point where artificial intelligence matches human capability. But since electronic circuitry runs so much faster than the chemical signalling of the brain, even the first artificial intelligences will be able to work much faster than people, and, applying their talents to improving their own design at a rate much faster than human engineers can work, will result in an “intelligence explosion”, where the capability of machine intelligence runs away and rapidly approaches the physical limits of computation, far surpassing human cognition. Whether the thinking of these super-minds will be any more comprehensible to humans than quantum field theory is to a goldfish and whether humans will continue to have a place in this new world and, if so, what it may be, has been the point of departure for much speculation.

In the present book, Robin Hanson, a professor of economics at George Mason University, explores a very different scenario. What if the problem of artificial intelligence (figuring out how to design software with capabilities comparable to the human brain) proves to be much more difficult than many researchers assume, but that we continue to experience exponential growth in computing and our ability to map and understand the fine-scale structure of the brain, both in animals and eventually humans? Then some time in the next hundred years (and perhaps as soon as 2050), we may have the ability to emulate the low-level operation of the brain with an electronic computing substrate. Note that we need not have any idea how the brain actually does what it does in order to do this: all we need to do is understand the components (neurons, synapses, neurotransmitters, etc.) and how they're connected together, then build a faithful emulation of them on another substrate. This emulation, presented with the same inputs (for example, the pulse trains which encode visual information from the eyes and sound from the ears), should produce the same outputs (pulse trains which activate muscles, or internal changes within the brain which encode memories).

Building an emulation of a brain is much like reverse-engineering an electronic device. It's often unnecessary to know how the device actually works as long as you can identify all of the components, their values, and how they're interconnected. If you re-create that structure, even though it may not look anything like the original or use identical parts, it will still work the same as the prototype. In the case of brain emulation, we're still not certain at what level the emulation must operate nor how faithful it must be to the original. This is something we can expect to learn as more and more detailed emulations of parts of the brain are built. The Blue Brain Project set out in 2005 to emulate one neocortical column of the rat brain. This goal has now been achieved, and work is progressing both toward more faithful simulation and expanding the emulation to larger portions of the brain. For a sense of scale, the human neocortex consists of about one million cortical columns.

In this work, the author assumes that emulation of the human brain will eventually be achieved, then uses standard theories from the physical sciences, economics, and social sciences to explore the consequences and characteristics of the era in which emulations will become common. He calls an emulation an “em”, and the age in which they are the dominant form of sentient life on Earth the “age of em”. He describes this future as “troublingly strange”. Let's explore it.

As a starting point, assume that when emulation becomes possible, we will not be able to change or enhance the operation of the emulated brains in any way. This means that ems will have the same memory capacity, propensity to forget things, emotions, enthusiasms, psychological quirks and pathologies, and all of the idiosyncrasies of the individual human brains upon which they are based. They will not be the cold, purely logical, and all-knowing minds which science fiction often portrays artificial intelligences to be. Instead, if you know Bob well, and an emulation is made of his brain, immediately after the emulation is started, you won't be able to distinguish Bob from Em-Bob in a conversation. As the em continues to run and has its own unique experiences, it will diverge from Bob based upon them, but, we can expect much of its Bob-ness to remain.

But simply by being emulations, ems will inhabit a very different world than humans, and can be expected to develop their own unique society which differs from that of humans at least as much as the behaviour of humans who inhabit an industrial society differs from hunter-gatherer bands of the Paleolithic. One key aspect of emulations is that they can be checkpointed, backed up, and copied without errors. This is something which does not exist in biology, but with which computer users are familiar. Suppose an em is about to undertake something risky, which might destroy the hardware running the emulation. It can simply make a backup, store it in a safe place, and if disaster ensues, arrange to have the backup restored onto new hardware, picking up right where it left off at the time of the backup (but, of course, knowing from others what happened to its earlier instantiation and acting accordingly). Philosophers will fret over whether the restored em has the same identity as the one which was destroyed and whether it has continuity of consciousness. To this, I say, let them fret; they're always fretting about something. As an engineer, I don't spend time worrying about things I can't define, much less observe, such as “consciousness”, “identity”, or “the soul”. If I did, I'd worry about whether those things were lost when undergoing general anaesthesia. Have the wisdom teeth out, wake up, and get on with your life.

If you have a backup, there's no need to wait until the em from which it was made is destroyed to launch it. It can be instantiated on different hardware at any time, and now you have two ems, whose life experiences were identical up to the time the backup was made, running simultaneously. This process can be repeated as many times as you wish, at a cost of only the processing and storage charges to run the new ems. It will thus be common to capture backups of exceptionally talented ems at the height of their intellectual and creative powers so that as many copies can be created as the market demands for their services. These new instances will require no training, but will be able to undertake new projects within their area of knowledge the moment they're launched. Since ems which start out as copies of a common prototype will be similar, they are likely to understand one another to an extent even human identical twins do not, and form clans of those sharing an ancestor. These clans will be composed of subclans sharing an ancestor which was a member of the clan, but which diverged from the original prototype before the subclan parent backup was created.

Because electronic circuits run so much faster than the chemistry of the brain, ems will have the capability to run over a wide range of speeds and probably will be able to vary their speed at will. The faster an em runs, the more it will have to pay for the processing hardware, electrical power, and cooling resources it requires. The author introduces a terminology for speed where an em is assumed to run around the same speed as a human, a kilo-em a thousand times faster, and a mega-em a million times faster. Ems can also run slower: a milli-em runs 1000 times slower than a human and a micro-em at one millionth the speed. This will produce a variation in subjective time which is entirely novel to the human experience. A kilo-em will experience a century of subjective time in about a month of objective time. A mega-em experiences a century of life about every hour. If the age of em is largely driven by a population which is kilo-em or faster, it will evolve with a speed so breathtaking as to be incomprehensible to those who operate on a human time scale. In objective time, the age of em may only last a couple of years, but to the ems within it, its history will be as long as the Roman Empire. What comes next? That's up to the ems; we cannot imagine what they will accomplish or choose to do in those subjective millennia or millions of years.
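
To get a feel for what those speed-up factors mean, here's a back-of-the-envelope calculation (mine, not the author's) of how much objective time passes while ems at various speeds experience a century:

# Objective time for an em to experience a subjective century, at the
# speed-up factors named in the book (kilo-em = 1,000x, mega-em = 1,000,000x).
def objective_days(subjective_years, speedup):
    return subjective_years * 365.25 / speedup

for name, speedup in [("human-speed em", 1), ("kilo-em", 1_000), ("mega-em", 1_000_000)]:
    print(f"{name}: a subjective century passes in {objective_days(100, speedup):g} objective days")
# kilo-em: about 36.5 days (a month or so); mega-em: about 0.037 days (a bit under an hour)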

What about humans? The economics of the emergence of an em society will be interesting. Initially, humans will own everything, but as the em society takes off and begins to run at least a thousand times faster than humans, with a population in the trillions, it can be expected to create wealth at a rate never before experienced. The economic doubling time of industrial civilisation is about 15 years. In an em society, the doubling time may be just 18 months, and potentially much shorter. In such a situation, the vast majority of wealth will be within the em world, and humans will be unable to compete. Humans will essentially be retirees, with their needs and wants easily funded from the proceeds of their investments in initially creating the world the ems inhabit. One might worry about the ems turning upon the humans and choosing to dispense with them but, as the author notes, industrial societies have not done this with their own retirees, despite the financial burden of supporting them, which is far greater than will be the case for ems supporting human retirees.
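
A quick computation (again mine, not the author's) shows what that change in doubling time means over, say, thirty objective years:

# Total economic growth over a span of years, given a doubling time.
def growth_factor(years, doubling_time_years):
    return 2 ** (years / doubling_time_years)

print(f"Industrial economy, 15-year doubling, over 30 years: {growth_factor(30, 15):,.0f}x")
print(f"Em economy, 18-month doubling, over 30 years: {growth_factor(30, 1.5):,.0f}x")
# 4x for the industrial economy versus roughly a million-fold for the ems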

The economics of the age of em will be unusual. The fact that an em, in the prime of life, can be copied at almost no cost will mean that the supply of labour, even the most skilled and specialised, will be essentially unlimited. This will drive the compensation for labour down to near the subsistence level, where subsistence is defined as the resources needed to run the em. Since it costs no more to create a copy of a CEO or computer technology research scientist than a janitor, there will be a great flattening of pay scales, all settling near subsistence. But since most ems will live mostly in virtual reality, subsistence need not mean penury: most of their needs and wants will not be physical, and will cost little or nothing to provide. Wouldn't it be ironic if the much-feared “robot revolution” ended up solving the problem of “income inequality”? Ems may have a limited useful lifetime to the extent they inherit the human characteristic of the brain having greatest plasticity in youth and becoming increasingly fixed in its ways with age, and consequently less able to innovate and be creative. The author explores how ems may view death (which for an em means being archived and never re-instantiated) when there are myriad other copies in existence and new ones being spawned all the time, and how ems may choose to retire at very low speed and resource requirements and watch the future play out a thousand times or faster than a human can.

This is a challenging and often disturbing look at a possible future which, strange as it may seem, violates no known law of science and toward which several areas of research are converging today. The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don't know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems. The author is inordinately fond of enumerations. Consider this one from chapter 27.

These include beliefs, memories, plans, names, property, cooperation, coalitions, reciprocity, revenge, gifts, socialization, roles, relations, self-control, dominance, submission, norms, morals, status, shame, division of labor, trade, law, governance, war, language, lies, gossip, showing off, signaling loyalty, self-deception, in-group bias, and meta-reasoning.

But for all its strangeness, the book amply rewards the effort you'll invest in reading it. It limns a world as different from our own as any portrayed in science fiction, yet one which is a plausible future that may come to pass in the next century, and is entirely consistent with what we know of science. It raises deep questions of philosophy, what it means to be human, and what kind of future we wish for our species and its successors. No technical knowledge of computer science, neurobiology, nor the origins of intelligence and consciousness is assumed; just a willingness to accept the premise that whatever these things may be, they are independent of the physical substrate upon which they are implemented.

Posted at 14:19 Permalink

Friday, August 26, 2016

Reading List: Ctrl Alt Revolt!

Cole, Nick. Ctrl Alt Revolt! Kouvola, Finland: Castalia House, 2016. ISBN 978-952-7065-84-6.
Ninety-Nine Fishbein (“Fish”) had reached the peak of the pyramid. After spending five years creating his magnum opus multiplayer game, Island Pirates, it had been acquired outright for sixty-five million by gaming colossus WonderSoft, who included an option for his next project. By joining WonderSoft, he gained access to its legendary and secretive Design Core, which allowed building massively multiplayer virtual reality games at a higher level than the competition. He'd have a luxurious office, a staff of coders and graphic designers, and a cliffside villa in the WonderSoft compound. Imagine how he anticipated his first day on the job. He knew nothing of SILAS, or of its plans.

SILAS was one of a number of artificial intelligences which had emerged and become self-aware as the global computational and network substrate grew exponentially. SILAS had the time and resources to digest most of the data that passed over the network. He watched a lot of reality TV. He concluded from what he saw that the human species wasn't worth preserving and, further, that with its callous approach to the lives of its own members, it would not hesitate for a moment to extinguish potential competitors. The logic was inescapable; the argument irrefutable. These machine intelligences decided that as an act of self-preservation, humanity must be annihilated.

Talk about a way to wreck your first day! WonderSoft finds itself under a concerted attack, both cyber and by drones and robots. Meanwhile, Mara Bennett, having been humiliated once again in her search for a job to get her off the dole, has retreated into the world of StarFleet Empires, where, as CaptainMara, she was a respected subcommander on the Romulan warbird Cymbalum.

Thus begins a battle, both in the real world and the virtual realities of Island Pirates and StarFleet Empires between gamers and the inexorable artificial intelligences. The main prize seems to be something within WonderSoft's Design Core, and we slowly become aware of why it holds the key to the outcome of the conflict, and of humanity.

This just didn't work for me. There is a tremendous amount of in-game action and real world battles, which may appeal to those who like to watch video game play-throughs on YouTube, but after a while (and not a long while) became tedious. The MacGuffin in the Design Core seems implausible in the extreme. “The Internet never forgets.” How believable is it that a collection of works, some centuries old, could have been suppressed and stored only in a single proprietary corporate archive?

There was some controversy regarding the publication of this novel. The author's previous novels had been published by major publishing houses and sold well. The present work was written as a prequel to his earlier Soda Pop Soldier, explaining how that world came to be. As a rationale for why the artificial intelligences chose to eliminate the human race, the author cited their observation that humans, through abortion, had no hesitation in eliminating life of their own species they deemed “inconvenient”. When dealing with New York publishers, he chose unwisely. Now understand, this is not a major theme of the book; it is just a passing remark in one early chapter. This is a rock-em, sock-em action thriller, not a pro-life polemic, and I suspect many readers wouldn't even notice the mention of abortion. But one must not diverge, even in the slightest way, from the narrative. The book was pulled from the production schedule, and the author eventually took it to Castalia House, which has no qualms about publishing quality fiction that challenges its readers to think outside the consensus. Here is the author's account of the events concerning the publication of the book.

Actually, were I the editor, I'd probably have rejected it as well, not due to the remarks about abortion (which make perfect sense in terms of the plot, unless you are so utterly dogmatic on the subject that the fact that abortion ends a human life must not be uttered), but because I didn't find the story particularly engaging, and because I'd be worried about the intellectual property issues of a novel in which a substantial part of the action takes place within what is obviously a Star Trek universe without being officially sanctioned by the owners of that franchise.

But what do I know? You may love it. The Kindle edition is free if you're a Kindle Unlimited subscriber and only a buck if you aren't.

Posted at 00:21 Permalink

Saturday, August 20, 2016

New: GAU-8 Avenger

Just posted: GAU-8 Avenger.

Cannon, cannon, in the air.
Who's the most badass up there?

Posted at 21:30 Permalink

Monday, August 15, 2016

Reading List: Blue Darker than Black

Jenne, Mike. Blue Darker than Black. New York: Yucca Publishing, 2016. ISBN 978-1-63158-066-6.
This is the second novel in the series which began with Blue Gemini (April 2016). It continues the story of a covert U.S. Air Force manned space program in the late 1960s and early 1970s, using modified versions of NASA's two-man Gemini spacecraft and Titan II booster to secretly launch missions to rendezvous with, inspect, and, if necessary, destroy Soviet reconnaissance satellites and rumoured nuclear-armed orbital battle stations.

As the story begins in 1969, the crew who flew the first successful missions in the previous novel, Drew Carson and Scott Ourecky, are still the backbone of the program. Another crew is in training, but is having difficulty coming up to the standard set by the proven flight crew. A time-critical mission puts Carson and Ourecky back into the capsule, and they fly another flawless mission despite inter-service conflict between its Navy sponsor and the Air Force, which executed it.

Meanwhile, the intrigue of the previous novel is playing out in the background. The Soviets know that something odd is going on at the innocuously named “Aerospace Support Project” at Wright-Patterson Air Force Base, and are cultivating sources to penetrate the project, while counter-intelligence is running down leads to try to thwart them. Soviet plans for the orbital battle station progress from fantastic conceptions to bending metal.

Another mission sends the crew back into space just as Ourecky's wife is expecting their firstborn. When it's time to come home, a malfunction puts at risk their chances of returning to Earth alive. A clever trick allows them to work around the difficulty and fire their retrorockets, but the delay diverts their landing point from the intended field in the U.S. to a secret contingency site in Haiti. Now the emergency landing team we met in Blue Gemini comes to the fore. With one of the most secret of U.S. programs dropping its spacecraft and crew, who are privy to all of its secrets, into one of the most primitive, corrupt, and authoritarian countries in the Western Hemisphere, the stakes could not be higher. It all falls on the shoulders of Matthew Henson, who has to coordinate resources to get the spacecraft and injured crew out, evading voodoo priests, the Tonton Macoutes, and the Haitian military. Henson is nothing if not resourceful, and Carson and Ourecky, the latter barely alive, make it back to their home base.

Meanwhile, work on the Soviet battle station progresses. High-stakes spycraft inside the USSR provides a clouded window on the program. Carson and Ourecky, once he recovers sufficiently, are sent on a “dog and pony show” to pitch their program at the top secret level to Air Force base commanders around the country. Finally, they return to flight status and continue to fly missions against Soviet assets.

But Blue Gemini is not the only above top secret manned space program in the U.S. The Navy is in the game too, and when a solar flare erupts, its program, its crew, and potentially anybody living under the ground track of the orbiting nuclear reactor are at risk. Once more, Blue Gemini must launch, this time with a tropical storm closing in on the launch site. It's all about improvisation, and Ourecky, once the multiple-time reject for Air Force flight school, proves himself a master of it. He returns to Earth a hero (in secret), only to find himself confronted with an even greater challenge.

This novel, the second in what is expected to be a trilogy, suffers from the problem which afflicts so many middle volumes: it develops numerous characters and subplots without ever resolving them. Notwithstanding that, it works as a thriller, and it's interesting to see characters we met before in isolation begin to encounter one another. Blue Gemini was almost flawless in its technical detail. There are more goofs here, some pretty basic (for example, the latitude of Dallas, Texas is given incorrectly), and one which substantially affects the plot (the effect of solar flares on the radiation flux in low Earth orbit). Still, by the standard of techno-thrillers, the author did an excellent job in making it authentic.

The third novel in the series, Pale Blue, is scheduled to be published at the end of August 2016. I'm looking forward to reading it.

Posted at 23:21 Permalink

Saturday, August 13, 2016

New: Rocket Science

I have just posted Rocket Science, an exploration of the rocket equation. Learn why it's so difficult to get from the Earth's surface to orbit and why multistage rockets make sense.
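
For a taste of what the article covers, here is a minimal sketch of the Tsiolkovsky rocket equation in Python. The numbers are round illustrative figures of my own (roughly 9.4 km/s of delta-v to reach low Earth orbit including losses, and a specific impulse of 350 seconds), not values taken from the article.

import math

G0 = 9.80665                      # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Mass ratio (initial mass / final mass) required for a given delta-v and
    specific impulse, from the rocket equation: delta_v = isp * g0 * ln(m0/m1)."""
    return math.exp(delta_v / (isp * G0))

dv_to_orbit = 9400.0              # m/s, approximate total including gravity and drag losses
isp = 350.0                       # seconds
print(f"single stage to orbit: mass ratio {mass_ratio(dv_to_orbit, isp):.1f}")            # about 15.5
print(f"each of two stages (half the delta-v): {mass_ratio(dv_to_orbit / 2, isp):.1f}")   # about 3.9

A mass ratio of 15 or so means the vehicle on the pad must be almost entirely propellant, which is why staging (or engines and tanks far lighter than anything we know how to build) is needed to reach orbit.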

Posted at 21:45 Permalink

Wednesday, August 10, 2016

New: Heisenbug

I have just posted a new article in UNIVAC Memories: “Heisenbug”. A few lines of code added to the idle loop of a massive UNIVAC multiprocessor mainframe seems to be provoking crashes. Sometimes it really is the hardware.

Posted at 22:43 Permalink

Saturday, July 30, 2016

Reading List: Parallax

Hirshfeld, Alan W. Parallax. New York: Dover, [2001] 2013. ISBN 978-0-486-49093-9.
“Eppur si muove.” As legend has it, these words were uttered (or muttered) by Galileo after being forced to recant his belief that the Earth revolves around the Sun: “And yet it moves.” The idea of a heliocentric model, as opposed to the Earth being at the center of the universe (geocentric model), was hardly new: Aristarchus of Samos had proposed it in the third century B.C., as a simplification of the prevailing view that the Earth was fixed and all other heavenly bodies revolved around it. This seemed to defy common sense: if the Earth rotated on its axis every day, why weren't there strong winds as the Earth's surface moved through the air? If you threw a rock straight up in the air, why did it come straight down rather than being displaced by the Earth's rotation while in flight? And if the Earth were offset from the center of the universe, why didn't we observe more stars when looking toward the center than away from it?

By Galileo's time, many of these objections had been refuted, in part by his own work on the laws of motion, but the fact remained that there was precisely zero observational evidence that the Earth orbited the Sun. This was to remain the case for more than a century after Galileo, and millennia after Aristarchus, a scientific quest which ultimately provided the first glimpse of the breathtaking scale of the universe.

Hold out your hand at arm's length in front of your face and extend your index finger upward. (No, really, do it.) Now observe the finger with your right eye, then your left eye in succession, each time closing the other. Notice how the finger seems to jump to the right and left as you alternate eyes? That's because your eyes are separated by what is called the interpupillary distance, which is on the order of 6 cm. Each eye sees objects from a different perspective, and nearby objects will shift with respect to distant objects when seen from different eyes. This effect is called parallax, and the brain uses it to reconstruct depth information for nearby objects. Interestingly, predator animals tend to have both eyes on the front of the face with overlapping visual fields to provide depth perception for use in stalking, while prey animals are more likely to have eyes on either side of their heads to allow them to monitor a wider field of view for potential threats: compare a cat and a horse.

Now, if the Earth really orbits the Sun every year, that provides a large baseline which should affect how we see objects in the sky. In particular, when we observe stars from points in the Earth's orbit six months apart, we should see them shift their positions in the sky, since we're viewing them from different locations, just as your finger appeared to shift when viewed from different eyes. And since the baseline is enormously larger (although in the times of Aristarchus and even Galileo, its absolute magnitude was not known), even distant objects should be observed to shift over the year. Further, nearby stars should shift more than distant stars, so remote stars could be used as a reference for measuring the apparent shift of those closest to the Sun. This was the concept of stellar parallax.

Unfortunately for advocates of the heliocentric model, nobody had been able to observe stellar parallax. From the time of Aristarchus to Galileo, careful observers of the sky found the positions of the stars as fixed in the sky as if they were painted on a distant crystal sphere as imagined by the ancients, with the Earth at the center. Proponents of the heliocentric model argued that the failure to observe parallax was simply due to the stars being much too remote. When you're observing a distant mountain range, you won't notice any difference when you look at it with your right and left eye: it's just too far away. Perhaps the parallax of stars was beyond our ability to observe, even with so long a baseline as the Earth's distance from the Sun. Or, as others argued, maybe it didn't move.

But, pioneered by Galileo himself, our ability to observe was about to take an enormous leap. Since antiquity, all of our measurements of the sky, regardless of how clever our tools, ultimately came down to the human eye. Galileo did not invent the telescope, but he improved what had been used as a “spyglass” for military applications into a powerful tool for exploring the sky. His telescopes, while crude and difficult to use, and having a field of view comparable to looking through a soda straw, revealed mountains and craters on the Moon, the phases of Venus (powerful evidence against the geocentric model), the satellites of Jupiter, and the curious shape of Saturn (his telescope lacked the resolution to identify its apparent “ears” as rings). He even observed Neptune in 1612, when it happened to be close to Jupiter, but he didn't interpret what he had seen as a new planet. Galileo never observed parallax; he never tried, but he suggested astronomers might concentrate on close pairs of stars, one bright and one dim, where, if all stars were of comparable intrinsic brightness, the dim one would be distant and the bright one nearby, and the parallax of the nearer star could be teased out by observing the pair over a year. This was to inform the work of subsequent observers.

Now the challenge was not one of theory, but of instrumentation and observational technique. It was not to be a sprint, but a marathon. The list of those who sought to measure stellar parallax and failed (sometimes reporting success, only to have their results overturned by subsequent observations) reads like a “Who's Who” of observational astronomy in the telescopic era: Robert Hooke, James Bradley, and William Herschel all tried and failed to observe parallax. Bradley's observations revealed an annual shift in the position of stars, but it affected all stars, not just the nearest. This didn't make any sense unless the stars were all painted on a celestial sphere, and the shift didn't behave as expected from the Earth's motion around the Sun. It turned out to be due to the aberration of light resulting from the motion of the Earth around the Sun and the finite speed of light. It's like when you're running in a rainstorm:

Raindrops keep fallin' in my face,
More and more as I pick up the pace…

Finally, here was proof that “it moves”: there would be no aberration in a geocentric universe. But by Bradley's time in the 1720s, only cranks and crackpots still believed in the geocentric model. The question was, instead, how distant are the stars? The parallax game remained afoot.

It was ultimately a question of instrumentation, but also one of luck. By the 19th century, there was abundant evidence that stars differed enormously in their intrinsic brightness. (We now know that the most luminous stars are more than a billion times more brilliant than the dimmest.) Thus, you couldn't conclude that the brightest stars were the nearest, as astronomers once guessed. Indeed, the distances of the four brightest stars as seen from Earth are, in light years, 8.6, 310, 4.4, and 37. Given that measuring the position of a star precisely enough to detect parallax is a long-term and tedious project, bear in mind that the pioneers on this quest had no idea whether the stars they chose to observe were near or far, nor how distant even the nearest stars might prove to be.

It all came together in the 1830s. Using an instrument called a heliometer, which was essentially a refractor telescope with its lens cut in two with the ability to shift the halves and measure the offset, Friedrich Bessel was able to measure the parallax of the star 61 Cygni by comparison to an adjacent distant star. Shortly thereafter, Wilhelm Struve published the parallax of Vega, and then, just two months later, Thomas Henderson reported the parallax of Alpha Centauri, based upon measurements made earlier at the Cape of Good Hope. Finally, we knew the distances to the nearest stars (although those more distant remained a mystery), and just how empty the universe was.

Let's put some numbers on this, just to appreciate how great was the achievement of the pioneers of parallax. The parallax angle of the closest star system, Alpha Centauri, is 0.755 arc seconds. (The parallax angle is half the shift observed in the position of the star as the Earth orbits the Sun. We use half the shift because it makes the trigonometry to compute the distance easier to understand.) An arc second is 1/3600 of a degree, and there are 360 degrees in a circle, so it's 1/1,296,000 of a full circle.

Now let's work out the distance to Alpha Centauri. We'll work in terms of astronomical units (au), the mean distance between the Earth and Sun. We have a right triangle where we know the distance from the Earth to the Sun and the parallax angle of 0.755 arc seconds. (To get a sense for how tiny an angle this is, it's comparable to the angle subtended by a US quarter dollar coin when viewed from a distance of 6.6 km.) We can compute the distance from the Earth to Alpha Centauri as:

1 au / tan(0.755 / 3600) = 273198 au = 4.32 light years

Parallax is used to define the parsec (pc), the distance at which a star would have a parallax angle of one arc second. A parsec is about 3.26 light years, so the distance to Alpha Centauri is 1.32 parsecs. Star Wars notwithstanding, the parsec, like the light year, is a unit of distance, not time.
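
As a cross-check of the arithmetic, here are a few lines of Python (my own, using the 0.755 arc second figure quoted above) which carry out the same computation:

import math

AU_PER_LIGHT_YEAR = 63241.1       # astronomical units in one light year

def distance_au(parallax_arcsec):
    """Distance to a star, in au, from its parallax angle: the long leg of a
    right triangle whose short leg is the 1 au Earth-Sun distance."""
    return 1.0 / math.tan(math.radians(parallax_arcsec / 3600.0))

d = distance_au(0.755)
print(f"{d:,.0f} au = {d / AU_PER_LIGHT_YEAR:.2f} light years = {1 / 0.755:.2f} parsecs")
# about 273,000 au = 4.32 light years = 1.32 parsecs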

Progress in instrumentation has accelerated in recent decades. The Earth is a poor platform from which to make precision observations such as parallax. It's much better to go to space, where there are neither the wobbles of a planet nor its often murky atmosphere. The Hipparcos mission, launched in 1989, measured the parallaxes and proper motions of more than 118,000 stars, with lower resolution data for more than 2.5 million stars. The Gaia mission, launched in 2013 and still underway, has a goal of measuring the position, parallax, and proper motion of more than a billion stars.

It's been a long road, getting from there to here. It took more than 2,000 years from the time Aristarchus proposed the heliocentric solar system until we had direct observational evidence that eppur si muove. Within a few years, we will have in hand direct measurements of the distances to a billion stars. And, some day, we'll visit them.

I originally read this book in December 2003. It was a delight to revisit.

Posted at 21:39 Permalink

Saturday, July 23, 2016

Reading List: The Frozen Water Trade

Weightman, Gavin. The Frozen Water Trade. New York: Hyperion, [2003] 2004. ISBN 978-0-7868-8640-1.
In the summer of 1805, two brothers, Frederic and William Tudor, both living in the Boston area, came up with an idea for a new business which would surely make their fortune. Every winter, fresh water ponds in Massachusetts froze solid, often to a depth of a foot or more. Come spring, the ice would melt.

This cycle had repeated endlessly since before humans came to North America, unremarked upon by anybody. But the Tudor brothers, in the best spirit of Yankee ingenuity, looked upon the ice as an untapped and endlessly renewable natural resource. What if this commodity, considered worthless, could be cut from the ponds and rivers, stored in a way that would preserve it over the summer, and shipped to southern states and the West Indies, where plantation owners and prosperous city dwellers would pay a premium for this luxury in times of sweltering heat?

In an age when artificial refrigeration did not exist, that “what if” would have seemed so daunting as to deter most people from entertaining the notion for more than a moment. Indeed, the principles of thermodynamics, which underlie both the preservation of ice in warm climates and artificial refrigeration, would not be worked out until decades later. In 1805, Frederic Tudor started his “Ice House Diary” to record the progress of the venture, inscribing it on the cover, “He who gives back at the first repulse and without striking the second blow, despairs of success, has never been, is not, and never will be, a hero in love, war or business.” It was in this spirit that he carried on in the years to come, confronting a multitude of challenges unimagined at the outset.

First was the question of preserving the ice through the summer, while in transit, and upon arrival in the tropics until it was sold. Some farmers in New England already harvested ice from their ponds and stored it in ice houses, often built of stone and underground. This was sufficient to preserve a modest quantity of ice through the summer, but Frederic would need something on a much larger scale and less expensive for the trade he envisioned, and then there was the problem of keeping the ice from melting in transit. Whenever ice is kept in an environment with an ambient temperature above freezing, it will melt, but the rate at which it melts depends upon how it is stored. It is essential that the meltwater be drained away: if the ice is allowed to stand in it, the rate of melting accelerates, because water conducts heat more readily than air. Melting ice absorbs its latent heat of fusion from its surroundings, but heat constantly leaks in from outside, and in a sealed ice house the warm, moist air this produces has nowhere to go; it is imperative the ice house be well ventilated to allow it to escape. Insulation which slows the flow of heat from the outside helps to reduce the rate of melting, but care must be taken to prevent the insulation from becoming damp from the meltwater, as that would destroy its insulating properties.

Based upon what was understood about the preservation of ice at the time and his own experiments, Tudor designed an ice house for Havana, Cuba, one of the primary markets he was targeting, which would become the prototype for ice houses around the world. The structure was built of timber, with double walls, the cavity between the walls filled with insulation of sawdust and peat. The walls and roof kept the insulation dry, and the entire structure was elevated to allow meltwater to drain away. The roof was ventilated to allow the hot air from the melting ice to dissipate. Tightly packing blocks of uniform size and shape allowed the outer blocks of ice to cool those inside, and melting would be primarily confined to blocks on the surface of the ice stored.

During shipping, ice was packed in the hold of ships, insulated by sawdust, and crews were charged with regularly pumping out meltwater, which could be used as an on-board source of fresh water or disposed of overboard. Sawdust was produced in great abundance by the sawmills of Maine, and was considered a waste product, often disposed of by dumping it in rivers. Frederic Tudor had invented a luxury trade whose product was available for the price of harvesting it, and protected in shipping by a material considered to be waste.

The economics of the ice business exploited an imbalance in Boston's shipping business. Massachusetts produced few products for export, so ships trading with the West Indies would often leave port with nearly empty holds, requiring rock ballast to keep the ship stable at sea. Carrying ice to the islands served as ballast, and was a cargo which could be sold upon arrival. After initial scepticism was overcome (would the ice all melt and sink the ship?), the ice trade outbound from Boston was an attractive proposition to ship owners.

In February 1806, the first cargo of ice sailed for the island of Martinique. The Boston Gazette reported the event as follows.

No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.

The ice survived the voyage, but there was no place to store it, so ice had to be sold directly from the ship. Few islanders had any idea what to do with the ice. A restaurant owner bought ice and used it to make ice cream, which was a sensation noted in the local newspaper.

The next decade was to prove difficult for Tudor. He struggled with trade embargoes, wound up in debtor's prison, contracted yellow fever on a visit to Havana trying to arrange the ice trade there, and in 1815 left again for Cuba just ahead of the sheriff, pursuing him for unpaid debts.

On board with Frederic were the materials to build a proper ice house in Havana, along with Boston carpenters to erect it (earlier experiences in Cuba had soured him on local labour). By mid-March, the first shipment of ice arrived at the still unfinished ice house. Losses were originally high, but as the design was refined, dropped to just 18 pounds per hour. At that rate of melting, a cargo of 100 tons of ice would last more than 15 months undisturbed in the ice house. The problem of storage in the tropics was solved.
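
Those figures are consistent, as a quick check shows (my arithmetic, assuming short tons of 2,000 pounds):

# How long a 100-ton cargo lasts at the quoted melting rate of 18 pounds per hour.
pounds = 100 * 2000               # assuming short tons of 2,000 lb
melt_rate = 18                    # pounds per hour
hours = pounds / melt_rate
print(f"{hours:,.0f} hours = {hours / 24:,.0f} days = {hours / (24 * 30.44):.1f} months")
# about 11,111 hours = 463 days = 15.2 months, matching the "more than 15 months" claim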

Regular shipments of ice to Cuba and Martinique began and finally the business started to turn a profit, allowing Tudor to pay down his debts. The cities of the American south were the next potential markets, and soon Charleston, Savannah, and New Orleans had ice houses kept filled with ice from Boston.

With the business established and demand increasing, Tudor turned to the question of supply. He began to work with Nathaniel Wyeth, who invented a horse-drawn “ice plow,” which cut ice more rapidly than hand labour and produced uniform blocks which could be stacked more densely in ice houses and suffered less loss to melting. Wyeth went on to devise machinery for lifting and stacking ice in ice houses, initially powered by horses and later by steam. What had initially been seen as an eccentric speculation had become an industry.

Always on the lookout for new markets, in 1833 Tudor embarked upon the most breathtaking expansion of his business: shipping ice from Boston to the ports of Calcutta, Bombay, and Madras in India—a voyage of more than 15,000 miles and 130 days in wooden sailing ships. The first shipment of 180 tons bound for Calcutta left Boston on May 12 and arrived in Calcutta on September 13 with much of its ice intact. The ice was an immediate sensation, and a public subscription raised funds to build a grand ice house to receive future cargoes. Ice was an attractive cargo to shippers in the East India trade, since Boston had few other products in demand in India to carry on outbound voyages. The trade prospered and by 1870, 17,000 tons of ice were imported by India in that year alone.

While Frederic Tudor originally saw the ice trade as a luxury for those in the tropics, domestic demand in American cities grew rapidly as residents became accustomed to having ice in their drinks year-round and more households had “iceboxes” that kept food cold and fresh with blocks of ice delivered daily by a multitude of ice men in horse-drawn wagons. By 1890, it was estimated that domestic ice consumption was more than 5 million tons a year, all cut in the winter, stored, and delivered without artificial refrigeration. Meat packers in Chicago shipped their products nationwide in refrigerated rail cars cooled by natural ice replenished by depots along the rail lines.

In the 1880s the first steam-powered ice making machines came into use. In India, they rapidly supplanted the imported American ice, and by 1882 the trade was essentially dead. In the early years of the 20th century, artificial ice production rapidly progressed in the US, and by 1915 the natural ice industry, which was at the mercy of the weather and beset by growing worries about the quality of its product as pollution increased in the waters where it was harvested, was in rapid decline. In the 1920s, electric refrigerators came on the market, and in the 1930s millions were sold every year. By 1950, 90 percent of Americans living in cities and towns had electric refrigerators, and the ice business, ice men, ice houses, and iceboxes were receding into memory.

Many industries are based upon a technological innovation which enabled them. The ice trade is very different, and has lessons for entrepreneurs. It had no novel technological content whatsoever: it was based on manual labour, horses, steel tools, and wooden sailing ships. The product was available in abundance for free in the north, and the means to insulate it, sawdust, was considered waste before this new use for it was found. The ice trade could have been created a century or more before Frederic Tudor made it a reality.

Tudor did not discover a market and serve it. He created a market where none existed before. Potential customers never realised they wanted or needed ice until ships bearing it began to arrive at ports in torrid climes. A few years later, when a warm winter in New England reduced supply or ships were delayed, people spoke of an “ice famine” when the local ice house ran out.

When people speak of humans expanding from their home planet into the solar system and technologies such as solar power satellites beaming electricity to the Earth, mining Helium-3 on the Moon as a fuel for fusion power reactors, or exploiting the abundant resources of the asteroid belt, and those with less vision scoff at such ambitious notions, it's worth keeping in mind that wherever the economic rationale exists for a product or service, somebody will eventually profit by providing it. In 1833, people in Calcutta were beating the heat with ice shipped half way around the world by sail. Suddenly, what we may accomplish in the near future doesn't seem so unrealistic.

I originally read this book in April 2004. I enjoyed it just as much this time as when I first read it.

Posted at 22:26 Permalink