Fourmilog: None Dare Call It Reason

Reading List: The Religion War

Monday, June 13, 2016 21:59

Adams, Scott. The Religion War. Kansas City: Andrews McMeel, 2004. ISBN 978-0-7407-4788-5.
This is a sequel to the author's 2001 novel God's Debris. In that work, which I considered profound and which made my hair stand on end on several occasions, a package delivery man happens to encounter the smartest man in the world and finds his own view of the universe and his place in it up-ended, and his destiny to be something he'd never imagined. I believe that it's only because Scott Adams is also the creator of Dilbert that he is not appreciated as one of the most original and insightful thinkers of our time. His blog has been consistently right about the current political season in the U.S. while all of the double-domed mainstream pundits have fallen on their faces.

Forty years have passed since the events in God's Debris. The erstwhile delivery man has become the Avatar, thinking at a higher level and perceiving patterns which elude his contemporaries. These talents have made him one of the wealthiest people on Earth, but he remains unknown, dresses shabbily, and wears a red plaid blanket around his shoulders. The world has changed. A leader, al-Zee, arising in the Palestinian territories, has achieved his goal of eliminating Israel and has consolidated the Islamic lands into a new Great Caliphate. Sitting on a large fraction of the world's oil supply, he funds “lone wolf”, modest-scale terror attacks throughout the Dar al-Harb, always deniable and never so large as to invite reprisal. With the advent of model airplanes and satellite guidance able to deliver explosives to a target with precision over a long range, nobody can feel immune from the reach of the Caliphate.

In 2040, General Horatio Cruz came to power as Secretary of War of the Christian Alliance, with all of the forces of NATO at his command. The political structures of the western nations remained in place, but they had delegated their defence to Cruz, rendering him effectively a dictator in the military sphere. Cruz was not a man given to compromise. Faced with an opponent he evaluated as two billion people willing to die in a final war of conquest, he viewed the coming conflict not as one of preserving territory or self-defence, but of extermination—of one side or the other. There were dark rumours that al-Zee had in place his own plan of retaliation, with sleeper cells and weapons of mass destruction ready should a frontal assault begin.

The Avatar sees the patterns emerging, and sets out to avert the approaching cataclysm. He knows that bad ideas can only be opposed by better ones, but bad ideas first must be subverted by sowing doubt among those in thrall to them. Using his preternatural powers of persuasion, he gains access to the principals of the conflict and begins his work. But that may not be enough.

There are two overwhelming forces in the world. One is chaos; the other is order. God—the original singular speck—is forming again. He's gathering together his bits—we call it gravity. And in the process he is becoming self-aware to defeat chaos, to defeat evil if you will, to battle the devil. But something has gone terribly wrong.

Sometimes, when your computer is in a loop, the only thing you can do is reboot it: forcefully get it out of the destructive loop back to a starting point from which it can resume making progress. But how do you reboot a global technological civilisation on the brink of war? The Avatar must find the reboot button as time is running out.

Thirty years later, a delivery man rings the doorbell. An old man with a shabby blanket answers and invites him inside.

There are eight questions to ponder at the end which expand upon the shiver-up-your-spine themes raised in the novel. Bear in mind, when pondering how prophetic this novel is of current and near-future events, that it was published twelve years ago.


Reading List: Humans to Mars

Saturday, June 11, 2016 18:54

Portree, David S. F. Humans to Mars. Washington: National Aeronautics and Space Administration, 2001. NASA SP-2001-4521.
Ever since, in the years following World War II, people began to think seriously about the prospects for space travel, visionaries have looked beyond the near-term prospects for flights into Earth orbit, space stations, and even journeys to the Moon, toward the red planet: Mars. Unlike Venus, eternally shrouded by clouds, or the other planets which were too hot or cold to sustain life as we know it, Mars, about half the size of the Earth, had an atmosphere, a day just a little longer than the Earth's, seasons, and polar caps which grew and shrank with the seasons. There were no oceans, but water from the polar caps might sustain life on the surface, and there were dark markings which appeared to change during the Martian year, which some interpreted as plant life that flourished as polar caps melted in the spring and receded as they grew in the fall.

In an age where we have high-resolution imagery of the entire planet, obtained from orbiting spacecraft, telescopes orbiting Earth, and ground-based telescopes with advanced electronic instrumentation, it is often difficult to remember just how little was known about Mars in the 1950s, when people first started to think about how we might go there. Mars is the next planet outward from the Sun after Earth, so its distance and apparent size vary substantially depending upon its position relative to Earth in their respective orbits. About every two years, Earth “laps” Mars and the planet is closest (“at opposition”) and most easily observed. But because the orbit of Mars is markedly eccentric, its distance varies from one opposition to the next, and it is only every 15 to 17 years that a near-simultaneous opposition and perihelion render Mars most accessible to Earth-based observation.

But even at a close opposition, Mars is a challenging telescopic target. At a close encounter, such as the one which will occur in the summer of 2018, Mars has an apparent diameter of only around 25 arc seconds. By comparison, the full Moon is about half a degree, or 1800 arc seconds: 72 times larger than Mars. To visual observers, even at a favourable opposition, Mars is a difficult object. Before the advent of electronic sensors in the 1980s, it was even more trying to photograph. Existing photographic film and plates were sufficiently insensitive that long exposures, measured in seconds, were required, and even from the best observing sites, the turbulence in the Earth's atmosphere smeared out details, leaving only the largest features recognisable. Visual observers were able to glimpse more detail in transient moments of still air, but had to rely upon their memory to sketch them. And the human eye is subject to optical illusions, seeing patterns where none exist. Were the extended linear features called “canals” real? Some observers saw and sketched them in great detail, while others saw nothing. Photography could not resolve the question.
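
As a rough check on those figures, here is a minimal back-of-the-envelope sketch in Python of the small-angle arithmetic; the diameters and distances are round approximate values assumed for illustration, not numbers taken from the book.

    ARCSEC_PER_RADIAN = 206265

    def angular_size(diameter_km, distance_km):
        """Apparent angular diameter in arc seconds (small-angle approximation)."""
        return diameter_km / distance_km * ARCSEC_PER_RADIAN

    mars = angular_size(6792, 57.6e6)    # Mars at a very close approach, ~57.6 million km
    moon = angular_size(3474, 384400)    # the Moon at its mean distance
    print(f'Mars ~{mars:.0f}", Moon ~{moon:.0f}", ratio ~{moon / mars:.0f}x')
    # prints roughly 24", 1864", and a factor of ~77 -- the same order as the
    # 25 arc seconds and factor of 72 quoted above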

Further, the physical properties of the planet were largely unknown. If you're contemplating a mission to land on Mars, it's essential to know the composition and density of its atmosphere, the temperatures expected at potential landing sites, and the terrain which a lander would encounter. None of these were known much beyond the level of educated guesses, which turned out to be grossly wrong once spacecraft probe data started to come in.

But ignorance of the destination didn't stop people from planning, or at least dreaming. In 1947–48, Wernher von Braun, then working with the U.S. Army at the White Sands Missile Range in New Mexico, wrote a novel called The Mars Project based upon a hypothetical Mars mission. A technical appendix presented detailed designs of the spacecraft and mission. While von Braun's talent as an engineer was legendary, his prowess as a novelist was less formidable, and the book never saw print, but in 1952 the appendix was published by itself.

One thing of which von Braun was never accused was thinking small, and in this first serious attempt to plan a Mars mission, he envisioned something more like an armada than the lightweight spacecraft we design today. At a time when the largest operational rocket, the V-2, had a payload of just one tonne, which it could throw no further than 320 km on a suborbital trajectory, von Braun's Mars fleet would consist of ten ships, each with a mass of 4,000 tons, and a total crew of seventy. The Mars ships would be assembled in orbit from parts launched on 950 flights of reusable three-stage ferry rockets. Launching all of the components of the Mars fleet and the fuel they would require would burn a total of 5.32 million tons of propellant in the ferry ships. Note that when von Braun proposed this, nobody had ever flown even a two-stage rocket, and it would be ten years before the first unmanned Earth satellite was launched.

Von Braun later fleshed out his mission plans for an illustrated article in Collier's magazine as part of their series on the future of space flight. Now he envisioned assembling the Mars ships at the toroidal space station in Earth orbit which had figured in earlier installments of the series. In 1956, he published a book co-authored with Willy Ley, The Exploration of Mars, in which he envisioned a lean and mean expedition with just two ships and a crew of twelve, which would require “only” four hundred launches from Earth to assemble, provision, and fuel.

Not only was little understood about the properties of the destination, nothing at all was known about what human crews would experience in space, either in Earth orbit or en route to Mars and back. Could they even function in weightlessness? Would they be zapped by cosmic rays or solar flares? Were meteors a threat to their craft and, if so, how serious a one? With the dawn of the space age after the launch of Sputnik in October, 1957, these data started to trickle in, and they began to inform plans for Mars missions at NASA and elsewhere.

Radiation was much more of a problem than had been anticipated. The discovery of the Van Allen radiation belts around the Earth and measurement of radiation from solar flares and galactic cosmic rays indicated that short voyages were preferable to long ones, and that crews would need shielding from routine radiation and a “storm shelter” during large solar flares. This motivated research into nuclear thermal and ion propulsion systems, which would not only reduce the transit time to and from Mars, but also, being much more fuel efficient than chemical rockets, dramatically reduce the mass of the ships compared to von Braun's flotilla.

Ernst Stuhlinger had been studying electric (ion) propulsion since 1953, and developed a design for constant-thrust, ion-powered ships. These were featured in Walt Disney's 1957 program, “Mars and Beyond”, which aired just two months after the launch of Sputnik. This design was further developed by NASA in a 1962 mission study which envisioned five ships with nuclear-electric propulsion, departing for Mars in the early 1980s with a crew of fifteen and cargo and crew landers permitting a one-month stay on the red planet. The ships would rotate to provide artificial gravity for the crew on the trip to and from Mars.

In 1965, the arrival of the Mariner 4 spacecraft seemingly drove a stake through the heart of the romantic view of Mars which had persisted since Percival Lowell. Flying by the southern hemisphere of the planet as close as 9600 km, it returned 21 fuzzy pictures which seemed to show Mars as a dead, cratered world resembling the Moon far more than the Earth. There was no evidence of water, nor of life. The atmosphere was determined to be only 1% as dense as that of Earth, not the 10% estimated previously, and composed mostly of carbon dioxide, not nitrogen. With such a thin and hostile atmosphere, there seemed no prospects for advanced life (anything more complicated than bacteria), and all of the ideas for winged Mars landers went away: the martian atmosphere proved just dense enough to pose a problem when slowing down on arrival, but not enough to allow a soft landing with wings or a parachute. The probe had detected more radiation than expected on its way to Mars, indicating crews would need more protection than anticipated, and it showed that robotic probes could do science at Mars without the need to put a crew at risk. I remember staying up and watching these pictures come in (the local television station didn't carry the broadcast, so I watched even more static-filled pictures than the original from a distant station). I can recall thinking, “Well, that's it then. Mars is dead. We'll probably never go there.”

Mars mission planning went on the back burner as the Apollo Moon program went into high gear in the 1960s. Apollo was conceived not as a single-destination project to land on the Moon, but to create the infrastructure for human expansion from the Earth into the solar system, including development of nuclear propulsion and investigation of planetary missions using Apollo derived hardware, mostly for flyby missions. In January of 1968, Boeing completed a study of a Mars landing mission, which would have required six launches of an uprated Saturn V, sending a crew of six to Mars in a 140 ton ship for a landing and a brief “flags and footprints” stay on Mars. By then, Apollo funding (even before the first lunar orbit and landing) was winding down, and it was clear there was no budget nor political support for such grandiose plans.

After the success of Apollo 11, NASA retrenched, reducing its ambition to a Space Shuttle. An ambitious Space Task Group plan for using the Shuttle to launch a Mars mission in the early 1980s was developed but, in an era of shrinking budgets and with further flyby missions returning images of a Moon-like Mars, it went nowhere. The Saturn V and the nuclear rocket which could have taken crews to Mars had been cancelled. It appeared the U.S. would remain stuck going around in circles in low Earth orbit. And so it remains today.

While planning for manned Mars missions stagnated, the 1970s dramatically changed the view of Mars. In 1971, Mariner 9 went into orbit around Mars and returned 7329 sharp images which showed the planet to be a complex world, with very different northern and southern hemispheres, a grand canyon almost as long as the United States, and features which suggested the existence, at least in the past, of liquid water. In 1976, two Viking orbiters and landers arrived at Mars, providing detailed imagery of the planet and ground truth. The landers were equipped with instruments intended to detect evidence of life, and they reported positive results, but later analyses attributed this to unusual soil chemistry. This conclusion is still disputed, including by the principal investigator for the experiment, but in any case the Viking results revealed a much more complicated and interesting planet than had been imagined from earlier missions. I had been working as a consultant at the Jet Propulsion Laboratory during the first Viking landing, helping to keep mission critical mainframe computers running, and I had the privilege of watching the first images from the surface of Mars arrive. I revised my view from 1965: now Mars was a place which didn't look much different from the high desert of California, where you could imagine going to explore and live some day. More importantly, detailed information about the atmosphere and surface of Mars was now in hand, so future missions could be planned accordingly.

And then…nothing. It was a time of malaise and retreat. After the last Viking landing in September of 1976, it would be more than twenty years until Mars Global Surveyor would orbit Mars and Mars Pathfinder would land there in 1997. And yet, with detailed information about Mars in hand, the intervening years were a time of great ferment in manned Mars mission planning, when the foundation of what may be the next great expansion of the human presence into the solar system was laid down.

President George H. W. Bush announced the Space Exploration Initiative on July 20th, 1989, the 20th anniversary of the Apollo 11 landing on the Moon. This was, in retrospect, the last gasp of the “Battlestar” concepts of missions to Mars. It became a bucket into which every NASA centre and national laboratory could throw its wish list: new heavy launchers, a Moon base, nuclear propulsion, space habitats, all for a total price tag on the order of half a trillion dollars. It died, quietly, in Congress.

But the focus was moving from leviathan bureaucracies of the coercive state to innovators in the private sector. In the 1990s, spurred by work of members of the “Mars Underground”, including Robert Zubrin and David Baker, the “Mars Direct” mission concept emerged. Earlier Mars missions assumed that all resources needed for the mission would have to be launched from Earth. But Zubrin and Baker realised that the martian atmosphere, based upon what we had learned from the Viking missions, contained everything needed to provide breathable air for the stay on Mars and rocket fuel for the return mission (with the addition of lightweight hydrogen brought from Earth). This turned the weight budget of a Mars mission upside-down. Now, an Earth return vehicle could be launched to Mars with empty propellant tanks. Upon arrival, it would produce fuel for the return mission and oxygen for the crew. After it was confirmed to have produced the necessary consumables, the crew of four would be sent in the next launch window (around 26 months later) and land near the return vehicle. They would use its oxygen while on the planet, and its fuel to return to Earth at the end of its mission. There would be no need for a space station in Earth orbit, nor orbital assembly, nor for nuclear propulsion: the whole mission could be done with hardware derived from that already in existence.
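
The reason bringing only hydrogen turns the mass budget upside-down is simple stoichiometry. Here is a back-of-the-envelope Python sketch; it assumes the Sabatier-plus-electrolysis route usually associated with Mars Direct (the process isn't named above, so take it as illustrative rather than a description of the book's figures).

    # Sabatier:      CO2 + 4 H2 -> CH4 + 2 H2O
    # Electrolysis:  2 H2O      -> 2 H2 + O2   (the H2 is recycled to the reactor)
    # Net:           CO2 + 2 H2 -> CH4 + O2
    M_H2, M_CH4, M_O2 = 2.016, 16.043, 31.998      # molar masses, g/mol

    hydrogen_brought = 2 * M_H2                    # Earth-supplied H2 per mole of CH4 made
    propellant_made = M_CH4 + M_O2                 # methane/oxygen propellant produced
    print(f"~{propellant_made / hydrogen_brought:.1f} kg of propellant per kg of hydrogen")
    # ~11.9 kg per kg; in practice still more oxygen is extracted from CO2 alone,
    # both for breathing and to reach the oxygen-rich mixture a methane engine burns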

This would get humans to Mars, but it ran into institutional barriers at NASA, since many of the agency's pet projects, including the International Space Station and the Space Shuttle, proved utterly unnecessary to getting there. NASA responded with the Mars Design Reference Mission, published in various revisions between 1993 and 2014, which was largely based upon Mars Direct but up-sized to a crew of six, and which incorporated a new Earth Return Vehicle to bring the crew back to Earth in less austere circumstances than envisioned in Mars Direct.

NASA claim they are on a #JourneyToMars. They must be: there's a Twitter hashtag! But of course to anybody who reads this sad chronicle of government planning for planetary exploration over half a century, it's obvious they're on no such thing. If they were truly on a journey to Mars, they would be studying and building the infrastructure to get there using technologies such as propellant depots and in-orbit assembly, which would get the missions done economically using resources already at hand. Instead, it's all about building a huge rocket which will cost so much that it will fly, at best, every other year, operated by a standing army which will not only be costly but will launch so rarely that it never gains the experience to operate the system safely, and whose costs will vacuum out the funds which might have been used to build payloads to extend the human presence into space.

The lesson of this is that when the first humans set foot upon Mars, they will not be civil servants funded by taxes paid by cab drivers and hairdressers, but employees (and/or shareholders) of a private venture that sees Mars as a profit centre which, as its potential is developed, can enrich them beyond the dreams of avarice and provide a backup for human civilisation. I trust that when the history of that great event is written, it will not be as exasperating to read as this chronicle of the dead-end of government space programs making futile efforts to get to Mars.

This is an excellent history of the first half century of manned Mars mission planning. Although many proposed missions are omitted or discussed only briefly, the evolution of mission plans with knowledge of the destination and development of spaceflight hardware is described in detail, culminating with current NASA thinking about how best to accomplish such a mission. This book was published in 2001, but since existing NASA concepts for manned Mars missions are still largely based upon the Design Reference Mission described here, little has changed in the intervening fifteen years. In September of 2016, SpaceX plans to reveal its concepts for manned Mars missions, so we'll have to wait for the details to see how they envision doing it.

As a NASA publication, this book is in the public domain. The book can be downloaded for free as a PDF file from the NASA History Division. There is a paperback republication of this book available at Amazon, but at an outrageous price for such a short public domain work. If you require a paper copy, it's probably cheaper to download the PDF and print your own.


Reading List: The Cosmic Web

Saturday, May 28, 2016 20:10

Gott, J. Richard. The Cosmic Web. Princeton: Princeton University Press, 2016. ISBN 978-0-691-15726-9.
Some works of popular science, trying to impress the reader with the scale of the universe and the insignificance of humans within it, argue that there's nothing special about our place in the universe: “an ordinary planet orbiting an ordinary star, in a typical orbit within an ordinary galaxy”, or something like that. But this is wrong! Surfaces of planets make up a vanishingly small fraction of the volume of the universe, and habitable planets, where beings like ourselves are neither frozen nor fried by extremes of temperature, nor suffocated or poisoned by a toxic atmosphere, are rarer still. The Sun is far from an ordinary star: it is brighter than 85% of the stars in the galaxy, and only 7.8% of stars in the Milky Way share its spectral class. Fully 76% of stars are dim red dwarfs, the heavens' own 25 watt bulbs.

What does a typical place in the universe look like? What would you see if you were there? Well, first of all, you'd need a space suit and air supply, since the universe is mostly empty. And you'd see nothing. Most of the volume of the universe consists of great voids with few galaxies. If you were at a typical place in the universe, you'd be in one of these voids, probably far enough from the nearest galaxy that it wouldn't be visible to the unaided eye. There would be no stars in the sky, since stars are only formed within galaxies. There would only be darkness. Now look out the window: you are in a pretty special place after all.

One of the great intellectual adventures of the last century is learning our place in the universe and coming to understand its large scale structure. This book, by an astrophysicist who has played an important role in discovering that structure, explains how we pieced together the evidence and came to learn the details of the universe we inhabit. It provides an insider's look at how astronomers tease insight out of the messy and often confusing data obtained from observation.

It's remarkable not just how much we've learned, but how recently we've come to know it. At the start of the 20th century, most astronomers believed the solar system was part of a disc of stars which we see as the Milky Way. In 1610, Galileo's telescope revealed that the Milky Way was made up of a multitude of faint stars and, since the galaxy forms a band all around the sky, that the Sun must lie within it. In 1918, by observing variable stars in globular clusters which orbit the Milky Way, Harlow Shapley was able to measure the size of the galaxy, which proved much larger than previously estimated, and to determine that the Sun was about halfway from the centre of the galaxy to its edge. Still, the universe was the galaxy.

There remained the mystery of the “spiral nebulæ”. These faint smudges of light had been revealed by photographic time exposures through large telescopes to be discs, some with prominent spiral arms, viewed from different angles. Some astronomers believed them to be gas clouds within the galaxy, perhaps other solar systems in the process of formation, while others argued they were galaxies like the Milky Way, far distant in the universe. In 1920 a great debate pitted the two views against one another, concluding that insufficient evidence existed to decide the matter.

That evidence would not be long in coming. Shortly thereafter, using the new 100 inch telescope on Mount Wilson in California, Edwin Hubble was able to photograph the Andromeda Nebula and resolve it into individual stars. Just as Galileo had done three centuries earlier for the Milky Way, Hubble's photographs proved Andromeda was not a gas cloud, but a galaxy composed of a multitude of stars. Further, Hubble was able to identify variable stars which allowed him to estimate its distance: due to details about the stars which were not understood at the time, he underestimated the distance by about a factor of two, but it was clear the galaxy was far beyond the Milky Way. The distances to other nearby galaxies were soon measured.

In one leap, the scale of the universe had become breathtakingly larger. Instead of one galaxy comprising the universe, the Milky Way was just one of a multitude of galaxies scattered around an enormous void. When astronomers observed the spectra of these galaxies, they noticed something odd: spectral lines from stars in most galaxies were shifted toward the red end of the spectrum compared to those observed on Earth. This was interpreted as a Doppler shift due to the galaxy's moving away from the Milky Way. Between 1929 and 1931, Edwin Hubble measured the distances and redshifts of a number of galaxies and discovered there was a linear relationship between the two. A galaxy twice as distant as another would be receding at twice the velocity. The universe was expanding, and every galaxy (except those sufficiently close to be gravitationally bound) was receding from every other galaxy.
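
The relation Hubble found is linear: recession velocity is proportional to distance, v = H0 × d, with v ≈ cz for small redshifts. A tiny Python illustration, using a modern value of the Hubble constant of roughly 70 km/s per megaparsec (an assumed figure, much smaller than Hubble's original estimate):

    C_KM_S = 299792.458      # speed of light, km/s
    H0 = 70.0                # Hubble constant, km/s per megaparsec (assumed modern value)

    def recession(z):
        """Velocity and distance implied by a small redshift z, via v = c*z = H0*d."""
        v = C_KM_S * z
        return v, v / H0

    for z in (0.001, 0.002, 0.004):
        v, d = recession(z)
        print(f"z = {z}: v ~ {v:.0f} km/s, d ~ {d:.0f} Mpc")
    # doubling the redshift doubles both the inferred velocity and the distance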

The discovery of the redshift-distance relationship provided astronomers a way to chart the cosmos in three dimensions. Plotting the position of a galaxy on the sky and measuring its distance via redshift allowed building up a model of how galaxies were distributed in the universe. Were they randomly scattered, or would patterns emerge, suggesting larger-scale structure?

Galaxies had been observed to cluster: the nearest cluster, in the constellation Virgo, is made up of at least 1300 galaxies, and is now known to be part of a larger supercluster of which the Milky Way is an outlying member. It was not until the 1970s and 1980s that large-scale redshift surveys allowed plotting the positions of galaxies in the universe, initially in thin slices, and eventually in three dimensions. What was seen was striking. Galaxies were not sprinkled at random through the universe, but seemed to form filaments and walls, with great voids containing few or no galaxies. How did this come to be?

In parallel with this patient observational work, theorists were working out the history of the early universe based upon increasingly precise observations of the cosmic microwave background radiation, which provides a glimpse of the universe just 380,000 years after the Big Bang. This ushered in the era of precision cosmology, where the age and scale of the universe were determined with great accuracy, and the tiny fluctuations in temperature of the early universe were mapped in detail. This led to a picture of the universe very different from that imagined by astronomers over the centuries. Ordinary matter: stars, planets, gas clouds, and you and me—everything we observe in the heavens and the Earth—makes up less than 5% of the mass-energy of the universe. Dark matter, which interacts with ordinary matter only through gravitation, makes up 26.8% of the universe. It can be detected through its gravitational effects on the motion of stars and galaxies, but at present we don't have any idea what it's composed of. (It would be more accurate to call it “transparent matter” since it does not interact with light, but “dark matter” is the name we're stuck with.) The balance of the universe, 68.3%, is dark energy, a form of energy filling empty space and causing the expansion of the universe to accelerate. We have no idea at all about the nature of dark energy. These three components, ordinary matter, dark matter, and dark energy, add up to give the universe a flat spatial geometry. It is humbling to contemplate the fact that everything we've learned in all of the sciences is about matter which makes up less than 5% of the universe: the other 95% is invisible and we don't know anything about it (although there are abundant guesses or, if you prefer, hypotheses).

This may seem like a flight of fancy, or a case of theorists making up invisible things to explain away observations they can't otherwise interpret. But in fact, dark matter and dark energy, originally inferred from astronomical observations, make predictions about the properties of the cosmic background radiation, and these predictions have been confirmed with increasingly high precision by successive space-based observations of the microwave sky. These observations are consistent with a period of cosmological inflation in which a tiny portion of the universe expanded to encompass the entire visible universe today. Inflation magnified tiny quantum fluctuations of the density of the universe to a scale where they could serve as seeds for the formation of structures in the present-day universe. Regions with greater than average density would begin to collapse inward due to the gravitational attraction of their contents, while those with less than average density would become voids as material within them fell into adjacent regions of higher density.

Dark matter, being more than five times as abundant as ordinary matter, would take the lead in this process of gravitational collapse, and ordinary matter would follow, concentrating in denser regions and eventually forming stars and galaxies. The galaxies formed would associate into gravitationally bound clusters and eventually superclusters, forming structure at larger scales. But what does the universe look like at the largest scale? Are galaxies distributed at random; do they clump together like meatballs in a soup; or do voids occur within a sea of galaxies like the holes in Swiss cheese? The answer is, surprisingly, none of the above, and the author explains the research, in which he has been a key participant, that discovered the large scale structure of the universe.

As increasingly more comprehensive redshift surveys of galaxies were made, what appeared was a network of filaments which connected to one another, forming extended structures. Between filaments were voids containing few galaxies. Some of these structures, such as the Sloan Great Wall, at 1.38 billion light years in length, are 1/10 the radius of the observable universe. Galaxies are found along filaments, and where filaments meet, rich clusters and superclusters of galaxies are observed. At this large scale, where galaxies are represented by single dots, the universe resembles a neural network like the human brain.

As ever more extensive observations mapped the three-dimensional structure of the universe we inhabit, progress in computing allowed running increasingly detailed simulations of the evolution of structure in models of the universe. Although the implementation of these simulations is difficult and complicated, they are conceptually simple. You start with a region of space, populate it with particles representing ordinary and dark matter in a sea of dark energy with random positions and density variations corresponding to those observed in the cosmic background radiation, then let the simulation run, computing the gravitational attraction of each particle on the others and tracking their motion under the influence of gravity. In 2005, Volker Springel and the Virgo Consortium ran the Millennium Simulation, which started from the best estimate of the initial conditions of the universe known at the time and tracked the motion of ten billion particles of ordinary and dark matter in a cube two billion light years on a side. As the simulation clock ran, the matter contracted into filaments surrounding voids, with the filaments joined at nodes rich in galaxies. The images produced by the simulation and the statistics calculated were strikingly similar to those observed in the real universe. The behaviour of this and other simulations increases confidence in the existence of dark matter and dark energy; if you leave them out of the simulation, you get results which don't look anything like the universe we inhabit.
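
As a toy illustration of what such a simulation does, here is a minimal direct-summation N-body sketch in Python. It is only conceptual: the units, particle count, and initial conditions are arbitrary, and the real Millennium code used far more sophisticated tree and particle-mesh methods to handle billions of particles.

    import random

    G, SOFT, DT, N, STEPS = 1.0, 0.05, 0.01, 100, 200   # toy units, not physical

    pos = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(N)]
    vel = [[random.gauss(0.0, 0.01) for _ in range(3)] for _ in range(N)]

    def accelerations(pos):
        """Direct-summation gravity with a softening length to avoid singularities."""
        acc = [[0.0, 0.0, 0.0] for _ in range(N)]
        for i in range(N):
            for j in range(i + 1, N):
                d = [pos[j][k] - pos[i][k] for k in range(3)]
                r2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2 + SOFT ** 2
                f = G / r2 ** 1.5
                for k in range(3):
                    acc[i][k] += f * d[k]
                    acc[j][k] -= f * d[k]
        return acc

    for _ in range(STEPS):                 # leapfrog (kick-drift-kick) time stepping
        acc = accelerations(pos)
        for i in range(N):
            for k in range(3):
                vel[i][k] += 0.5 * DT * acc[i][k]
                pos[i][k] += DT * vel[i][k]
        acc = accelerations(pos)
        for i in range(N):
            for k in range(3):
                vel[i][k] += 0.5 * DT * acc[i][k]
    # particles fall toward one another and clump; with cosmic expansion and realistic
    # initial fluctuations the same process yields filaments, knots, and voids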

At the largest scale, the universe isn't made of galaxies sprinkled at random, nor meatballs of galaxy clusters in a sea of voids, nor a sea of galaxies with Swiss-cheese-like voids. Instead, it resembles a sponge of denser filaments and knots interpenetrated by less dense voids. Both the denser and less dense regions percolate: it is possible to travel from one side of the universe to the other while staying entirely within the denser regions, and equally while staying entirely within the less dense voids. (If the universe were arranged like a honeycomb, for example, with voids surrounded by denser walls, this would not be possible.) Nobody imagined this before the observational results started coming in, and now we've discovered that, given the initial conditions of the universe after the Big Bang, the emergence of such a structure is inevitable.

All of the structure we observe in the universe has evolved from a remarkably uniform starting point in the 13.8 billion years since the Big Bang. What will the future hold? The final chapter explores various scenarios for the far future. Because these depend upon the properties of dark matter and dark energy, which we don't understand, they are necessarily speculative.

The book is written for the general reader, but at a level substantially more difficult than many works of science popularisation. The author, a scientist involved in this research for decades, does not shy away from using equations when they illustrate an argument better than words. Readers are assumed to be comfortable with scientific notation, units like light years and parsecs, and logarithmically scaled charts. For some reason, in the Kindle edition dozens of hyphenated phrases are run together without any punctuation.


Reading List: The B-58 Blunder

Wednesday, May 25, 2016 11:24

Holt, George, Jr. The B-58 Blunder. Randolph, VT: George Holt, 2015. ISBN 978-0-692-47881-3.
The B-58 Hustler was a breakthrough aircraft. The first generation of U.S. Air Force jet-powered bombers—the B-47 medium and B-52 heavy bombers—was revolutionary for its time, but these aircraft were becoming increasingly vulnerable to high-performance interceptor aircraft and anti-aircraft missiles on the deep-penetration bombing missions within the communist bloc for which they were intended. In the 1950s, it was believed the best way to reduce the threat was to fly fast and at high altitude, with a small aircraft that would be more difficult to detect with radar.

Preliminary studies of a next generation bomber began in 1949, and in 1952 Convair was selected to develop a prototype of what would become the B-58. Using a delta wing and four turbojet engines, the aircraft could cruise at up to twice the speed of sound (Mach 2, 2450 km/h) with a service ceiling of 19.3 km. With a small radar cross-section compared to the enormous B-52 (although still large compared to present-day stealth designs), the idea was that flying so fast and at high altitude, by the time an enemy radar site detected the B-58, it would be too late to scramble an interceptor to attack it. Contemporary anti-aircraft missiles lacked the capability to down targets at its altitude and speed.

The first flight of a prototype was in November 1956, and after a protracted development and test program, plagued by problems due to its radical design, the bomber entered squadron service in March of 1960. Rising costs caused the number purchased to be scaled back to just 116 (by comparison, 2,032 B-47s and 744 B-52s were built), deployed in two Strategic Air Command (SAC) bomber wings.

The B-58 was built to deliver nuclear bombs. Originally, it carried one B53 nine megaton weapon mounted below the fuselage. Subsequently, the ability to carry four B43 or B61 bombs on hardpoints beneath the wings was added. The B43 and B61 were variable yield weapons, with the B43 providing yields from 70 kilotons to 1 megaton and the B61 300 tons to 340 kilotons. The B-58 was not intended to carry conventional (non-nuclear, high explosive) bombs, and although some studies were done of conventional missions, its limited bomb load would have made it uncompetitive with other aircraft. Defensive weaponry was a single 20 mm radar-guided cannon in the tail. This was a last-ditch option: the B-58 was intended to outrun attackers, not fight them off. The crew of three consisted of a pilot, bombardier/navigator, and a defensive systems operator (responsible for electronic countermeasures [jamming] and the tail gun), each in their own cockpit with an ejection capsule. The navigation and bombing system included an inertial navigation platform with a star tracker for correction, a Doppler radar, and a search radar. The nuclear weapon pod beneath the fuselage could be replaced with a pod for photo reconnaissance. Other pods were considered, but never developed.

The B-58 was not easy to fly. Its delta wing required high takeoff and landing speeds, and a steep angle of attack (nose-up attitude), but if the pilot allowed the nose to rise too high, the aircraft would pitch up and spin. Loss of an engine, particularly one of the outboard engines, was, as they say, a very dynamic event, requiring instant response to counter the resulting yaw. During its operational history, a total of 26 B-58s were lost in accidents: 22.4% of the fleet.

During its ten years in service, no operational bomber equalled or surpassed the performance of the B-58. It set nineteen speed records, some of which still stand today, and won prestigious awards for its achievements. It was a breakthrough, but ultimately a dead end: no subsequent operational bomber has exceeded its performance in speed and altitude, but that's because speed and altitude were judged insufficient to accomplish the mission. With the introduction of supersonic interceptors and high-performance anti-aircraft missiles by the Soviet Union, the B-58 was determined to be vulnerable in its original supersonic, high-altitude mission profile. Crews were retrained to fly penetration missions at near-supersonic speeds and very low altitude, making it difficult for enemy radar to acquire and track the bomber. Although, unlike the B-52, it was not equipped with terrain-following radar, an accurate radar altimeter allowed crews to perform these missions. The large, rigid delta wing made the B-58 relatively immune to turbulence at low altitudes. Still, abandoning the supersonic attack profile meant that many of the capabilities which made the B-58 so complicated and expensive to operate and maintain were wasted.

This book is the story of the decision to retire the B-58, told by a crew member and Pentagon staffer who strongly dissented and argues that the B-58 should have remained in service much longer. George “Sonny” Holt, Jr. served for thirty-one years in the U.S. Air Force, retiring with the rank of colonel. For three years he was a bombardier/navigator on a B-58 crew and later, in the Plans Division at the Pentagon, observed at close range the process which led to the bomber's retirement, doing his best to prevent it. He would disagree with many of the comments about the disadvantages of the aircraft mentioned in previous paragraphs, and addresses them in detail. In his view, the retirement of the B-58 in 1970, when it had originally been envisioned as remaining in the fleet until the mid-1970s, was part of a deal by SAC, which offered to retire all of the B-58s in return for retaining four B-52 wings which were slated for retirement. He argues that SAC never really wanted to operate the B-58, and that it did not understand the aircraft's unique capabilities. With such a small fleet, the B-58 did not figure large in SAC's view of the bomber force (although with its large nuclear weapon load, it actually represented about half the yield of the bomber leg of the strategic triad).

He provides an insider's perspective on Pentagon politics, and how decisions are made at high levels, often without input from those actually operating the weapon systems. He disputes many of the claimed disadvantages of the B-58 and, in particular, argues that it performed superbly in the low-level penetration mission, something for which it was not designed.

What is not discussed is the competition posed to manned bombers of all kinds in the nuclear mission by the Minuteman missile, which began to be deployed in 1962. By June 1965, 800 missiles were on alert, each with a 1.2 megaton W56 warhead. Solid-fueled missiles like the Minuteman require little maintenance and are ready to launch immediately at any time. Unlike bombers, where one worries about the development of interceptor aircraft and surface-to-air missiles, no defence against a mass missile attack existed or was expected to be developed in the foreseeable future. A missile in a silo required only a small crew of launch and maintenance personnel, as opposed to the bomber which had flight crews, mechanics, a spare parts logistics infrastructure, and had to be supported by refueling tankers with their own overhead. From the standpoint of cost-effectiveness, a word very much in use in the 1960s Pentagon, the missiles, which were already deployed, were dramatically better than any bomber, and especially the most expensive one in the inventory. The bomber generals in SAC were able to save the B-52, and were willing to sacrifice the B-58 in order to do so.

The book is self-published by the author and is sorely in need of the attention of a copy editor. There are numerous spelling and grammatical errors, and nouns are capitalised in the middle of sentences for no apparent reason. There are abundant black and white illustrations from Air Force files.


Reading List: Arkwright

Monday, May 23, 2016 11:27

Steele, Allen. Arkwright. New York: Tor, 2016. ISBN 978-0-7653-8215-3.
Nathan Arkwright was one of the “Big Four” science fiction writers of the twentieth century, along with Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein. Launching his career in the Golden Age of science fiction, he created the Galaxy Patrol space adventures, with 17 novels from 1950 to 1988, a radio drama, a television series, and three movies. The royalties from his work made him a wealthy man. He lived quietly in his home in rural Massachusetts, dying in 2006.

Arkwright was estranged from his daughter and from his granddaughter, Kate Morressy, a freelance science journalist. Kate attends the funeral and meets Nathan's long-term literary agent, Margaret (Maggie) Krough, science fiction writer Harry Skinner, and George Hallahan, a research scientist long involved with military and aerospace projects. After the funeral, the three meet with Kate, and Maggie explains that Arkwright's will bequeaths all of his assets, including future royalties from his work, to the non-profit Arkwright Foundation, which Kate is asked to join as a director representing the family. She asks what the foundation's mission is, and Maggie responds that it's a long and complicated story, best answered by reading the manuscript of Arkwright's unfinished autobiography, My Life in the Future.

It is some time before Kate gets around to reading the manuscript. When she does, she finds herself immersed in the Golden Age of science fiction, as her father recounts attending the first World's Science Fiction Convention in New York in 1939. An avid science fiction fan and aspiring writer, Arkwright rubs elbows with figures he'd known only as names in magazines such as Fred Pohl, Don Wollheim, Cyril Kornbluth, Forrest Ackerman, and Isaac Asimov. Quickly learning that at a science fiction convention it isn't just elbows that rub but also egos, he runs afoul of one of the clique wars that are incomprehensible to those outside of fandom and finds himself ejected from the convention, sitting down for a snack at the Automat across the street with fellow banished fans Maggie, Harry, and George. The four discuss their views of the state of science fiction and their ambitions, and pledge to stay in touch. Any group within fandom needs a proper name, and after a brief discussion “The Legion of Tomorrow” was born. It would endure for decades.

The manuscript comes to an end, leaving Kate still in 1939. She then meets in turn with the other three surviving members of the Legion, who carry the story through Arkwright's long life, and describe the events which shaped his view of the future and the foundation he created. Finally, Kate is ready to hear the mission of the foundation—to make the future Arkwright wrote about during his career a reality—to move humanity off the planet and enter the era of space colonisation, and not just the planets but, in time, the stars. And the foundation will be going it alone. As Harry explains (p. 104), “It won't be made public, and there won't be government involvement either. We don't want this to become another NASA project that gets scuttled because Congress can't get off its dead ass and give it decent funding.”

The strategy is to bet on the future: invest in the technologies which will be needed for and will profit from humanity's expansion from the home planet, and then reinvest the proceeds in research and development and new generations of technology and enterprises as space development proceeds. Nobody expects this to be a short-term endeavour: decades or generations may be required before the first interstellar craft is launched, but the structure of the foundation is designed to persist for however long it takes. Kate signs on, “Forward the Legion.”

So begins a grand, multi-generation saga chronicling humanity's leap to the stars. Unlike many tales of interstellar flight, this one invokes no arm-waving about faster-than-light warp drives or other technologies requiring new physics. Drawing upon information presented at the DARPA/NASA 100 Year Starship Symposium in 2011 and the 2013 Starship Century conference, the author uses only technologies based upon well-understood physics which, if economic growth continues on the trajectory of the last century, are plausible for the time in the future at which the story takes place. And lest interstellar travel and colonisation be dismissed as wasteful, no public resources are spent on it: coercive governments have neither the imagination nor the attention span to achieve such grand and long-term goals. And you never know how important the technological spin-offs from such a project may prove in the future.

As noted, the author is scrupulous in using only technologies consistent with our understanding of physics and biology and plausible extrapolations of present capabilities. There are a few goofs, which I'll place behind the curtain since some are plot spoilers.

Spoiler warning: Plot and/or ending details follow.  
On p. 61, a C-53 transport plane is called a Dakota. The C-53 is a troop transport variant of the C-47, referred to as the Skytrooper. But since the planes were externally almost identical, the observer may have confused them. “Dakota” was the RAF designation for the C-47; the U.S. Army Air Forces called it the Skytrain.

On the same page, planes arrive from “Kirtland Air Force Base in Texas”. At the time, the facility would have been called “Kirtland Field”, part of the Albuquerque Army Air Base, which is located in New Mexico, not Texas. It was not renamed Kirtland Air Force Base until 1947.

In the description of the launch of Apollo 17 on p. 71, after the long delay, the count is recycled to T−30 seconds. That isn't how it happened. After the cutoff in the original countdown at thirty seconds, the count was recycled to the T−22 minute mark, and after the problem was resolved, resumed from there. There would have been plenty of time for people who had given up and gone to bed to be awakened when the countdown was resumed and observe the launch.

On p. 214, we're told the Doppler effect of the ship's velocity “caused the stars around and in front of the Galactique to redshift”. In fact, the stars in front of the ship would be blueshifted, while those behind it would be redshifted.
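
For reference, the longitudinal relativistic Doppler factor is easy to compute; the Python sketch below assumes the half-light-speed cruise velocity cited in the time-dilation item two notes down.

    import math

    beta = 0.5                                    # ship speed as a fraction of c (assumed)
    ahead = math.sqrt((1 - beta) / (1 + beta))    # wavelength ratio for stars dead ahead
    behind = math.sqrt((1 + beta) / (1 - beta))   # wavelength ratio for stars astern
    print(f"ahead x{ahead:.3f} (blueshift), astern x{behind:.3f} (redshift)")
    # ~0.577 ahead and ~1.732 astern: light from stars in front arrives compressed, not stretched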

On p. 230, the ship, en route, is struck by a particle of interstellar dust which is described as “not much larger than a piece of gravel”, which knocks out communications with the Earth. Let's assume it wasn't the size of a piece of gravel, but only that of a grain of sand, which is around 20 milligrams. The energy released in the collision with the grain of sand is 278 gigajoules, or 66 tons of TNT. The damage to the ship would have been catastrophic, not something readily repaired.
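
The arithmetic behind those numbers, as a quick Python check (the 20 milligram grain mass is the assumption stated above, and the closing speed is taken to be the half-light-speed cruise velocity mentioned in the next item):

    import math

    c = 299792458.0                   # speed of light, m/s
    m = 20e-6                         # a 20 milligram grain, in kilograms
    beta = 0.5                        # closing speed as a fraction of c
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    ke = (gamma - 1.0) * m * c ** 2   # relativistic kinetic energy, joules
    print(f"{ke / 1e9:.0f} GJ, roughly {ke / 4.184e9:.0f} tons of TNT")
    # about 278 GJ, or some 66 tons of TNT equivalent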

On the same page, “By the ship's internal chronometer, the repair job probably only took a few days, but time dilation made it seem much longer to observers back on Earth.” Nope—at half the speed of light, time dilation is only 15%. Three days' ship's time would be less than three and a half days on Earth.
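
And the corresponding Lorentz factor, as a quick check of the 15% figure:

    import math

    beta = 0.5                                  # half the speed of light
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)    # Lorentz (time dilation) factor
    print(f"gamma = {gamma:.4f}; 3 ship days = {3 * gamma:.2f} Earth days")
    # gamma ~ 1.155, about 15%: three days aboard is under three and a half days on Earth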

On p. 265, “the DNA of its organic molecules was left-handed, which was crucial to the future habitability…”. What's important isn't the handedness of DNA, but rather the chirality of the organic molecules used in cells. The chirality of DNA is many levels above this fundamental property of biochemistry and, in fact, the DNA helix of terrestrial organisms is right-handed. (The chirality of DNA actually depends upon the nucleotide sequence, and there is a form, called Z-DNA, in which the helix is left-handed.)

Spoilers end here.  

This is an inspiring and very human story, with realistic and flawed characters, venal politicians, unanticipated adversities, and a future very different than envisioned by many tales of the great human expansion, even those by the legendary Nathan Arkwright. It is an optimistic tale of the human future, grounded in the achievements of individuals who build it, step by step, in the unbounded vision of the Golden Age of science fiction. It is ours to make reality.

Here is a podcast interview with the author by James Pethokoukis.
